source_id | question | response | metadata
---|---|---|---|
26,549 | The observable universe is approximately 13.7 billion years old. Yet it is 80 billion light years across. Isn't this a contradiction? | This question implicitly refers to the visible universe, but we should state that explicitly, as otherwise the question doesn't make any sense. It may seem like we shouldn't be able to see more than 13.7 billion light-years (13.7 giga-light-years, or glyrs) away, but that reasoning omits the expansion of spacetime according to General Relativity. A photon emitted from somewhere near the beginning of the Universe would have traveled nearly 13.7 glyrs if you had measured each light-year just as the photon crossed it, but since those light-years that you measured have expanded since the photon passed through, that distance now adds up to about 80 glyrs. | {
"source": [
"https://physics.stackexchange.com/questions/26549",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/104/"
]
} |
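The "measure each light-year as the photon crosses it, then let those light-years expand" picture can be made quantitative by integrating the comoving distance. A minimal sketch, assuming flat ΛCDM with H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7 (these parameter values are illustrative assumptions, not taken from the answer above):

```python
import math

C_KM_S = 299_792.458          # speed of light, km/s
H0 = 70.0                     # assumed Hubble constant, km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7   # assumed flat Lambda-CDM parameters
MPC_TO_GLY = 3.2616e-3        # 1 Mpc is about 3.2616 million light-years

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for flat Lambda-CDM."""
    return math.sqrt(OMEGA_M * (1 + z) ** 3 + OMEGA_L)

def comoving_distance_gly(z_max, steps=100_000):
    """Comoving distance to redshift z_max by midpoint-rule integration."""
    dz = z_max / steps
    integral = sum(1.0 / E((i + 0.5) * dz) for i in range(steps)) * dz
    return (C_KM_S / H0) * integral * MPC_TO_GLY

radius_gly = comoving_distance_gly(1100)   # z ~ 1100: last scattering
```

With these parameters the comoving radius to last scattering comes out near 45 Gly, i.e. roughly 90 Gly across; the exact figure (80 Gly in the answer) depends on which cosmological parameters and endpoints one assumes.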
26,643 | While reading with my son about how a Mars-like planet collided with the early Earth that resulted in our current moon, it said the initial debris also formed a ring, but that ring ended up getting absorbed by the Earth and the Moon. I couldn't answer his question then why Saturn still has rings. Shouldn't Saturn's rings be clumping into Moons or getting absorbed by Saturn's gravity? | There are a few things that keep Saturn's rings roughly the way they are. First, Saturn's D ring actually is "raining" down on Saturn currently. But, the phenomenon of shepherd moons prevents the vast majority of material from leaving the other rings: "The gravity of shepherd moons serves to maintain a sharply defined edge to the ring; material that drifts closer to the shepherd moon's orbit is either deflected back into the body of the ring, ejected from the system, or accreted onto the moon itself." (quote from Wikipedia ) Besides this, the majority of the particles within the ring system have almost no motion towards or away from Saturn; no motion towards the planet prevents them from being lost. Second, Saturn's rings cannot clump into "full-fledged" moons, but they can clump into moonlets up to several hundred meters to a few kilometers across. At last count, I think there were over 200 that had been found, and they also come out of numerical simulations. Beyond these larger moonlets, quasi-stable clumps and clusters of ring particles form with great frequency the farther you get from Saturn. These clusters of particles are constantly changing size, trading material, etc., and so there's no time for them to become solid and cohesive. This gets into the idea of the Roche Limit and Hill Spheres . The basic idea of the Roche Limit is that the closer you are to a massive object, the more tidal forces are going to tear you apart (or prevent you from forming to begin with). 
Hill spheres are related, the idea there being at what point you're gravitationally bound to one object or another. If you're within Saturn's Hill sphere rather than a moon's Hill sphere, you're going to be pulled to Saturn. With both concepts, a moon would need to form farther away from Saturn than its rings are now to actually be stable. You can see the effects of these by looking at N-body dynamical simulations of the rings. This was my research for a year and a half, and it culminated in over a hundred simulations, many of which I made movies of, and then I posted them on one of my personal websites. If you go to it, scroll down and take a look at one of the C ring simulations, B ring simulations, and A ring simulations (warning - the movies are a bit big). You should choose ones with a large τ value and ρ of 0.85 because those will show clumping better. What you'll see is that, in the C ring, almost no clumping occurs. Go farther from Saturn into the B ring and you'll see a spider web of strands of clumped particles start to form. Then, in the still more distant A ring, the strands fragment further into clusters. (Note on the movies: The "L" value next to each one is how large the simulation cell is on a side, in meters. So you're just looking at a VERY small region of the ring. It's set so that the center of the cell doesn't move, so you'd imagine that whole thing orbiting around Saturn.) | {
"source": [
"https://physics.stackexchange.com/questions/26643",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5377/"
]
} |
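The Roche-limit argument in the answer can be put in rough numbers using the standard rigid-body and fluid-body estimates, d = R(2ρ_M/ρ_m)^(1/3) and d ≈ 2.44 R(ρ_M/ρ_m)^(1/3). A sketch with assumed values for Saturn's radius and mean density and for an icy ring-particle density (all three numbers are illustrative assumptions):

```python
R_SATURN = 6.0268e7      # Saturn equatorial radius, m (assumed value)
RHO_SATURN = 687.0       # Saturn mean density, kg/m^3 (assumed value)
RHO_RING = 900.0         # icy ring-particle density, kg/m^3 (assumed value)

# rigid-body and fluid-body Roche limits (standard textbook estimates)
d_rigid = R_SATURN * (2 * RHO_SATURN / RHO_RING) ** (1 / 3)
d_fluid = 2.44 * R_SATURN * (RHO_SATURN / RHO_RING) ** (1 / 3)
```

The rigid limit comes out near 6.9e7 m, inside the ring system, and the fluid limit near 1.3e8 m, close to the A ring's outer edge — consistent with the point that a stable moon has to form farther out than the rings are now.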
26,892 | Steven Weinberg's book "The Quantum Theory of Fields", volume 3, page 46 gives the following argument against ${\cal N} = 3$ supersymmetry: "For global ${\cal N} = 4$ supersymmetry there is just one supermultiplet ... This is equivalent to the global supersymmetry theory with ${\cal N} = 3$, which has two supermultiplets: 1 supermultiplet... and the other the CPT conjugate supermultiplet... Adding the numbers of particles of each helicity in these two ${\cal N} = 3$ supermultiplets gives the same particle content as for ${\cal N} = 4$ global supersymmetry" However, this doesn't directly imply (as far as I can tell) that there is no ${\cal N} = 3$ QFT. Such a QFT would have the particle content of ${\cal N} = 4$ super-Yang-Mills but it wouldn't have the same symmetry. Is such a QFT known? If not, is it possible to prove it doesn't exist? I guess it might be possible to examine all possible Lagrangians that would give this particle content and show none of them has ${\cal N} = 3$ (but not ${\cal N} = 4$) supersymmetry. However, is it possible to give a more fundamental argument, relying only on general principles such as Lorentz invariance, cluster decomposition etc., that would rule out such a model? | Depending on what you mean by "exist", the answer to your question is yes. There is an $N=3$ Poincaré supersymmetry algebra, and there are field-theoretic realisations. In particular there is a four-dimensional $N=3$ supergravity theory. A good modern reference for the diverse flavours of supergravity theories is Toine Van Proeyen's Structure of Supergravity Theories. Added: Weinberg's argument is essentially the following observation. Take a massless unitary representation of the $N=3$ Poincaré superalgebra with helicity $|\lambda|\leq 1$. This representation is not stable under CPT, so the CPT theorem says that to realise it in a supersymmetric quantum field theory, you have to add the CPT-conjugate representation. 
Once you do that, though, the direct sum of the two representations in fact admits an action of the $N=4$ Poincaré superalgebra. The reason the supergravity theory exists (and is different from $N=4$ supergravity) is that the $N=3$ gravity multiplet, which is a massless helicity $|\lambda|=2$ unitary representation, is already CPT-self-conjugate. | {
"source": [
"https://physics.stackexchange.com/questions/26892",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
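Weinberg's counting argument quoted above can be checked mechanically: a massless multiplet of N-extended supersymmetry with top helicity λ contains helicity λ − k/2 with multiplicity C(N, k). A small bookkeeping sketch:

```python
from collections import Counter
from math import comb

def multiplet(n_susy, top_helicity):
    """Helicity content of a massless N-extended multiplet: C(N, k) states
    at helicity (top - k/2), for k = 0 .. N."""
    return Counter({top_helicity - k / 2: comb(n_susy, k)
                    for k in range(n_susy + 1)})

def cpt_conjugate(mult):
    """CPT flips every helicity h -> -h."""
    return Counter({-h: c for h, c in mult.items()})

n4 = multiplet(4, 1)                  # the unique N=4 vector multiplet
n3 = multiplet(3, 1)                  # an N=3 multiplet ...
n3_total = n3 + cpt_conjugate(n3)     # ... plus its CPT conjugate
```

The N = 3 multiplet plus its CPT conjugate reproduces the N = 4 helicity content (1, 4, 6, 4, 1), which is exactly the equivalence the question quotes.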
26,906 | I guess by now most people have heard about the new paper ( arXiv:1109.4897 ) by the OPERA collaboration which claims to have observed superluminal neutrinos with 6 $\sigma$ significance. Obviously this has been greeted with a great deal of skepticism, and there will no doubt be debate over systematic errors for a long time to come (and frankly I expect some unaccounted for systematic error to be the case here). Obviously theorists abhor superluminal travel, and I am well aware of many of the reasons for this. However, the paper has me wondering whether there have been any toy models put forward which would be both consistent with the OPERA paper, and with earlier bounds on neutrino velocity. In particular, if taken with other previous papers (from MINOS and from observations of the 1987 supernova) you have the following bounds on neutrino velocity in various average energy regimes: $>$ 30 GeV: $~\frac{|v-c|}{c} < 4\times 10^{-5}$ 17 GeV: $~~~~\frac{v-c}{c} = (2.48 \pm 0.28 (stat) \pm 0.30 (sys))\times 10^{-5}$ 3 GeV: $~~~~~\,\frac{v-c}{c} = (5.1 \pm 2.9) \times 10^{-5}$ 10 MeV: $~~~~\frac{|v-c|}{c} < 2\times 10^{-9}$ Is there any proposed model which is actually consistent with such results? It seems that there has been a lot of pointing to the supernova bound (the 10MeV scale) as being inconsistent with the reported findings. However if there was a mechanism whereby the velocity were a monotonic function of energy (or depended on flavor), this argument would be negated. Do there exist any such proposed mechanisms? | I am afraid that one has to go to a "very unusual segment" of theoretical literature if he wants any papers about superluminal neutrinos. Guang-jiong Ni has been authoring many papers about superluminal neutrinos a decade ago: http://arxiv.org/abs/hep-ph/0103051 http://arxiv.org/abs/hep-th/0201077 http://arxiv.org/abs/hep-ph/0203060 http://arxiv.org/abs/hep-ph/0306028 and probably others. 
They are cited pretty much only by the same author, so you may become the second person in the world who has read them. For somewhat more well-known papers on tachyonic neutrinos, see http://arxiv.org/abs/hep-ph/9810355 http://arxiv.org/abs/hep-ph/9607477 http://arxiv.org/abs/hep-th/9411230 which were prompted by observations of apparently superluminal neutrinos in tritium decay. Well, the older ones were written before the tritium decay anomaly. An even older paper is http://www.sciencedirect.com/science/article/pii/0370269385904605 which reviewed the experimental situation of tachyonic neutrinos as of 1985. You may want to check many more papers by Alan Kostelecky, because he has been working for decades on similar possible ways in which Lorentz symmetry could be broken, and he is a rather serious researcher. See also http://www.sciencedirect.com/science/article/pii/0370269386904806 A paper that actually claimed to have a model of superluminal neutrinos is http://arxiv.org/abs/hep-ph/0009291 where two Weyl equations were joined into a twisted Dirac equation of a sort. Not sure whether it made any sense. On Sunday, I will post an article on my blog about a vague way to get different speeds of light from noncommutative geometry (in string theory or otherwise): http://motls.blogspot.com/2011/09/superluminal-neutrinos-from.html As you noted as well, the functional dependence of the velocity on the neutrino energy would have to be an extremely unusual function, which de facto invalidates the OPERA results unless there is a loophole. However, there could be a loophole: the neutrino could become highly tachyonic only while it moves through rock. The "index of refraction for neutrinos" could be smaller than one for common materials such as rock. 
Of course, that sounds as incompatible with relativity as tachyonic neutrinos in the vacuum, but by splitting the experimental data into vacuum data and in-rock data, you could get a more sensible velocity dependence on energy in both cases. | {
"source": [
"https://physics.stackexchange.com/questions/26906",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/163/"
]
} |
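The remark that the velocity would have to be an "extremely unusual function" of energy can be illustrated with the naive tachyonic dispersion E² = p²c² − m²c⁴, for which v/c = √(1 + (mc²/E)²) decreases monotonically with E. A sketch (fitting the tachyonic mass to the OPERA central value at 17 GeV is an illustrative choice, not something from the answer):

```python
import math

def v_excess(E_MeV, m_MeV):
    """(v - c)/c for the tachyonic dispersion E^2 = p^2 c^2 - m^2 c^4."""
    return math.sqrt(1 + (m_MeV / E_MeV) ** 2) - 1

# choose m c^2 so that (v - c)/c = 2.48e-5 at E = 17 GeV (OPERA central value)
m = 17_000 * math.sqrt((1 + 2.48e-5) ** 2 - 1)   # about 120 MeV

# evaluate at the energy scales quoted in the question (in MeV)
excesses = [v_excess(E, m) for E in (10, 3_000, 17_000, 30_000)]
```

The excess comes out largest at low energy — of order 10 at 10 MeV, absurdly far above the supernova bound of 2×10⁻⁹ — and falls with E, the opposite of what the quoted measurements would require. A plain tachyonic mass term therefore cannot reconcile the bounds.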
26,912 | I would be interested in a good mathematician-friendly introduction to integrable models in physics, either a book or expository article. Related MathOverflow question: what-is-an-integrable-system. | I take "integrable models" to mean "exactly solvable models in statistical physics". You can take a look at the classic book R. J. Baxter - Exactly Solved Models in Statistical Mechanics (you can download it for free). Otherwise this new book is quite readable and covers more than just solvable models: G. Mussardo - Statistical Field Theory: An Introduction to Exactly Solved Models in Statistical Physics. Others can probably give you more mathematician-friendly references, but I think it would be good if you could be more specific about what you are looking for. | {
"source": [
"https://physics.stackexchange.com/questions/26912",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3415/"
]
} |
26,918 | At the seminar where the talk was about quasicrystals, I mentioned that some results on their properties are reminiscent of fractals. The person who gave the talk was not too fluent in the rigorous mathematics behind those properties, and I was not able to find any clues to this area myself. There are some papers where fractals are mentioned in application to quasicrystals, but I did not find any introductory-level paper with careful mathematics. Probably there is such an introduction somewhere? | | {
"source": [
"https://physics.stackexchange.com/questions/26918",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4657/"
]
} |
Given a symmetric (densely defined) operator in a Hilbert space, there may be many self-adjoint extensions of it. This can be the case for a Schrödinger operator with a "bad" potential. There is a "smallest" one (Friedrichs) and a "largest" one (Krein), and all others are in some sense in between.
Considering the corresponding Schrödinger equations, to each of these extensions there is a (completely different) unitary group solving it.
My question is: what is the physical meaning of these extensions? How do you distinguish between the different unitary groups? Is there one which is physically "relevant"?
Why is the Friedrichs extension chosen so often? | The differential operator itself (defined on some domain) encodes local information about the dynamics of the quantum system . Its self-adjoint extensions depend precisely on choices of boundary conditions of the states that the operator acts on, hence on global information about the kinematics of the physical system. This is even true fully abstractly, mathematically: in a precise sense the self-adjoint extensions of symmetric operators (under mild conditions) are classified by choices of boundary data. More information on this is collected here http://ncatlab.org/nlab/show/self-adjoint+extension See the references on applications in physics there for examples of choices of boundary conditions in physics and how they lead to self-adjoint extensions of symmetric Hamiltonians. And see the article by Wei-Jiang there for the fully general notion of boundary conditions. | {
"source": [
"https://physics.stackexchange.com/questions/27119",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11512/"
]
} |
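The boundary-condition picture in the answer can be seen numerically: −d²/dx² on [0, 1] with a Dirichlet wall at x = 1 has genuinely different spectra (hence different unitary groups) depending on the condition imposed at x = 0. A finite-difference sketch (grid size and discretization scheme are my choices; Dirichlet at 0 should give a ground state near π², Neumann near π²/4):

```python
import numpy as np

N = 500          # grid points, spacing h = 1/N (assumed discretization)
h = 1.0 / N

def ground_energy(neumann_left):
    """Lowest eigenvalue of -d^2/dx^2 on [0,1] with psi(1) = 0."""
    n = N if neumann_left else N - 1
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    if neumann_left:
        A[0, 0] = 1.0   # one-sided stencil encoding psi'(0) = 0
    return np.linalg.eigvalsh(A / h**2)[0]

e_dirichlet = ground_energy(False)   # psi(0) = 0  ->  about pi^2
e_neumann = ground_energy(True)      # psi'(0) = 0 ->  about pi^2 / 4
```

Each boundary condition at x = 0 picks out a different self-adjoint extension of the same symmetric differential operator, and the resulting dynamics already differ in the ground-state energy.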
27,176 | Using a form of the Haag-Kastler axioms for quantum field theory (see AQFT on the nLab for more details), it is possible in quite general contexts to prove that all local algebras are isomorphic to the hyperfinite $III_1$ factor or to the tensor product of the $III_1$ factor with the center of the given local algebra. (A local algebra is the algebra of observables that is associated to an open bounded subset of Minkowski space. The term $III_1$ factor refers to the Murray-von Neumann classification of factors of von Neumann algebras). Also see this question on math overflow for more details. So one could say that quantum mechanics has the $I_n$ and $I_{\infty}$ factors as playground, while QFT has the hyperfinite $III_1$ factor as playground. My questions has two parts: 1) I would like to know about a concrete physical system where it is possible to show that the local algebras are hyperfinite $III_1$ factors, if there is one where this is possible. 2) Is there an interpretation in physical terms of the presence of the hyperfinite $III_1$ factor in QFT? | This article by Yngvason is probably a good start: Yngvason, J. (2005). The role of type III factors in quantum field theory. Reports on Mathematical Physics, 55(1), 135–147. ( arxiv ) The Type III property says something about statistical independence. Let $\mathcal{O}$ be a double cone, and let $\mathfrak{A}(\mathcal{O})$ be the associated algebra of observables. Assuming Haag duality, we have $\mathfrak{A}(\mathcal{O}')'' = \mathfrak{A}(\mathcal{O})$. If $\mathfrak{A}(\mathcal{O})$ is not of Type I, the Hilbert space $\mathcal{H}$ of the system does not decompose as $\mathcal{H} = \mathcal{H}_1 \otimes \mathcal{H}_2$ in such a way that $\mathfrak{A}(\mathcal{O})$ acts on the first tensor factor, and $\mathfrak{A}(\mathcal{O}')$ on the second. 
This implies that one cannot prepare the system in a certain state when restricted to measurements in $\mathcal{O}$ regardless of the state in the causal complement. It should be noted that if the split property holds, that is, there is a Type I factor $\mathfrak{N}$ such that $\mathfrak{A}(\mathcal{O}) \subset \mathfrak{N} \subset \mathfrak{A}(\widehat{\mathcal{O}})$ for some region $\mathcal{O} \subset \widehat{\mathcal{O}}$, a slightly weaker property is available: a state can be prepared in $\mathcal{O}$ regardless of the state in $\widehat{\mathcal{O}}'$. An illustration of the consequences can be found in the article above. Another consequence is that the Borchers property B automatically holds: if $P$ is some projection in $\mathfrak{A}(\mathcal{O})$, then there is some isometry $W$ in the same algebra such that $W^*W = I$ and $W W^* = P$. This implies that we can modify the state locally to be an eigenstate of $P$, by doing the modification $\omega(A) \to \omega_W(A) = \omega(W^*AW)$. Note that $\omega_W(P) = 1$ and $\omega_W(A) = \omega(A)$ for $A$ localised in the causal complement of $\mathcal{O}$. Type III$_1$ implies something slightly stronger; see the article cited for more details. As to the first question, one can prove that the local algebras of free field theories are Type III. This was done by Araki in the 1960s. You can find references in the article mentioned above. In general, the Type III condition follows from natural assumptions on the observable algebras. Non-trivial examples probably have to be found in conformal field theory, but I do not know any references off the top of my head. | {
"source": [
"https://physics.stackexchange.com/questions/27176",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1560/"
]
} |
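A finite-dimensional contrast for the Type I situation mentioned above: on C^n ⊗ C^m the algebra M_n ⊗ 1 has commutant exactly 1 ⊗ M_m, the tensor-split structure that a Type III local algebra fails to admit. A numerical sketch (the dimensions n = 2, m = 3 are arbitrary choices):

```python
import numpy as np

n, m = 2, 3                      # Hilbert space H = C^n (x) C^m
d = n * m

# generators E_ij (x) 1 of the algebra M_n (x) 1 (matrix units of M_n)
blocks = []
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = 1.0
        A = np.kron(E, np.eye(m))
        # column-stacked vec: vec([X, A]) = (A^T (x) 1 - 1 (x) A) vec(X)
        blocks.append(np.kron(A.T, np.eye(d)) - np.kron(np.eye(d), A))

M = np.vstack(blocks)
sv = np.linalg.svd(M, compute_uv=False)
null_dim = int(np.sum(sv < 1e-10))   # dimension of the commutant
```

The computed commutant dimension is m² = 9, i.e. the commutant is 1 ⊗ M_m. For a Type III factor no such splitting of the Hilbert space exists, which is the obstruction to independent state preparation described in the answer.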
27,195 | Take the Poincaré group for example. The conservation of rest-mass $m_0$ is generated by the invariance with respect to $p^2 = -\partial_\mu\partial^\mu$. Now if one simply claims The state where the expectation value of a symmetry generator equals the conserved quantity must be stationary one obtains $$\begin{array}{rl} 0 &\stackrel!=\delta\langle\psi|p^2-m_0^2|\psi\rangle
\\ \Rightarrow 0 &\stackrel!= (\square+m_0^2)\psi(x),\end{array}$$ that is, the Klein-Gordon equation. Now I wonder, is this generally a possible quantization? Does this e.g. yield the Dirac-equation for $s=\frac12$ when applied to the Pauli-Lubanski pseudo-vector $W_{\mu}=\frac{1}{2}\epsilon_{\mu \nu \rho \sigma} M^{\nu \rho} P^{\sigma}$ squared (which has the expectation value $-m_0^2 s(s+1)$)? | What you observe is the general phenomenon that in relativistic theories time translation is replaced by "affine-parameter-translation" or "wordline translation symmetry" and hence the corresponding Hamiltonian becomes a constraint, the constraint that states must be invariant under this symmetry. Yes, this works for the relativistic spinning particle and the Dirac equation, too. Here the translation symmetry on the worldline is refined to translation supersymmetry (for ordinary spinors even, this has nothing a priori to do with spacetime supersymmetry). The odd generator of the worldline supersymmetry turns out to be the Dirac operator. Again, states are required to be annihilated by it and this gives the Dirac eqation. Plenty of pointers to details about how this works are here: http://ncatlab.org/nlab/show/spinning+particle | {
"source": [
"https://physics.stackexchange.com/questions/27195",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/97/"
]
} |
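The constraint (□ + m₀²)ψ = 0 on plane waves can be checked symbolically: applying the operator to e^{−i(Et − kx)} leaves the factor −E² + k² + m₀², which vanishes exactly on the mass shell. A sketch in 1+1 dimensions with ħ = c = 1 (both simplifying assumptions):

```python
import sympy as sp

t, x, E, k, m0 = sp.symbols('t x E k m0', positive=True)
phi = sp.exp(-sp.I * (E * t - k * x))      # plane-wave ansatz

# (box + m0^2) phi, with box = d_t^2 - d_x^2 in these conventions
kg = sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + m0**2 * phi
residual = sp.simplify(kg / phi)           # -> -E**2 + k**2 + m0**2
```

Demanding that the residual vanish is precisely the mass-shell condition E² = k² + m₀², i.e. the expectation value of p² − m₀² is stationary exactly on solutions of the Klein-Gordon equation.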
27,388 | It is well known that scattering cross-sections computed at tree level correspond to cross-sections in the classical theory. For example, the tree-level cross-section for electron-electron scattering in QED corresponds to scattering of classical point charges. The naive explanation for this is that the power of $\hbar$ in a term of the perturbative expansion is the number of loops in the diagram. However, it is not clear to me how to state this correspondence in general. In the above example the classical theory regards electrons as particles and photons as a field. This seems arbitrary. Moreover, if we consider for example $\phi^4$ theory, then the interaction of the $\phi$-quanta is mediated by nothing except the $\phi$-field itself. What is the corresponding classical theory? Does it contain both $\phi$-particles and a $\phi$-field? Also, does this correspondence extend to anything besides scattering theory? Summing up, my question is: What is the precise statement of the correspondence between tree-level QFT and behavior of classical fields and particles? | This was something that confused me for a while as well until I found this great set of notes: homepages.physik.uni-muenchen.de/~helling/classical_fields.pdf Let me just briefly summarize what's in there. 
The free Klein-Gordon field satisfies the field equation $$(\partial_{\mu} \partial^{\mu} +m^2) \phi(x) = 0$$ the most general solution to this equation is $$\phi(t, \vec{x}) = \int_{-\infty}^{\infty} \frac{d^3k}{(2\pi)^3} \; \frac{1}{2E_{\vec{k}}} \left( a(\vec{k}) e^{- i( E_{\vec{k}} t -\vec{k} \cdot \vec{x})} + a^{*}(\vec{k}) e^{ i (E_{\vec{k}} t- \vec{k} \cdot \vec{x})} \right)$$ where $$\frac{a(\vec{k}) + a^{*}(-\vec{k})}{2E_{\vec{k}}} = \int_{-\infty}^{\infty} d^3x \; \phi(0,\vec{x}) e^{-i \vec{k} \cdot \vec{x}} $$ and $$\frac{a(\vec{k}) - a^{*}(-\vec{k})}{2i} = \int_{-\infty}^{\infty} d^3x \; \dot{\phi}(0,\vec{x}) e^{-i \vec{k} \cdot \vec{x}}$$ Introducing an interaction potential into the Lagrangian results in the field equation $$(\partial^{\mu} \partial_{\mu} + m^2) \phi = -V'(\phi)$$ choosing a phi-4 theory $V(\phi) = \frac{g}{4} \phi^4$ this results in $$(\partial^{\mu} \partial_{\mu} + m^2) \phi = -g \phi^3$$ Introduce a Green's function for the operator $$(\partial^{\mu} \partial_{\mu} + m^2) G(x) = -\delta(x)$$ which is given by $$G(x) = \int \frac{d^4k}{(2\pi)^4} \; \frac{-e^{-i k \cdot x}}{-k^2 + m^2}$$ now solve the full theory perturbatively by substituting $$\phi(x) = \sum_{n} g^n \phi_{n}(x)$$ into the differential equation and identifying powers of $g$ to get the following equations $$(\partial^{\mu} \partial_{\mu} + m^2) \phi_0 (x) = 0$$ $$(\partial^{\mu} \partial_{\mu} + m^2) \phi_1(x) = -\phi_0(x)^3$$ $$(\partial^{\mu} \partial_{\mu} + m^2) \phi_2 (x) = -3 \phi_0(x)^2 \phi_1(x)$$ The first equation is just the free field equation, which has the general solution above. The rest are then solved recursively using $\phi_0(x)$. So the solution for $\phi_1$ is $$\phi_1(x) = \int d^4y\; \phi_0(y)^3 \, G(x-y)$$ and so on. As is shown in the notes, this perturbative expansion generates all no-loop Feynman diagrams and this is the origin of the claim that the tree level diagrams are the classical contributions... | {
"source": [
"https://physics.stackexchange.com/questions/27388",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
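The order-by-order hierarchy above can be generated mechanically. A sketch in the 0+1-dimensional analogue — an anharmonic oscillator φ(t), which is my simplification; the structure of the equations at each order in g is the same:

```python
import sympy as sp

t, m, g = sp.symbols('t m g')
phi0 = sp.Function('phi0')(t)
phi1 = sp.Function('phi1')(t)

phi = phi0 + g * phi1                  # expansion phi = sum_n g^n phi_n
# equation of motion phi'' + m^2 phi + g phi^3 = 0, expanded in g
eom = sp.expand(sp.diff(phi, t, 2) + m**2 * phi + g * phi**3)

order0 = eom.coeff(g, 0)   # phi0'' + m^2 phi0             (free equation)
order1 = eom.coeff(g, 1)   # phi1'' + m^2 phi1 + phi0^3    (sourced by phi0)
```

Solving each order with the Green's function of the free operator, as the answer does in 3+1 dimensions, then generates exactly the tree-level diagrams.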
27,402 | This question was listed as one of the questions in the proposal (see here ), and I didn't know the answer. I don't know the ethics on blatantly stealing such a question, so if it should be deleted or be changed to CW then I'll let the mods change it. Most foundations of statistical mechanics appeal to the ergodic hypothesis . However, this is a fairly strong assumption from a mathematical perspective. There are a number of results frequently used in statistical mechanics that are based on Ergodic theory. In every statistical mechanics class I've taken and nearly every book I've read, the assumption was made based solely on the justification that without it calculations become virtually impossible. Hence, I was surprised to see that it is claimed (in the first link) that the ergodic hypothesis is "absolutely unnecessary". The question is fairly self-explanatory, but for a full answer I'd be looking for a reference containing development of statistical mechanics without appealing to the ergodic hypothesis, and in particular some discussion about what assuming the ergodic hypothesis does give you over other foundational schemes. | The ergodic hypothesis is not part of the foundations of statistical mechanics. In fact, it only becomes relevant when you want to use statistical mechanics to make statements about time averages. Without the ergodic hypothesis statistical mechanics makes statements about ensembles, not about one particular system. To understand this answer you have to understand what a physicist means by an ensemble. It is the same thing as what a mathematician calls a probability space. The “Statistical ensemble” wikipedia article explains the concept quite well. It even has a paragraph explaining the role of the ergodic hypothesis. The reason why some authors make it look as if the ergodic hypothesis was central to statistical mechanics is that they want to give you a justification for why they are so interested in the microcanonical ensemble. 
And the reason they give is that the ergodic hypothesis holds for that ensemble when you have a system for which the time it spends in a particular region of the accessible phase space is proportional to the volume of that region. But that is not central to statistical mechanics. Statistical mechanics can be done with other ensembles, and furthermore there are other ways to justify the canonical ensemble, for example it is the ensemble that maximises entropy. A physical theory is only useful if it can be compared to experiments. Statistical mechanics without the ergodic hypothesis, which makes statements only about ensembles, is only useful if you can make measurements on the ensemble. This means that it must be possible to repeat an experiment again and again and the frequency of getting particular members of the ensemble should be determined by the probability distribution of the ensemble that you used as the starting point of your statistical mechanics calculations. Sometimes, however, you can only experiment on one single sample from the ensemble. In that case statistical mechanics without an ergodic hypothesis is not very useful because, while it can tell you what a typical sample from the ensemble would look like, you do not know whether your particular sample is typical. This is where the ergodic hypothesis helps. It states that the time average taken in any particular sample is equal to the ensemble average. Statistical mechanics allows you to calculate the ensemble average. If you can make measurements on your one sample over a sufficiently long time you can take the average and compare it to the predicted ensemble average and hence test the theory. So in many practical applications of statistical mechanics, the ergodic hypothesis is very important, but it is not fundamental to statistical mechanics, only to its application to certain sorts of experiments. 
In this answer I took the ergodic hypothesis to be the statement that ensemble averages are equal to time averages. To add to the confusion, some people say that the ergodic hypothesis is the statement that the time a system spends in a region of phase space is proportional to the volume of that region. These two are the same when the ensemble chosen is the microcanonical ensemble. So, to summarise: the ergodic hypothesis is used in two places: To justify the use of the microcanonical ensemble. To make predictions about the time average of observables. Neither is central to statistical mechanics, as 1) statistical mechanics can and is done for other ensembles (for example those determined by stochastic processes) and 2) often one does experiments with many samples from the ensemble rather than with time averages of a single sample. | {
"source": [
"https://physics.stackexchange.com/questions/27402",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4463/"
]
} |
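The ensemble-average/time-average distinction can be seen in a toy stochastic system: a two-state Markov chain whose stationary (ensemble) distribution is known exactly. A sketch (the transition probabilities are arbitrary choices):

```python
import random

# transition probabilities: P[s] = (prob of moving to 0, prob of moving to 1)
P = {0: (0.9, 0.1),
     1: (0.2, 0.8)}
# the stationary distribution solves pi P = pi  ->  pi = (2/3, 1/3)
ensemble_avg = 0 * (2 / 3) + 1 * (1 / 3)   # ensemble average of f(s) = s

random.seed(0)
state, total, steps = 0, 0, 200_000
for _ in range(steps):
    state = 0 if random.random() < P[state][0] else 1
    total += state
time_avg = total / steps                   # time average along one trajectory
```

For this ergodic chain the long-time average agrees with the ensemble average; for a reducible chain started in a trapping state the two would differ — which is exactly the role the ergodic hypothesis plays in the answer.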
27,425 | Whenever I have encountered the rotating wave approximation, I have seen "the terms that we are neglecting correspond to rapid oscillations in the interaction Hamiltonian, so they will average to 0 in a reasonable time scale" as the justification for its use. However, it is not completely clear to me why this justifies that the Hamiltonian we obtain is a good approximation of the original one, and I was wondering if there is a more rigorous version of the justification, whether it is for a particular system, or in a more general case. As an example, something that would be a satisfying answer would be a result of the form "If you consider an arbitrary state of the system and any time t large enough, and evolve the system according to the RWA Hamiltonian, we obtain with high probability a state close to the one we would obtain under evolution of the original Hamiltonian". "t large enough", "close" and "high probability" would preferably have some good quantitative description. | The rotating wave approximation (RWA) is well justified in the regime of a small perturbation. In this limit you can neglect the so-called Bloch-Siegert and Stark shifts. You can find an explanation in this paper. But, in order to make this explanation self-contained, I will give an idea with the following model $$H=\Delta\sigma_3+V_0\sin(\omega t)\sigma_1$$ where, as usual, the $\sigma_i$ are the Pauli matrices. 
You can easily work out a small perturbation series for this Hamiltonian working in the interaction picture with $$H_I=e^{-\frac{i}{\hbar}\Delta\sigma_3t}V_0\sin(\omega t)\sigma_1e^{\frac{i}{\hbar}\Delta\sigma_3t}$$ producing, with a Dyson series, the following next-to-leading order correction $${\cal T}\exp\left[-\frac{i}{\hbar}\int_0^tH_I(t')dt'\right]=I-\frac{i}{\hbar}\int_0^t dt' V_0\sin(\omega t')e^{-\frac{i}{\hbar}\Delta\sigma_3t'}\sigma_1e^{\frac{i}{\hbar}\Delta\sigma_3t'}+\ldots.$$ Now, let us suppose that your system is in the eigenstate $|0\rangle$ of the unperturbed Hamiltonian. You will get $$|\psi(t)\rangle=|0\rangle-\frac{i}{\hbar}\int_0^t dt' V_0\sin(\omega t')e^{-\frac{2i}{\hbar}\Delta t'}\sigma_+|0\rangle+\ldots$$ $$=|0\rangle-\frac{1}{2\hbar}\int_0^t dt' V_0\left(e^{i\omega t'-\frac{2i}{\hbar}\Delta t'}-e^{-i\omega t'-\frac{2i}{\hbar}\Delta t'}\right)\sigma_+|0\rangle$$ Now, very near the resonance $\omega\approx2\Delta$, one term is overwhelmingly large with respect to the other and one can write down $$|\psi\rangle\approx|0\rangle-\frac{V_0}{2\hbar}t\sigma_+|0\rangle+\ldots.$$ but in the original Hamiltonian this boils down to $$H_I=V_0\sigma_1\sin(\omega t)\left(\cos(2\Delta t)+i\sigma_3\sin(2\Delta t)\right)$$ $$=\frac{V_0}{2}\sigma_1\left(\sin((\omega-2\Delta)t)+\sin((\omega+2\Delta)t)\right)$$ $$+\frac{V_0}{2}\sigma_2\left(\cos((\omega-2\Delta)t)-\cos((\omega+2\Delta)t)\right)$$ $$\approx \frac{V_0}{2}\sigma_2$$ with all the counter-rotating terms properly neglected with the condition $\omega\approx 2\Delta$ applied. It is essential to emphasize that, as the applied field increases, this approximation becomes even less reliable and it is just the leading order of a perturbation series in a near-resonance regime. | {
"source": [
"https://physics.stackexchange.com/questions/27425",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1773/"
]
} |
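The near-resonance estimate above is easy to check numerically. The sketch below is my own illustration, not part of the original answer; it sets $\hbar = 1$, evolves the model $H = \Delta\sigma_3 + V_0\sin(\omega t)\sigma_1$ with a midpoint-exponential integrator, and compares the excited-state population with the Rabi formula $\sin^2(V_0 t/2)$ implied by the RWA Hamiltonian $\frac{V_0}{2}\sigma_2$:

```python
import numpy as np

# Pauli matrices (hbar = 1 throughout)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

Delta, V0 = 1.0, 0.05        # weak drive: V0 << Delta
omega = 2 * Delta            # the resonance condition omega ~ 2*Delta
dt = 0.005
T = np.pi / V0               # half a Rabi period predicted by H_I ~ (V0/2) sigma_2

def step(psi, t):
    """One step: exact exponential of the (traceless) midpoint Hamiltonian."""
    h = np.array([V0 * np.sin(omega * (t + dt / 2)), 0.0, Delta])
    hn = np.linalg.norm(h)
    n_dot_sigma = (h[0] * sx + h[1] * sy + h[2] * sz) / hn
    U = np.cos(hn * dt) * I2 - 1j * np.sin(hn * dt) * n_dot_sigma
    return U @ psi

psi = np.array([1, 0], dtype=complex)   # start in a sigma_3 eigenstate
t = 0.0
while t < T:
    psi = step(psi, t)
    t += dt

p_full = abs(psi[1]) ** 2            # excited population under the full H(t)
p_rwa = np.sin(V0 * t / 2) ** 2      # prediction of the RWA Hamiltonian
print(p_full, p_rwa)                 # both close to 1 for V0 << Delta
```

For this weak drive the two populations agree to a few percent; increasing $V_0$ makes the discrepancy (the Bloch-Siegert shift and the fast counter-rotating wiggles) visible, consistent with the answer's closing remark.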
27,473 | I am looking for references on how to obtain continuum theories from lattice theories. There are basically a few questions that I am interested in, but any references are welcome. For example, you can obtain the Ising chiral CFT from a lattice theory. How does this work exactly? Intuitively it is clear that one should do something like taking the lattice spacing to zero. Is this worked out somewhere in detail for this example? One can also imagine, say, quantum spin models with sites on the edges of some graph, such that the interactions do not depend on the distance between the sites. One can imagine subdividing this graph further, to obtain an inclusion of the associated algebras of observables. This leads to an increasing sequence of algebras, and one can take the direct limit of this. Can one in this way obtain a continuum theory? I suppose that one might have to impose some conditions on the dynamics of the system at each step. Is something like this done in the literature? I'm mainly interested in a mathematical treatment of these topics. | To take a meaningful continuum limit, essentially, you need to be in a regime where your field is smooth enough that a gradient expansion is possible. This is usually achieved by associating a very high energy cost to field configurations that take different values on nearest neighbours in the lattice. The continuum limit of $O(n)$ models is worked out in Fradkin's book, Field Theories of Condensed Matter Systems. For the Ising model a direct continuum limit is problematic because the discrete values of the spin make it impossible to directly elevate the Ising spin to a continuum field. Usually, any continuum limits have to be defined by some sort of coarse graining and working with the resulting mean magnetization. For the Ising model, this is worked out by Milchev, A., Heermann, D.W. & Binder, K. in J. Stat. Phys. 44, 749 (1986). Hope that helps. | {
"source": [
"https://physics.stackexchange.com/questions/27473",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5192/"
]
} |
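The coarse-graining route mentioned at the end of the answer above can be illustrated with a toy block-spin ("majority rule") transformation. This is a generic sketch of the idea added for illustration, not the specific construction of the cited references:

```python
import numpy as np

rng = np.random.default_rng(0)

def block_spin(spins, b=2):
    """Majority-rule block-spin step: replace each b x b block of +/-1
    spins by the sign of its mean magnetization (ties broken to +1)."""
    L = spins.shape[0]
    block_means = spins.reshape(L // b, b, L // b, b).mean(axis=(1, 3))
    return np.where(block_means >= 0, 1, -1)

# A mostly-aligned sample configuration, mimicking the ordered phase.
L = 64
spins = np.where(rng.random((L, L)) < 0.9, 1, -1)

coarse = block_spin(spins)
# Coarse graining keeps a +/-1 description but sharpens the magnetization.
print(spins.mean(), coarse.mean(), coarse.shape)
```

Iterating such a step and tracking the block magnetization is the simplest caricature of how one trades the discrete Ising spin for a smoother coarse-grained field.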
27,662 | The AdS/CFT correspondence states that string theory in an asymptotically anti-De Sitter spacetime can be exactly described as a CFT on the boundary of this spacetime. Is the converse true? Does any CFT in a suitable number of spacetime dimensions have an AdS/CFT dual? If no, can we characterize the CFTs which have such a dual? | The answer is not known, but many believe it is: "Yes, every CFT has an AdS dual." However, whether the AdS dual is weakly-coupled and has low curvature -- in other words whether it's easy to do calculations with it -- is a different question entirely. We expect, based on well-understood examples (like $\mathcal N=4$ SYM dual to Type IIB strings on $\mathrm{AdS}_5 \times S^5$), that the following is true: For the AdS dual to be weakly-coupled, the CFT must have a large gauge group. For the AdS curvature scale to be small (so that effective field theory is a good approximation), the CFT must be strongly-coupled. In well-understood examples, the CFT has an exactly marginal coupling which when taken to infinity decouples stringy states from the bulk spectrum. By contrast, at weak CFT coupling, the AdS dual description would involve an infinite number of fields and standard EFT methods would not apply. (This doesn't necessarily mean calculations are impossible: we would just need to better understand string theories in AdS -- something which is actively being worked on.) As far as I know, appropriate conditions for CFTs without exactly marginal couplings to have good AdS EFTs are not known. Also, well-understood AdS/CFT dual pairs where the CFT violates one or both of the above conditions are scarce. | {
"source": [
"https://physics.stackexchange.com/questions/27662",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
27,665 | The purpose of this question is to ask about the role of mathematical rigor in physics. In order to formulate a question that can be answered, and not just discussed, I divided this large issue into five specific questions. Update February 12, 2018: Since the question was put yesterday on hold as too broad, I ask future answers to refer only to questions one and two listed below. I will ask separate questions on items 3 and 4. Any information on question 5 can be added as a remark. What are the most important and the oldest insights (notions, results) from physics that are still lacking rigorous mathematical
formulation/proofs. The endeavor of rigorous mathematical explanations, formulations, and proofs for notions and results from physics is mainly undertaken by
mathematicians. What are examples where this endeavor was beneficial to
physics itself. What are examples that insisting on rigour delayed progress in physics. What are examples that solid mathematical understanding of certain issues from physics came from further developments in physics itself. (In particular, I am interested in cases where mathematically rigorous understanding of issues from classical mechanics required quantum mechanics, and also in cases where progress in physics was crucial to rigorous mathematical solutions of questions in mathematics not originating in physics.) The role of rigor is intensely discussed in popular books and blogs. Please supply references (or better annotated references) to academic studies of the role of mathematical rigour in modern physics. (Of course, I will be also thankful to answers which elaborate on a single item related to a single question out of these five questions. See update ) Related Math Overflow questions: Examples-of-non-rigorous-but-efficient-mathematical-methods-in-physics (related to question 1); Examples-of-using-physical-intuition-to-solve-math-problems ; Demonstrating-that-rigour-is-important . | Rigor is clarity of concepts and precision of arguments. Therefore in the end there is no question that we want rigor. To get there we need freedom for speculation, first, but for good speculation we need "...solid ground, which is the only ground that serves as a good jumping-off point for further speculation", in the words of our review, which is all about this issue. Sometimes physicists behave as if rigor is all about replacing an obvious but non-precise argument with a tedious and boring proof. But more often than not rigor is about identifying the precise and clear definitions such that the obvious argument becomes also undoubtedly correct. There are many historical examples. For instance the simple notion of differential forms and exterior derivatives.
It's not a big deal in the end, but when they were introduced into physics they not only provided rigor for a multitude of vague arguments about infinitesimal variation and extended quantity; maybe more importantly, they clarified structure. Maxwell still filled two pages with the equations of electromagnetism at a time when even the concepts of linear algebra were an arcane mystery. Today we say just $d \star d A = j_{el}$ and see much further, for instance derive the charge quantization law rigorously with child's ease. The clear and precise concept is what does this for us. And while probably engineers could (and maybe do?) work using Maxwell's original concepts, the theoreticians would have been stuck. One can't see the subtleties of self-dual higher gauge theory, for instance, without the rigorous concept of de Rham theory. There are many more examples like this. Here is another one: rational CFT was "fully understood" and declared solved at a non-rigorous level for a long time. When the rigorous FRS-classification of full rational CFT was established, it not only turned out that some of the supposed rational CFT constructions in the literature did not actually exist, while others existed that had been missed; more importantly, suddenly it was very clear why and which of these examples exist. Based on the solid ground of this new rigor, it is now much easier to base new non-rigorous arguments that go much further than one could do before, for instance about the behaviour of rational CFT in holography. Rigor is about clarity and precision, which is needed for seeing further. As Ellis Cooper just said elsewhere: Rigor cleans the window through which intuition shines. | {
"source": [
"https://physics.stackexchange.com/questions/27665",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1360/"
]
} |
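To unpack the $d \star d A = j_{el}$ shorthand in the answer above for readers unfamiliar with the notation (a standard textbook statement, added here for convenience): writing the field strength as the 2-form $F = dA$ and the electric current as a 3-form $j_{el}$, all of Maxwell's equations compress to

```latex
% The homogeneous Maxwell equations are automatic once F = dA;
% the inhomogeneous ones fit on one line.
\begin{aligned}
  dF &~=~ 0, && \text{(Faraday's law, no magnetic charge)}\\
  d{\star}F &~=~ j_{el}, && \text{(Gauss and Amp\`ere--Maxwell)}
\end{aligned}
\qquad F ~=~ dA
\quad\Longrightarrow\quad
d{\star}dA ~=~ j_{el}.
```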
27,700 | I'm a student of mathematics with not much background in physics. I'm interested in learning Quantum field theory from a mathematical point of view. Are there any good books or other reference material which can help in learning about quantum field theory?
What areas of mathematics should I be familiar with before reading about Quantum field theory? | Let me add just a couple of things to what was already mentioned. I do think that the best source for QFT for mathematicians is the two IAS volumes. But since those are fairly long
and some parts are not easy for mathematicians (I participated a little in writing those down, and I know that largely it was written by people who at the time didn't understand well what
they were writing about), so if you really want to understand the subject in the mathematical way, I would suggest the following order: 1) Make sure you understand quantum mechanics well (there are many mathematical introductions to quantum mechanics; the one I particularly like is the book by Faddeev and Yakubovsky http://www.amazon.com/Lectures-Mechanics-Mathematics-Students-Mathematical/dp/082184699X ) 2) Get some understanding what quantum field theory (mathematically) is about. The source which I like here is the Wightman axioms (as something you might wish for in QFT, but which almost never holds) as presented in the 2nd volume of the book by Reed and Simon on functional analysis; for a little bit more thorough discussion look at Kazhdan's lectures in the IAS volumes. 3) Understand how 2-dimensional conformal field theory works.
If you want a more elementary and more analytic (and more "physical") introduction - look at Gawedzki's lectures in the IAS volumes. If you want something more algebraic, look at Gaitsgory's notes in the same place. 4) Study perturbative QFT (Feynman diagrams): this is well-covered in IAS volumes
(for a mathematician; a physicist would need a lot more practice than what is done there), but on the spot I don't remember exactly where (but should be easy to find). 5) Try to understand how super-symmetric quantum field theories work. This subject is the hardest for mathematicians but it is also the source of most applications to mathematics.
This is discussed in Witten's lectures in the 2nd IAS volume (there are about 20 of those, I think) and this is really not easy - for example it requires good working knowledge of some aspects of super-differential geometry (also discussed there), which is a purely mathematical subject but there are very few mathematicians who know it. There are not many mathematicians who went through all of this, but if you really want
to be able to talk to physicists, I think something like the above scheme is necessary
(by the way: I didn't include string theory in my list - this is an extra subject; there is a good introduction to it in D'Hoker's lectures in the IAS volumes). Edit: In addition, if you want a purely mathematical introduction to Topological Field Theory, then you can read Segal's notes http://web.archive.org/web/20000901075112/http://www.cgtp.duke.edu/ITP99/segal/ ;
this is a very accessible (and pleasant) reading! A modern (and technically much harder) mathematical approach to the same subject is developed by Jacob Lurie http://www.math.harvard.edu/~lurie/papers/cobordism.pdf (there is no physical motivation in that paper, but mathematically this is probably the right way to think about topological field theories). | {
"source": [
"https://physics.stackexchange.com/questions/27700",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6068/"
]
} |
28,069 | Water in my electric kettle makes the most noise sixty to ninety seconds before the water comes to a full boil. I have been fooled many times by the noisy kettle, only to discover that the water was not yet hot enough for tea. The kettle is only at a full boil after the noise has subsided. I have noticed the same phenomenon with many other kettles, including conventional kettles on kitchen ranges; it is not a peculiarity of this electric kettle. Why does the boiling become quieter as the water reaches full boil? | There are three phenomena that occur before vigorous boiling of water that produce sound. 1) Air dissolved in water on heating forms small air bubbles at the bottom of the container. These air bubbles get released from the bottom of the container on reaching a sufficient size. The process of release produces a sound of frequency ~ 100 Hz. 2) On boiling, small vapor bubbles get produced at the bottom of the container and also produce sound of ~ 100 Hz on release. However, they cool down before they reach the surface of water and collapse. This collapsing produces a sound of frequency ~ 1 kHz. 3) Collapsing vapor bubbles agitate the water to release small micro air bubbles from water and also from the air trapped in the vapor bubble. This production of micro air bubbles produces a sound of ~ 35–60 kHz. I guess you were talking about either the first or the second case. Both of them occur before you observe vigorous boiling of water and you can hear them. There was an interesting problem posed at APHO 2008 which is the same as your question and which estimates the frequencies I just quoted. It also contains references to the experimental measurement of these frequencies. I hope you will find it more interesting to solve it yourself than to have me answer it: Theoretical Problem 1. Tea Ceremony and Physics of Bubbles You can also find the solutions here in case you need help on this: Theoretical Solution 1, 9th Asian Physics Olympiad | {
"source": [
"https://physics.stackexchange.com/questions/28069",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9062/"
]
} |
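The kilohertz scale quoted for collapsing bubbles in the answer above can be cross-checked against the Minnaert resonance formula for a gas bubble in water, $f = \frac{1}{2\pi a}\sqrt{3\gamma p_0/\rho}$. This formula is not used in the answer itself; it is a standard back-of-the-envelope estimate for bubble acoustics, added here for illustration:

```python
import math

def minnaert_frequency(radius, gamma=1.4, p0=101325.0, rho=1000.0):
    """Minnaert resonance frequency (Hz) of a gas bubble of the given
    radius (m) in a liquid of density rho at ambient pressure p0."""
    return math.sqrt(3 * gamma * p0 / rho) / (2 * math.pi * radius)

# Millimetre-scale bubbles ring at roughly a kilohertz, consistent with
# the ~1 kHz quoted for collapsing bubbles; smaller bubbles ring higher.
for a in (3e-3, 3e-4):
    print(f"a = {a * 1e3:.1f} mm -> f = {minnaert_frequency(a):.0f} Hz")
```

A 3 mm bubble comes out near 1 kHz, which is why the dominant pitch of a heating kettle sits in that range.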
28,111 | A bit of background helps frame this question. The question itself is in the last sentence. For his PhD thesis, Richard Feynman and his thesis adviser John Archibald Wheeler devised an astonishingly strange approach to explaining electron-electron interactions without using a field. Their solution was to take the everyday retarded wave solution to Maxwell's equations and mix it 50/50 with the advanced (backwards in time) solution that up until then had always been discarded as "obviously" violating temporal causality. In a series of papers they showed that this was not the case, and that the recoil of an electron when it emits a photon could self-consistently be explained as the result of an advanced photon traveling backwards in time and impacting the electron at the same instant in which the electron emits a forward-in-time photon. Feynman's thesis ideas deeply influenced his subsequent development of QED, e.g. in QED's backwards-in-time interpretation of antimatter electrons (positrons). Nonetheless, Feynman in a letter to Wheeler later famously retracted (well, famously for some of us) the specific idea of paired photons traveling forwards and backwards in time. Feynman's specific reason for abandoning his thesis premise was vacuum polarization, which cannot be explained by direct electron-to-electron interactions. (Vacuum polarization is easily accommodated by QED, however.) Feynman's abandonment of the original Feynman/Wheeler retarded/advanced photon scheme has always troubled me. The reason is this: If their original idea was completely invalid, the probability from an information correlation perspective of the idea leading to accurate predictions of how physics operates should have been vanishingly small. Instead, their oddly arbitrary 50/50 mix of real waves and hypothesized backwards-in-time waves almost works, all the way down to the minute length scales at which vacuum polarization becomes significant.
One analogy is that the Feynman/Wheeler scheme behaves like a slightly broken mathematical symmetry, one that correctly describes reality over almost the entire range of phenomena to which it applies, but then breaks down at one of the extrema of its range. My question, finally, is this: Does there exist a clear conceptual explanation, perhaps in the QED description of vacuum polarization for example, of why the Feynman/Wheeler retarded/advanced model of paired photons traveling two directions in time provides an accurate model of reality overall, despite its being incorrect when applied at very short distances? Addendum 2012-05-30 If I've understood @RonMaimon correctly -- and I certainly still do not fully understand the S-matrix part of his answer -- his central answer to my question is both simple and highly satisfying: Feynman did not abandon the backward-forward scheme at all, but instead abandoned the experimentally incorrect idea that an electron cannot interact with itself. So, his objection to Wheeler could perhaps be paraphrased in a more upbeat form into something more like this: "Vacuum polarization shows that the electron does indeed interact with itself, so I was wrong about that. But your whole backwards-and-forwards in time idea works very well indeed -- I got a Nobel Prize for it -- so thanks for pointing me in that direction!" Answer to Ron, and my thanks. | The main idea of Feynman Wheeler theory is to use propagators which are non-causal, that can go forward and backward in time. This makes no sense in the Hamiltonian framework, since the backward in time business requires a formalism that is not rigidly stepping from timestep to timestep. Once you give up on a Hamiltonian, you can also ask that the formalism be manifestly relativistically invariant. This led Feynman to the Lagrangian formalism, and the path integral.
The only reason the Feynman Wheeler idea doesn't work is simply because of the arbitrary idea that an electron doesn't act on itself, and this is silly. Why can't an electron emit and later absorb the same photon? Forbidding this is ridiculous, and creates a nonsense theory. This is why Feynman says he abandons the theory. But this was the motivating idea--- to get rid of the classical infinity by forbidding self-interaction. But the result was much deeper than the motivating idea. Feynman never abandons the non-causal propagator, this is essential to the invariant particle picture that he creates later. But later, he makes a similar non-causal propagator for electrons, and figures out how to couple the quantum electrons to the photon without using local fields explicitly, beyond getting the classical limit right. This is a major tour-de-force, since he is essentially deriving QED from the requirement of relativistic invariance, unitarity, the spin of the photon and electron, plus gauge-invariance/minimal coupling (what we would call today the requirement of renormalizability). These arguments have been streamlined and extended since by Weinberg: you derive a quantum field theory from unitarity, relativistic invariance, plus a postulate on a small number of fundamental particles with a given spin < 1. In Feynman's full modern formalism, the propagators still go forward and backward in time just like the photon in Wheeler-Feynman, the antiparticle goes backward, and the particle forward (the photon is its own antiparticle). The original motivation for these discoveries is glossed over by Feynman a little, they come from Wheeler's focus on the S-matrix as the correct physical observable. Wheeler discovered the S-matrix in 1938, and always emphasized S-matrix centered computations. Feynman never was so gung-ho on S-matrix, and became an advocate of Schwinger style local fields, once he understood that the particle and field picture are complementary.
He felt that the focus on S-matrix made him work much harder than he had to, he could have gotten the same results much easier (as Schwinger and Dyson did) using the extra physics of local fields. So the only part of Wheeler-Feynman that Feynman abandoned is the idea that particles don't interact with themselves. Other than that, the Feynman formalism for QED is pretty much mathematically identical to the Wheeler-Feynman formalism for classical electrodynamics, except greatly expanded and correctly quantum. If Feynman hadn't started with backward in time propagation, it isn't clear the rest would have been so easy to formulate. The mathematical mucking around with non-causal propagators did produce the requisite breakthrough. It must be noted that Schwinger also had the same non-causal propagators, which he explicitly parametrized by the particle proper time. He arrived at it by a different path, from local fields. However they were both scooped by Stueckelberg, who was the true father of the modern methods, and who was neglected for no good reason. Stueckelberg was also working with local fields. It was only Feynman, following Wheeler, who derived this essentially from a pure S-matrix picture, and the equivalence of the result to local fields made him and many others sure that S-matrix and local fields are simply two complementary ways to describe relativistic quantum physics. This is not true, as string theory shows. There are pure S-matrix theories that are not equivalent to local quantum fields. Feynman was skeptical of strings, because they were S-matrix, and he didn't like S-matrix, having been burned by it in this way. | {
"source": [
"https://physics.stackexchange.com/questions/28111",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7670/"
]
} |
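For concreteness, the 50/50 mixture discussed in the question above is the time-symmetric potential of Wheeler-Feynman electrodynamics. In the notation usually used for it (added here as a reference point; the absorber condition is stated in its standard form),

```latex
% Time-symmetric field of source k, and the complete-absorber condition
% that makes the theory look like ordinary retarded radiation plus
% radiation reaction:
A^{(k)\mu} ~=~ \tfrac{1}{2}A^{(k)\mu}_{\rm ret} + \tfrac{1}{2}A^{(k)\mu}_{\rm adv},
\qquad
\sum_k \tfrac{1}{2}\left(A^{(k)\mu}_{\rm ret} - A^{(k)\mu}_{\rm adv}\right) ~=~ 0 .
```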
28,297 | In general relativity (ignoring Hawking radiation), why is a black hole black? Why nothing, not even light, can escape from inside a black hole? To make the question simpler, say, why is a Schwarzschild black hole black? | It's surprisingly hard to explain in simple terms why nothing, not even light, can escape from a black hole once it has passed the event horizon. I'll try and explain with the minimum of maths, but it will be hard going. The first point to make is that nothing can travel faster than light, so if light can't escape then nothing can. So far so good. Now, we normally describe the spacetime around a black hole using the Schwarzschild metric: $$ds^2 = -\left(1-\frac{2M}{r}\right)dt^2 + \left(1-\frac{2M}{r}\right)^{-1}dr^2 + r^2 d\Omega^2$$ but the trouble is that the Schwarzschild time, $t$, isn't a good coordinate to use at the event horizon because there is infinite time dilation. You might want to look at my recent post Why is matter drawn into a black hole not condensed into a single point within the singularity? for some background on this. Now, we're free to express the metric in any coordinates we want, because it's coordinate independent, and it turns out the best (well, simplest anyway!) coordinates to use for this problem are the Gullstrand–Painlevé coordinates . In these coordinates $r$ is still the good old radial distance, but $t$ is now the time measured by an observer falling towards the black hole from infinity. This free falling coordinate system is known as the "rainfall" coordinates and we call the time $t_r$ to distinguish it from the Schwarzschild time. Anyhow, I'm going to gloss over how we convert the Schwarzschild metric to Gullstrand–Painlevé coordinates and just quote the result: $$ds^2 = \left(1-\frac{2M}{r}\right)dt_r^2 - 2\sqrt{\frac{2M}{r}}dt_rdr - dr^2 -r^2d\theta^2 - r^2sin^2\theta d\phi^2$$ This looks utterly hideous, but we can simplify it a lot. 
We're going to consider the motion of light rays, and we know that for light rays $ds^2$ is always zero. Also we're only going to consider light moving radially outwards so $d\theta$ and $d\phi$ are zero. So we're left with a much simpler equation: $$0 = \left(1-\frac{2M}{r}\right)dt_r^2 - 2\sqrt{\frac{2M}{r}}dt_rdr - dr^2$$ You may think this is a funny definition of simple, but actually the equation is just a quadratic. I can make this clear by dividing through by $dt_r^2$ and rearranging slightly to give: $$ - \left(\frac{dr}{dt_r}\right)^2 - 2\sqrt{\frac{2M}{r}}\frac{dr}{dt_r} + \left(1-\frac{2M}{r}\right) = 0$$ and just using the equation for solving a quadratic gives: $$ \frac{dr}{dt_r} = -\sqrt{\frac{2M}{r}} \pm 1 $$ And we're there! The quantity $dr/dt_r$ is the radial velocity (in these slightly odd coordinates). There's a $\pm$ in the equation, as there is for all quadratics, and the -1 gives us the velocity of the inbound light beam while the +1 gives us the outbound velocity. If we're at the event horizon $r = 2M$, so just substituting this into the equation above for the outbound light beam gives us: $$ \frac{dr}{dt_r} = 0 $$ Tada! At the event horizon the velocity of the outbound light beam is zero so light can't escape from the black hole. In fact for $r < 2M$ the outbound velocity is negative, so not only can light not escape but the best it can do is move towards the singularity. | {
"source": [
"https://physics.stackexchange.com/questions/28297",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
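The quadratic solved in the answer above is easy to verify symbolically. The following sketch is my own addition (it assumes sympy is available and uses the answer's $G = c = 1$ units): it solves the null condition and confirms that the outbound branch vanishes at $r = 2M$.

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
v = sp.Symbol('v')  # dr/dt_r of a radial light ray

# Null condition in Gullstrand-Painleve coordinates, divided by dt_r^2:
null_eq = sp.Eq(-v**2 - 2 * sp.sqrt(2 * M / r) * v + (1 - 2 * M / r), 0)

sols = sp.solve(null_eq, v)
print(sols)  # the two branches -sqrt(2M/r) -/+ 1

# Pick the outbound branch (the larger velocity far from the hole) and
# check that it vanishes exactly at the horizon r = 2M.
outbound = max(sols, key=lambda s: float(s.subs({M: 1, r: 100})))
print(sp.simplify(outbound.subs(r, 2 * M)))  # -> 0
```

Substituting any $r < 2M$ into the outbound branch gives a negative value, matching the answer's remark that inside the horizon even outgoing light moves toward the singularity.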
28,505 | This question is based on problem II.3.1 in Anthony Zee's book Quantum Field Theory in a Nutshell: Show, by explicit calculation, that $(1/2,1/2)$ is the Lorentz Vector. I see that the generators of $SU(2)$ are the Pauli matrices and the generators of $SO(3,1)$ form a matrix composed of two Pauli matrices along the diagonal. Is it always the case that the direct product of two groups is formed from the generators like this? I ask this because I'm trying to write a Lorentz boost as two simultaneous quaternion rotations [unit quaternion rotations are isomorphic to $SU(2)$ ] and transform between the two methods. Is this possible? In other words, how do I construct the $SU(2)$ representation of the Lorentz Group using the fact that $SU(2)\times SU(2) \sim SO(3,1)$ ? Here is some background information: Zee has shown that the algebra of the Lorentz group is formed from two separate $SU(2)$ algebras [ $SO(3,1)$ is isomorphic to $SU(2)\times SU(2)$ ] because the Lorentz algebra satisfies: $$\begin{align}[J_{+i},J_{+j}] &= ie_{ijk}J_{k+} &
[J_{-i},J_{-j}] &= ie_{ijk} J_{k-} &
[J_{+i},J_{-j}] &= 0\end{align}$$ The representations of $SU(2)$ are labeled by $j=0,\frac{1}{2},1,\ldots$ so the $SU(2)\times SU(2)$ rep is labelled by $(j_+,j_-)$ with the $(1/2,1/2)$ being the Lorentz 4-vector, because each representation contains $(2j+1)$ elements, so $(1/2,1/2)$ contains 4 elements. | Here is a mathematical derivation. We use the sign convention $(+,-,-,-)$ for the Minkowski metric $\eta_{\mu\nu}$. I) First recall the fact that $SL(2,\mathbb{C})$ is (the double cover of) the restricted Lorentz group $SO^+(1,3;\mathbb{R})$. This follows partly because: There is a bijective isometry from the Minkowski space $(\mathbb{R}^{1,3},||\cdot||^2)$ to the space of $2\times 2$ Hermitian matrices $(u(2),\det(\cdot))$,
$$\mathbb{R}^{1,3} ~\cong ~ u(2)
~:=~\{\sigma\in {\rm Mat}_{2\times 2}(\mathbb{C}) \mid \sigma^{\dagger}=\sigma \}
~=~ {\rm span}_{\mathbb{R}} \{\sigma_{\mu} \mid \mu=0,1,2,3\}, $$
$$\mathbb{R}^{1,3}~\ni~\tilde{x}~=~(x^0,x^1,x^2,x^3) \quad\mapsto \quad\sigma~=~x^{\mu}\sigma_{\mu}~\in~ u(2), $$
$$ ||\tilde{x}||^2 ~=~x^{\mu} \eta_{\mu\nu}x^{\nu} ~=~\det(\sigma), \qquad \sigma_{0}~:=~{\bf 1}_{2 \times 2}.\tag{1}$$ There is a group action $\rho: SL(2,\mathbb{C})\times u(2) \to u(2)$ given by
$$g\quad \mapsto\quad\rho(g)\sigma~:= ~g\sigma g^{\dagger},
\qquad g\in SL(2,\mathbb{C}),\qquad\sigma\in u(2), \tag{2}$$
which is length preserving, i.e. $g$ is a pseudo-orthogonal (or Lorentz) transformation.
In other words, there is a Lie group homomorphism $$\rho: SL(2,\mathbb{C}) \quad\to\quad O(u(2),\mathbb{R})~\cong~ O(1,3;\mathbb{R}) .\tag{3}$$ Since $\rho$ is a continuous map and $SL(2,\mathbb{C})$ is a connected set, the image $\rho(SL(2,\mathbb{C}))$ must again be a connected set. In fact, one may show that there is a surjective Lie group homomorphism $^1$ $$\rho: SL(2,\mathbb{C}) \quad\to\quad SO^+(u(2),\mathbb{R})~\cong~ SO^+(1,3;\mathbb{R}) , $$
$$\rho(\pm {\bf 1}_{2 \times 2})~=~{\bf 1}_{u(2)}.\tag{4}$$ The Lie group $SL(2,\mathbb{C})=\pm e^{sl(2,\mathbb{C})}$ has Lie algebra $$ sl(2,\mathbb{C})
~=~ \{\tau\in{\rm Mat}_{2\times 2}(\mathbb{C}) \mid {\rm tr}(\tau)~=~0 \}
~=~{\rm span}_{\mathbb{C}} \{\sigma_{i} \mid i=1,2,3\}.\tag{5}$$ The Lie group homomorphism $\rho: SL(2,\mathbb{C}) \to O(u(2),\mathbb{R})$ induces a Lie algebra homomorphism
$$\rho: sl(2,\mathbb{C})\to o(u(2),\mathbb{R})\tag{6}$$
given by
$$ \rho(\tau)\sigma ~=~ \tau \sigma +\sigma \tau^{\dagger},
\qquad \tau\in sl(2,\mathbb{C}),\qquad\sigma\in u(2), $$
$$ \rho(\tau) ~=~ L_{\tau} +R_{\tau^{\dagger}},\tag{7}$$
where we have defined left and right multiplication of $2\times 2$ matrices
$$L_{\sigma}(\tau)~:=~\sigma \tau~=:~ R_{\tau}(\sigma),
\qquad \sigma,\tau ~\in~ {\rm Mat}_{2\times 2}(\mathbb{C}).\tag{8}$$ II) Note that the Lorentz Lie algebra $so(1,3;\mathbb{R}) \cong sl(2,\mathbb{C})$ does not$^2$ contain two perpendicular copies of, say, the real Lie algebra $su(2)$ or $sl(2,\mathbb{R})$. For comparison and completeness, let us mention that for other signatures in $4$ dimensions, one has $$SO(4;\mathbb{R})~\cong~[SU(2)\times SU(2)]/\mathbb{Z}_2,
\qquad\text{(compact form)}\tag{9}$$ $$SO^+(2,2;\mathbb{R})~\cong~[SL(2,\mathbb{R})\times SL(2,\mathbb{R})]/\mathbb{Z}_2.\qquad\text{(split form)}\tag{10}$$ The compact form (9) has a nice proof using quaternions $$(\mathbb{R}^4,||\cdot||^2) ~\cong~ (\mathbb{H},|\cdot|^2)\quad\text{and}\quad SU(2)~\cong~ U(1,\mathbb{H}),\tag{11}$$ see also this Math.SE post and this Phys.SE post. The split form (10) uses a bijective isometry $$(\mathbb{R}^{2,2},||\cdot||^2) ~\cong~({\rm Mat}_{2\times 2}(\mathbb{R}),\det(\cdot)).\tag{12}$$ To decompose Minkowski space into left- and right-handed Weyl spinor representations, one must go to the complexification , i.e. one must use the fact that $SL(2,\mathbb{C})\times SL(2,\mathbb{C})$ is (the double cover of) the complexified proper Lorentz group $SO(1,3;\mathbb{C})$. Note that Refs. 1-2 do not discuss complexification$^2$. One can more or less repeat the construction from section I with the real numbers $\mathbb{R}$ replaced by complex numbers $\mathbb{C}$, however with some important caveats. There is a bijective isometry from the complexified Minkowski space $(\mathbb{C}^{1,3},||\cdot||^2)$ to the space of $2\times2 $ matrices $({\rm Mat}_{2\times 2}(\mathbb{C}),\det(\cdot))$,
$$\mathbb{C}^{1,3} ~\cong ~ {\rm Mat}_{2\times 2}(\mathbb{C})
~=~ {\rm span}_{\mathbb{C}} \{\sigma_{\mu} \mid \mu=0,1,2,3\}, $$
$$ M(1,3;\mathbb{C})~\ni~\tilde{x}~=~(x^0,x^1,x^2,x^3) \quad\mapsto \quad\sigma~=~x^{\mu}\sigma_{\mu}~\in~ {\rm Mat}_{2\times 2}(\mathbb{C}) , $$
$$ ||\tilde{x}||^2 ~=~x^{\mu} \eta_{\mu\nu}x^{\nu} ~=~\det(\sigma).\tag{13}$$
Note that forms are taken to be bilinear rather than sesquilinear . There is a surjective Lie group homomorphism$^3$ $$\rho: SL(2,\mathbb{C}) \times SL(2,\mathbb{C}) \quad\to\quad
SO({\rm Mat}_{2\times 2}(\mathbb{C}),\mathbb{C})~\cong~ SO(1,3;\mathbb{C})\tag{14}$$
given by
$$(g_L, g_R)\quad \mapsto\quad\rho(g_L, g_R)\sigma~:= ~g_L\sigma g^{\dagger}_R, $$
$$ g_L, g_R\in SL(2,\mathbb{C}),\qquad\sigma~\in~ {\rm Mat}_{2\times 2}(\mathbb{C}).\tag{15} $$ The Lie group
$SL(2,\mathbb{C})\times SL(2,\mathbb{C})$
has Lie algebra $sl(2,\mathbb{C})\oplus sl(2,\mathbb{C})$. The Lie group homomorphism $$\rho: SL(2,\mathbb{C})\times SL(2,\mathbb{C})
\quad\to\quad SO({\rm Mat}_{2\times 2}(\mathbb{C}),\mathbb{C})\tag{16}$$
induces a Lie algebra homomorphism
$$\rho: sl(2,\mathbb{C})\oplus sl(2,\mathbb{C})\quad\to\quad
so({\rm Mat}_{2\times 2}(\mathbb{C}),\mathbb{C})\tag{17}$$
given by
$$ \rho(\tau_L\oplus\tau_R)\sigma ~=~ \tau_L \sigma +\sigma \tau^{\dagger}_R,
\qquad \tau_L,\tau_R\in sl(2,\mathbb{C}),\qquad
\sigma\in {\rm Mat}_{2\times 2}(\mathbb{C}), $$
$$ \rho(\tau_L\oplus\tau_R) ~=~ L_{\tau_L} +R_{\tau^{\dagger}_R}.\tag{18}$$ The left action (acting from left on a two-dimensional complex column vector) yields by definition the (left-handed Weyl) spinor representation $(\frac{1}{2},0)$, while the right action (acting from right on a two-dimensional complex row vector) yields by definition the right-handed Weyl/complex conjugate spinor representation $(0,\frac{1}{2})$. The above shows that The complexified Minkowski space $\mathbb{C}^{1,3}$ is a $(\frac{1}{2},\frac{1}{2})$ representation of the Lie group $SL(2,\mathbb{C}) \times SL(2,\mathbb{C})$, whose action respects the Minkowski metric. References: Anthony Zee, Quantum Field Theory in a Nutshell, 1st edition, 2003. Anthony Zee, Quantum Field Theory in a Nutshell, 2nd edition, 2010. $^1$ It is easy to check that it is not possible to describe discrete Lorentz transformations, such as, e.g. parity $P$, time-reversal $T$, or $PT$ with a group element $g\in GL(2,\mathbb{C})$ and formula (2). $^2$ For a laugh, check out the (in several ways) wrong second sentence on p.113 in Ref. 1: "The mathematically sophisticated say that the algebra $SO(3,1)$ is isomorphic to $SU(2)\otimes SU(2)$." The corrected statement would e.g. be "The mathematically sophisticated say that the group $SO(3,1;\mathbb{C})$ is locally isomorphic to $SL(2,\mathbb{C})\times SL(2,\mathbb{C})$." Nevertheless, let me rush to add that Zee's book is overall a very nice book. In Ref. 2, the above sentence is removed, and a subsection called "More on $SO(4)$, $SO(3,1)$, and $SO(2,2)$" is added on page 531-532. $^3$ It is not possible to mimic an improper Lorentz transformations $\Lambda\in O(1,3;\mathbb{C})$ [i.e. with negative determinant $\det (\Lambda)=-1$] with the help of two matrices $g_L, g_R\in GL(2,\mathbb{C})$ in formula (15); such as, e.g., the spatial parity transformation
$$P:~~(x^0,x^1,x^2,x^3) ~\mapsto~ (x^0,-x^1,-x^2,-x^3).\tag{19}$$
Similarly, the Weyl spinor representations are representations of (the double cover of) $SO(1,3;\mathbb{C})$ but not of (the double cover of) $O(1,3;\mathbb{C})$. E.g. the spatial parity transformation (19) intertwines the left-handed and right-handed Weyl spinor representations. | {
"source": [
"https://physics.stackexchange.com/questions/28505",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9267/"
]
} |
28,720 | I know what the Planck length is equal to. The first question is, how do you get the formula
$$\ell_P~=~\sqrt\frac{\hbar G}{c^3}$$ that describes the Planck length? The second question is, will any length shorter than the Planck length be inaccessible? If so, what is the reason behind this? | The expression $(\hbar G/c^3)^{1/2}$ is the unique product of powers of $\hbar, G,c$, three most universal dimensionful constants, that has the unit of length. Because the constants $\hbar, G,c$ describe the fundamental processes of quantum mechanics, gravity, and special relativity, respectively, the length scale obtained in this way expresses the typical length scale of processes that depend on relativistic quantum gravity. The formula and the value were already known to Max Planck more than 100 years ago, that's why they're called Planck units. Unless there are very large or strangely warped extra dimensions in our spacetime, the Planck length is the minimum length scale that may be assigned the usual physical and geometric interpretation. (And even if there are subtleties coming from large or warped extra dimensions, the minimum length scale that makes sense – which could be different from $10^{-35}$ meters, however – may still be called a higher-dimensional Planck length and is calculated by analogous formulae which must, however, use the relevant Newton's constant that applies to a higher-dimensional world.) The Planck length's special role may be expressed by many related definitions, for example: The Planck length is the radius of the smallest black hole that (marginally) obeys the laws of general relativity. Note that if the black hole radius is $R=(\hbar G/c^3)^{1/2}$, the black hole mass is obtained from $R=2GM/c^2$ i.e. $M=c^2/G\cdot (\hbar G/c^3)^{1/2} = (\hbar c/G)^{1/2}$ which is the same thing as the Compton wavelength $\lambda = h/Mc = hG/c^3 (\hbar G/c^3)^{-1/2}$ of the same object, up to numerical factors such as $2$ and $\pi$. The time it takes for such a black hole to evaporate by the Hawking radiation is also equal to the Planck time i.e. 
Planck length divided by the speed of light. Smaller (lighter) black holes don't behave as black holes at all; they are elementary particles (and the lifetime shorter than the Planck time is a sign that you can't trust general relativity for such supertiny objects). Larger black holes than the Planck length increasingly behave as long-lived black holes that we know from astrophysics. The Planck length is the distance at which the quantum uncertainty of the distance becomes of order 100 percent, up to a coefficient of order one. This may be calculated by various approximate calculations rooted in quantum field theory – expectation values of $(\delta x)^2$ coming from quantum fluctuations of the metric tensor; higher-derivative corrections to the Einstein-Hilbert action; nonlocal phenomena, and so on. The unusual corrections to the geometry, including nonlocal phenomena, become so strong at distances that are formally shorter than the Planck length that it doesn't make sense to consider any shorter distances. The usual rules of geometry would break down over there. The Planck length or so is also the shortest distance scale that can be probed by accelerators, even in principle. If one were increasing the energy of protons at the LHC and picked a collider of the radius comparable to the Universe, the wavelength of the protons would be getting shorter inversely proportionally to the protons' energy. However, once the protons' center-of-mass energy reaches the Planck scale, one starts to produce the "minimal black holes" mentioned above. A subsequent increase of the energy will end up with larger black holes that have a worse resolution, not better. So the Planck length is the minimum distance one may probe. It's important to mention that we're talking about the internal architecture of particles and objects. Many other quantities that have units of length may be much shorter than the Planck length. 
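As a numerical aside, all of these scales follow from the same three constants; a quick sketch (CODATA values):

```python
import math

hbar = 1.054571817e-34  # J s   (reduced Planck constant)
G    = 6.67430e-11      # m^3 kg^-1 s^-2 (Newton's constant)
c    = 2.99792458e8     # m/s   (speed of light)

l_P = math.sqrt(hbar * G / c**3)   # Planck length
t_P = l_P / c                      # Planck time
m_P = math.sqrt(hbar * c / G)      # Planck mass

# Consistency check from the text: the Schwarzschild radius of a
# Planck-mass black hole equals the Planck length up to a factor of 2.
R = 2 * G * m_P / c**2
assert abs(R - 2 * l_P) / l_P < 1e-12
```

The values come out to $\ell_P \approx 1.6\times10^{-35}\,$m, $t_P \approx 5.4\times10^{-44}\,$s and $m_P \approx 2.2\times10^{-8}\,$kg.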
For example, the photon's wavelength may obviously be arbitrarily short: any photon may always be boosted, as special relativity guarantees, so that its wavelength gets even shorter. Lots of things (insights from thousands of papers by some of the world's best physicists) are known about the Planck scale physics, especially some qualitative features of it, regardless of the experimental inaccessibility of that realm. | {
"source": [
"https://physics.stackexchange.com/questions/28720",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9142/"
]
} |
28,895 | I realise the question of why the sky is blue is considered reasonably often here, one way or another. You can take that knowledge as given. What I'm wondering is, given that the spectrum of Rayleigh scattering goes like $\omega^4$, why is the sky not purple, rather than blue? I think this is a reasonable question because we do see purple (or, strictly, violet or indigo) in rainbows, so why not across the whole sky if that's the strongest part of the spectrum? There are two possible lines of argument I've seen elsewhere and I'm not sure which (if not both) is correct. Firstly, the Sun's thermal emission peaks in the visible range, so we do actually receive less purple than blue. Secondly, the receptors in our eyes are balanced so that we are most sensitive to (roughly) the middle of the visible spectrum. Our eyes are simply less sensitive to the purple light than to the blue. | This is from the Physics FAQ article that I wrote 15 years ago: If shorter wavelengths are scattered most strongly, then there is a puzzle as to why the sky does not appear violet, the colour with the shortest visible wavelength. The spectrum of light emission from the sun is not constant at all wavelengths, and additionally is absorbed by the high atmosphere, so there is less violet in the light. Our eyes are also less sensitive to violet. That's part of the answer; yet a rainbow shows that there remains a significant amount of visible light coloured indigo and violet beyond the blue. The rest of the answer to this puzzle lies in the way our vision works. We have three types of colour receptors, or cones, in our retina. They are called red, blue and green because they respond most strongly to light at those wavelengths. As they are stimulated in different proportions, our visual system constructs the colours we see. When we look up at the sky, the red cones respond to the small amount of scattered red light, but also less strongly to orange and yellow wavelengths.
The green cones respond to yellow and the more strongly scattered green and green-blue wavelengths. The blue cones are stimulated by colours near blue wavelengths, which are very strongly scattered. If there were no indigo and violet in the spectrum, the sky would appear blue with a slight green tinge. However, the most strongly scattered indigo and violet wavelengths stimulate the red cones slightly as well as the blue, which is why these colours appear blue with an added red tinge. The net effect is that the red and green cones are stimulated about equally by the light from the sky, while the blue is stimulated more strongly. This combination accounts for the pale sky blue colour. It may not be a coincidence that our vision is adjusted to see the sky as a pure hue. We have evolved to fit in with our environment; and the ability to separate natural colours most clearly is probably a survival advantage. | {
"source": [
"https://physics.stackexchange.com/questions/28895",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4551/"
]
} |
28,931 | This recent news article ( here is the original, in German ) says that Shouryya Ray, who moved to Germany from India with his family at the age of 12, has baffled scientists and mathematicians by solving two fundamental particle dynamics problems posed by Sir Isaac Newton over 350 years ago, Die Welt newspaper reported on Monday. Ray’s solutions make it possible to now calculate not only the flight path of a ball, but also predict how it will hit and bounce off a wall. Previously it had only been possible to estimate this using a computer, wrote the paper. What are the problems from this description? What is their precise formulation? Also, is there anywhere I can read the details of this person's proposed solutions? | This thread (physicsforums.com) contains a link to Shouryya Ray's poster , in which he presents his results. So the problem is to find the trajectory of a particle under influence of gravity and quadratic air resistance. The governing equations, as they appear on the poster: $$
\dot u(t) + \alpha u(t) \sqrt{u(t)^2+v(t)^2} = 0 \\
\dot v(t) + \alpha v(t) \sqrt{u(t)^2 + v(t)^2} = -g\text,
$$ subject to initial conditions $v(0) = v_0 > 0$ and $u(0) = u_0 \neq 0$. Thus (it is easily inferred), in his notation, $u(t)$ is the horizontal velocity, $v(t)$ is the vertical velocity, $g$ is the gravitational acceleration, and $\alpha$ is a drag coefficient. He then writes down the solutions $$
u(t) = \frac{u_0}{1 + \alpha V_0 t - \tfrac{1}{2!}\alpha gt^2 \sin \theta + \tfrac{1}{3!}\left(\alpha g^2 V_0 \cos^2 \theta - \alpha^2 g V_0 \sin \theta\right) t^3 + \cdots} \\
v(t) = \frac{v_0 - g\left[t + \tfrac{1}{2!} \alpha V_0 t^2 - \tfrac{1}{3!} \alpha gt^3 \sin \theta + \tfrac{1}{4!}\left(\alpha g^2 V_0 \cos^2 \theta - \alpha^2 g V_0 \sin \theta\right)t^4 + \cdots\right]}{1 + \alpha V_0 t - \tfrac{1}{2!}\alpha gt^2 \sin \theta + \tfrac{1}{3!}\left(\alpha g^2 V_0 \cos^2 \theta - \alpha^2 g V_0 \sin \theta\right) t^3 + \cdots}\text.$$ From the diagram below the photo of Newton, one sees that $V_0$ is the initial speed, and $\theta$ is the initial elevation angle. The poster (or at least the part that is visible) does not give details on the derivation of the solution. But some things can be seen: He uses, right in the beginning, the substitution $\psi(t) = u(t)/v(t)$. There is a section called "...öße der Bewegung". The first word is obscured, but a qualified guess would be "Erhaltungsgröße der Bewegung", which would translate as "conserved quantity of the motion". Here, the conserved quantity described by David Zaslavsky appears, modulo some sign issues. However, this section seems to be a subsection to the bigger section "Aus der Lösung ablesbare Eigenschaften", or "Properties that can be seen from the solution". That seems to imply that the solution implies the conservation law, rather than the solution being derived from the conservation law. The text in that section probably provides some clue, but it's only partly visible, and, well, my German is rusty. I welcome someone else to try to make sense of it. Also part of the bigger section are subsections where he derives from his solution (a) the trajectory for classical, drag-free projectiles, (b) some "Lamb-Näherung", or "Lamb approximation". The next section is called "Verallgemeinerungen", or "Generalizations". Here, he seems to consider two other problems, with drag of the form $\alpha V^2 + \beta$, in the presence of altitude-dependent horizontal wind. I'm not sure what the results here are.
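For readers who want to reproduce such a comparison themselves, the governing equations above are straightforward to integrate numerically. A minimal hand-rolled fourth-order Runge-Kutta sketch (the parameter values are arbitrary illustrations, not taken from the poster):

```python
import math

def rk4_drag(u0, v0, alpha, g=9.81, dt=1e-3, t_end=1.0):
    """Integrate u' = -a*u*s, v' = -a*v*s - g, with s = sqrt(u^2 + v^2)."""
    def f(u, v):
        s = math.hypot(u, v)
        return -alpha * u * s, -alpha * v * s - g

    u, v = u0, v0
    for _ in range(round(t_end / dt)):
        k1u, k1v = f(u, v)
        k2u, k2v = f(u + 0.5 * dt * k1u, v + 0.5 * dt * k1v)
        k3u, k3v = f(u + 0.5 * dt * k2u, v + 0.5 * dt * k2v)
        k4u, k4v = f(u + dt * k3u, v + dt * k3v)
        u += dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return u, v

# Sanity check: with alpha = 0 the drag-free result is recovered exactly.
u, v = rk4_drag(10.0, 10.0, alpha=0.0)
assert abs(u - 10.0) < 1e-9 and abs(v - (10.0 - 9.81)) < 1e-9
```

With $\alpha > 0$ the horizontal velocity decays monotonically, consistent with the leading $u_0/(1+\alpha V_0 t)$ behaviour of the series solution above.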
The diagrams to the left seem to demonstrate the accuracy and convergence of his series solution by comparing them to Runge-Kutta. Though the text is kind of blurry, and, again, my German is rusty, so I'm not too sure. Here's a rough translation of the first part of the "Zusammenfassung und Ausblick" (Summary and outlook), with suitable disclaimers as to the accuracy: (i) for the first time, a fully analytical solution of a long unsolved problem; (ii) various excellent properties; in particular, conserved quantity $\Rightarrow$ fundamental [...] extraction of deep new insights using the complete analytical solutions (above all [...] perspectives and approximations are to be gained); (iii) convergence of the solution numerically demonstrated; (iv) solution sketch for two generalizations. EDIT: Two professors at TU Dresden, who have seen Mr Ray's work, have written some comments: Comments on some recent work by Shouryya Ray. There, the questions he solved are unambiguously stated, so that should answer any outstanding questions. EDIT2: I should add: I do not doubt that Shouryya Ray is a very intelligent young man. The solution he gave can, perhaps, be obtained using standard methods. I believe, however, that he discovered the solution without being aware that the methods were standard, a very remarkable achievement indeed. I hope that this event has not discouraged him; no doubt, he'll be a successful physicist or mathematician one day, should he choose that path. | {
"source": [
"https://physics.stackexchange.com/questions/28931",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5878/"
]
} |
29,016 | It's been happening to me for years. I finally decided to ask users who are better with "practical physics" when I was told that my experience – that I am going to describe momentarily – prove that I am a diviner, a psychic, a "sensibil" as we call it. The right explanation clearly needs some electrodynamics although it's "everyday electrodynamics" and theoretical physicists are not trained to quickly answer such questions although each of us has probably solved many exercises that depend on the same principles. When I am biking under the power lines – which probably have a high voltage in them – I feel a clear tingling shocks near my buttocks and related parts of the body for a second or so when I am under a critical point of the power lines. It is a strong feeling, not a marginal one: it feels like a dozen of ants that are stinging me at the same moment. It seems almost clear that some currents are running through my skins at 50 Hz. I would like to know the estimate (and calculation or justification) of the voltage, currents etc. that are going through my skin and some comparison with the shock one gets when he touches the power outlet. Now, my bike that makes this effect particularly strong is a mountain bike, Merida ; the speed is about 20 km/h and the velocity is perpendicular to the direction of the current in the power line; the seat has a hole in it and there is some metal – probably a conducting one – just a few centimeters away from the center of my buttocks. It's plausible that I am in touch with the metal – or near touch; my skin is kind of sweating during these events and the liquid isn't pure water so it's probably much more conductive than pure water; the temperature was 22 °C today, the humidity around 35%, clear skies, 10 km/h wind; the power lines may be between 22 kV and 1 MV and at 50 Hz, the altitude is tens of meters but I don't really know exactly. What kind of approximation for the electromagnetic waves are relevant? 
What is the strength? How high currents one needs? Does one need some amplification from interference etc. (special places) to make the effect detectable? (I only remember experiencing this effect at two places around Pilsen; the most frequent place where I feel it is near Druztová, Greater Pilsen, Czechia.) Is the motion of the wheels or even its frequency important? Is there some resonance? Does the hole in the seat and the metal play any role? Just if you think that I am crazy, other people are experience the effect (although with different body parts), see e.g. here and here . This PDF file seems to suggest that the metals and electromagnetic induction is essential for the effect but the presentation looks neither particularly comprehensive nor impartial enough. An extra blog discussion on this topic is here: http://motls.blogspot.com/2012/05/electric-shocks-under-high-voltage.html | First, Field strength. This calculation is strictly an electric potential calculation; radiation and induction are safely ignored at 50Hz.* For a 200kV transmission line 20m above ground, the max electric field at ground level is about 1.2 kV/m.** This number is reduced from the naive 200kV/20m=10 kV/m calculation by two effects: 1) The ~1/r variation in the electric field (reduction to 3 kV/m). I used the method of images to calculate this field, with a 10 cm conductor diameter to keep the peak field below the 1MV/m breakdown field. 2) Cancellation from the other two power lines in this 3-phase system, which are at +/-120 degree electrical phases with respect to the first, and are physically offset in a horizontal line per the photo. I estimated 7m spacings between adjacent lines. The maximum E-field actually occurs roughly twice as far out as the outermost line; the field under the center conductor is lower. Next, Can you feel it? 1) The human body circuit model for electrostatic discharge is 100pF+1.5kohm; that's a gross simplification but better than nothing. 
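With that circuit model, the current estimate that follows is one line of arithmetic; a quick sketch (the 2 m height and the 1.2 kV/m ground-level field are the rough values used in this answer):

```python
import math

C = 100e-12          # F, human-body model capacitance
f = 50.0             # Hz, mains frequency
E = 1200.0           # V/m, ground-level field under a 200 kV line (from above)
h = 2.0              # m, assumed height of the "network"

V = E * h                        # ~2.4 kV applied across the body
I = C * (2 * math.pi * f) * V    # capacitive current, C * omega * V
print(f"I = {I * 1e6:.0f} uA")   # ~75 uA, i.e. "about 70 uA"
```

The same arithmetic for a 400 kV line doubles both the voltage and the current, matching the closing remark below.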
If one imagined a 2m high network, the applied voltage results in a 50Hz current of about 70uA ($C \omega V$). Very small. 2) There will be an AC voltage difference between the (insulated) human and (insulated) bicycle. A 1m vertical separation between their centers of gravity would yield roughly 1200V. This voltage is rather small compared to some car-door-type static discharges, but it would still be sufficient to break down a short air gap (but not a couple cm), and would repeat at 100Hz. I imagine it would be noticeable in a sensitive part of the anatomy. If the transmission voltage is actually 400 kV, all the field strengths and voltages would of course double. (*) In response to a comment, here's an estimate of the neglected induction and radiation effects, courtesy of Maxwell 4 and 3: Induction: Suppose a power line is carrying a healthy 1000A AC current (f=50 Hz). Then by Ampere's law, there is a circumferential AC magnetic field; at the wire-to-ground distance of 20 meters that field's amplitude is $10 \mu T$. (Compare with the earth's DC field of approximately 0.5 gauss, or $50 \mu T$.) The flux of this magnetic field through a $1 m^2$ area loop (with normal parallel to the ground and perpendicular to the wire) is $\Phi = 10 \mu Wb$ AC. Then from Faraday's law, the voltage around the loop is $d \Phi /dt = 2 \pi f \Phi = 3 mV$ (millivolts). So much for induction. One can also estimate the magnetic field resulting from the $1200 V/m$ ground-level AC electric field, which has an electric flux density $D =\epsilon_0 E = 10.6 nC/m^2$ and a displacement current density $\partial D / \partial t = 2 \pi f D = 3.3 \mu A/m^2$. The flux of this field through a $1 m$ square loop (parallel to the ground) is $3.3 \mu A$, so the average magnetic field around the square is $0.8 \mu A/m$, for a ridiculously small magnetic flux density of $1 pT$. (**) 1 Sep 2014 update. 
Dmytry very astutely points out in a comment that there will be local electric field intensification effects from conductive irregularities in the otherwise flat ground surface, such as our cyclist (who, being somewhat sweaty, will have a conductive surface). The same principle applies to lightning rods. For the proverbial spherical cyclist, the local field will be increased by a factor of 3, independent of the sphere's size, as long as it's much less than the distance to the power line. It turns out that it doesn't matter whether the sphere is grounded or insulated, since its total charge remains 0. For more elongated shapes the intensification can be much higher: for a grounded prolate spheroid with 10:1 dimensions, the multiplication factor is 50. This intensification of course enhances any sensation one might feel. | {
"source": [
"https://physics.stackexchange.com/questions/29016",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1236/"
]
} |
29,082 | I read with interest about Einstein's Theory of Relativity and his proposition about the speed of light being the universal speed limit. So, if I were to travel in a spacecraft at (practically) the speed of light, would I freeze and stop moving? Would the universe around me freeze and stop moving? Who would the time stop for? | Yes, I agree with David. If somehow, you were able to travel at the speed of light, it would seem that 'your time' would not have progressed in comparison to your reference time once you returned to 'normal' speeds. This can be modeled by the Lorentz time dilation equation: $$T=\frac{T_0}{\sqrt{1 - (v^2 / c^2)}}$$ When traveling at the speed of light ($v=c$), left under the radical you would have 0. This answer would be undefined or infinity if you will (let's go with infinity). The reference time ($T_0$) divided by zero would be infinity; therefore, you could infer that time is 'frozen' to an object traveling at the speed of light. | {
"source": [
"https://physics.stackexchange.com/questions/29082",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9460/"
]
} |
29,175 | I really can't understand what Leonard Susskind means when he says in the video Leonard Susskind on The World As Hologram that information is indestructible. Is that information that is lost, through the increase of entropy really recoverable? He himself said that entropy is hidden information. Then, although the information hidden has measurable effects, I think information lost in an irreversible process cannot be retrieved. However, Susskind's claim is quite the opposite. How does one understand the loss of information by an entropy increasing process, and its connection to the statement “information is indestructible”. Black hole physics can be used in answers, but, as he proposes a general law of physics, I would prefer an answer not involving black holes. | How is the claim "information is indestructible" compatible with "information is lost in entropy"? Let's make things as specific and as simple as possible. Let's forget about quantum physics and unitary dynamics, let's toy with utterly simple reversible cellular automata. Consider a spacetime consisting of a square lattice of cells with a trinary (3-valued) field defined on it. The values are color-coded such that cells can be yellow, orange, and red. The 'field equations' consist of a set of allowed colorings for each 2x2 block of cells: A total of 27 local color patterns are allowed. These are defined such that when three of the four squares are colored, the color of the fourth cell is uniquely defined. (Check this!) The field equations don't contain a 'direction of evolution'. So how to define a timelike direction? Suppose that when looking "North" or "West" along the lattice directions, you hit a horizon beyond which an infinite sea of yellow squares stretches: "North" and "West" we label as 'light rays from the past'. These two rays of cells constitute the 'snapshot' of the universe taken from the spacetime point defined by the intersection of the two rays. 
Given this 'snapshot', and using the field equations (the allowed 2x2 colorings), we can start reconstructing the past: Here, the rule applied to color the cell follows from the square at the bottom of the center column in the overview of the 27 allowed 2x2 squares. This is the only 2x2 pattern out of the 27 that fits the given colors at the right, the bottom, and the bottom-right of the cell being colored. Identifying this 2x2 pattern as uniquely fitting the cell colors provided, the top-left color becomes fixed. Continuing like this, we obtain the full past of the universe up to any point we desire: We notice that we constructed the full past knowing the colorings of 'light ray cells' in the 'snapshot' that, excluding the uniform sea beyond the horizons, count no more than 25 cells. We identify this count as the entropy (number of trits ) as observed from the point where the two light rays meet. Notice that at later times the entropy is larger: the second law of thermodynamics is honored by this simple model. Now we reverse the dynamics, and an interesting thing happens: knowing only 9 color values of light rays to the future (again excluding the uniform sea beyond the horizon): We can reconstruct the full future: We refer to these 9 trits that define the full evolution of this cellular automata universe as the 'information content' of the universe. Obviously, the 25 trits of entropy do contain the 9 trits of information. This information is present but 'hidden' in the entropy trits. The entropy in this model will keep growing. The 9 trits of information remains constant and hidden in (but recoverable from) an ever larger number of entropy trits. Note that none of the observations made depend on the details of the 'field equations'. In fact, any set of allowed 2x2 colorings that uniquely define the color of the remaining cell given the colors of three cells, will produce the same observations. Many more observations can be made based on this toy model. 
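The reconstruction is easy to play with in code. A minimal sketch with a concrete rule of my own choosing (every 2x2 block must sum to 0 mod 3), not necessarily the rule behind the figures, but with exactly the stated property: it allows $3^3 = 27$ of the $3^4 = 81$ block colorings, and any three cells of a block fix the fourth:

```python
import random

random.seed(1)
n = 8
grid = [[0] * n for _ in range(n)]

# "Snapshot": color the two light rays (top row and left column) at random.
for j in range(n):
    grid[0][j] = random.randrange(3)
for i in range(1, n):
    grid[i][0] = random.randrange(3)

# Reconstruct the rest of the universe from the snapshot: in every 2x2
# block, three known cells fix the fourth (block sum must be 0 mod 3).
for i in range(1, n):
    for j in range(1, n):
        grid[i][j] = (-(grid[i-1][j-1] + grid[i-1][j] + grid[i][j-1])) % 3

# Reversibility: the opposite pair of rays determines the whole grid too.
back = [[0] * n for _ in range(n)]
for j in range(n):
    back[n-1][j] = grid[n-1][j]
for i in range(n):
    back[i][n-1] = grid[i][n-1]
for i in range(n-2, -1, -1):
    for j in range(n-2, -1, -1):
        back[i][j] = (-(back[i+1][j+1] + back[i+1][j] + back[i][j+1])) % 3

assert back == grid   # the full "past" is recovered from the "future" rays
```

Swapping in any other rule with the three-cells-fix-the-fourth property leaves the behaviour unchanged, in line with the remark above that none of the observations depend on the details of the 'field equations'.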
One obvious feature is that the model does not sport a 'big bang' but rather a 'big bounce'. Furthermore, the information content (9 trits in the above example) defining this universe is significantly smaller than the later entropy (which grows without bound). This is a direct consequence of a 'past horizon' being present in the model. Also, despite the 'field equations' in this model being fully reversible, the 'snapshot' taken allows you to reconstruct the full past, but not the future. This 'arrow of time' can be circumvented by reconstructing the past beyond the 'big bounce', where past and future change roles and a new snapshot can be derived from the reconstruction taken. This snapshot is future-oriented and allows you to construct the future beyond the original snapshot. These observations, however, go well beyond the questions asked. | {
"source": [
"https://physics.stackexchange.com/questions/29175",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1916/"
]
} |
29,177 | I'm trying to determine how much dry ice or liquid nitrogen I would need to cool 3300 cubic feet, about 90,000 liters of air, from about 100F (37.78C or 310K) to about 90F (26.67C or 299.81K). I'm assuming that the air is comprised of about 21% oxygen. How would I go about doing this? | How is the claim "information is indestructible" compatible with "information is lost in entropy"? Let's make things as specific and as simple as possible. Let's forget about quantum physics and unitary dynamics, let's toy with utterly simple reversible cellular automata. Consider a spacetime consisting of a square lattice of cells with a trinary (3-valued) field defined on it. The values are color-coded such that cells can be yellow, orange, and red. The 'field equations' consist of a set of allowed colorings for each 2x2 block of cells: A total of 27 local color patterns are allowed. These are defined such that when three of the four squares are colored, the color of the fourth cell is uniquely defined. (Check this!) The field equations don't contain a 'direction of evolution'. So how to define a timelike direction? Suppose that when looking "North" or "West" along the lattice directions, you hit a horizon beyond which an infinite sea of yellow squares stretches: "North" and "West" we label as 'light rays from the past'. These two rays of cells constitute the 'snapshot' of the universe taken from the spacetime point defined by the intersection of the two rays. Given this 'snapshot', and using the field equations (the allowed 2x2 colorings), we can start reconstructing the past: Here, the rule applied to color the cell follows from the square at the bottom of the center column in the overview of the 27 allowed 2x2 squares. This is the only 2x2 pattern out of the 27 that fits the given colors at the right, the bottom, and the bottom-right of the cell being colored. Identifying this 2x2 pattern as uniquely fitting the cell colors provided, the top-left color becomes fixed. 
Continuing like this, we obtain the full past of the universe up to any point we desire: We notice that we constructed the full past knowing the colorings of 'light ray cells' in the 'snapshot' that, excluding the uniform sea beyond the horizons, count no more than 25 cells. We identify this count as the entropy (number of trits ) as observed from the point where the two light rays meet. Notice that at later times the entropy is larger: the second law of thermodynamics is honored by this simple model. Now we reverse the dynamics, and an interesting thing happens: knowing only 9 color values of light rays to the future (again excluding the uniform sea beyond the horizon): We can reconstruct the full future: We refer to these 9 trits that define the full evolution of this cellular automata universe as the 'information content' of the universe. Obviously, the 25 trits of entropy do contain the 9 trits of information. This information is present but 'hidden' in the entropy trits. The entropy in this model will keep growing. The 9 trits of information remains constant and hidden in (but recoverable from) an ever larger number of entropy trits. Note that none of the observations made depend on the details of the 'field equations'. In fact, any set of allowed 2x2 colorings that uniquely define the color of the remaining cell given the colors of three cells, will produce the same observations. Many more observations can be made based on this toy model. One obvious feature being that the model does not sport a 'big bang' but rather a 'big bounce'. Furthermore, the information content (9 trits in the above example) defining this universe is significantly smaller than the later entropy (which grows without bound). This is a direct consequence of a 'past horizon' being present in the model. Also, despite the 'field equations' in this model being fully reversible, the 'snapshot' taken allows you to reconstruct the full past, but not the future. 
This 'arrow of time' can be circumvented by reconstructing the past beyond the 'big bounce' where past and future change roles and a new snapshot can be derived from the reconstruction taken. This snapshot is future-oriented and allows you to construct the future beyond the original snapshot. These observations, however, go well beyond the questions asked. | {
"source": [
"https://physics.stackexchange.com/questions/29177",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5245/"
]
} |
29,235 | In the Futurama episode "That Darn Katz!" they save the world by rotating the Earth backwards, saying it shouldn't matter (which direction Earth rotates). If Earth rotated clockwise and remained in its current counter-clockwise orbit around the Sun, would it affect anything beyond the direction the constellations move across the sky? | Barring whatever fantastic energies would be required to stop the mass of the Earth from rotating and then changing the direction of the rotation, one of the major things I can see changing would be the expectations of weather patterns. Part of what affects our weather is known as the Coriolis Effect. While there would certainly be effects on the weather from the change in the Coriolis effect, how this would change the weather could only be guessed. It is certain to be transformative, and if it were an abrupt change, possibly catastrophic as deserts might move, food crops might be affected, and likely expected storm patterns would be changed. Edit: I neglected to think about the effect this would have on our structures, which would collapse due to shearing action unless the stopping action included a planetary stasis field. Objects in motion tend to stay in motion, rotation speeds are over 1000 mph, and there is the inconvenient theory of conservation of momentum. I forgot to think about our oceans as well, since the Coriolis effect would change them, too. I also neglected to consider the molten core of the Earth (pesky spinning, geo-magnetic iron). Stopping the rotation of the molten core might cause the magnetic field of the Earth to collapse, allowing the world to be bathed in cosmic radiation (killing every living thing). My assumption was that when they stopped the rotation of the Earth and reset it, they would consider having a magnetic field a good thing and would be sure to stop and reset the whole thing.
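To put rough numbers on those "fantastic energies" and the "over 1000 mph": a quick back-of-the-envelope sketch with standard reference values (the 0.33 moment-of-inertia factor is the usual figure for Earth's centrally condensed mass distribution):

```python
import math

R = 6.371e6      # mean Earth radius, m
M = 5.972e24     # Earth mass, kg
T = 86164.0      # sidereal day, s

v_eq = 2 * math.pi * R / T       # equatorial surface speed
omega = 2 * math.pi / T          # angular velocity, rad/s
I = 0.33 * M * R**2              # moment of inertia (Earth is centrally condensed)
E_rot = 0.5 * I * omega**2       # rotational kinetic energy

print(f"{v_eq:.0f} m/s (~{v_eq * 2.23694:.0f} mph)")  # ~465 m/s, ~1040 mph
print(f"{E_rot:.2e} J")                               # ~2.1e29 J
```

Stopping the spin means removing roughly $2\times10^{29}$ J, and spinning it back up the other way costs about the same again.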
Granted, the Earth of the year 3000 may have other advantages which might offset any changes from the altered rotation of the Earth. As a funny idea, it has potential, but the serious ramifications of such a feat boggle the imagination. | {
"source": [
"https://physics.stackexchange.com/questions/29235",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9529/"
]
} |
29,355 | The Feynman lectures are universally admired, it seems, but also a half-century old.
Taking them as a source for self-study, what compensation for their age, if any, should today's reader undertake? I'm interested both in pointers to particular topics where the physics itself is out-of-date, and in topics where the pedagogical approach now admits attestable improvements. | The Feynman Lectures need only a little amending, but it's a relatively small amount compared to any other textbook. The great advantage of the Feynman Lectures is that everything is worked out from scratch Feynman's way, so that it is taught with the maximum insight, something that you can only do after you sit down and redo the old calculations from scratch. This makes them very interesting, because you learn from Feynman how the discovering gets done, the type of reasoning, the physical intuition, and so on. The original presentation also means that Feynman says all sorts of things in a slightly different way than other books. This is good to test your understanding, because if you only know something in a half-assed way, Feynman sounds wrong. I remember that when I first read it a million years ago, a large fraction of the things he said sounded completely wrong. This original presentation is a very important component: it teaches you what originality sounds like, and knowing how to be original is the most important thing. I think Vol. I is pretty much OK as an intro, although it should be supplemented at least with this stuff: Computational integration: Feynman does something marvellous at the start of Volume I (something unheard of in 1964): he describes how to Euler time-step a differential equation forward in time. Nowadays, it is a simple thing to numerically integrate any mechanical problem, and experience with numerical integration is essential for students. The integration removes the student's paralysis of staring at an equation and not knowing what to do. If you have a computer, you know exactly what to do!
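In that spirit, the whole method fits in a few lines. A minimal sketch for a unit mass on a unit spring (using the "semi-implicit" variant of the Euler step, close in spirit to the half-interval trick Feynman uses, which keeps the energy bounded):

```python
def step_oscillator(x, v, dt):
    """One Euler-type step of x'' = -x (unit mass on a unit spring)."""
    v = v - x * dt   # update velocity from the force first...
    x = x + v * dt   # ...then position from the new velocity
    return x, v

# Integrate over one full period, 2*pi:
x, v, dt = 1.0, 0.0, 1e-3
for _ in range(int(6.283185 / dt)):
    x, v = step_oscillator(x, v, dt)
# x should be back near 1, v near 0, and the energy (x^2 + v^2)/2 near 0.5
```

Swapping the update order (position before velocity) gives the plain forward-Euler scheme, whose energy slowly drifts; seeing that difference yourself is exactly the kind of experience being recommended.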
Integrating reveals many interesting qualitative things, and shows you just how soon the analytical knowledge painstakingly acquired over 4 centuries craps out. For example, even if you didn't know it, you can see that KAM stability appears spontaneously in self-gravitating clusters at a surprisingly large number of particles. You might expect chaotic motion until you reach 2 particles, which then orbit in an ellipse. But clusters with random masses and velocities of some hundreds of particles eject out particles like crazy, until they get to one or two dozen particles, and then they settle down into a mess of orbits, but this mess must be integrable, because nothing else is ejected out anymore! You discover many things like this from piddling around with particle simulations, and this is something which is missing from Volume I, since computers were not available at the time it was written. It's not completely missing, however, and it's much worse elsewhere. The Kepler problem: Feynman has an interesting point of view regarding this which is published in the "Lost Lecture" book and audio-book. But I think the standard methods are better here, because the 17th century things Feynman redoes are too specific to this one problem. This can be supplemented in any book on analytical mechanics. Thermodynamics: The section on thermodynamics does everything through statistical mechanics and intuition. This begins with the density of the atmosphere, which motivates the Boltzmann distribution, which is then used to derive all sorts of things, culminating in the Clausius-Clapeyron equation. This is a great boon when thinking about atoms, but it doesn't teach you the classical thermodynamics, which is really simple starting from modern stat-mech. The position is that the Boltzmann distribution is all you need to know, and that's a little backwards from my perspective. The maximum entropy arguments are better--- they motivate the Boltzmann distribution.
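The stat-mech-first viewpoint also lends itself to quick numerical checks. As an illustrative toy (not from the Lectures), a minimal Metropolis sampler of the Boltzmann distribution for a particle in a harmonic well, in units where $k_B = m = \omega = 1$, should reproduce the equipartition result $\langle x^2 \rangle = T$:

```python
import math, random

def boltzmann_x2(T=1.0, steps=200_000, step=1.5, seed=1):
    """Metropolis sampling of p(x) ~ exp(-x^2 / (2T)); returns <x^2>."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(steps):
        xp = x + rng.uniform(-step, step)
        # accept with probability min(1, exp(-(U(xp) - U(x)) / T)), U(x) = x^2 / 2
        if rng.random() < math.exp((x * x - xp * xp) / (2.0 * T)):
            x = xp
        total += x * x
    return total / steps

print(boltzmann_x2(T=1.0))  # ~1.0, as equipartition predicts
```

The same few lines, with a different energy function, sample liquids or rubber-band polymer chains.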
The heat-engine he uses is based on rubber-bands too, and yet there is no discussion of why rubber bands are entropic, or of free-energies in the rubber band, or the dependence of stiffness on temperature. Monte-Carlo simulation: This is essential, but it obviously requires computers. With Monte-Carlo you can make snapshots of classical statistical systems quickly on a computer and build up intuition. You can make simulations of liquids, and see how the atoms knock around classically. You can simulate rubber-band polymers, and see the stiffness dependence on temperature. All these things are clearly there in Feynman's head, but without a computer, it's hard to transmit it into any of the students' heads. For Volume II, the most serious problem is that the foundations are off. Feynman said he wanted to redo the classical textbook point of view on E&M, but he wasn't sure how to do it. The Feynman Lectures were written at a time just before modern gauge theory took off, and while they emphasize the vector potential a lot compared to other treatments of the time, they don't make the vector potential the main object. Feynman wanted to redo Volume II to make it completely vector-potential-centered, but he didn't get to do it. Somebody else did a vector-potential-based discussion of E&M based on this recommendation, but the results were not so great. The major things I don't like in Vol. II: The derivation of the index of refraction is done by a complicated rescattering calculation which is based on plum-pudding-style electron oscillators. This is essentially just the forward-phase index-of-refraction argument Feynman gives to motivate unitarity in the 1963 ghost paper in Acta Physica Polonica. It is not so interesting or useful in my opinion in Vol. II, but it is the most involved calculation in the series.
No special functionology: While the subject is covered with a layer of 19th-century mildew, it is useful to know some special functions, especially Bessel functions and spherical harmonics. Feynman always chooses ultra special forms which give elementary functions, and he knows all the cases which are elementary, so he gets a lot of mileage out of this, but it's not general enough. The fluid section is a little thin--- you will learn how the basic equations work, but no major results. The treatment of fluid flow could have been supplemented with He4 flows, where the potential flow description is correct (it is clear that this is Feynman's motivation for the strange treatment of the subject, but this isn't explicit). Numerical methods in field simulation: Here if one wants to write an introductory textbook, one needs to be completely original, because the numerical methods people use today are not so good for field equations of any sort. Vol. III is extremely good because it is so brief. The introduction to quantum mechanics there gets you to a good intuitive understanding quickly, and this is the goal. It probably could use the following: A discussion of diffusion, and the relation between Schrödinger operators and diffusion operators: This is obvious from the path integral, but it was also clear to Schrödinger. It also allows you to quickly motivate the exact solutions to Schrödinger's equation, like the $1/r$ potential, something which Feynman just gives you without motivation. A proper motivation can be given by using SUSY QM (without calling it that, just a continued stochastic equation) and trying out different ground state ansatzes. Galilean invariance of the Schrödinger equation: This part is not done in any book, I think only because Dirac omitted it from his. It is essential to know how to boost wavefunctions. Since Feynman derives the Schrödinger equation from a tight-binding model (a lattice approximation), the Galilean invariance is not obvious at all.
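The boost rule itself is compact enough to verify numerically. Assuming the standard Galilean transformation $\psi'(x,t) = e^{im(vx - v^2 t/2)/\hbar}\,\psi(x-vt,\,t)$, a boosted free-particle plane wave must again be a plane wave, with wavenumber $k' = k + mv/\hbar$; a quick sketch:

```python
import cmath

hbar = m = 1.0
k, v = 1.3, 0.7

def psi(x, t):
    """Free-particle plane wave: solves i hbar psi_t = -(hbar^2/2m) psi_xx."""
    return cmath.exp(1j * (k * x - hbar * k * k * t / (2 * m)))

def boost(wavefn, v):
    """Galilean boost of a wavefunction by velocity v."""
    def boosted(x, t):
        phase = cmath.exp(1j * m * (v * x - v * v * t / 2) / hbar)
        return phase * wavefn(x - v * t, t)
    return boosted

psi_b = boost(psi, v)
k_new = k + m * v / hbar  # boosted wavenumber
for x, t in [(0.3, 0.0), (1.0, 2.0), (-2.0, 0.5)]:
    target = cmath.exp(1j * (k_new * x - hbar * k_new * k_new * t / (2 * m)))
    assert abs(psi_b(x, t) - target) < 1e-12
```

The extra phase is exactly what makes the kinetic energy transform correctly, and it is invisible in the tight-binding derivation.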
Since the lectures are introductory, everything in there just becomes second nature, so it doesn't matter that they are old. The old books should just be easier, because the old stuff is already floating in the air. If you find something in the Feynman Lectures which isn't completely obvious, you should study it until it is obvious--- there's no barrier, the things are self-contained. | {
"source": [
"https://physics.stackexchange.com/questions/29355",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6307/"
]
} |
29,469 | It is known that you can break P spontaneously--- look at any chiral molecule for an example. Spontaneous T breaking is harder for me to visualize. Is there a well known condensed matter system which is an uncontroversial example where T is broken spontaneously? I remember vaguely articles of Wen, Wilczek, and Zee from 1989 or so on standard High Tc hopping models, electrons which singly-occupy lattice sites, double-occupation repulsion, small amount of p-doping (holes running around), where they made the claim that T is spontaneously broken. Unfortunately I didn't understand how this happened or if it actually happened. If somebody understands the Zee example, that's good, but I would be happy with any example. I am not looking for explicit T breaking, only spontaneous T breaking. I would also like an example where the breaking is thermodynamically significant in the large system limit, so mesoscopic rings with permanent currents caused by electron discreteness is not a good example. | The simplest example in condensed matter physics that spontaneously breaks time reversal symmetry is a ferromagnet. Because spins (angular momentum) change sign under time reversal, the spontaneous magnetization in the ferromagnet breaks the symmetry. This is a macroscopic example. The chiral spin liquid (Wen-Wilczek-Zee) mentioned in the question is a non-trivial example that breaks time reversal but without any spontaneous magnetization. Its order parameter is the spin chirality $E_{123}=\mathbf{S}_1\cdot(\mathbf{S}_2\times\mathbf{S}_3)$, which measures the Berry curvature (effective magnetic field) in the spin texture. Because $E_{123}$ also changes sign under time reversal, the T symmetry is broken by the spontaneous development of the spin chirality. The chiral spin liquid can be considered as a condensation of the skyrmion, which carries the quantum of spin chirality but is spin neutral as a whole.
In fact, within the spin system, one can cook up any order parameter consisting of an odd number of spin operators ($\mathbf{S}_1$ for ferromagnets and $E_{123}$ for chiral spin liquid are both examples of such constructions). Then by ordering such an order parameter, the time reversal symmetry can be broken spontaneously. Beyond the spin system, it is still possible to break time reversal symmetry by the development of orbital angular momentum (loop current) ordering. Since spins and loop currents are both angular momenta, what can be done with spins can also be done with loop currents. Indeed, the spinless fermion system can break the time reversal symmetry using the loop current (note the word "spinless", so there is no spin SU(2) nor spin-orbit coupling involved in the following discussion). Simply consider the spinless fermion $c_i$ on a square lattice coupled to a U(1) gauge field $a_{ij}$; the Hamiltonian reads
$$H=-t\sum_{\langle ij\rangle}e^{ia_{ij}}c_i^\dagger c_j+g\sum_\square \prod_{\langle ij\rangle\in\partial\square}e^{ia_{ij}}+h.c.$$
With zero flux per plaquette and with the filling of 1/2 fermion per site, the system has a Fermi surface and the Fermi level rests on a Van Hove singularity, which is very unstable energetically. The fermions wish to develop any kind of order as long as it helps to open a gap at the Fermi level, such that the Fermi energy can be reduced. It is found that the staggered flux is a solution, in which the U(1) flux $\pm\phi$ goes through the plaquette alternately following the checkerboard pattern. The corresponding gauge connection is $a_{i,i+x}=0, a_{i,i+y}=(\phi/2)(-)^{i_x+i_y}$. One can show that the energy dispersion for the fermion is given by
$$E=\pm\sqrt{\cos^2k_x+\cos^2k_y+2\cos\frac{\phi}{2}\cos k_x\cos k_y},$$
which removes the Van Hove singularity and opens up a pseudogap (like Dirac cones) as long as $\phi\neq 0$. Therefore, driven by the Fermi energy, $\phi$ wishes to grow toward the maximum flux $\pi$. However, due to the $g$ term in the Hamiltonian, the development of staggered flux consumes magnetic energy (the energy of orbital angular momentum), which grows as $\phi^2$ for small $\phi$. The competition between the Fermi energy $t$ and the magnetic energy $g$ will eventually settle on a saddle-point value for $\phi$ which is between 0 and $\pi$, and its specific value can be tuned by the $t/g$ ratio. In terms of fermions, the staggered flux $\phi$ is interpreted as loop currents alternating between clockwise and counterclockwise around each plaquette following the checkerboard pattern. Such a state is also called the orbital antiferromagnet (an antiferromagnetic arrangement of orbital angular momentum) or d-wave density wave (DDW) in the high-Tc context. Here $\phi$ serves as the order parameter of the staggered flux state. Because $\phi$ changes sign under time reversal symmetry (like any other magnetic flux), the spontaneous development of the staggered flux pattern in the spinless fermion system will break the time reversal symmetry. In solid-state materials, such a phenomenon has not been observed due to the too-small $t/g$ ratio, which is unable to drive $\phi$ away from 0. However, considering the fast development of cold atom physics, spontaneous time reversal symmetry breaking in spinless fermion systems may be realized in the future in the optical lattice. | {
"source": [
"https://physics.stackexchange.com/questions/29469",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4864/"
]
} |
29,583 | Is there any speed difference between blue and red light? Is there ever a speed difference? Or do all types of light move at the same speed? | The speed of light in vacuum is constant and does not depend on characteristics of the wave (e.g. its frequency, polarization, etc). In other words, in vacuum blue and red colored light travel at the same speed c. The propagation of light in a medium involves complex interactions between the wave and the material through which it travels. This makes the speed of light through the medium dependent on multiple factors which include the frequency (other example factors being the refractive index of the material, polarization of the wave, its intensity and direction). The phenomenon due to which the speed of a wave depends on its frequency is known as dispersion and is the reason why prisms and water droplets separate white light into a rainbow.
"source": [
"https://physics.stackexchange.com/questions/29583",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8607/"
]
} |
29,696 | I'm not very knowledgeable about physics generally, but know that nothing can escape a black hole's gravitational pull, not even light (making them nearly invisible?). My question is: What has been obtained from direct observation of a black hole to prove their existence? Please note that I'm not questioning the existence of black holes, but am just curious as to how we have come to realize it. | Good question! As you guessed, nothing can escape from a black hole, so it is impossible to see one directly. (Quantum field theory does predict that black holes give off an extremely tiny amount of thermal radiation, but it's so little that we can't detect it from Earth.) Scientists assume that black holes exist based mainly on the predictions of general relativity. In particular, general relativity tells us that if an amount of mass $M$ is contained in a spherical volume of radius $r_s = \frac{2GM}{c^2}$, space will be warped so drastically that all possible paths within that sphere lead inward toward the central point. The surface of that sphere is the event horizon, the boundary of the black hole. Now, you might wonder how we can be so certain that general relativity works for such strong gravitational fields and thus that event horizons exist. Well, we can't directly confirm this, but GR does work for everything else we've tested it against, so there's really no reason to doubt the prediction of event horizons. Having established that event horizons (and, thus, black holes) are allowed by the theory, how do we go about actually detecting one? The original method of detecting a black hole is by looking for very intense X-ray and gamma ray emissions. These come not from the black hole itself, but from the accretion disc, the dust and gas particles that have become trapped in the black hole's gravity well and are circling it in preparation to fall in.
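To get a sense of scale from the $r_s = 2GM/c^2$ formula above (SI values; the 4 million solar masses for the Milky Way's central black hole, Sgr A*, is an approximate figure):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
M_sun = 1.989e30  # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

r_sun = schwarzschild_radius(M_sun)          # ~2.95 km: the Sun is nowhere near this compact
r_sgrA = schwarzschild_radius(4e6 * M_sun)   # ~1.2e10 m, well inside Mercury's orbit
```

So "smaller than $r_s$ for its mass" is an extraordinarily stringent condition, which is why passing it is taken as evidence for a black hole.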
When the particles get very close to the event horizon, they bump into each other very energetically, and those collisions emit high-energy radiation which we can detect. Obviously this only occurs if there is enough gas and dust in the vicinity of the black hole to form an accretion disc. It's possible for other very dense objects to have accretion discs, but based on the properties of the radiation, we can tell how quickly the particles are moving, and thus put some limits on the size and mass of the object they are orbiting. If its radius is less than $r_s$ for its mass, then we assume it's a black hole. More recently, similar observations have been made for stars orbiting the centers of our galaxy and other galaxies. By observing the positions of the stars over time, we can analyze their orbits to determine the characteristics (size and mass) of what they are orbiting. If the size is smaller than $r_s$, then again, general relativity tells us the object is a black hole. | {
"source": [
"https://physics.stackexchange.com/questions/29696",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9502/"
]
} |
29,903 | Tyre companies boast of their wider tires for better grip on the road. Also, the F1 cars have broad tires for better grip. But as far as I know, friction does not depend on the surface area of contact between the materials. Even the formula says so:
$F=\mu mg$ (where $F$ = Force of friction, $\mu$ = coefficient of friction, $m$ = mass, $g$ = gravity) Can anyone please tell me the relation between broad tires and road grip? | It's a surprisingly complicated question. Given your mention of friction, probably the main point is that for a car tyre the friction is not linearly dependent on load. Wikipedia has some information about this here. If you had perfectly smooth surfaces, the friction is actually proportional to the area of contact and independent of the load. This is because friction is an adhesive effect between atoms/molecules on the surfaces that are in contact. However in the real world surfaces are not smooth. If you touch two metal surfaces together the contact is between high spots on the two surfaces, so the area that is in contact is much less than the apparent area of contact. If you increase the load you deform these high spots and broaden them, so the effect of load is to increase the real area of contact. The real area of contact is approximately proportional to the load, and the friction is proportional to the area of contact, so the friction ends up being approximately proportional to the load. However a rubber tyre is a lot softer than metal, and a road is a lot rougher than a metal plate. Even at low loads the tyre deforms to key into the irregularities in the road, so increasing the load has a lesser effect. That's why you get the sub-linear dependence described in the Wikipedia article. But this is only the start of the complexity. If you use a wider tyre the contact patch area isn't necessarily bigger. A wider tyre has a wider shorter contact patch while a narrow tyre has a narrower longer contact patch. The contact patch area depends on the tyre pressure, the deformation of the sidewalls and probably lots of other things I can't think of at the moment. And anyway, if by "grip" you mean grip when cornering, the grip isn't just controlled by the contact patch area.
When a car is cornering, the contact patch is being twisted. This is known as the slip angle. The wider shorter contact patch on a wide tyre has a smaller slip angle and as a result grips better. | {
"source": [
"https://physics.stackexchange.com/questions/29903",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9788/"
]
} |
29,929 | We created a table in my physics class which contained the strength of gravity on different planets and objects in space. At altitude 0 (Earth), the gravitational strength is 100%. On the Moon at altitude 240,000 miles, it's 0.028%. And on the International Space Station at 4,250 miles, the gravitational strength compared to the surface of the Earth is 89%. Here's my question:
Why is the strength of gravity compared to the surface of the Earth 89% even though it appears like the ISS has no gravity since we see astronauts just "floating" around? | The effective gravity inside the ISS is very close to zero, because the station is in free fall. The effective gravity is a combination of gravity and acceleration. (I don't know that "effective gravity" is a commonly used phrase, but it seems to me to be applicable here.) If you're standing on the surface of the Earth, you feel gravity (1g, 9.8 m/s 2 ) because you're not in free fall. Your feet press down against the ground, and the ground presses up against your feet. Inside the ISS, there's a downward gravitational pull of about 0.89g, but the station itself is simultaneously accelerating downward at 0.89g -- because of the gravitational pull. Everyone and everything inside the station experiences the same gravity and acceleration, and the sum is close to zero. Imagine taking the ISS and putting it a mile above the Earth's surface. It would experience about the same 1.0g gravity you have standing on the surface, but in addition the station would accelerate downward at 1.0g (ignoring air resistance). Again, you'll have free fall inside the station, since everything inside it experiences the same gravity and acceleration (at least until it hits the ground). The big difference, of course, is that the ISS never hits the ground. Its horizontal speed means that by the time it's fallen, say, 1 meter, the ground is 1 meter farther down, because the Earth's surface is curved. In effect, the station is perpetually falling, but never getting any closer to the ground. That's what an orbit is. (As Douglas Adams said, the secret of flying is to throw yourself at the ground and miss.) But it's not quite that simple. There's still a little bit of atmosphere even at the height at which the ISS orbits, and that causes some drag. Every now and then they have to re-boost the station, using rockets. 
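As a side check on the 89% figure: Newtonian gravity falls off as the inverse square of the distance from Earth's center, so (reading the question's 4,250 miles as the distance from the center, i.e. roughly 250 miles of altitude over a ~3,959-mile radius):

```python
R_earth = 3959.0                  # mean radius of Earth, miles
r_iss = R_earth + 250.0           # rough ISS orbital radius, miles
ratio = (R_earth / r_iss) ** 2    # inverse-square law for gravitational strength
print(f"{ratio:.0%}")             # about 88%, matching the table's ~89%
```

That pull is very real; what is (nearly) zero inside the station is the effective gravity described above.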
During a re-boost, the station isn't in free fall. The result is, in effect, a very small "gravitational" pull inside the station -- which you can see in a fascinating NASA video about reboosting the station. | {
"source": [
"https://physics.stackexchange.com/questions/29929",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9799/"
]
} |
29,955 | Some (generous) assumptions: We have a spaceship that can reach a reasonable fraction of light speed. The ship is able to withstand the high energies of matter impacting at that speed. Given the amount of matter in inter-stellar space, at high speed, would it encounter enough of it and frequently enough that an aerodynamic shape would significantly reduce its drag (and thus save fuel)? | For the sorts of vehicles we're used to, like cars and aeroplanes, there are two contributions to drag. There's the drag caused by turbulence, and the drag caused by the effort of pushing the air out of the way. The streamlining in cars and aeroplanes is designed to reduce the drag due to turbulence. The effort of pushing the air out of the way is basically down to the cross-sectional area of whatever is pushing its way through the air. Turbulence requires energy transfer between gas molecules, so you can't get turbulence on length scales shorter than the mean free path of the gas molecules. The Wikipedia article on mean free paths helpfully lists values of the mean free path for the sort of gas densities you get in space. The gas density is very variable, ranging from $10^6$ molecules per $\mathrm{cm}^3$ in nebulae to (much) less than one molecule per $\mathrm{cm}^3$ in intergalactic space, but if we take the value of $10^4$ in the table on Wikipedia, the mean free path is $100\,000\ \mathrm{km}$. So unless your spaceship is very big indeed, we can ignore drag due to turbulence. A sidenote: turbulence is extremely important in nebulae, and a quick glance at any of the Hubble pictures of nebulae shows turbulent motion. However the length scale of the turbulence is of the order of light-years, so it's nothing for a spaceship to worry about. So your spaceship designer doesn't have to worry about the sort of streamlining used in aeroplanes, but what about the drag due to hitting gas molecules?
Let's start with a non-relativistic calculation, say at $0.5c$ , and use the density of $10^4\ \mathrm{cm^{-3}}$ I mentioned above, and let's suppose that the gas is atomic hydrogen. If the mass per cubic metre is $\rho$ and you're travelling at a speed $v$ then the mass you hit per second is: $$ m = \rho v $$ Suppose when you hit the gas molecules you accelerate them to match your speed, then the rate of change of momentum is this mass times your speed, $v$ , and the rate of change of momentum is just the force so: $$ F = \rho v^2 $$ An atom density of $10^4\ \mathrm{cm^{-3}}$ is $10^{10}\ \mathrm{m^{-3}}$ or about $1.7 \times 10^{-17}\ \mathrm{kg/m^3}$ and $0.5c$ is $1.5 \times 10^8\ \mathrm{m/s}$ so $F$ is about $0.4\ \mathrm{N/m^2}$ . So unless your spaceship is very big the drag from hitting atoms is insignificant as well, so not only do you not worry about streamlining, you don't have to worry about the cross-section either. However so far I've only talked about non-relativistic speeds, and at relativistic speeds you get two effects: the gas density goes up due to Lorentz contraction the relativistic mass of the hydrogen atoms goes up so it gets increasingly harder to accelerate them to match your speed These two effects add a factor of $\gamma^2$ to the equation for the force: $$ F = \rho v^2 \gamma^2 $$ so if you take $v = 0.999c$ then you get $F$ is about $7.5\ \mathrm{N/m^2}$ , which is still pretty small. However $\gamma$ increases without limit as you approach the speed of light so eventually the drag will be enough to stop you accelerating any more. Incidentally, if you have a friendly university library to hand have a look at Powell, C. (1975) Heating and Drag at Relativistic Speeds. J. British Interplanetary Soc., 28, 546–552. Annoyingly, I have Googled in vain for an online copy. | {
"source": [
"https://physics.stackexchange.com/questions/29955",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1264/"
]
} |
29,956 | I'm interested in learning how to use geometry and topology in physics. Could anyone recommend a book that covers these topics, preferably with some proofs, physical applications, and emphasis on geometrical intuition? I've taken an introductory course in real analysis but no other higher math. | If you want to learn topology wholesale, I would recommend Munkres' book, "Topology", which goes quite far in terms of introductory material. However, in terms of what might be useful for physics I would recommend either: Nakahara's "Geometry, Topology and Physics" Naber's "Topology, Geometry and Gauge Fields: Foundations" Personally, I haven't read much of Nakahara, but I've heard good things about it, although it may presuppose too many concepts. I've read selections of Naber and it seems fairly well written and understandable and starts from first principles, but again, it may not focus as much on the fundamentals, if that's what you're looking for. | {
"source": [
"https://physics.stackexchange.com/questions/29956",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
30,166 | I know very little about special relativity. I never learnt it properly, but every time I read someone saying "If you boost in the $x$-direction, you get such and such", my mind goes blank! I tried understanding it but always get stuck with articles that assume that the reader knows everything. So, what is a Lorentz boost, and how to calculate it? And why does the direction matter? | A Lorentz boost is simply a Lorentz transformation which doesn't involve rotation. For example, a Lorentz boost in the x direction looks like this: \begin{equation}
\left[
\begin{array}{cccc}
\gamma & -\beta \gamma & 0 & 0 \newline
-\beta \gamma & \gamma & 0 & 0 \newline
0 & 0 & 1 & 0 \newline
0 & 0 & 0 & 1
\end{array}
\right]
\end{equation} where coordinates are written as (t, x, y, z) and \begin{equation}
\beta = \frac{v}{c}
\end{equation}
\begin{equation}
\gamma = \frac{1}{\sqrt{1 - \frac{v^2}{c^2}}}
\end{equation} This is a linear transformation which given coordinates of an event in one reference frame allows one to determine the coordinates in a frame of reference moving with respect to the first reference frame at velocity v in the x direction. The ones on the diagonal mean that the transformation does not change the y and z coordinates (i.e. it only affects time t and distance along the x direction). For comparison, Lorentz boost in the y direction looks like this: \begin{equation}
\left[
\begin{array}{cccc}
\gamma & 0 & -\beta \gamma & 0 \newline
0 & 1 & 0 & 0 \newline
-\beta \gamma & 0 & \gamma & 0 \newline
0 & 0 & 0 & 1
\end{array}
\right]
\end{equation} which means that the transformation does not affect the x and z directions (i.e. it only affects time and the x direction). In order to calculate a Lorentz boost for any direction one starts by determining the following values: \begin{equation}
\gamma = \frac{1}{\sqrt{1 - \frac{v_x^2+v_y^2+v_z^2}{c^2}}}
\end{equation}
\begin{equation}
\beta_x = \frac{v_x}{c},
\beta_y = \frac{v_y}{c},
\beta_z = \frac{v_z}{c}
\end{equation} Then the matrix form of the Lorentz boost for velocity $\mathbf{v} = (v_x, v_y, v_z)$ is this: \begin{equation}
\left[
\begin{array}{cccc}
L_{tt} & L_{tx} & L_{ty} & L_{tz} \newline
L_{xt} & L_{xx} & L_{xy} & L_{xz} \newline
L_{yt} & L_{yx} & L_{yy} & L_{yz} \newline
L_{zt} & L_{zx} & L_{zy} & L_{zz} \newline
\end{array}
\right]
\end{equation} where \begin{equation}
L_{tt} = \gamma
\end{equation}
\begin{equation}
L_{ta} = L_{at} = -\beta_a \gamma
\end{equation}
\begin{equation}
L_{ab} = L_{ba} = (\gamma - 1) \frac{\beta_a \beta_b}{\beta_x^2 + \beta_y^2 + \beta_z^2} + \delta_{ab} = (\gamma - 1) \frac{v_a v_b}{v^2} + \delta_{ab}
\end{equation} where $a$ and $b$ are $x$, $y$ or $z$, and $\delta_{ab}$ is the Kronecker delta . | {
"source": [
"https://physics.stackexchange.com/questions/30166",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9897/"
]
} |
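The general boost formula in the answer above lends itself to a direct numerical check. Here is an illustrative Python sketch (not part of the original answer), building the 4×4 matrix in units where c = 1; the function names are my own:

```python
import math

def lorentz_boost(vx, vy, vz):
    """4x4 boost matrix for coordinates ordered (t, x, y, z), units with c = 1.

    Implements L_tt = gamma, L_ta = L_at = -beta_a * gamma, and
    L_ab = (gamma - 1) v_a v_b / v^2 + delta_ab, for a nonzero velocity.
    """
    v2 = vx * vx + vy * vy + vz * vz
    if not 0.0 < v2 < 1.0:
        raise ValueError("need 0 < |v| < 1 in units of c")
    gamma = 1.0 / math.sqrt(1.0 - v2)
    beta = (vx, vy, vz)  # beta_a = v_a / c with c = 1
    L = [[0.0] * 4 for _ in range(4)]
    L[0][0] = gamma
    for a in range(3):
        L[0][a + 1] = L[a + 1][0] = -gamma * beta[a]
        for b in range(3):
            L[a + 1][b + 1] = (gamma - 1.0) * beta[a] * beta[b] / v2 \
                              + (1.0 if a == b else 0.0)
    return L

def transform(L, event):
    """Apply the boost matrix to an event (t, x, y, z)."""
    return [sum(L[i][j] * event[j] for j in range(4)) for i in range(4)]
```

For v = 0.6 along x this reproduces the first matrix in the answer (γ = 1.25, βγ = 0.75), and one can check that the interval t² − x² − y² − z² is preserved by the transformation.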
30,267 | Possible Duplicate: What experiment would disprove string theory? A hypothesis without hard evidence sounds very much like philosophy or religion to me. All of them try to establish a functional model explaining how the world works. In my understanding, string theory is an unprovable theory. What distinguishes string theory from philosophy or religion? | How does string theory differ from philosophy or religion? Like the field of mathematics, string theory differs from philosophy and religion not by its experimental verifiability but by its methods. It is mostly based on mathematical reasoning. It uses basically the same tools that are used in other areas of quantum physics. It has many points of contact with other areas of physics and mathematics. It is studied in physics and math departments. Some of those working on string theories are also highly regarded in other areas of theoretical physics or mathematics. (E.g. Witten got the Fields medal, the highest distinction in mathematics.) Some concepts first discussed in string theory found later use in particle physics. (e.g., AdS/CFT, hep-th/9905111; hep-ph/0702210) It may one day make testable predictions of previously unknown effects, and can then be checked for its validity. | {
"source": [
"https://physics.stackexchange.com/questions/30267",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9934/"
]
} |
30,268 | This comes from a brain teaser but I'm not sure I can solve it: You are in a rowing boat on a lake. A large heavy rock is also in the boat. You heave the rock overboard. It sinks to the bottom of the lake. What happens to the water level in the lake? Does it rise, fall or stay the same? I'd say that the water level drops, because when you drop the stone in the lake, the water level rises according to the volume of the stone, BUT the water level decreases by a volume of water weighing as much as the stone, when you take it off the boat. Is that correct? | This diagram is my attempt to show the situation first when the rock is in the boat and secondly when you've chucked the rock over the side. The mass of the boat is $M$ and the mass of the rock is $m$. The density of water is $\rho_w$ and the density of the rock is $\rho_r$. In the first case Archimedes' principle tells us that the volume of water displaced is: $$ V_{disp1} = \frac{M + m}{\rho_w} $$ In the second case the volume of water displaced is: $$ V_{disp2} = \frac{M}{\rho_w} + \frac{m}{\rho_r} $$ where the second term is just the volume of the rock. If we take the difference of these two we get: $$ V_{disp1} - V_{disp2} = \frac{m}{\rho_w} - \frac{m}{\rho_r} $$ I think it's safe to assume that $\rho_r$ > $\rho_w$, i.e. the rock sinks in water, and in that case $ V_{disp1} - V_{disp2}$ is positive i.e. more water is displaced when the rock is in the boat, so the water level falls when you chuck the rock overboard. | {
"source": [
"https://physics.stackexchange.com/questions/30268",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9935/"
]
} |
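The displaced-volume difference derived in the answer above is easy to evaluate numerically. A minimal sketch (the 50 kg rock and the 2700 kg/m³ granite density are illustrative numbers, not from the original answer):

```python
def displaced_volume_difference(m, rho_w, rho_r):
    """V_disp1 - V_disp2 = m/rho_w - m/rho_r, in SI units (kg, kg/m^3 -> m^3).

    Positive means more water is displaced with the rock in the boat,
    i.e. the lake level falls when the rock is thrown overboard.
    """
    return m / rho_w - m / rho_r

# Illustrative numbers: a 50 kg granite rock in fresh water.
diff = displaced_volume_difference(50.0, 1000.0, 2700.0)
```

Since ρ_r > ρ_w the difference is positive (here about 0.031 m³), confirming that the water level falls.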
30,341 | My four-year-old daughter asked me why paper tends to fall apart when wet, and I wasn't sure. I speculated that the water lubricates the paper fibers so that they can untangle and separate more easily, but I really wasn't sure. | Forever_a_Newcomer is on the right lines, but it's not like water dissolving salt. Paper is mostly made from cellulose fibres (depending on the type there may also be fillers and glazes like clay). Cellulose molecules bristle with hydroxyl (OH) groups, and these form hydrogen bonds with each other. It's these hydrogen bonds that make the individual fibres stiff, and also hold the fibres together. Water is also full of OH bonds, obviously since it's H$_2$O, and the water molecules form hydrogen bonds with the hydroxyl groups on the cellulose, which breaks the hydrogen bonds that cellulose molecules form with each other. There are two results from this: firstly the cellulose fibres in the paper become floppy, because their internal hydrogen bonds are broken, and secondly the fibres separate from each other more easily. The combination of these two effects makes paper easier to tear apart when wet. Most organic materials show similar behaviour. For example cotton is also easier to tear when wet (cotton is also made mostly from cellulose). Also hair becomes floppier and more easily damaged when wet, though the effect is less pronounced because hair contains fewer hydrogen bonds than cellulose fibres. | {
"source": [
"https://physics.stackexchange.com/questions/30341",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9062/"
]
} |
30,366 | When something gets wet, it usually appears darker. This can be observed with soil, sand, cloth, paper, concrete, bricks... What is the reason for this? How does water soaking into the material change its optical properties? | When you look at a surface like sand, bricks, etc, the light you are seeing is reflected by diffuse reflection. With a flat surface like a mirror, light falling on the surface is reflected back at the same angle it hit the surface (specular reflection) and you see a mirror image of the light falling on the surface. However a material like sand is basically lots of small grains of glass, and light is reflected at all the surfaces of the grains. The result is that the light falling on the sand gets reflected back in effectively random directions and the reflected light just looks white. The reflection comes from the refractive index mismatch at the boundary between air $\left(n = 1.0003\right)$ and sand $\left(n \approx 1.54\right)$. Light is reflected from any refractive index change. So suppose you filled the spaces between the sand grains with a liquid of refractive index $1.54$. If you did this there would no longer be a refractive index change when light crossed the boundary between the liquid and the sand, so no light would be reflected. The result would be that the sand/liquid would be transparent. And this is the reason behind the darkening you see when you add water to sand. The refractive index of water $\left(n = 1.33\right)$ is less than sand, so you still get some reflection. However the reflection from a water/sand boundary is a lot less than from an air/sand boundary because the refractive index change is less. The reason that sand gets darker when you add water to it is simply that there is a lot less light reflected. The same applies to brick, cloth, etc. If you look at a lot of materials close up you find they're actually transparent. 
For example cloth is made from cotton or man-made fibres, and if you look at a single fibre under a microscope you'll find you can see through it. The reason the materials are opaque is purely down to reflection at the air/material boundaries. | {
"source": [
"https://physics.stackexchange.com/questions/30366",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4573/"
]
} |
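The answer's claim that a water/sand boundary reflects much less than an air/sand boundary can be quantified with the normal-incidence Fresnel reflectance, R = ((n₁ − n₂)/(n₁ + n₂))². This is a standard textbook formula, not stated in the answer itself; a minimal sketch:

```python
def reflectance(n1, n2):
    """Normal-incidence Fresnel reflectance at a boundary between
    refractive indices n1 and n2 (fraction of intensity reflected)."""
    return ((n1 - n2) / (n1 + n2)) ** 2

r_air_sand = reflectance(1.0003, 1.54)   # dry grain surface
r_water_sand = reflectance(1.33, 1.54)   # wet grain surface
```

Each air/sand surface reflects roughly 4.5% of the light, each water/sand surface only about 0.5%, less by roughly a factor of ten per grain boundary, which is why wet sand looks so much darker. An index-matched liquid (n = 1.54) would give zero reflection, as the answer notes.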
30,574 | I read that Quantum Chromodynamics is a theory with a mass gap. What is a mass gap in layman's terms? Why do some theories have it? Which theories do not have it? Note: I searched for mass gap before asking; all topics about it assume that the reader knows what it is, or define it in an advanced and technical way. So I'm looking for a simple answer if possible. | A mass-gap means that aside from the vacuum (totally empty space), the next higher energy state has an energy which is bigger than zero by a finite amount, not by an arbitrarily small amount. This usually means no massless particles, since massless particles can have arbitrarily low energy. Another way of saying mass-gap which is somewhat more mathematical is that all correlation functions (the statistical versions of quantum fields) are exponentially decaying, so that the field values in imaginary time are independent when you go out far enough away. This is in contrast with a theory with no mass gap, where the correlations go to zero slowly, as a power of the distance between the points. The mathematical definition is that there exists a positive constant A such that the energy of any state obeys: $\langle\psi|H|\psi\rangle \ge \langle 0| H |0\rangle + A$ where $|\psi\rangle$ is any state and $|0\rangle$ is the vacuum state. | {
"source": [
"https://physics.stackexchange.com/questions/30574",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9897/"
]
} |
30,597 | I was reading Brian Greene's "Hidden Reality" and came to the part about Hawking Radiation. Quantum jitters that occur near the event horizon of a black hole, which create both positive-energy particles and negative-energy particles, produce the so-called Hawking radiation. However, I do not understand why only the negative-energy particles are being absorbed into the black hole, while the positive-energy particles shoot outward. Shouldn't there be 50/50 chance that each type of particle is being absorbed by the black hole? Also, the book mentions that a negative-energy particle would appear to an observer inside the black hole as positive. Why? | There are two ways to approach your question. The first is to explain what Brian Greene means, and the second is to point out that the "particles being swallowed" explanation is a metaphor and isn't actually how the calculation is done. I'll attempt both, but I'm outside my comfort zone so if others can expand or correct what follows please jump in! When a pair of virtual particles are produced there isn't a negative energy particle and a positive energy particle. Instead the pair form an entangled system where it's impossible to distinguish between them. This entangled system can interact with the black hole and split, and the interaction guarantees that the emerging particle will be the positive one. NB "positive" and "negative" doesn't mean "particle" and "anti-particle" (for what it does mean see below), and the black hole will radiate equal numbers of particles and anti-particles. Now onto the second bit, and I approach this with trepidation. When you quantise a field you get positive frequency and negative frequency parts. You can sort of think of these as representing particles and anti-particles. How the positive and negative frequencies are defined depends on your choice of vacuum, and in quantum field theory the vacuum is unambiguously defined. 
The problem is that in a curved spacetime, like the region near a black hole, the vacuum changes. That means observers far from the black hole see the vacuum as different from observers near the black hole, and the two observers see different numbers of particles (and antiparticles). A vacuum near the event horizon looks like excess particles to observers far away, and this is the source of the radiation. See the Wikipedia article on the Bogoliubov transformation for more information, though I must admit I found this article largely incomprehensible. Exactly the same maths gives the Unruh effect, i.e. the production of particles in an accelerated frame. The fact that the Unruh effect also produces particles shows that a black hole is not necessary for the radiation, so it can't simply be virtual particles being swallowed. | {
"source": [
"https://physics.stackexchange.com/questions/30597",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9974/"
]
} |
30,732 | If the Higgs field gives mass to particles, and the Higgs boson itself has mass, does this mean there is some kind of self-interaction? Also, does the Higgs Boson have zero rest mass and so move at light-speed? | Most of the popular science TV programmes and magazine articles give entirely the wrong idea about how the Higgs mechanism works. They tend to give the impression that there is a single Higgs boson that (a) causes particle masses and (b) will be found around 125 GeV by the LHC. The mass is generated by the Higgs field. See the Wikipedia article on the Higgs mechanism for details. To (over)simplify, the Higgs field has four degrees of freedom, three of which interact with the W and Z bosons and generate masses. The remaining degree of freedom is what we see as the 125 GeV Higgs boson. In a sense, the Higgs boson that the LHC is about to discover is just what's left over after the Higgs field has done its work. The Higgs boson gets its mass from the Higgs mechanism just like the W and Z bosons: it's not the origin of the particle masses. The Higgs boson doesn't have zero rest mass. A quick footnote: Matt Strassler's blog has an excellent article about this. The Higgs mass can be written as an interaction with the Higgs field just like e.g. the W boson. However Matt Strassler makes the point that this is a coincidence rather than anything fundamental and unlike the W and Z the Higgs boson could have a non-zero mass even if the Higgs field was zero everywhere. | {
"source": [
"https://physics.stackexchange.com/questions/30732",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7993/"
]
} |
30,922 | I'm a bit confused about the difference between these two concepts. According to Wikipedia the Fermi energy and Fermi level are closely related concepts. From my understanding, the Fermi energy is the highest occupied energy level of a system at absolute zero? Is that correct? Then what's the difference between Fermi energy and Fermi level? | If you consider a typical metal the highest energy band (i.e. the conduction band) is partially filled. The conduction band is effectively continuous, so thermal energy can excite electrons within this band, leaving holes lower in the band. At absolute zero there is no thermal energy, so electrons fill the band starting from the bottom and there is a sharp cutoff at the highest occupied energy level. This energy defines the Fermi energy. At finite temperatures there is no sharply defined most energetic electron because thermal energy is continuously exciting electrons within the band. The best you can do is define the energy level with a 50% probability of occupation, and this is the Fermi level. | {
"source": [
"https://physics.stackexchange.com/questions/30922",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10158/"
]
} |
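The "50% probability of occupation" in the answer above can be made concrete with the Fermi-Dirac distribution f(E) = 1/(exp((E − E_F)/kT) + 1). A short sketch (the 300 K temperature and the eV energy scale are illustrative choices, not from the original answer):

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def fermi_dirac(energy_ev, fermi_level_ev, temperature_k):
    """Occupation probability of a single-particle state at the given energy."""
    x = (energy_ev - fermi_level_ev) / (K_B * temperature_k)
    return 1.0 / (math.exp(x) + 1.0)

# At the Fermi level the occupation is exactly 1/2 at any finite temperature.
occupation_at_ef = fermi_dirac(5.0, 5.0, 300.0)
```

States well below the Fermi level are nearly full, states well above are nearly empty, and the step sharpens as T → 0, recovering the sharp cutoff (the Fermi energy) described in the answer.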
31,201 | Might a planet perform figure-8 orbits around two stars? I'm thinking that if the two stars were equal mass (and not orbiting each other) then a planet that were to go right between them would continue in a straight line, with no preference for either star. But since the two stars would in fact be orbiting each other, the system would be rotating and thus there would be a Coriolis preference for one of the stars. Might that preference be made to alternate stars? Another possibility would be if each star were in turn orbited closely by another planet, which would perform three orbits for each orbit of our planet of interest. Then things could be timed where on one pass star A's inner planet were aligned right to pull the planet of interest into an orbit, and on another pass star B's inner planet would be aligned right to pull the planet of interest into an orbit. So we have a system of five bodies, two massive (stars) each orbited by a minor, and one minor performing figure-8s. Is this at least plausible if contrived? | It would be possible, but very unlikely, since the orbits wouldn't be stable. Try to take a look at this visualization of the gravitational potential of a binary star system (from the Wikipedia Roche Lobe entry): If the planet orbits just one of the stars, its orbit will be inside one of the lobes of the thick-lined figure eight at the bottom part, analogous to a ball rolling around inside one of the "bowls" on the 3D-figure. Such an orbit will be stable, just like the Earth's around the sun (bar perturbations from other planets, but let's leave them out for now), and there will be many different orbital energies for which this is true. 
The same goes for an orbit around both stars: the planet will have many different energy levels at which it would simply experience the two stars' gravity combined as the gravity of one single body (and in which case the figure wouldn't apply, since it would be practically unaffected by the two stars orbiting each other). In order to orbit in a figure eight, you have to imagine that the ball has to roll across the ridge between the two indentations in the 3D part of the figure. It is clear that this is possible, but also intuitively clear that this would only be possible for a narrow range of orbital energies (a little less and it would go into one of the holes, a little more and it would simply just orbit them both), and that it would not be a stable orbit. The ball would have to roll in an orbit where it exactly passes the central saddle point at the ridge (L1) in order to stay stable; the tiniest little imperfection will get it perturbed even further away from its ideal trajectory. Your 5-body system could possibly be timed in such a way that it would work, but it would suffer the same fundamental flaw, and as far as I can see, it would also introduce even more sources of instability into the system. This is, by the way, the gravitational potential in the rotating coordinate system, and you can see from the symmetry of the system that the Coriolis preference you mention is not present. A simple symmetry argument should convince you of the same, though: Assume the system is rotating clockwise. This should allegedly give you a Coriolis preference for one of the stars. But if you now let the system continue, while you rotate yourself 180 degrees up/down, it will now be rotating counterclockwise, which should give a Coriolis preference for the other star, which of course cannot be the case, since there is no preferred up/down direction in a system like this. | {
"source": [
"https://physics.stackexchange.com/questions/31201",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4877/"
]
} |
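The saddle-point picture in the answer above can be checked numerically with the effective potential in the co-rotating frame, Φ = −GM₁/r₁ − GM₂/r₂ − ½ω²(x² + y²). The sketch below is illustrative: units with G = 1, equal masses of 0.5 at x = ±0.5 (separation 1, so ω² = 1). It confirms that the midpoint (L1) is a maximum along the line joining the stars but a minimum perpendicular to it, i.e. an unstable saddle:

```python
import math

# Illustrative setup: G = 1, equal masses at (+/-0.5, 0), total mass 1,
# separation 1, so the orbital angular velocity satisfies omega^2 = 1.
M1 = M2 = 0.5
X1, X2 = -0.5, 0.5
OMEGA2 = (M1 + M2) / 1.0 ** 3

def effective_potential(x, y):
    """Co-rotating frame potential: gravity of both stars plus centrifugal term."""
    r1 = math.hypot(x - X1, y)
    r2 = math.hypot(x - X2, y)
    return -M1 / r1 - M2 / r2 - 0.5 * OMEGA2 * (x * x + y * y)
```

Moving from the midpoint toward either star the potential drops (the ball rolls into one of the "bowls"), while moving perpendicular to the axis it rises, which is why a figure-eight trajectory through L1 cannot be stable.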
31,215 | Is it possible to make a photovoltaic cell that would only absorb the invisible part of the electromagnetic spectrum, while letting visible light pass through or bounce off its surface? I guess that if the n-type Si and p-type are added special dopants so the energy gap needed for the electron to escape would be the same as the one provided by the photons of such radiations, it would work. But this is when it gets tricky as visible light stands in the middle of IR and UV light (what you would normally look for in the non-visible spectrum, ignoring the other waves), so if the cell would have a specific band gap that would be satisfied from UV light, it wouldn't be from IR as it has lower energy (provided visible light is let intact). | | {
"source": [
"https://physics.stackexchange.com/questions/31215",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9786/"
]
} |
31,326 | Browsing Quora, I saw the following question with contradicting answers. For the highest voted answer : The bits are represented by certain orientations of magnetic fields
which shouldn't have any effect on gravitational mass. But, another answer contradicts that one: Most importantly, higher information content correlates with a more
energetic configuration and this is true regardless of the particular
type of storage... Now, as per Einstein's most famous formula, energy
is equivalent to mass. Which answer is correct? | I wrote a blog post about this some time ago. The answer is yes, but by a tiny amount that you would never be able to measure: something like $10^{-14}\text{ g}$ (roughly) for a typical ~1TB hard drive. That value comes from the formula for the potential energy of a pair of magnetic dipoles, $$E = \frac{\mu_0}{4\pi}\frac{\mu_1 \mu_2 \cos\theta}{r^3}$$ In my post, I estimate that a hard drive might contain $10^{23}$ electrons total, split into $10^{12}$ magnetic domains which are spaced around $0.1\ \mathrm{\mu m}$ apart. That means the magnetic moment of each of these domains is $10^{11}\mu_B$, with $\mu_B = \frac{e\hbar}{2m_e}$ being the Bohr magneton . If you plug this into the formula above, and multiply by 4 under the assumption that each magnetic domain interacts with 4 nearest neighbors, you wind up finding that the total energy is no more than $5\text{ J}$, depending on the value of $\cos\theta$. That corresponds, via $E = mc^2$, to an equivalent mass of around $10^{-14}\text{ g}$. Admittedly all of these numbers are rough order-of-magnitude estimates, and there are various other effects that contribute little bits to the energy, but any corrections aren't going to shift this by more than a couple of orders of magnitude one way or another. Given that the equivalent mass of the energy stored in the magnets is a full 17 orders of magnitude less than the mass of the hard drive itself, it's safe to say that the difference is undetectable. Incidentally, I also tried out the equivalent calculation for flash memory in another blog post. | {
"source": [
"https://physics.stackexchange.com/questions/31326",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/119/"
]
} |
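The two formulas the answer above relies on, the dipole-dipole interaction energy and E = mc², can be sketched as follows. The ~5 J total is taken directly from the answer's own estimate rather than recomputed, since the answer's inputs are explicitly only order-of-magnitude:

```python
MU0_OVER_4PI = 1e-7        # mu_0 / (4 pi), in T*m/A
C = 299_792_458.0          # speed of light, m/s

def dipole_pair_energy(mu1, mu2, r, cos_theta=1.0):
    """Interaction energy of two magnetic dipoles (the answer's formula), in J."""
    return MU0_OVER_4PI * mu1 * mu2 * cos_theta / r ** 3

def equivalent_mass_grams(energy_j):
    """Mass equivalent of an energy via E = m c^2, in grams."""
    return energy_j / C ** 2 * 1000.0

# The answer's rough total-energy estimate for the whole drive:
mass_g = equivalent_mass_grams(5.0)
```

This gives about 5.6 × 10⁻¹⁴ g, matching the answer's ~10⁻¹⁴ g figure, some 17 orders of magnitude below the mass of the drive itself.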
31,395 | Suppose you reset the parameters of the standard model so that the Higgs field average value is zero in the vacuum, what would happen to standard matter? If the fundamental fermions go from a finite to a zero rest mass, I'm pretty sure that the electrons would fly away from nuclei at the speed of light, leaving positively charged nuclei trying to get away from each other. Looking at the solution for the Hydrogen atom, I don't see how it would be possible to have atoms with zero rest mass electrons. What happens to protons and neutrons? Since only a very small part of the mass of protons and neutrons is the rest mass of the quarks, and since they're flying around in there at relativistic speeds already, and since the nuclear force is so much stronger than the electrical force with an incredible aversion to naked color, would protons and neutrons remain bound assemblages of quarks and virtual gluons? Would they get a little larger? A little less massive? What would happen to nuclei? Would they stay together? If the protons and neutrons hold together and their properties change only some, then I might expect the same of nuclei. Different stable isotopes, different sizes, and different masses, but I would expect there would still be nuclei. Also the W and Z particles go to zero rest mass. What does that do to the electroweak interactions? Does that affect normal stable matter (outside of nuclear decay modes)? Is the weak force no longer weak? What happens to the forces overall? | The analysis of the phase structure of gauge theories is a whole field. Some major breakthroughs were the t'Hooft anomaly matching conditions, the Banks-Zaks theories, Seiberg duality, and Seiberg Witten theory. There is a lot of controversy here, because we don't have experiment or simulation data for most of the space, and there is much more unknown than known. 
The first thing to note is that when the Higgs field vacuum expectation value is zero, the Higgs doesn't touch the low energy physics. You can ignore the Higgs at energy scales lower than its mass, and if this mass is much greater than the proton mass, the result is indistinguishable qualitatively from the Higgsless standard model. So I'll describe the Higgsless standard model. Higgsless standard model Even without the Higgs, electroweak symmetry is broken anyway by QCD condensates. When the Higgs VEV is zero, the W and Z do not become completely massless, although they become much much lighter. The reason is that QCD has a nontrivial vacuum, where quark-antiquark pairs form a q-qbar scalar fluid that breaks the chiral symmetry of the quark fields spontaneously. This phenomenon is robust to the number of light quark flavors, assuming that there aren't so many that you deconfine QCD. QCD is still asymptotically free with 6 flavors, and it should be confining even with 6 flavors of quarks. So I have no compunctions about assuming the confinement mechanism still works with 6 flavors, and all 6 are now like the up and down quark. Assuming the qualitative vacuum structure is analogous to QCD is plausible and consistent with the anomaly conditions, but if someone were to say "no, the vacuum structure of QCD with 6 light quarks is radically different from the vacuum structure of QCD", I wouldn't know that this is wrong with certainty, although it would be strange. Anyway, assuming that QCD with 6 light quarks produces the same sorts of condensates as QCD with 3 light quarks (actually 2 light quarks and a semi-light strange quark), the vacuum will be filled with a fluid which breaks SU(6)xSU(6) chiral rotations of quark fields into the diagonal SU(6) subgroup. The SU(6) is exact in the strong interactions and mass terms; it is only broken by electroweak interactions. 
The electroweak interactions are entirely symmetric between the 3 families, so there is a completely exact SU(3) unbroken to all orders. The SU(6)xSU(6) breaking makes a collection of massless Goldstone bosons, massless pions. The number of massless pions is the number of generators of SU(6), which is 35. Of these, 8 are exactly massless, while the rest get small masses from electroweak interactions (but 3 of the remaining 27 go away into W's and Z's by Higgs mechanism, see below). The 8 massless scalars give long-range nuclear forces, which are an attractive inverse square force between nuclei, in addition to gravity. The hadrons are all nearly exactly symmetric under flavor SU(6) isospin, and exactly symmetric under the SU(3) subgroup. All the strongly interacting particles fall into representations of SU(6) now, and the mass-breaking is by terms which are classified by the embedding of SU(3) into SU(6) defined by rotating pairs of coordinates together into each other. The pions and the nucleons are stable: the pion stability is ensured by being massless, the nucleon stability by approximate baryon number conservation, at least for the lowest energy SU(3) multiplet. The condensate order-parameter involved in breaking the chiral SU(6) symmetry of the quarks is $\sum_i \bar{q}_i q_i$ for $q_i$ an indexed list of the quark fields u,d,c,s,t,b. The order parameter is just like a mass term for the quarks, and I have already diagonalized this order parameter to find the mass states. The important thing about this condensate is that the SU(2) gauge group acts only on the left-handed part of the quark fields, and the left-handed and right handed parts have different U(1) charge. So the condensate breaks the SU(2)xU(1) gauge symmetry. The breaking preserves a certain unbroken U(1) subgroup, which you find by acting with the SU(2) and U(1) generators. 
The left handed quark field has charge 1/6 and makes a doublet, so for the combination $I_3+Y/2$ where I is the SU(2) generator and Y is the U(1) generator, you get a transformation of 2/3 and 1/3 on the top and bottom component, which is exactly the same as $I_z + Y/2$ on the singlets (since they have no I). So this combination isn't chiral, and preserves the vacuum. So the QCD vacuum preserves the ordinary electromagnetic subgroup, which means it makes a Higgs, just like the real Higgs, which breaks the SU(2)xU(1) down to U(1) electromagnetic, with W and Z bosons just like in the standard model. This is not really as much of a coincidence as it appears to be--- a large part of this is due to the fact that QCD condensates in our universe are not charged, so that they don't break electromagnetism, because u-bar and u have opposite electromagnetic charge transformation. This means that a u-bar u condensation leaves electromagnetism unbroken, and it isn't a surprise that it doesn't leave any of the rest of SU(2) and U(1) unbroken, because it's a chiral condensate, and these are chiral gauge transformations. The major difference is that there are 3 separate Higgs-like condensates, one for each family, each with an identical VEV, all completely symmetric with each other under the global exact SU(3) family symmetry. The W's and Z's get a mass from an arbitrary one of these 3, leaving 2 dynamical Higgs-like condensates. The main difference is that these scalar condensates don't necessarily have a simple distinguishable higgs-boson-like oscillation, unlike a fundamental scalar Higgs. The result of this is that the W's and Z's acquire QCD-scale masses, so around 100 MeV for the W's and Z's, as opposed to approximately 100 GeV in the real world. The ratio of the W and Z mass is exactly as in the standard model. Behavior of analogs of ordinary objects The low energy spectrum of QCD is modified drastically, due to the large quark number. 
The 8 massless pions and 24 nearly massless pions (three of the pions are eaten by the W's and Z's to become part of the massive vectors) include all the diquark degrees of freedom that we call the pions, kaons and certain heavy quark mesons. There will still be a single instanton-heavy eta-prime from the instanton-violated chiral U(1) part of U(6)xU(6). There should be 35 rho particles splitting into 8 and 27 and 35 A particles splitting into 8 and 27 effectively gauging the flavor symmetry. The 6 quarks could be thought of as getting a mass from their strong interaction with the Higgs-like condensates, of order some meVs, but since the mass of a quark is defined at short distances, from the propagator, it might be more correct to say the quarks are massless. Some of the particles you see in the data-book, the sigma(660), the f0(980), should disappear (as these are weird--- they might be the product of pion interactions making some extremely unstable bound states, something which wouldn't work with massless pions). The electron and neutrino will be massless except for nonrenormalizable quark-lepton direct coupling, which would couple the electron to the Higgs-like chiral quark condensate. This effect is dimension 6, so the Compton wavelength of the electron will be comparable to the current radius of the visible universe. The neutrino mass will be even more strongly suppressed, so it might as well be exactly massless. The massless electron will lead the electromagnetic coupling (the unHiggsed U(1) left over below the QCD scale) to logarithmically go to zero at large distances, from the log-running of QED screening. So electromagnetism, although it will be the same subgroup of SU(2) and U(1) as in the Higgsed standard model, will be much weaker at macroscopic distances than it is in our universe.
Nuclei should form as usual at short distances, although Isospin is now a nearly exact SU(6) symmetry broken only by electromagnetism, and not by quark mass, and with an exact SU(3) subgroup. So all nuclei come in SU(6) multiplets slightly split into SU(3) multiplets. The strong force will be longer ranged, and without the log-falloff of the electromagnetic force, because the pions quickly run to a free-field theory, since the pion self-interactions are of a sigma-model type. The pion interactions will look similar to gravity in a Newtonian approximation, but scalar mediated, so not obeying the equivalence principle, and disappearing in scattering at velocities comparable to the speed of light. The combination of a long-ranged attractive nuclear force and a log-running screened electromagnetic force might give you nuclear bound galaxies, held at fixed densities by the residual slowly screened electrostatic repulsion. These galaxies will be penetrated by a cloud of massless electrons and positrons constantly pair-producing from the vacuum. | {
"source": [
"https://physics.stackexchange.com/questions/31395",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9541/"
]
} |
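The counting in the answer above (35 Goldstones splitting as 8 + 27, with 3 eaten by the W's and Z's) is easy to sanity-check. A minimal Python sketch of the bookkeeping; the 8 + 27 split and the 3 eaten modes are taken from the answer's own statements, not derived here:

```python
# Goldstone-boson bookkeeping for SU(6)_L x SU(6)_R breaking to diagonal SU(6):
# one Goldstone per broken generator, i.e. dim SU(6) of them.

def adjoint_dim(n):
    """Number of generators of SU(n): n^2 - 1."""
    return n * n - 1

goldstones = adjoint_dim(6)             # 35 pions
exactly_massless = adjoint_dim(3)       # 8 protected by the exact family SU(3)
lifted = goldstones - exactly_massless  # 27 pick up small electroweak masses
eaten = 3                               # absorbed by the W+, W- and Z
nearly_massless = lifted - eaten        # 24 light pions left in the spectrum

print(goldstones, exactly_massless, lifted, nearly_massless)  # 35 8 27 24
```

This reproduces the numbers quoted in the answer: 35 Goldstones, 8 exactly massless, and 24 nearly massless after 3 are eaten.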
31,514 | A neutron outside the nucleus lives for about 15 minutes and decays mainly through weak decays (beta decay). Many other weakly decaying particles decay with lifetimes between $10^{-10}$ and $10^{-12}$ seconds, which is consistent with $\alpha_W \simeq 10^{-6}$. Why does the neutron live so much longer than the others? | NB: I feel like this is a pretty half-assed job, and I apologize for that but having opened my mouth in the comments I guess I have to write something to back it up. We start with Fermi's golden rule for all transitions. The probability of the transition is $$ P_{i\to f} = \frac{2\pi}{\hbar} \left|M_{i,f}\right|^2 \rho $$ where $\rho$ is the density of final states which is proportional to $p^2$ for massive particles. To find the rate 1 for all possible final states we sum over these probabilities incoherently. When the mass difference between the initial and final states is much less than the $W$ mass the matrix element $M_{i,f}$ depends only weakly (hah!) on the particular state and the sum is well approximated by a sum only over the density of states: $$P_\text{decay} \approx \frac{2\pi}{\hbar} \left|M_{}\right|^2 \int_\text{all outcomes} \rho .$$ This sum is collectively called the phase space available to the decay. In these cases the matrix element is also quite small for the reason that Dr BDO discusses. The phase space computation can be quite complicated as it must be taken over all unconstrained momenta of the products. For decays to two body states it turns out to be easy, there is no freedom in the final states except the $4\pi$ angular distribution in the decay frame (there are eight degrees of freedom in two 4-vectors, but 2 masses and the conservation of four momentum account for all of them except the azimuthal and polar angles of one of the particles). The decays that you have asked about are to three body states.
That gives us twelve degrees of freedom less three constraints from masses, four from conservation of 4-momentum, which leaves five. Three of these are the Euler angles describing the orientation of the decay (and a factor of $8\pi^2$ to $\rho$), so our sum is over two non-trivial momenta. The integral looks something like $$
\begin{array}\\
\rho \propto \int p_1^2 \mathrm{d}p_1 \int p_2^2 \mathrm{d}p_2 \int \mathrm{d}(\cos\theta)
&\delta(m_0 - E_1 - E_2-E_3 ) \\
&\delta(E_1^2 - m_1^2 - p_1^2) \\
&\delta(E_2^2 - m_2^2 - p_2^2) \\
&\delta(E_3^2 - m_3^2 - p_3^2) \\
&\delta(\vec{p}_1 + \vec{p}_2 + \vec{p}_3)
\end{array}
$$ which is easier to compute in Monte Carlo than by hand. (BTW--the reason for introducing the seemingly redundant integral over the angle $\theta$ between the momenta of particles 1 and 2 will become evident in a little while). For beta decays the remnant nucleus is very heavy compared to the released energy, which simplifies the above in one limit . In the case of muon decay, it is not unreasonable to treat all the products as ultra-relativistic, and the above reduces to $$
\begin{array}\\
\rho \propto \int p_1^2 \mathrm{d}p_1 \int p_2^2 \mathrm{d}p_2 \int \mathrm{d}(\cos\theta)
&\delta(m_0 - E_1 - E_2 - E_3 ) \\
&\delta(E_1 - p_1) \\
&\delta(E_2 - p_2) \\
&\delta(E_3 - p_3) \\
&\delta(\vec{p}_1 + \vec{p}_2 + \vec{p}_3) \\
= \int p_1^2 \mathrm{d}p_1 \int p_2^2 \mathrm{d}p_2 \int \mathrm{d}(\cos\theta)
&\delta(m_0 - p_1 - p_2 - p_3 ) \\
&\delta(\vec{p}_1 + \vec{p}_2 + \vec{p}_3) \\
= \int p_1^2 \mathrm{d}p_1 \int p_2^2 \mathrm{d}p_2 \int \mathrm{d}(\cos\theta)
&\delta(m_0 - p_1 - p_2 - \left|\vec{p}_1 + \vec{p}_2\right| )\\
= \int p_1^2 \mathrm{d}p_1 \int p_2^2 \mathrm{d}p_2 \int \mathrm{d}(\cos\theta)
&\delta\left(m_0 - p_1 - p_2 - \sqrt{p_1^2 + p_2^2 + 2 p_1 p_2\cos\theta} \right)
\end{array}
$$ The integral over the angle will evaluate to one in some regions and zero in others and as such is equivalent to correctly assigning the limits of the other two integrals, so writing $\delta m = m_0 - m_1 - m_2 - m_3$ we get $$
\begin{array}
\rho
& \propto \int_0^{\delta m/2} p_1^2 \mathrm{d}p_1 \int_0^{\delta m-p_1} p_2^2 \mathrm{d}p_2 \\
& \propto \int_0^{\delta m/2} p_1^2 \mathrm{d}p_1 \left[ \frac{p_2^3}{3}\right]_{p_2=0}^{\delta m-p_1} \\
& \propto \int_0^{\delta m/2} p_1^2 \mathrm{d}p_1 \frac{(\delta m - p_1)^3}{3}
\end{array}
$$ which I am not going to bother finishing but shows that the phase space can vary as a high power of the mass difference (up to the sixth power in this case). 1 The lifetime of the state is inversely proportional to the probability | {
"source": [
"https://physics.stackexchange.com/questions/31514",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1502/"
]
} |
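The leftover integral in the answer above can also be checked numerically. A rough midpoint-rule sketch (mine, not the Monte Carlo the author alludes to) confirming that this massless three-body phase space scales as the sixth power of the mass difference:

```python
def rho(dm, n=200000):
    """Midpoint-rule integral of p1^2 (dm - p1)^3 / 3 over p1 in [0, dm/2]."""
    h = (dm / 2) / n
    total = 0.0
    for i in range(n):
        p1 = (i + 0.5) * h
        total += p1 * p1 * (dm - p1) ** 3 / 3.0
    return total * h

# Doubling the mass difference should scale the phase space by 2**6 = 64:
ratio = rho(2.0) / rho(1.0)
print(ratio)  # ~64
```

Since the integrand scales as $\delta m^5$ over a range of order $\delta m$, the ratio comes out at $2^6 = 64$, matching the "sixth power" claim.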
31,519 | At quantum scale, gravity is the weakest force. It's even negligible in front of the weak force, electromagnetic force and strong force. At macroscopic scale, we see gravity everywhere. It's actually ruling the universe. Electromagnetic force is also everywhere, but it's at rank 2 when it comes to controlling motion of macroscopic bodies. And, there's no luck finding strong force and weak force. How can that be? Is that because gravity only adds up but the others cancelled out too? I am unable to understand how the resultant of the weakest force can be so big. Can you please show it with calculation? | NB: I feel like this is a pretty half-assed job, and I apologize for that but having opened my mouth in the comments I guess I have to write something to back it up. We start with Fermi's golden rule for all transitions. The probability of the transition is $$ P_{i\to f} = \frac{2\pi}{\hbar} \left|M_{i,f}\right|^2 \rho $$ where $\rho$ is the density of final states which is proportional to $p^2$ for massive particles. To find the rate 1 for all possible final states we sum over these probabilities incoherently. When the mass difference between the initial and final states is much less than the $W$ mass the matrix element $M_{i,f}$ depends only weakly (hah!) on the particular state and the sum is well approximated by a sum only over the density of states: $$P_\text{decay} \approx \frac{2\pi}{\hbar} \left|M_{}\right|^2 \int_\text{all outcomes} \rho .$$ This sum is collectively called the phase space available to the decay. In these cases the matrix element is also quite small for the reason that Dr BDO discusses. The phase space computation can be quite complicated as it must be taken over all unconstrained momenta of the products.
For decays to two body states it turns out to be easy, there is no freedom in the final states except the $4\pi$ angular distribution in the decay frame (there are eight degrees of freedom in two 4-vectors, but 2 masses and the conservation of four momentum account for all of them except the azimuthal and polar angles of one of the particles). The decays that you have asked about are to three body states. That gives us twelve degrees of freedom less three constraints from masses, four from conservation of 4-momentum, which leaves five. Three of these are the Euler angles describing the orientation of the decay (and a factor of $8\pi^2$ to $\rho$), so our sum is over two non-trivial momenta. The integral looks something like $$
\begin{array}\\
\rho \propto \int p_1^2 \mathrm{d}p_1 \int p_2^2 \mathrm{d}p_2 \int \mathrm{d}(\cos\theta)
&\delta(m_0 - E_1 - E_2-E_3 ) \\
&\delta(E_1^2 - m_1^2 - p_1^2) \\
&\delta(E_2^2 - m_2^2 - p_2^2) \\
&\delta(E_3^2 - m_3^2 - p_3^2) \\
&\delta(\vec{p}_1 + \vec{p}_2 + \vec{p}_3)
\end{array}
$$ which is easier to compute in Monte Carlo than by hand. (BTW--the reason for introducing the seemingly redundant integral over the angle $\theta$ between the momenta of particles 1 and 2 will become evident in a little while). For beta decays the remnant nucleus is very heavy compared to the released energy, which simplifies the above in one limit . In the case of muon decay, it is not unreasonable to treat all the products as ultra-relativistic, and the above reduces to $$
\begin{array}\\
\rho \propto \int p_1^2 \mathrm{d}p_1 \int p_2^2 \mathrm{d}p_2 \int \mathrm{d}(\cos\theta)
&\delta(m_0 - E_1 - E_2 - E_3 ) \\
&\delta(E_1 - p_1) \\
&\delta(E_2 - p_2) \\
&\delta(E_3 - p_3) \\
&\delta(\vec{p}_1 + \vec{p}_2 + \vec{p}_3) \\
= \int p_1^2 \mathrm{d}p_1 \int p_2^2 \mathrm{d}p_2 \int \mathrm{d}(\cos\theta)
&\delta(m_0 - p_1 - p_2 - p_3 ) \\
&\delta(\vec{p}_1 + \vec{p}_2 + \vec{p}_3) \\
= \int p_1^2 \mathrm{d}p_1 \int p_2^2 \mathrm{d}p_2 \int \mathrm{d}(\cos\theta)
&\delta(m_0 - p_1 - p_2 - \left|\vec{p}_1 + \vec{p}_2\right| )\\
= \int p_1^2 \mathrm{d}p_1 \int p_2^2 \mathrm{d}p_2 \int \mathrm{d}(\cos\theta)
&\delta\left(m_0 - p_1 - p_2 - \sqrt{p_1^2 + p_2^2 + 2 p_1 p_2\cos\theta} \right)
\end{array}
$$ The integral over the angle will evaluate to one in some regions and zero in others and as such is equivalent to correctly assigning the limits of the other two integrals, so writing $\delta m = m_0 - m_1 - m_2 - m_3$ we get $$
\begin{array}
\rho
& \propto \int_0^{\delta m/2} p_1^2 \mathrm{d}p_1 \int_0^{\delta m-p_1} p_2^2 \mathrm{d}p_2 \\
& \propto \int_0^{\delta m/2} p_1^2 \mathrm{d}p_1 \left[ \frac{p_2^3}{3}\right]_{p_2=0}^{\delta m-p_1} \\
& \propto \int_0^{\delta m/2} p_1^2 \mathrm{d}p_1 \frac{(\delta m - p_1)^3}{3}
\end{array}
$$ which I am not going to bother finishing but shows that the phase space can vary as a high power of the mass difference (up to the sixth power in this case). 1 The lifetime of the state is inversely proportional to the probability | {
"source": [
"https://physics.stackexchange.com/questions/31519",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2170/"
]
} |
32,011 | I have a pretty good knowledge of physics, but couldn't deeply understand what a tensor is and why it is so fundamental. | A (rank 2 contravariant) tensor is a vector of vectors. If you have a vector, it's 3 numbers which point in a certain direction. What that means is that they rotate into each other when you do a rotation of coordinates. So that the 3 vector components $V^i$ transform into $$V'^i = A^i_j V^j$$ under a linear transformation of coordinates. A tensor is a vector of 3 vectors that rotate into each other under rotation (and also rotate as vectors--- the order of the two rotation operations is irrelevant). If a vector is $V^i$ where i runs from 1-3 (or 1-4, or from whatever to whatever), the tensor is $T^{ij}$, where the first index labels the vector, and the second index labels the vector component (or vice versa). When you rotate coordinates T transforms as $$ T'^{ij} = A^i_k A^j_l T^{kl} = \sum_{kl} A^i_k A^j_l T^{kl} $$ Where I use the Einstein summation convention that a repeated index is summed over, so that the middle expression really means the sum on the far right. A rank 3 tensor is a vector of rank 2 tensors, a rank four tensor is a vector of rank 3 tensors, so on to arbitrary rank. The notation is $T^{ijkl}$ and so on with as many upper indices as you have a rank. The transformation law is one A for each index, meaning each index transforms separately as a vector. A covariant vector, or covector, is a linear function from vectors to numbers. This is described completely by the coefficients, $U_i$, and the linear function is $$ U_i V^i = \sum_i U_i V^i = U_1 V^1 + U_2 V^2 + U_3 V^3 $$ where the Einstein convention is employed in the first expression, which just means that if the same index name occurs twice, once lower and once upper, you understand that you are supposed to sum over the index, and you say the index is contracted. 
The most general linear function is some linear combination of the three components with some coefficients, so this is the general covector. The transformation law for a covector must be by the inverse matrix $$ U'_i = \bar{A}_i^j U_j $$ Matrix multiplication is simple in the Einstein convention: $$ M^i_j N^j_k = (MN)^i_k $$ And the definition of $\bar{A}$ (the inverse matrix) ensures that the inner product $U_i V^i$ stays the same under a coordinate transformation (you should check this). A rank-2 covariant tensor is a covector of covectors, and so on to arbitrarily high rank. You can also make a rank m,n tensor $T^{i_1 i_2 ... i_m}_{j_1j_2 ... j_n}$, with m upper and n lower indices. Each index transforms separately as a vector or covector according to whether it is up or down. Any lower index may be contracted with any upper index in a tensor product, since this is an invariant operation. This means that the rank m,n tensors can be viewed in many ways: as the most general linear function from m covectors and n vectors into numbers; as the most general linear function from a rank m covariant tensor into a rank n contravariant tensor; as the most general linear function from a rank n contravariant tensor into a rank m covariant tensor; and so on, for a number of interpretations that grows exponentially with the rank. This is the mathematician's preferred definition, which does not emphasize the transformation properties, rather it emphasizes the linear maps involved. The two definitions are identical, but I am happy I learned the physicist definition first. In ordinary Euclidean space in rectangular coordinates, you don't need to distinguish between vectors and covectors, because rotation matrices have an inverse which is their transpose, which means that covectors and vectors transform the same under rotations. This means that you can have only up indices, or only down, it doesn't matter.
You can replace an upper index with a lower index keeping the components unchanged. In a more general situation, the map between vectors and covectors is called a metric tensor $g_{ij}$. This tensor takes a vector V and produces a covector (traditionally written with the same name but with a lower index) $$ V_i = g_{ij} V^j$$ And this allows you to define a notion of length $$ |V|^2 = V_i V^i = g_{ij}V^i V^j $$ This is also a notion of dot-product, which can be extracted from the notion of length as follows: $$ 2 V\cdot U = |V+U|^2 - |V|^2 - |U|^2 = 2 g_{ij} V^i U^j $$ In Euclidean space, the metric tensor is $g_{ij}= \delta_{ij}$, which is the Kronecker delta. It's like the identity matrix, except it's a tensor, not a matrix (a matrix takes vectors to vectors, so it has one upper and one lower index--- note that this means it automatically takes covectors to covectors, this is multiplication of the covector by the transpose matrix in matrix notation, but Einstein notation subsumes and extends matrix notation, so it is best to think of all matrix operations as shorthand for some index contractions). The calculus of tensors is important, because many quantities are naturally vectors of vectors. The stress tensor: If you have a scalar conserved quantity, the current density of the charge is a vector. If you have a vector conserved quantity (like momentum), the current density of momentum is a tensor, called the stress tensor. The tensor of inertia: For rotational motion of a rigid object, the angular velocity is a vector and the angular momentum is a vector which is a linear function of the angular velocity. The linear map between them is called the tensor of inertia. Only for highly symmetric bodies is the tensor proportional to $\delta^i_j$, so that the two always point in the same direction. This is omitted from elementary mechanics courses, because tensors are considered too abstract.
Axial vectors: every axial vector in a parity preserving theory can be thought of as a rank 2 antisymmetric tensor, by mapping with the tensor $\epsilon_{ijk}$. High spin representations: The theory of group representations is incomprehensible without tensors, and is relatively intuitive if you use them. Curvature: the curvature of a manifold is the linear change in a vector when you take it around a closed loop formed by two vectors. It is a linear function of three vectors which produces a vector, and is naturally a rank 1,3 tensor. Metric tensor: this was discussed before. This is the main ingredient of general relativity. Differential forms: these are antisymmetric tensors of rank n, meaning tensors which have the property that $A_{ij} =-A_{ji}$ and the analogous thing for higher rank, where you get a minus sign for each transposition. In general, tensors are the founding tool for group representations, and you need them for all aspects of physics, since symmetry is so central to physics. | {
"source": [
"https://physics.stackexchange.com/questions/32011",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6336/"
]
} |
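The transformation laws in the answer above map directly onto NumPy's `einsum` subscripts. A small sketch (assuming NumPy is available) checking that the contraction $U_i V^i$ is invariant and that a rank-2 tensor transforms with one $A$ per index:

```python
import numpy as np

# A rotation matrix A^i_j and Einstein-convention transforms via einsum.
t = 0.3
A = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

V = np.array([1.0, 2.0, 3.0])    # vector V^j
U = np.array([0.5, -1.0, 2.0])   # covector U_j
T = np.outer(V, V)               # a rank-2 contravariant tensor T^{kl}

V2 = np.einsum('ij,j->i', A, V)                   # V'^i = A^i_j V^j
U2 = np.einsum('ij,j->i', np.linalg.inv(A).T, U)  # U'_i via the inverse matrix
T2 = np.einsum('ik,jl,kl->ij', A, A, T)           # T'^{ij} = A^i_k A^j_l T^{kl}

print(np.allclose(U @ V, U2 @ V2))        # True: the contraction is invariant
print(np.allclose(T2, np.outer(V2, V2)))  # True: T rotates index by index
```

Because the example uses a rotation, the inverse is the transpose, which is the "no need to distinguish vectors and covectors" remark made in the answer.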
32,041 | What is the physical/deeper reason for the second order shift of the ground state energy in time independent perturbation theory to be always down? I know that it follows from the formula quite straightforwardly, but I could not find a deeper reason for it even if I took up examples like the linear Stark effect. Would you please tell me why? | I) The lowering of the ground state energy is a special case of the more general phenomenon of level repulsion (because the excited energy levels by definition must be larger than the ground state energy). II) Level repulsion is not just a quantum phenomenon. It also happens for purely classical systems, e.g. two coupled oscillators, as mentioned in the link. III) Mathematically, the non-zero off-diagonal elements ("the interactions") of an $n\times n$ Hermitian matrix $(H_{ij})_{1\leq i,j \leq n}$ ("the Hamiltonian") cause the real eigenvalues $E_1$, $\ldots$, $E_n$ (counted with multiplicity) to, loosely speaking, spread out more than the distribution of diagonal elements $H_{11}$, $\ldots$, $H_{nn}$. This eigenvalue repulsion effect is e.g. encoded in the Schur-Horn Theorem. IV) Perhaps a more physical understanding of level repulsion is as follows. When considering a perturbation problem $H=H^{(0)}+V$, we would like to find the true energy eigenstates $|1 \rangle$, $\ldots$, $|n \rangle$, with energy eigenvalues $E_1$, $\ldots$, $E_n$. A priori we only know the unperturbed energy eigenstates $|1^{(0)} \rangle$, $\ldots$, $|n^{(0)} \rangle$, with energy eigenvalues $H_{11}$, $\ldots$, $H_{nn}$. (We have for the sake of simplicity assumed that the interaction part $V$ has no diagonal part. This is always possible via reorganizing $H^{(0)} \leftrightarrow V$.) Thus an unperturbed energy eigenstate $$|i^{(0)} \rangle ~=~ \sum_{j=1}^n |j \rangle~ \langle j |i^{(0)} \rangle$$ is really a linear combination of the true energy eigenstates $|1 \rangle$, $\ldots$, $|n \rangle$.
The squared overlaps $\langle j |i^{(0)} \rangle$ have a probabilistic interpretation $$ \sum_{j=1}^n |\langle j |i^{(0)}\rangle|^2
~=~\langle i^{(0)}|i^{(0)}\rangle ~=~1. $$ Hence the unperturbed energy eigenvalue $$H_{ii}~=~\langle i^{(0)}|H|i^{(0)}\rangle ~=~\sum_{j=1}^n E_j|\langle j |i^{(0)}\rangle|^2 $$ is a quantum average of the true energy eigenvalues $E_1$, $\ldots$, $E_n$ of the system. Intuitively, the
distribution of unperturbed energy eigenvalues $H_{11}$, $\ldots$, $H_{nn}$, must therefore, loosely speaking, be closer to each other than the distribution of true energy eigenvalues $E_1$, $\ldots$, $E_n$. V) Finally, let us mention that eigenvalue repulsion plays an important role in random matrix theory . Integrating out "angular" d.o.f. leads to a Vandermonde measure factor $$ \prod_{1\leq i<j \leq n} |E_j-E_i|^{\beta}, $$ so that the partition function ${\cal Z}$ favors that the eigenvalues $E_1$, $\ldots$, $E_n$ are different. Here the power $\beta$ is traditionally $1$, $2$, or $4$, depending on the random matrix ensemble. For Hermitian matrices $\beta=2$. See also this post. | {
"source": [
"https://physics.stackexchange.com/questions/32041",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8440/"
]
} |
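Point III in the answer above can be checked in a few lines: couple two diagonal levels with an off-diagonal element and the eigenvalues spread beyond the diagonal entries. A minimal NumPy sketch with arbitrary illustrative numbers:

```python
import numpy as np

# Two "unperturbed" levels on the diagonal plus an off-diagonal interaction v.
E1, E2, v = 1.0, 2.0, 0.4
H = np.array([[E1, v],
              [v,  E2]])

lo, hi = np.linalg.eigvalsh(H)  # eigvalsh returns eigenvalues in ascending order
print(lo, hi)  # the true levels repel: lo < E1 and hi > E2
```

The trace is unchanged, so pushing the upper level up necessarily pushes the lower level down, which is exactly why the second-order shift of the ground state is negative.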
32,122 | Can anyone please provide an intuitive explanation of why a phase shift of 180 degrees occurs in the electric field of an EM wave, when reflected from an optically denser medium? I tried searching for it but everywhere the result is just used. The reason behind it is never specified. | This is a general property of waves. If you have waves reflecting off a clamped point (like waves running on a string that you pinch hard at one point), the waves get phase inverted. The reason is the principle of superposition and the condition that the amplitude at the clamped point is zero. The sum of the reflected and transmitted wave must be the amplitude of oscillation at all points, so that the reflected wave must be phase inverted to cancel the incoming wave. This property is continuous with the behavior of waves going from a less massive string to a more massive string. The reflection in this case has opposite phase, because the more massive string doesn't respond as quickly to the tension force, and the amplitude of oscillation at the contact point is less than the amplitude of the incoming wave. This means (by superposition) that the reflected wave must cancel part of the incoming wave, and it is phase inverted. When a wave goes from a more massive string to a less massive string, the less massive string responds with less force, so that the derivative at the oscillating end is flatter than it should be. This means that the reflected wave is reflected in phase with the incoming wave, so that the spatial derivative of the wave is cancelled, not the amplitude reduced. Optical materials of high density are analogous to strings with a higher density, hence the name. If you go into a material with low speed of light, the time derivative term in the wave-equation is suppressed, so that the field responds more sluggishly, the same way that a massive material responds more sluggishly to tension pulls.
Since the electric field response in these materials is reduced, the reflected wave is phase inverted to make the sum at the surface smaller, as is appropriate to match the transmitted wave. | {
"source": [
"https://physics.stackexchange.com/questions/32122",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
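The string picture in the answer above is captured quantitatively by the standard Fresnel amplitude coefficient at normal incidence, whose sign encodes the 180-degree flip. A small sketch; the refractive indices are illustrative values, and the formula is the usual normal-incidence one rather than anything specific to this answer:

```python
def r_normal(n1, n2):
    """Fresnel amplitude reflection coefficient at normal incidence,
    for light going from index n1 into index n2."""
    return (n1 - n2) / (n1 + n2)

print(r_normal(1.0, 1.5))  # air -> glass: -0.2, the minus sign is the 180-degree flip
print(r_normal(1.5, 1.0))  # glass -> air: +0.2, no phase inversion
```

Reflection into the denser medium (larger n, lower wave speed) gives a negative amplitude, i.e. a phase-inverted wave, just as for the string clamped or loaded by a heavier string.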
32,203 | So Gerard 't Hooft has a brand new paper (thanks to Mitchell Porter for making me aware of it), so this is somewhat of an expansion to the question I posed on this site a month or so ago regarding 't Hooft's work. Now he has taken it quite a big step further: http://arxiv.org/abs/1207.3612 Does anyone here consider the ideas put forth in this paper plausible?
And if not, could you explain exactly why not? | I only see these writings now, since usually I ignore blogs. For good reason, because here also, the commentaries are written in haste, long before their authors really took the time to think. My claim is simple, as explained umpteen times in my papers: I construct REAL quantum mechanics out of CA like models. I DO have problems of a mathematical nature, but these are infinitely more subtle than what you people are complaining about. These mathematical problems are the reason why I try to phrase things with care, trying not to overstate my case. The claim is that the difficulties that are still there have nothing to do with Bell's inequalities, or the psychological problems people have with entangled states. Even in any REAL QM theory, once you have a basis of states in which the evolution law is a permutator, the complex phases of the states in this basis cease to have any physical significance. If you limit your measurements to measuring which of these basis states you are in, the amplitudes are all you need, so we can choose the phases at will. Assuming that such CA models might describe the real world amounts to assume that measurements of the CA are all you need to find out what happens in the macro world. Indeed, the models I look at have so much internal structure that it is highly unlikely that you would need to measure anything more. I don't think one has to worry that the needle of some measuring device would not be big enough to affect any of the CA modes. If it does, then that's all I need. So, in the CA, the phases don't matter. However, you CAN define operators, as many as you like. This, I found, one has to do. Think of the evolution operator. It is a permutator. A most useful thing to do mathematically, is to investigate how eigenstates behave. 
Indeed, in the real world we only look at states where the energy (of particles, atoms and the like) is much below the Planck energy, so indeed, in practice, we select out states that are close to the eigenstates of the evolution operator, or equivalently, the Hamiltonian. All I suggest is, well, let's look at such states. How do they evolve? Well, because they are eigenstates, yes, they now do contain phases. Manmade ones, but that's alright. As soon as you consider SUCH states, relative phases, superposition, and everything else quantum, suddenly becomes relevant. Just like in the real world. In fact, operators are extremely useful to construct large scale solutions of cellular automata, as I demonstrated (for instance using BCH). The proper thing to do mathematically, is to arrange the solutions in the form of templates, whose superpositions form the complete set of solutions of the system you are investigating. My theory is that electrons, photons, everything we are used to in quantum theory, are nothing but templates. Now if these automata are too chaotic at too tiny Planckian scales, then working with them becomes awkward, and this is why I began to look at systems where the small scale structure, to some extent, is integrable. That works in 1+1 dimensions because you have right movers and left movers. And now it so happens that this works fantastically well in string theory, which has 1+1 dimensional underlying math. Maybe die-hard string theorists are not interested, amused or surprised, but I am. If you just take the world sheet of the string, you can make all of qm disappear; if you arrange the target space variables carefully, you find that it all matches if this target space takes the form of a lattice with lattice mesh length equal to 2 pi times square root of alphaprime. Yes, you may attack me with Bell's inequalities. They are puzzling, aren't they? 
But please remember that, as in all no-go theorems that we have seen in physics, their weakest part is on page one, line one, the assumptions. As became clear in my CA work, there is a large redundancy in the definition of the phases of wave functions. When people describe a physical experiment they usually assume they know the phases. So, in handling an experiment concerning Bells's inequalities, it is taken for granted (sorry: assumed) that if you have measured one operator, say the z component of a spin, then an other operator, say the x component, will have some value if that had been measured instead. That's totally wrong. In terms of the underlying CA variables, there are no measurable non-commuting operators. There are only the templates, whose phases are arbitrary. If you aren't able to measure the x component (of a spin) because you did measure the z component, then there is no x component, because the phases were ill-defined. Still, you can ask what actually happens when an Aspect like experiment is done. In arguments about this, I sometimes invoke "super determinism", which states that, if you want to change your mind about what to measure, because you have "free will", then this change of mind always has its roots in the past, all the way to time -> minus infinity, whether you like it or not. The cellular automaton states cannot be the same as in the other case where you did not change your mind. Some of the templates you use have to be chosen different, and so the arbitrary phases cannot be ignored. But if you don't buy anything of the above, the simple straight argument is that I construct real honest-to-god quantum mechanics. Since that ignores Bell's inequalities, that should put the argument to an end. They are violated. | {
"source": [
"https://physics.stackexchange.com/questions/32203",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9822/"
]
} |
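The claim in the answer above, that phases carry no physics once the evolution is a permutator, can be illustrated directly: attach arbitrary phases to the basis amplitudes and the outcome probabilities never change. A toy sketch (my own construction, not from the paper):

```python
import numpy as np

# A permutation "evolution operator" on three basis states (a unitary matrix).
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=complex)

psi = np.array([0.6, 0.8, 0.0], dtype=complex)     # amplitudes in the CA basis
phases = np.exp(1j * np.array([0.0, 1.3, -2.1]))   # arbitrary phase choices

p_plain = np.abs(np.linalg.matrix_power(P, 5) @ psi) ** 2
p_phased = np.abs(np.linalg.matrix_power(P, 5) @ (phases * psi)) ** 2
print(np.allclose(p_plain, p_phased))  # True: probabilities ignore the phases
```

Because a permutation only shuffles components, $|(\hat P^n\psi)_i|^2$ depends on the moduli of the amplitudes alone, which is the sense in which the basis-state phases are "manmade" redundancies.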
32,269 | So this has puzzled me for many a year... I still am no closer to coming to a conclusion, after many arguments that is. I don't think it can, others 100% think it will. If you have a plane trying to take off whilst on a tread mill which will run at the same speed as whatever the planes tires rotation speed is will it take off? [edited to be more clear] The question is simple. Will a plane take off if you put this plane onto a treadmill that will match whatever speed the plane wheels are moving at. So the plane should not be able to move. This is a hypothetical situation of course. But I am very interested. | Idealizing the plane's wheels as frictionless, the thrust from the propeller accelerates the plane through the air regardless of the treadmill. The thrust comes from the prop, and the wheels, being frictionless, do not hold the plane back in any way. If the treadmill is too short, the plane just runs off the end of it and then continues rolling towards take off. If the treadmill is long enough for a normal takeoff roll, the plane accelerates through the air and rotates off of the treadmill. UPDATE: Don't take Alfred's word for it. Mythbusters has actually done the experiment. UPDATE 2: I've been thinking about how the problem is posed (as it stands now) and it occurred to me that the constraint "run at the same speed as whatever the planes tyres rotation speed" actually means run such that the plane doesn't move with respect to the ground. Consider a wheel of radius $R$ on a treadmill. The treadmill surface has a linear speed $v_T$ to the right. The center of the wheel has a linear speed $v_P$ to the left. The CCW angular speed of the wheel is: $\omega = \dfrac{v_T + v_P}{R}$ If "run at the same speed as whatever the planes tyres rotation speed" means $\omega = \dfrac{v_T}{R}$, then the constraint requires $v_P = 0$. That is, the question, as posed, is: If the treadmill is run such that the plane doesn't move, will the plane take off?
Obviously, the answer is no. The plane must move to take off. Looking at mwengler's long answer, we see what is happening. The rotational speed of the tires and treadmill is not the key; it is the acceleration of the treadmill that imparts a force on the wheel axles (ignoring friction for simplicity here). So, it is in fact the case that it is possible, in principle (I don't think it is possible in practice though), to control the treadmill in such a way that it imparts a holding force on the plane, preventing it from moving. But, once again, this force is not proportional to the wheel's rotational speed, but to the wheel's angular acceleration (note that in the idealized case of massless wheels, it isn't even possible in principle, as the lower the moment of inertia of the wheels, the greater the required angular acceleration). | {
"source": [
"https://physics.stackexchange.com/questions/32269",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10657/"
]
} |
32,402 | I have this children's rubber ball which glows in the dark after it's exposed to light. I "charge" it with a flashlight, then play with my dogs at night. I thought to try a very intense green laser, and see how the ball reacted. The laser light had no effect on the ball's ability to glow. So I'm left wondering, why does laser light not cause luminescent (maybe not the right word) materials to glow? EDIT In Response to Answer. So I tried a little modification. I tried exciting the ball with three different light sources: a "super bright" red LED, a very very "super bright" white LED and a blue LED of unknown specs (no package, bottom of my kit). I held the ball to each light source (driven with the same current) for the same approximate amount of time and compared the results. The red LED had no effect. The white had a bit of an effect, enough to see dimly in normal room lighting. The blue LED had a significant effect, causing a bright glow. This was interesting as the blue LED was the least bright visually. Yay science! | The ball is probably glowing because it has strontium aluminate in it, which produces light by phosphorescence. It's a characteristic of phosphorescence that the light emission is quite long lived. This happens because when you shine light onto a phosphor the light promotes it into an excited state that subsequently decays by interactions with the solid lattice into a long-lived metastable state. It's this metastable state that decays slowly and emits light as it does so. Because of this mechanism the light emitted is always a longer wavelength/lower energy than the light you need to excite the phosphor. You don't say what colour light the ball emits, but if it uses strontium aluminate it will be a slightly bluey-green colour. The light needed to excite it has to be bluer than the light it emits, and that's why your green laser won't make the ball glow. It has too long a wavelength.
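To put rough numbers on the "too long a wavelength" point, a photon's energy is $E = hc/\lambda$. The wavelengths below are illustrative guesses (532 nm for a typical green laser pointer, roughly 505 nm for the emission band, roughly 470 nm for a blue LED), not values from the thread:

```python
# Photon energy E = h*c/lambda in eV; the excitation photon must be more
# energetic (bluer) than the emitted one.
h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

def photon_energy_eV(wavelength_nm):
    return h * c / (wavelength_nm * 1e-9) / eV

for name, nm in [("green laser", 532), ("emission band", 505), ("blue LED", 470)]:
    print(f"{name}: {photon_energy_eV(nm):.2f} eV")
# green laser ~2.33 eV < emission ~2.46 eV < blue LED ~2.64 eV
```

On these numbers the green photons simply don't carry enough energy to pump the phosphor, while blue photons do, which matches the LED experiment described in the question.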
White light from a torch won't contain that much blue light (unless it has a xenon bulb), but the green laser won't have any blue light at all because lasers are, of course, monochromatic. It's also possible that the ball contains zinc sulphide. This isn't as good a phosphor as strontium aluminate, but it's a lot cheaper. If the ball does contain zinc sulphide then the situation is a little more confused, because the colour of the phosphorescence is determined by metal additives such as silver and copper. However, the basic principle still applies: the light needed to excite the phosphor has to be nearer the blue end of the spectrum than the light emitted. Anyhow, thanks (and +1 :-) for a fascinating question, and I wish I could give you another +1 for actually doing the experiment with the laser. If you have some time on your hands, see if you can find some coloured gels and try shining your torch onto the ball through the gels. You should find that the red gel won't cause any glow while the blue gel will, and at some point in the spectrum there will be a colour where the glow starts. | {
"source": [
"https://physics.stackexchange.com/questions/32402",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7913/"
]
} |
32,414 | What is the most energy-efficient way to crush the hardest bedrock on Earth, assuming it is impossible to use the chain-reaction energy from that bedrock? How much energy is needed? | | {
"source": [
"https://physics.stackexchange.com/questions/32414",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8885/"
]
} |
32,422 | I am trying to understand how complex numbers made their way into QM. Can we have a theory of the same physics without complex numbers? If so, is the theory using complex numbers easier? | The nature of complex numbers in QM turned up in a recent discussion, and I got called a stupid hack for questioning their relevance. Mainly for therapeutic reasons, I wrote up my take on the issue: On the Role of Complex Numbers in Quantum Mechanics
Motivation
It has been claimed that one of the defining characteristics that separate the quantum world from the classical one is the use of complex numbers. It's dogma, and there's some truth to it, but it's not the whole story: While complex numbers necessarily turn up as first-class citizens of the quantum world, I'll argue that our old friend the reals shouldn't be underestimated.
A bird's eye view of quantum mechanics
In the algebraic formulation, we have a set of observables of a quantum system that comes with the structure of a real vector space. The states of our system can be realized as normalized positive (thus necessarily real) linear functionals on that space. In the wave-function formulation, the Schrödinger equation is manifestly complex and acts on complex-valued functions. However, it is written in terms of ordinary partial derivatives of real variables and separates into two coupled real equations - the continuity equation for the probability amplitude and a Hamilton-Jacobi-type equation for the phase angle. The manifestly real model of 2-state quantum systems is well known.
Complex and Real Algebraic Formulation
Let's take a look at how we end up with complex numbers in the algebraic formulation: We complexify the space of observables and make it into a $C^*$-algebra. We then go ahead and represent it by linear operators on a complex Hilbert space (GNS construction). Pure states end up as complex rays, mixed ones as density operators.
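In one complex dimension the dictionary between the complex and real descriptions is completely explicit. The sketch below (my own conventions, not part of the original answer) realifies $\mathbb{C}$ and splits the Hermitian inner product into a symmetric metric piece and an antisymmetric symplectic piece:

```python
import numpy as np

# Realify C: z = a + i b  <->  (a, b); multiplication by i becomes J.
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])            # J @ (a, b) = (-b, a), i.e. i*(a + i b)
assert np.allclose(J @ J, -np.eye(2))  # J is a complex structure: J^2 = -1

def realify(z):
    return np.array([z.real, z.imag])

u, v = 0.3 + 0.7j, -0.2 + 0.4j
ur, vr = realify(u), realify(v)

# The Hermitian product <u, v> = conj(u) * v splits into a symmetric piece g
# (a Riemannian metric, probabilities) and an antisymmetric piece omega
# (a symplectic form, dynamics), with J tying the two together:
g     = ur @ vr         # Re <u, v>
omega = vr @ (J @ ur)   # Im <u, v>
assert np.isclose(np.conj(u) * v, g + 1j * omega)
```

The matrix $J$ playing the role of $i$ is exactly the complex structure that the 2-out-of-3 discussion below turns on.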
However, that's not the only way to do it: We can let the real space be real and endow it with the structure of a Lie-Jordan algebra. We then go ahead and represent it by linear operators on a real Hilbert space (Hilbert-Schmidt construction). Both pure and mixed states will end up as real rays. While the pure ones are necessarily unique, the mixed ones in general are not.
The Reason for Complexity
Even in manifestly real formulations, the complex structure is still there, but in disguise: There's a 2-out-of-3 property connecting the unitary group $U(n)$ with the orthogonal group $O(2n)$, the symplectic group $Sp(2n,\mathbb R)$ and the complex general linear group $GL(n,\mathbb C)$: If two of the last three are present and compatible, you'll get the third one for free. An example of this is the Lie bracket and Jordan product: Together with a compatibility condition, these are enough to reconstruct the associative product of the $C^*$-algebra. Another instance of this is the Kähler structure of the projective complex Hilbert space taken as a real manifold, which is what you end up with when you remove the gauge freedom from your representation of pure states: It comes with a symplectic product which specifies the dynamics via Hamiltonian vector fields, and a Riemannian metric that gives you probabilities. Make them compatible and you'll get an implicitly-defined almost-complex structure. Quantum mechanics is unitary, with the symplectic structure being responsible for the dynamics, the orthogonal structure being responsible for probabilities and the complex structure connecting these two. It can be realized on both real and complex spaces in reasonably natural ways, but all structure is necessarily present, even if not manifestly so.
Conclusion
Is the preference for complex spaces just a historical accident? Not really.
The complex formulation is a simplification, as structure gets pushed down into the scalars of our theory, and there's a certain elegance to unifying two real structures into a single complex one. On the other hand, one could argue that it doesn't make sense to mix structures responsible for distinct features of our theory (dynamics and probabilities), or that introducing un-observables to our algebra is a design smell, as preferably we should only use interior operations. While we'll probably keep doing quantum mechanics in terms of complex realizations, one should keep in mind that the theory can be made manifestly real. This fact shouldn't really surprise anyone who has taken the bird's eye view instead of just looking through the blinders of specific formalisms. | {
"source": [
"https://physics.stackexchange.com/questions/32422",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9604/"
]
} |
32,685 | Spacetime of special relativity is frequently illustrated with its spatial part reduced to one or two spatial dimensions (with a light sector or cone, respectively). Taken literally, is it possible for $2+1$ or $1+1$ (flat) spacetime dimensions to accommodate Maxwell's equations and their particular solution - electromagnetic radiation (light)? | No, because the polarization of the electromagnetic field must be perpendicular to the direction of motion of the light, and there aren't enough directions to enforce this condition. So in 1d, a gauge theory becomes nonpropagating; there are no photons, you just get a long-range Coulomb force that is constant with distance. In the 1960s, Schwinger analyzed QED in 1+1 d (the Schwinger model) and showed that electrons are confined with positrons to make positronium mesons. A much more elaborate model was solved by 't Hooft (the 't Hooft model, the nonabelian Schwinger model), which is a model of a confining meson spectrum. EDIT: 2+1 Dimensions Yes, light exists in 2+1 dimensions, and there is no major qualitative difference with 3+1 dimensions. I thought you wanted 1+1, where it's interesting. | {
"source": [
"https://physics.stackexchange.com/questions/32685",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7786/"
]
} |
32,818 | I'm thinking of applying to do a PhD in String Theory, starting in September 2013. I'm gradually learning more about the subject through external reading, but still most papers are impenetrable! Could anyone give me a description of the areas of current String Theory research that would be accessible for a doctoral student? Are there any particular open problems that might be worth getting to know? Also any general advice on how to choose a PhD topic would be most welcome! Many thanks in advance! | The papers are impenetrable because you are lacking the background, and it is carefully kept hidden from students, so that only the ones that read the old literature can enter the field. The only way to learn it is semi-historical (meaning historical but with hindsight, so you don't learn the stuff that's bogus). Work through at least a good chunk of Green/Schwarz/Witten, Polchinski, and Polyakov's Gauge Fields and Strings, without thinking, just to learn what the calculation methods are. Afterwards or simultaneously, read the 1960s articles on the bootstrap to understand where all this is coming from, so that you understand the philosophy and founding notions. The original papers are absolutely essential if you want to understand the subject in any nonsuperficial way. There are no substitutes (except review articles from the same era). The bootstrap becomes taboo in 1976, so nothing from this point on will be pedagogically or philosophically correct or persuasive (except to the converted). The later literature has huge gaps in explanation; the lacunae correspond precisely to the bootstrap ideas that are left out. You can read a superficial description here: What are bootstraps? and in one of my questions: Are There Strings that aren't Chew-ish?. Without bootstraps, you won't really understand why strings interact by topology, or why they are unique (or even why they are unitary, although Polchinski has a discussion of this).
The Dolen-Horn-Schmid paper on finite-energy sum rules is extremely interesting collider physics by itself, but is dismissed in GSW by calling it an "accident"! The literature I found most helpful for unlocking the mysteries of the 1960s was Gribov's 1967 classic "Theory of Complex Angular Momentum" (this is the Rosetta stone for all this literature, although Landau's QM has a Regge theory section which helps too), Veneziano's string review of 1974 (or '75), Scherk's review of strings (and generally all his articles), and Mandelstam's review of string theory (also mid-70s; he's like Bohr and Kramers put together), and then the articles in Superstrings I/II become clear. Then you can follow by reading Witten and reading whoever Witten cites (this is sort of considered standard practice). The articles of 't Hooft on holography from '86-'91, and Susskind's from '90-'97, are pretty much self-contained and require no elaborate string machinery, but they make you understand why the theory looks the way it does. They allow you to understand the physical leap in '97 with Maldacena's work and AdS/CFT. The general rule in string theory is that the mathematics is straightforward (although difficult), but the physics can be completely opaque. You can learn to calculate, but without the historical literature, you won't know why you're getting the right answer or what the correct generalizations are.
Open questions
There are so many, it's impossible to list them. You won't get a good one from an academic advisor; you probably want to find your own, and quickly. If you read the original and 1990s literature, you will see a million open problems, although in the modern literature (past 2000) you will see only one really: What is the correct formulation of the KLT relations? It is becoming obvious that N=8 SUGRA is finite, and nobody has a proof. It's coming soon, and this is what so many of the best people are working on.
This is more mathematics than physics, but it's important in understanding what the perturbative structure of strings is. This is the major concern right now, because it relates string theory to perturbation-theory calculations that are important for the LHC. The questions in traditional string theory are unfortunately affected by large-extra-dimension disease. This was the free-for-all that ended the second string revolution and led it to degenerate into fantasy (see here: ) Here are some other open problems. I will try to avoid repeating my previous list (What is currently incomplete in M-theory?): What is the precise swampland volume field-number sum rule? There should be a swampland constraint on the total number of fields from some measure of the volume of the compactification. If you have a tiny compactification, there is a central charge constraint and modular invariance that picks out the gauge group size in heterotic and type I strings. You can't make too much low-energy stuff without violating consistency. As the dimensions get larger, you can stuff more crap in and get more low-energy matter. But there is a heuristic that the more matter you get out, the bigger the compactification. But there is no precise relation known. What is it? How much stuff do we expect in total in our universe, including dark matter and the Higgs sector? How do you prove the mass-charge inequality? This is a spectral constraint on string vacua that tells you the lightest charged particle must be lighter than its charge (in natural units). There are heuristic arguments that persuade one that it must be true, but it should be provable in any holographic theory. Yet the proof is just out of reach. Simeon Hellerman has a paper on mass bounds for neutral black holes which is a large step forward. What's going on with extremal black holes? If you make a stack of D-branes, and pull one away, there is no restoring force. For appropriate branes, this is described by an N=2 gauge theory with a modulus.
If you let the brane slowly collide with the others, it makes oscillations in the field theory, and the collision is described by a geometry analogous to Atiyah-Hitchin monopoles. But this is a black-hole collision model now. The point is that it is classically reversible and leads to oscillations, the brane bouncing in and out. But you naively expect that in a true black-hole collision this leads to irreversible absorption. What are the near-extremal black holes doing classically? Are they irreversible? Are they reversible? I think they're reversible. This is related to the question of recovering the classical geometry from AdS/CFT. It is very hard to take the classical local limit of the correspondence, where it is supposed to recover supergravity. You know it works, but this doesn't mean you can trace what happens to classical matter starting far away and going into a stack of branes. Does it come out again (if they are reversible then it must)? But how? Are there calculable non-SUSY vacua? There is a paper on SO(16)xSO(16) heterotic strings (Alvarez-Gaume/Ginsparg/Moore/Vafa), which is quickly reviewed in Polchinski. This model is notable because it is not SUSY, but it has zero vacuum energy at zero coupling. This is a relic of the fact that it is a projection of a SUSY model. Are there other such projections? What's the general idea here? There are also a lot of unorthodox vacua found in the 1980s that were swept under the rug, because people wanted strings to be more unique than they are. One should read this work (although I read only one or two papers, just enough to know these exist). Anyway, you will get better ideas than these just from reading, but to do this, you need to quickly go over the old literature, and this only takes a few months if one knows where to look. The key for me was Gribov. | {
"source": [
"https://physics.stackexchange.com/questions/32818",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10833/"
]
} |
32,989 | There is this puzzling thing that is called the Mpemba effect: paradoxically, warm (35°C) water freezes faster than cold (5°C) water. As a physicist, I've been asked about it several times already. And I have no definite answer to that, apart from the standard: "there are many things that can influence it". So, does anyone know about the status or progress on that effect? Any recent reviews, publications or other references? | One boring Monday morning in the lab a group of us did the experiment, and to our surprise we found that the hot water (in sealed containers) did freeze faster. On closer examination we discovered that the shelves in our freezer were covered in frost, like I imagine most freezers, and the hot water was melting the frost and creating a good thermal contact between the beaker of water and the shelf. That turned out to be why the hot water froze faster. When we thoroughly cleaned the freezer shelf the effect went away and the hot water took longer to freeze. I think the rumours about hot water freezing faster illustrate the dangers of improperly controlled experiments. As Ron mentions, evaporation could also be a factor, and it would be easy for a home experimenter to get the wrong conclusion. Add to that the fact we'd secretly all be delighted if we could prove hot water really does freeze faster, and you can see how the rumour has spread. | {
"source": [
"https://physics.stackexchange.com/questions/32989",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/386/"
]
} |
33,215 | I'm really interested in quantum theory and would like to learn all that I can about it. I've followed a few tutorials and read a few books but none satisfied me completely. I'm looking for introductions for beginners which do not depend heavily on linear algebra or calculus, or which provide a soft introduction for the requisite mathematics as they go along. What are good introductory guides to QM along these lines? | Introduction to Quantum Mechanics by David Griffiths, any day! Just pick up this book once and try reading it. Since you have no prior background, this is the book to start with. It is aimed at students who have a solid background in basic calculus, but assumes very little background material besides that: a lot of linear algebra is introduced in an essentially self-contained way. Furthermore, it contains all the essential basic material and examples such as the harmonic oscillator, hydrogen atom, etc. The second half of the book is dedicated to perturbation theory. For freshmen or second-year students this is a pretty good place to start learning about QM, although some of the other answers to this question suggest books that go a bit further, or proceed at a more rigorous level. | {
"source": [
"https://physics.stackexchange.com/questions/33215",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10970/"
]
} |
33,273 | Is spacetime continuous or discrete?
Or better: is the 4-dimensional spacetime of general relativity discrete or continuous? What if we consider additional dimensions like string theory hypothesizes? Are those compact additional dimensions discrete or continuous? Is there experimental evidence of continuity/discreteness? When particles move through space, do they occupy spacetime in little chunks?
What would it imply for continuous theories if spacetime were discrete? I've found little information on the web or in books. Probably my question is ill-posed, and I apologize for this. | Is the 4-dimensional spacetime of general relativity discrete or continuous? In the usual definition of general relativity, spacetime is continuous. However, general relativity is a classical theory and does not take quantum effects into account. Such effects are expected to show up at very short distances, where your question is relevant. Is there experimental evidence of continuity/discreteness? All the experimental evidence points to continuous space, down to the shortest distances at which we have been able to measure. We don't know what happens at shorter distances. We also do not have any direct experimental evidence that gravity is a quantum theory, with the same caveat. On the other hand, we are quite confident that a complete theory of nature must include quantum gravity and not just classical gravity. And we have an educated guess of the distance scale at which quantum effects should become measurable: this is the Planck length, roughly $10^{-33}$ cm. This is much, much shorter than the shortest distance at which we can carry out experiments, so at least we are not surprised that we did not see any such effects so far. Before proceeding, one more caveat. There is an interesting and quite recent astrophysical experiment that showed that Lorentz symmetry holds even below the Planck length. If Lorentz symmetry is broken, it generally means that photons with different energies will travel at different velocities. In the experiment, they managed to detect a pair of photons that were created at almost the same time but had very different energies. They reached the detector almost simultaneously, which means their velocities were similar. Because the photons travelled an enormous distance before reaching us, they must have had almost the same velocity.
So we know that at least Lorentz symmetry holds at very short distances, and it seems difficult to reconcile this experimental fact with a discrete spacetime. So at least naively it seems that this is evidence against discreteness. Is spacetime continuous or discrete? At long distances spacetime can certainly be thought of as continuous. At short distances, the short answer is: we don't know. String theory is the only consistent theory of quantum gravity we know of, where we can actually compute things with some confidence. (You will probably hear some opinions that contradict this statement, mentioning loop quantum gravity, causal sets, etc., which are not related to string theory, but what I said is the common view in the community of high-energy theorists.) String theory is giving us some strong hints that perhaps spacetime at short distances is not continuous or discrete, but something else that we don't understand yet. So the situation is that even theoretically, without talking about actual experiments that check the theory, we don't know what spacetime is like at short distances. Perhaps this is why you don't see this question mentioned a lot. My personal guess is that spacetime at short distances is neither continuous nor discrete, but has a different nature that may require new mathematical tools to describe. Or better: what if we consider additional dimensions like string theory hypothesizes? Are those compact additional dimensions discrete or continuous? Adding extra dimensions does not change any of the above. | {
"source": [
"https://physics.stackexchange.com/questions/33273",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10941/"
]
} |
33,404 | I initially thought that dark energy must in some way violate conservation of mass and energy since the component of the energy density of space that comes from dark energy is constant, and space is expanding. Therefore, as space expands the energy in the universe that comes from dark energy would increase. I presumed the source of this energy was not coming from the conversion of other types of energy to dark energy, so it must violate conservation. I decided to Google this and came upon this article: http://scienceblogs.com/startswithabang/2011/12/02/dark-energy-accelerated-expans/ It says that dark energy does NOT violate conservation and quotes Carroll, Press, and Turner (1992): "…the patch does negative work on its surroundings, because it has negative pressure. Assuming the patch expands adiabatically [i.e. without loss or gain of heat], one may equate this negative work to the increase of mass/energy of the patch. One thereby recovers the correct equation of state for dark energy: $P = -\rho c^2$. So the mathematics is consistent." Is there a way to explain this in layman's terms? (The blog attempted to do this, but it was very unclear to me.) More specifically, can you explain where my initial train of thought described above fails when I erroneously concluded that dark energy violates conservation? Thank you. | The total energy in the space does increase, precisely because of the reason you mention. Energy is not expected to be conserved, because the metric is not invariant under time translations. What does hold is the first law of thermodynamics, $dU = -P dV + \cdots$. Since the pressure in this system is negative, this is one way of seeing the origin of the extra energy as the space grows. | {
"source": [
"https://physics.stackexchange.com/questions/33404",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5553/"
]
} |
33,621 | The current is maximum through those segments of a circuit that offer the least resistance. But how do electrons know beforehand which path will resist their drift the least? | This is really the same as Adam's answer but phrased differently. Suppose you have a single wire and you connect it to a battery. Electrons start to flow, but as they do so the resistance to their flow (i.e. the resistance of the wire) generates a potential difference. The electron flow rate, i.e. the current, builds up until the potential difference is equal to the battery voltage, and at that point the current becomes constant. All this happens at about the speed of light. Now take your example of having, let's say, two wires (A and B) with different resistances connected between the terminals - let's say $R_A \gt R_B$. The first few electrons to flow will be randomly distributed between the two wires, A and B, but because wire A has a greater resistance the potential difference along it will build up faster. The electrons feel this potential difference, so fewer electrons will flow through A and more electrons will flow through wire B. In turn the potential along wire B will build up, and eventually the potential difference along both wires will be equal to the battery voltage. As above, this happens extremely rapidly. So the electrons don't know in advance which path has the least resistance, and indeed the first few electrons to flow will choose random paths. However, once the current has stabilised, electron flow is restricted by the electrons flowing ahead, and these are restricted by the resistance of the paths. To make an analogy, imagine there are two doors leading out of a theatre, one small door and one big door. The first person to leave after the show will pick a door at random, but as the queues build up more people will pick the larger door because the queue moves faster. | {
"source": [
"https://physics.stackexchange.com/questions/33621",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9244/"
]
} |
33,627 | Okay. I have two ways of working out the height of the atmosphere from pressure, and they give different answers. Could someone please explain which one is wrong and why? (assuming the density is constant throughout the atmosphere) 1) $P=h \rho g$, $\frac{P}{\rho g} = h = \frac{1.01\times 10^5}{1.2\times9.81} = 8600m$ 2) Pressure acts over the surface area of the Earth. Let $r$ be the radius of the Earth. Area of the Earth is $4 \pi r^2$ Volume of the atmosphere is the volume of a sphere with radius $(h+r)$ minus the volume of a sphere with radius $r$.
$\frac{4}{3}\pi (h+r)^3 - \frac{4}{3}\pi r^3$
Pressure exerted by the mass of the atmosphere is: $P=\frac{F}{A}$ $PA=mg$ $4\pi r^2 P = \rho V g$ $4\pi r^2 P = \rho g (\frac{4}{3}\pi (h+r)^3 - \frac{4}{3}\pi r^3)$ $\frac{4\pi r^2 P}{\rho g} = \frac{4}{3}\pi (h+r)^3 - \frac{4}{3}\pi r^3$ $3 \times \frac{r^2 P}{\rho g} = (h+r)^3 - r^3$ $3 \times \frac{r^2 P}{\rho g} + r^3 = (h+r)^3$ $(3 \times \frac{r^2 P}{\rho g} + r^3)^{\frac{1}{3}} - r = h$ $(3 \times \frac{(6400\times10^3)^2 \times 1.01 \times 10^5}{1.23 \times 9.81} + (6400\times10^3)^3)^{\frac{1}{3}} - (6400\times10^3) = h = 8570\ \mathrm{m}$ I know that from Occam's razor the first is the right one, but surely since $h\rho g$ comes from considering the weight of the fluid above, say, a $1\ \mathrm{m}^2$ square, considering the weight of the atmosphere above a sphere should give the same answer? | | {
"source": [
"https://physics.stackexchange.com/questions/33627",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11059/"
]
} |
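The question's two estimates can be computed side by side; a quick numeric check (my addition, using the question's numbers but a single density $\rho = 1.2\ \mathrm{kg/m^3}$ in both methods so the comparison is like for like):

```python
# The question's two estimates of atmosphere height, evaluated with the
# same constant density so the only difference left is the geometry.
P   = 1.01e5    # Pa, sea-level pressure (question's value)
rho = 1.2       # kg/m^3, assumed-constant air density
g   = 9.81      # m/s^2
r   = 6400e3    # m, Earth's radius (question's value)

h_flat  = P / (rho * g)                                   # method 1: flat column
h_shell = (3 * r**2 * P / (rho * g) + r**3) ** (1/3) - r  # method 2: spherical shell

print(round(h_flat), round(h_shell))
# both near 8.6 km; the shell value is about ten metres smaller, because a
# shell of thickness h contains slightly more volume per unit base area than
# a flat column (extra terms of order h^2/r), so a little less height
# carries the same weight
```

So neither formula is mistyped: with one density, the residual gap between the two answers is the curvature correction of order $h/r$.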
33,629 | From what I have read, the problem with modern semiconductors/electronics seems to be quantum tunnelling and heat. The root of these problems is the size of the devices. The electrons are leaking out, and currents are causing active materials to melt. How far have we come in this regard? Can we make our devices even smaller? What is being done to maintain advancements in computing power? What is the main research, particularly in quantum mechanics and in solid state physics, being done to compute faster using less energy and space? | | {
"source": [
"https://physics.stackexchange.com/questions/33629",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11132/"
]
} |
33,667 | Suppose you fall from the top of a ladder straight down. You will hit the ground with an amount of force. Now suppose that you fall over while holding onto the ladder, tipping over in an arc instead of falling straight down. You will hit the ground with another amount of force. Neglecting the mass of the ladder and air resistance, which impact will have the most force? Falling or tipping? This has been a bit of a debate between myself, my father, and my grandfather. I believe that they would fall with the same force because, relative to the ground, you start with the same amount of potential energy in both situations. My grandfather and father, however, guess that it would fall with around half the force because the force is dissipated somewhat by the forward motion and the support of the ladder. We would appreciate an answer from a third party to help us find the solution. | Both you and your ancestors are wrong. But I bet you would never guess the real answer! Assuming the base of the ladder doesn't slide, you have a rotating system. Just like a freely falling man you convert potential energy to kinetic energy, but for a rotating system the kinetic energy is given by: $$ T = \frac{1}{2}I\omega^2 $$ where $I$ is the moment of inertia and $\omega$ is the angular velocity. Note that $v = r\omega$, where $r$ is the radius (i.e. the length of the ladder). We'll need this shortly. Let's start by ignoring the mass of the ladder. In that case the moment of inertia of the system is just due to the man and assuming we treat the man as a point mass $I = ml^2$, where $m$ is the mass of the man and $l$ the length of the ladder. Setting the change in potential energy $mgl$ equal to the kinetic energy we get: $$ mgl = \frac{1}{2}I\omega^2 = \frac{1}{2}ml^2\omega^2 = \frac{1}{2}mv^2 $$ where we get the last step by noting that $l\omega = v$.
So: $$ v^2 = 2gl $$ This is exactly the same result as we get for the man falling straight down, so you hit the ground with the same speed whether you fall straight down or whether you hold onto the ladder. But now let's include the mass of the ladder, $m_L$. This adds to the potential energy because the centre of gravity of the ladder falls by $0.5l$, so: $$ V = mgl + \frac{1}{2}m_Lgl $$ Now let's work out the kinetic energy. Since the man and ladder are rotating at the same angular velocity we get: $$ T = \frac{1}{2}I\omega^2 + \frac{1}{2}I_L\omega^2 $$ For a rod of mass $m_L$ and length $l$ rotating about its end, the moment of inertia is: $$ I_L = \frac{1}{3}m_Ll^2 $$ So let's set the potential and kinetic energy equal, and as before we'll substitute for $I$ and $I_L$ and set $\omega = v/l$. We get: $$ mgl + \frac{1}{2}m_Lgl = \frac{1}{2}mv^2 + \frac{1}{6}m_Lv^2$$ and rearranging this gives: $$ v^2 = 2gl \frac{m + \frac{1}{2}m_L}{m + \frac{1}{3}m_L} $$ and if $m_L \gt 0$ the top of the fraction is greater than the bottom, i.e. $v^2$ is greater than $2gl$. If you hold onto the ladder you actually hit the ground faster than if you let go! This seems counterintuitive, but it's because left to itself the ladder would rotate faster than the combined system of you and the ladder. In effect the ladder is accelerating you as you and the ladder fall. That's why the final velocity is higher.
"source": [
"https://physics.stackexchange.com/questions/33667",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11147/"
]
} |
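The two closed forms in the answer are easy to compare numerically; a small sketch (the ladder length and both masses are invented, illustrative values):

```python
# Compare impact speeds: free fall vs. rotating down while holding the
# ladder, using the answer's two energy-balance results.
g   = 9.81   # m/s^2
l   = 3.0    # m, ladder length (illustrative)
m   = 80.0   # kg, mass of the man (illustrative)
m_L = 10.0   # kg, mass of the ladder (illustrative)

v2_free   = 2 * g * l                                  # letting go: v^2
v2_ladder = 2 * g * l * (m + m_L / 2) / (m + m_L / 3)  # holding on: v^2
print(v2_free, round(v2_ladder, 2), v2_ladder > v2_free)
# holding the ladder always gives the (slightly) larger impact speed
```

For a ladder much lighter than the man the correction factor $(m + m_L/2)/(m + m_L/3)$ stays close to 1, which is why the effect is small in practice.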
33,760 | In three dimensions, the Dirac delta function $\delta^3 (\textbf{r}) = \delta(x) \delta(y) \delta(z)$ is defined by the volume integral: $$\int_{\text{all space}} \delta^3 (\textbf{r}) \, dV = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \delta(x) \delta(y) \delta(z) \, dx \, dy \, dz = 1$$ where $$\delta(x) = 0 \text{ if } x \neq 0$$ and $$\delta(x) = \infty \text{ if } x = 0$$ and similarly for $\delta(y)$ and $\delta(z)$. Does this mean that $\delta^3 (\textbf{r})$ has dimensions of reciprocal volume? As an example, a textbook that I am reading states: For a collection of $N$ point charges we can define a charge density $$\rho(\textbf{r}) = \sum_{i=1}^N q_i \delta(\textbf{r} - \textbf{r}_i)$$ where $\textbf{r}_i$ and $q_i$ are the position and charge of particle $i$, respectively. Typically, I would think of charge density as having units of charge per volume, i.e. $\text{charge} \times (\text{volume})^{-1}$ in three dimensions. For example, I would think that units of $\frac{\text{C}}{\text{m}^3}$ might be possible SI units of charge density. If my assumption is true, then $\delta^3 (\textbf{r})$ must have units of $(\text{volume})^{-1}$, like $\text{m}^{-3}$ for example. Is this correct? | Yes. The Dirac delta always has the inverse dimension of its argument. You can read this from its definition, your first equation. So in one dimension $\delta(x)$ has dimensions of inverse length, in three spatial dimensions $\delta^{(3)}(\vec x)$ (sometimes simply written $\delta(\vec x)$) has dimension of inverse volume, and in $n$ dimensions of momentum $\delta^{(n)}(\vec p)$ has dimensions of inverse momentum to the power of $n$.
"source": [
"https://physics.stackexchange.com/questions/33760",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5004/"
]
} |
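The dimensional statement can be illustrated numerically: approximate $\delta(x)$ by a normalized Gaussian of width $\sigma$; the defining integral stays 1 while the peak height scales like $1/\sigma$, i.e. like an inverse length (the widths below are arbitrary illustrative numbers):

```python
import math

def gauss(u, sigma):
    # normalized Gaussian; as sigma -> 0 it approximates delta(u)
    return math.exp(-u * u / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

sigma = 0.01
h = sigma / 20                     # integration step, well inside the peak
# 1-D Riemann sum over (effectively) all space; cube it for delta^3(r)
total_1d = sum(gauss(i * h, sigma) for i in range(-2000, 2001)) * h
print(round(total_1d**3, 6))       # ~1.0: the defining integral is dimensionless
# halving the width doubles the peak: delta(x) carries units of 1/length,
# so the product delta^3(r) carries units of 1/volume
print(gauss(0, sigma / 2) / gauss(0, sigma))  # ~2
```

The integral being exactly 1 (a pure number) is what forces the delta itself to carry the inverse units of its integration variable.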
34,217 | Deterministic models. Clarification of the question: The problem with these blogs is that people are inclined to start yelling at each other. (I admit, I got infected and it's difficult not to raise one's electronic voice.) I want to ask my question without an entourage of polemics. My recent papers were greeted with scepticism. I've no problem with that. What disturbs me is the general reaction that they are "wrong". My question is summarised as follows: Did any of these people actually read the work and can anyone tell me where a mistake was made? Now the details. I can't help being disgusted by the "many worlds" interpretation, or the Bohm-de Broglie "pilot waves", and even the idea that the quantum world must be non-local is difficult to buy. I want to know what is really going on, and in order to try to get some ideas, I construct some models with various degrees of sophistication. These models are of course "wrong" in the sense that they do not describe the real world, they do not generate the Standard Model, but one can imagine starting from such simple models and adding more and more complicated details to make them look more realistic, in various stages. Of course I know what the difficulties are when one tries to underpin QM with determinism. Simple probabilistic theories fail in an essential way. One or several of the usual assumptions made in such a deterministic theory will probably have to be abandoned; I am fully aware of that. On the other hand, our world seems to be extremely logical and natural. Therefore, I decided to start my investigation at the other end. Make assumptions that later surely will have to be amended; make some simple models, compare these with what we know about the real world, and then modify the assumptions any way we like. The no-go theorems tell us that a simple cellular automaton model is not likely to work. One way I tried to "amend" them was to introduce information loss.
At first sight this would carry me even further away from QM, but if you look a little more closely, you find that one still can introduce a Hilbert space, but it becomes much smaller and it may become holographic, which is something we may actually want. If you then realize that information loss makes any mapping from the deterministic model to QM states fundamentally non-local—while the physics itself stays local—then maybe the idea becomes more attractive. Now the problem with this is that again one makes too big assumptions, and the math is quite complicated and unattractive.
So I went back to a reversible, local, deterministic automaton and asked: To what extent does this resemble QM, and where does it go wrong? With the idea in mind that we will alter the assumptions, maybe add information loss, put in an expanding universe, but all that comes later; first I want to know what goes wrong. And here is the surprise: In a sense, nothing goes wrong. All you have to assume is that we use quantum states, even if the evolution laws themselves are deterministic. So the probability distributions are given by quantum amplitudes. The point is that, when describing the mapping between the deterministic system and the quantum system, there is a lot of freedom. If you look at any one periodic mode of the deterministic system, you can define a common contribution to the energy for all states in this mode, and this introduces a large number of arbitrary constants, so we are given much freedom. Using this freedom I end up with quite a few models that I happen to find interesting. Starting with deterministic systems I end up with quantum systems. I mean real quantum systems, not any of those ugly concoctions. On the other hand, they are still a long way off from the Standard Model, or even anything else that shows decent, interacting particles. Except string theory. Is the model I constructed a counterexample, showing that what everyone tells me about fundamental QM being incompatible with determinism is wrong? No, I don't believe that. The idea was that, somewhere, I will have to modify my assumptions, but maybe the usual assumptions made in the no-go theorems will have to be looked at as well. I personally think people are too quick in rejecting "superdeterminism". I do reject "conspiracy", but that might not be the same thing. Superdeterminism simply states that you can't "change your mind" (about which component of a spin to measure), by "free will", without also having a modification of the deterministic modes of your world in the distant past.
It's obviously true in a deterministic world, and maybe this is an essential fact that has to be taken into account. It does not imply "conspiracy". Does someone have a good, or better, idea about this approach, without name-calling? Why are some of you so strongly opinionated that it is "wrong"? Am I stepping on someone's religious feelings? I hope not. References: "Relating the quantum mechanics of discrete systems to standard canonical quantum mechanics", arXiv:1204.4926 [quant-ph]; "Duality between a deterministic cellular automaton and a bosonic quantum field theory in $1+1$ dimensions", arXiv:1205.4107 [quant-ph]; "Discreteness and Determinism in Superstrings", arXiv:1207.3612 [hep-th]. Further reactions on the answers given. (Writing this as "comment" failed, then writing this as "answer" generated objections. I'll try to erase the "answer" that I should not have put there...) First: thank you for the elaborate answers. I realise that my question raises philosophical issues; these are interesting and important, but not my main concern. I want to know why I find no technical problem while constructing my model. I am flattered by the impression that my theories were so "easy" to construct. Indeed, I made my presentation as transparent as possible, but it wasn't easy. There are many dead alleys, and not all models work equally well. For instance, the harmonic oscillator can be mapped onto a simple periodic automaton, but then one does hit upon technicalities: The Hamiltonian of a periodic system seems to be unbounded above and below, while the harmonic oscillator has a ground state. The time-reversible cellular automaton (CA) that consists of two steps $A$ and $B$, where both $A$ and $B$ can be written as the exponent of physically reasonable Hamiltonians, itself is much more difficult to express as a Hamiltonian theory, because the BCH series does not converge.
Also, explicit $3+1$ dimensional QFT models resisted my attempts to rewrite them as cellular automata. This is why I was surprised that the superstring works so nicely, it seems, but even here, to achieve this, quite a few tricks had to be invented. @RonMaimon. I here repeat what I said in a comment, just because there the 600 character limit distorted my text too much. You gave a good exposition of the problem in earlier contributions: in a CA the "ontic" wave function of the universe can only be in specific modes of the CA. This means that the universe can only be in states $\psi_1,\ \psi_2,\ ...$ that have the property $\langle\psi_i\,|\,\psi_j\rangle=\delta_{ij}$, whereas the quantum world that we would like to describe, allows for many more states that are not at all orthonormal to each other. How could these states ever arise? I summarise, with apologies for the repetition: We usually think that Hilbert space is separable, that is, inside every infinitesimal volume element of this world there is a Hilbert space, and the entire Hilbert space is the product of all these. Normally, we assume that any of the states in this joint Hilbert space may represent an "ontic" state of the Universe. I think this might not be true. The ontic states of the universe may form a much smaller class of states $\psi_i$; in terms of CA states, they must form an orthonormal set. In terms of "Standard Model" (SM) states, this orthonormal set is not separable, and this is why, locally, we think we have not only the basis elements but also all superpositions.
The orthonormal set is then easy to map back onto the CA states. I don't think we have to talk about a non-denumerable number of states, but the number of CA states is extremely large. In short: the mathematical system allows us to choose: take all CA states, then the orthonormal set is large enough to describe all possible universes, or choose the much smaller set of SM states, then you also need many superimposed states to describe the universe. The transition from one description to the other is natural and smooth in the mathematical sense. I suspect that, this way, one can see how a description that is not quantum mechanical at the CA level (admitting only "classical" probabilities), can "gradually" force us into accepting quantum amplitudes when turning to larger distance scales, and limiting ourselves to much lower energy levels only. You see, in words, all of this might sound crooky and vague, but in my models I think I am forced to think this way, simply by looking at the expressions: In terms of the SM states, I could easily decide to accept all quantum amplitudes, but when turning to the CA basis, I discover that superpositions are superfluous; they can be replaced by classical probabilities without changing any of the physics, because in the CA, the phase factors in the superpositions will never become observable. @Ron I understand that what you are trying to do is something else. It is not clear to me whether you want to interpret $\delta\rho$ as a wave function. (I am not worried about the absence of $\mathrm{i}$, as long as the minus sign is allowed.) My theory is much more direct; I use the original "quantum" description with only conventional wave functions and conventional probabilities. (New since Sunday Aug. 20, 2012) There is a problem with my argument. (I correct some statements I had put here earlier). 
I have to work with two kinds of states: 1: the template states, used wherever you do quantum mechanics; these allow for any kind of superposition; and 2: the ontic states, the set of states that form the basis of the CA. The ontic states $|n\rangle$ are all orthonormal: $\langle n|m\rangle=\delta_{nm}$, so no superpositions are allowed for them (unless you want to construct a template state of course). One can then ask the question: How can it be that we (think we) see superimposed states in experiments? Aren't experiments only seeing ontic states? My answer has always been: Who cares about that problem? Just use the rules of QM. Use the templates to do any calculation you like, compute your state $|\psi\rangle$, and then note that the CA probabilities, $\rho_n=|\langle n|\psi\rangle|^2$, evolve exactly as probabilities are supposed to do. That works, but it leaves the question unanswered, and for some reason, my friends on this discussion page get upset by that. So I started thinking about it. I concluded that the template states can be used to describe the ontic states, but this means that, somewhere along the line, they have to be reduced to an orthonormal set. How does this happen? In particular, how can it be that experiments strongly suggest that superpositions play extremely important roles, while according to my theory, somehow, these are plutoed by saying that they aren't ontic? Looking at the math expressions, I now tend to think that orthonormality is restored by "superdeterminism", combined with vacuum fluctuations. The thing we call vacuum state, $|\emptyset\rangle$, is not an ontological state, but a superposition of many, perhaps all, CA states. The phases can be chosen to be anything, but it makes sense to choose them to be $+1$ for the vacuum. This is actually a nice way to define phases: all other phases you might introduce for non-vacuum states now have a definite meaning.
The states we normally consider in an experiment are usually orthogonal to the vacuum. If we say that we can do experiments with two states, $A$ and $B$, that are not orthonormal to each other, this means that these are template states; it is easy to construct such states and to calculate how they evolve. However, it is safe to assume that, actually, the ontological states $|n\rangle$ with non-vanishing inner product with $A$, must be different from the states $|m\rangle$ that occur in $B$, so that, in spite of the template, $\langle A|B\rangle=0$. This is because the universe never repeats itself exactly. My physical interpretation of this is "superdeterminism": If, in an EPR or Bell experiment, Alice (or Bob) changes her (his) mind about what to measure, she (he) works with states $m$ which all differ from all states $n$ used previously. In the template states, all one has to do is assume at least one change in one of the physical states somewhere else in the universe. The contradiction then disappears. The role of vacuum fluctuations is also unavoidable when considering the decay of an unstable particle. I think there's no problem with the above arguments, but some people find it difficult to accept that the working of their minds may have any effect at all on vacuum fluctuations, or the converse, that vacuum fluctuations might affect their minds. The "free will" of an observer is at risk; people won't like that. But most disturbingly, this argument would imply that what my friends have been teaching at Harvard and other places, for many decades as we are told, is actually incorrect. I want to stay modest; I find this disturbing. A revised version of my latest paper was now sent to the arXiv (will probably be available from Monday or Tuesday). Thanks to you all. My conclusion did not change, but I now have more precise arguments concerning Bell's inequalities and what vacuum fluctuations can do to them. | I can tell you why I don't believe in it. 
I think my reasons are different from most physicists' reasons, however. Regular quantum mechanics implies the existence of quantum computation. If you believe in the difficulty of factoring (and a number of other classical problems), then a deterministic underpinning for quantum mechanics would seem to imply one of the following. There is a classical polynomial-time algorithm for factoring and other problems which can be solved on a quantum computer. The deterministic underpinnings of quantum mechanics require $2^n$ resources for a system of size $O(n)$. Quantum computation doesn't actually work in practice. None of these seem at all likely to me. For the first, it is quite conceivable that there is a polynomial-time algorithm for factoring, but quantum computation can solve lots of similar periodicity problems, and you can argue that there can't be a single algorithm that solves all of them on a classical computer, so you would have to have different classical algorithms for each classical problem that a quantum computer can solve by period finding. For the second, deterministic underpinnings of quantum mechanics that require $2^n$ resources for a system of size $O(n)$ are really unsatisfactory (but maybe quite possible ... after all, the theory that the universe is a simulation on a classical computer falls in this class of theories, and while truly unsatisfactory, can't be ruled out by this argument). For the third, I haven't seen any reasonable way to how you could make quantum computation impossible while still maintaining consistency with current experimental results. | {
"source": [
"https://physics.stackexchange.com/questions/34217",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11205/"
]
} |
34,241 | In considering the (special) relativistic EM field, I understand that assuming a Lagrangian density of the form $$\mathcal{L} =-\frac{1}{4}F_{\mu\nu}F^{\mu\nu} + \frac{1}{c}j_\mu A^\mu$$ and following the Euler-Lagrange equations recovers Maxwell's equations. Does there exist a first-principles derivation of this Lagrangian? A reference or explanation would be greatly appreciated! | Abstract In the following we'll prove that a compatible Lagrangian density for the electromagnetic field in the presence of charges and currents is \begin{equation}
\mathcal{L}_{em}\:=\:\epsilon_{0}\cdot\dfrac{\left|\!\left|\mathbf{E}\right|\!\right|^{2}-c^{2}\left|\!\left|\mathbf{B}\right|\!\right|^{2}}{2}-\rho \phi + \mathbf{j} \boldsymbol{\cdot} \mathbf{A}
\tag{045}
\end{equation} that is, the Euler-Lagrange equations produced from this Lagrangian are the Maxwell equations for the electromagnetic field. This Lagrangian density is derived by a trial-and-error procedure, not by guessing. 1. Introduction Maxwell's differential equations of the electromagnetic field in the presence of charges and currents are \begin{align}
\boldsymbol{\nabla} \boldsymbol{\times} \mathbf{E} & = -\frac{\partial \mathbf{B}}{\partial t}
\tag{001a}\\
\boldsymbol{\nabla} \boldsymbol{\times} \mathbf{B} & = \mu_{0}\mathbf{j}+\frac{1}{c^{2}}\frac{\partial \mathbf{E}}{\partial t}
\tag{001b}\\
\nabla \boldsymbol{\cdot} \mathbf{E} & = \frac{\rho}{\epsilon_{0}}
\tag{001c}\\
\nabla \boldsymbol{\cdot}\mathbf{B}& = 0
\tag{001d}
\end{align} where $\: \mathbf{E} =$ electric field intensity vector, $\:\mathbf{B}=$ magnetic-flux density vector, $\:\rho=$ electric charge density, $\:\mathbf{j} =$ electric current density vector.
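One consequence of equations (001b) and (001c) worth noting: taking the divergence of (001b) kills the curl term and leaves the continuity equation $\nabla \boldsymbol{\cdot} \mathbf{j} + \partial\rho/\partial t = 0$, since $\mu_{0}\epsilon_{0}c^{2}=1$. The identity behind this, $\nabla \boldsymbol{\cdot} (\boldsymbol{\nabla} \boldsymbol{\times} \mathbf{B}) = 0$, can be checked symbolically (a side check of mine, not part of the original derivation; uses sympy):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
# three arbitrary smooth components of a vector field B(x, y, z)
Bx, By, Bz = [sp.Function(n)(x, y, z) for n in ('B_x', 'B_y', 'B_z')]

def curl(F):
    Fx, Fy, Fz = F
    return (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))

def div(F):
    return sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))

# the mixed second partials cancel pairwise, so this is identically zero
print(sp.simplify(div(curl((Bx, By, Bz)))))  # 0
```

The same cancellation of mixed partials is what makes (001d) automatic once $\mathbf{B}=\boldsymbol{\nabla} \boldsymbol{\times} \mathbf{A}$ is introduced below.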
All quantities are functions of the three space coordinates $\:\left( x_{1},x_{2},x_{3}\right) \equiv \left( x,y,z\right)\:$ and time $\:t \equiv x_{4}\:$ . From equation (001d) the magnetic-flux vector $\:\mathbf{B}\:$ may be expressed as the curl of a vector potential $\:\mathbf{A}\:$ \begin{equation}
\mathbf{B}=\boldsymbol{\nabla} \boldsymbol{\times} \mathbf{A}
\tag{002}
\end{equation} and from (002) equation (001a) yields \begin{equation}
\boldsymbol{\nabla} \boldsymbol{\times}\left(\mathbf{E}+\frac{\partial \mathbf{A}}{\partial t}\right) =\boldsymbol{0}
\tag{003}
\end{equation} so the parentheses term may be expressed as the gradient of a scalar function \begin{equation*}
\mathbf{E}+\frac{\partial \mathbf{A}}{\partial t} =-\boldsymbol{\nabla}\phi
\end{equation*} that is \begin{equation}
\mathbf{E} =-\boldsymbol{\nabla}\phi -\frac{\partial \mathbf{A}}{\partial t}
\tag{004}
\end{equation} So the six scalar variables, the components of vectors $\:\mathbf{E}\:$ and $\:\mathbf{B}\:$ , can be expressed as functions of 4 scalar variables, the scalar potential $\:\phi\:$ and three components of vector potential $\:\mathbf{A}$ . Inserting the expressions of $\:\mathbf{E}\:$ and $\:\mathbf{B}\:$ , equations (002) and (004) respectively, in equations (001b) and (001c) we have \begin{equation}
\boldsymbol{\nabla} \boldsymbol{\times} \left(\boldsymbol{\nabla} \boldsymbol{\times} \mathbf{A}\right) =\mu_{0}\mathbf{j}+\frac{1}{c^{2}}\frac{\partial }{\partial t}\left(-\boldsymbol{\nabla}\phi -\frac{\partial \mathbf{A}}{\partial t}\right)
\tag{005}
\end{equation} and \begin{equation}
\boxed{\:
-\nabla^{2}\phi-\frac{\partial }{\partial t}\left(\nabla \boldsymbol{\cdot}\mathbf{A}\right) =\frac{\rho}{\epsilon_{0}}
\:}
\tag{006}
\end{equation} Given that \begin{equation}
\boldsymbol{\nabla} \boldsymbol{\times} \left( \boldsymbol{\nabla} \boldsymbol{\times} \mathbf{A}\right) =\boldsymbol{\nabla}\left(\nabla \boldsymbol{\cdot}\mathbf{A}\right)- \nabla^{2}\mathbf{A}
\tag{007}
\end{equation} equation (005) yields \begin{equation}
\boxed{\:
\frac{1}{c^{2}}\frac{\partial^{2}\mathbf{A}}{\partial t^{2}}-\nabla^{2}\mathbf{A}+ \boldsymbol{\nabla}\left(\nabla \boldsymbol{\cdot} \mathbf{A}+\frac{1}{c^{2}}\frac{\partial \phi}{\partial t}\right) =\mu_{0}\mathbf{j}
\:}
\tag{008}
\end{equation} 2. The Euler-Lagrange equations of the EM Field Now, our main task is to find a Lagrangian density $\:\mathcal{L}\:$, a function of the four ''field coordinates'' and their first-order derivatives \begin{equation}
\mathcal{L}=\mathcal{L}\left(\eta_{\jmath}, \overset{\centerdot}{\eta}_{\jmath}, \boldsymbol{\nabla}\eta_{\jmath}\right) \qquad \left(\jmath=1,2,3,4\right)
\tag{009}
\end{equation} such that the four scalar electromagnetic field equations (006) and (008) are derived from the Lagrange equations \begin{equation}
\frac{\partial }{\partial t}\left[\frac{\partial \mathcal{L}}{\partial \left(\dfrac{\partial \eta_{\jmath}}{\partial t}\right)}\right]+\sum_{k=1}^{k=3}\frac{\partial }{\partial x_{k}}\left[\frac{\partial \mathcal{L}}{\partial \left(\dfrac{\partial \eta_{\jmath}}{\partial x_{k}}\right)}\right]- \frac{\partial \mathcal{L}}{\partial \eta_{\jmath}}=0\:, \quad \left(\jmath=1,2,3,4\right)
\tag{010}
\end{equation} simplified in notation to \begin{equation}
\boxed{\:
\dfrac{\partial }{\partial t}\left(\dfrac{\partial \mathcal{L}}{\partial \overset{\centerdot}{\eta}_{\jmath}}\right) + \nabla \boldsymbol{\cdot}\left[\dfrac{\partial \mathcal{L}}{\partial \left(\boldsymbol{\nabla}\eta_{\jmath}\right)}\right]- \frac{\partial \mathcal{L}}{\partial \eta_{\jmath}}=0, \quad \left(\jmath=1,2,3,4\right)
\:}
\tag{011}
\end{equation} Here the Lagrangian density $\:\mathcal{L}\:$ is a function of the four ''field coordinates'' \begin{align}
\eta_{1}&=\mathrm{A}_1\left( x_{1},x_{2},x_{3},t\right)
\tag{012.1}\\
\eta_{2}&=\mathrm{A}_2\left( x_{1},x_{2},x_{3},t\right)
\tag{012.2}\\
\eta_{3}&=\mathrm{A}_3\left( x_{1},x_{2},x_{3},t\right)
\tag{012.3}\\
\eta_{4}&=\:\;\phi \left( x_{1},x_{2},x_{3},t\right)
\tag{012.4}
\end{align} their time derivatives \begin{align}
\overset{\centerdot}{\eta}_{1} & \equiv \dfrac{\partial \eta_{1}}{\partial t} =\dfrac{\partial \mathrm{A}_{1}}{\partial t}\equiv\overset{\centerdot}{\mathrm{A}}_{1}
\tag{013.1}\\
\overset{\centerdot}{\eta}_{2} & \equiv \dfrac{\partial \eta_{2}}{\partial t} =\dfrac{\partial \mathrm{A}_{2}}{\partial t}\equiv \overset{\centerdot}{\mathrm{A}}_{2}
\tag{013.2}\\
\overset{\centerdot}{\eta}_{3} & \equiv \dfrac{\partial \eta_{3}}{\partial t} =\dfrac{\partial \mathrm{A}_{3}}{\partial t}\equiv\overset{\centerdot}{\mathrm{A}}_{3}
\tag{013.3}\\
\overset{\centerdot}{\eta}_{4} & \equiv \dfrac{\partial \eta_{4}}{\partial t} =\dfrac{\partial \phi}{\partial t}\equiv\overset{\centerdot}{\phi}
\tag{013.4}
\end{align} and their gradients \begin{equation}
\begin{array}{cccc}
\boldsymbol{\nabla}\eta_{1}=\boldsymbol{\nabla}\mathrm{A}_{1} \:,\: & \boldsymbol{\nabla}\eta_{2}=\boldsymbol{\nabla}\mathrm{A}_{2} \:,\: & \boldsymbol{\nabla}\eta_{3}=\boldsymbol{\nabla}\mathrm{A}_{3} \:,\: & \boldsymbol{\nabla}\eta_{4}=\boldsymbol{\nabla}\phi
\end{array}
\tag{014}
\end{equation} We express equations (006) and (008) in forms that are similar to the Lagrange equations (011) \begin{equation}
\boxed{\:
\dfrac{\partial }{\partial t}\left(\nabla \boldsymbol{\cdot} \mathbf{A}\right)+\nabla \boldsymbol{\cdot} \left(\boldsymbol{\nabla}\phi \right) -\left(-\frac{\rho}{\epsilon_{0}}\right) =0
\:}
\tag{015}
\end{equation} and \begin{equation}
\boxed{\:
\dfrac{\partial}{\partial t}\left(\frac{\partial \mathrm{A}_{k}}{\partial t}+\frac{\partial \phi}{\partial x_{k}}\right)+\nabla \boldsymbol{\cdot} \left[c^{2}\left(\frac{\partial \mathbf{A}}{\partial x_{k}}- \boldsymbol{\nabla}\mathrm{A}_{k}\right)\right] -\frac{\mathrm{j}_{k}}{\epsilon_{0}}=0
\:}
\tag{016}
\end{equation} The Lagrange equation (011) for $\:\jmath=4\:$ , that is for $\:\eta_{4}=\phi \:$ , is \begin{equation}
\frac{\partial }{\partial t}\left(\frac{\partial \mathcal{L}}{\partial \overset{\centerdot}{\phi}}\right) + \nabla \boldsymbol{\cdot} \left[\frac{\partial \mathcal{L}}{\partial \left(\boldsymbol{\nabla}\phi\right)}\right]- \frac{\partial \mathcal{L}}{\partial \phi}=0
\tag{017}
\end{equation} Comparing equations (015) and (017), we note that the first could be derived from the second if \begin{equation}
\dfrac{\partial \mathcal{L}}{\partial \overset{\centerdot}{\phi}}=\nabla \boldsymbol{\cdot} \mathbf{A}\:, \qquad \dfrac{\partial \mathcal{L}}{\partial \left(\boldsymbol{\nabla}\phi\right)}=\boldsymbol{\nabla}\phi \:, \qquad \frac{\partial \mathcal{L}}{\partial \phi}=-\frac{\rho}{\epsilon_{0}}
\tag{018}
\end{equation} so that the Lagrangian density $\:\mathcal{L}\:$ must contain respectively the terms \begin{equation}
\mathcal{L}_{\boldsymbol{\alpha_{1}}}\equiv\left(\nabla \boldsymbol{\cdot} \mathbf{A}\right)\overset{\centerdot}{\phi}\:, \qquad \mathcal{L}_{\boldsymbol{\alpha_{2}}}\equiv\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}\:, \qquad \mathcal{L}_{\boldsymbol{\alpha_{3}}}\equiv-\frac{\rho \phi}{\epsilon_{0}}
\tag{019}
\end{equation} and consequently their sum \begin{equation}
\mathcal{L}_{\boldsymbol{\alpha}}=\mathcal{L}_{\boldsymbol{\alpha_{1}}}+\mathcal{L}_{\boldsymbol{\alpha_{2}}} +\mathcal{L}_{\boldsymbol{\alpha_{3}}}=\left(\nabla \boldsymbol{\cdot} \mathbf{A}\right)\overset{\centerdot}{\phi}+\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}-\frac{\rho \phi}{\epsilon_{0}}
\tag{020}
\end{equation} We suppose that an appropriate Lagrangian density $\:\mathcal{L}\:$ would be of the form \begin{equation}
\mathcal{L}=\mathcal{L}_{\boldsymbol{\alpha}}+\mathcal{L}_{\boldsymbol{\beta}}
\tag{021}
\end{equation} and since $\:\mathcal{L}_{\boldsymbol{\alpha}}\:$ produces equation (015), we expect that $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ , to be determined, will produce equations (016). This expectation would be justified if equations (015) and (016) were decoupled, for example if the first contained $\:\phi $ -terms only and the second $\:\mathbf{A} $ -terms only. But this is not the case here: $\:\mathcal{L}_{\boldsymbol{\alpha}}\:$ , since it contains $\:\mathbf{A} $ -terms, would participate in producing equations (016), and moreover $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ would participate in producing equation (015), so the two parts could mutually spoil the equations we expect to obtain. Nevertheless we follow a trial and error procedure, which will lead to the right answer, as we'll see in the following. Now, the Lagrange equations (011) for $\:\jmath=k=1,2,3\:$ , that is for $\:\eta_{k}=\mathrm{A}_{k} \:$ , are \begin{equation}
\frac{\partial }{\partial t}\left(\dfrac{\partial \mathcal{L}}{\partial \overset{\centerdot}{\mathrm{A}}_{k}}\right) + \nabla \boldsymbol{\cdot} \left[\dfrac{\partial \mathcal{L}}{\partial \left(\boldsymbol{\nabla}\mathrm{A}_{k}\right)}\right]- \frac{\partial \mathcal{L}}{\partial \mathrm{A}_{k}}=0
\tag{022}
\end{equation} Comparing equations (016) and (022), we note that the first could be derived from the second if \begin{equation}
\dfrac{\partial \mathcal{L}}{\partial \overset{\centerdot}{\mathrm{A}}_{k}}= \overset{\centerdot}{\mathrm{A}}_{k}+\frac{\partial \phi}{\partial x_{k}}\:, \quad \dfrac{\partial \mathcal{L}}{\partial \left(\boldsymbol{\nabla}\mathrm{A}_{k}\right)}=c^{2}\left(\frac{\partial \mathbf{A}}{\partial x_{k}}- \boldsymbol{\nabla}\mathrm{A}_{k}\right)\:, \quad \frac{\partial \mathcal{L}}{\partial \mathrm{A}}_{k}=\frac{\mathrm{j}_{k}}{\epsilon_{0}}
\tag{023}
\end{equation} From the first of equations (023) the $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ part of the Lagrange density $\:\mathcal{L}\:$ must contain the terms \begin{equation}
\frac{1}{2}\left\Vert \overset{\centerdot}{\mathrm{A}}_{k}\right\Vert^{2}+\frac{\partial \phi}{\partial x_{k}}\overset{\centerdot}{\mathrm{A}}_{k}\:, \quad k=1,2,3
\tag{024}
\end{equation} and so their sum with respect to $\:k\:$ \begin{equation}
\mathcal{L}_{\boldsymbol{\beta_{1}}}\equiv \tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}
\tag{025}
\end{equation} From the 2nd of equations (023) the $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ part of the Lagrange density $\:\mathcal{L}\:$ must contain the terms \begin{equation}
\tfrac{1}{2}c^{2}\left[\frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k} -\Vert \boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right] \:, \quad k=1,2,3
\tag{026}
\end{equation} and so their sum with respect to $\:k\:$ \begin{equation}
\mathcal{L}_{\boldsymbol{\beta_{2}}}\equiv\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[ \frac{\partial \mathbf{A}}{\partial x_{k}}\boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}-\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]
\tag{027}
\end{equation} From the 3rd of equations (023) the $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ part of the Lagrange density $\:\mathcal{L}\:$ must contain the terms \begin{equation}
\frac{\mathrm{j}_{k}\mathrm{A}_{k}}{\epsilon_{0}} \:, \quad k=1,2,3
\tag{028}
\end{equation} and so their sum with respect to $\:k\:$ \begin{equation}
\mathcal{L}_{\boldsymbol{\beta_{3}}}\equiv \frac{\mathbf{j} \boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}
\tag{029}
\end{equation} From equations (025), (027) and (029) the $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ part of the Lagrange density $\:\mathcal{L}\:$ is \begin{align}
\mathcal{L}_{\boldsymbol{\beta}} & = \mathcal{L}_{\boldsymbol{\beta_{1}}}+\mathcal{L}_{\boldsymbol{\beta_{2}}} +\mathcal{L}_{\boldsymbol{\beta_{3}}}
\tag{030}\\
& = \tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\boldsymbol{\nabla}\phi \boldsymbol{\cdot}\mathbf{\dot{A}}+\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[ \frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}-\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]+\frac{\mathbf{j} \boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}
\nonumber
\end{align} Finally, from the expressions (020) and (030) for the densities $\:\mathcal{L}_{\boldsymbol{\alpha}},\mathcal{L}_{\boldsymbol{\beta}}\:$ the Lagrange density $\:\mathcal{L}=\mathcal{L}_{\boldsymbol{\alpha}}+\mathcal{L}_{\boldsymbol{\beta}}\:$ is \begin{align}
\mathcal{L}& = \mathcal{L}_{\boldsymbol{\alpha}} + \mathcal{L}_{\boldsymbol{\beta}}
\tag{031}\\
& = \left(\nabla \boldsymbol{\cdot} \mathbf{A}\right)\overset{\centerdot}{\phi}+\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}-\frac{\rho \phi}{\epsilon_{0}}
\nonumber\\
& + \tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}+\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[ \frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}-\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]+\frac{\mathbf{j}\boldsymbol{\cdot}\mathbf{A}}{\epsilon_{0}}
\nonumber\\
& \text{(this is a wrong Lagrange density)}
\nonumber
\end{align} 3. Trial, Error, and Final Success Inserting this Lagrange density expression in the Lagrange equation with respect to $\:\phi \:$ , that is equation (017), doesn't yield equation (006) but \begin{equation}
-\nabla^{2}\phi-\frac{\partial }{\partial t}\left(2\nabla \boldsymbol{\cdot} \mathbf{A}\right) =\frac{\rho}{\epsilon_{0}}\:, \quad (\textbf{wrong})
\tag{032}
\end{equation} The appearance of the extra $\:\left( \nabla \boldsymbol{\cdot} \mathbf{A}\right) \:$ term is due to the term $\:\left( \boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}\right) \:$ of $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ , and that's why the Lagrange density given by equation (031) is not an appropriate one. To resolve this problem we must look at (015), that is (006), from a different point of view, as follows \begin{equation}
\nabla \boldsymbol{\cdot}\left(\boldsymbol{\nabla}\phi + \mathbf{\dot{A}}\right) -\left(-\frac{\rho}{\epsilon_{0}}\right) =0
\tag{033}
\end{equation} Comparing equations (033) and (017), we note that the first could be derived from the second if in place of (018) we have \begin{equation}
\dfrac{\partial \mathcal{L}}{\partial \overset{\centerdot}{\phi}}=0\:, \qquad \dfrac{\partial \mathcal{L}}{\partial \left(\boldsymbol{\nabla}\phi\right)}=\boldsymbol{\nabla}\phi + \mathbf{\dot{A}} \:, \qquad \frac{\partial \mathcal{L}}{\partial \phi}=-\frac{\rho}{\epsilon_{0}}
\tag{034}
\end{equation} so in place of (019) and (020) respectively the equations \begin{equation}
\mathcal{L}^{\prime}_{\boldsymbol{\alpha_{1}}}\equiv 0\:, \quad \mathcal{L}^{\prime}_{\boldsymbol{\alpha_{2}}}\equiv\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2} +\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}\:, \quad \mathcal{L}^{\prime}_{\boldsymbol{\alpha_{3}}}=\mathcal{L}_{\boldsymbol{\alpha_{3}}}\equiv-\frac{\rho \phi}{\epsilon_{0}}
\tag{035}
\end{equation} \begin{equation}
\mathcal{L}^{\prime}_{\boldsymbol{\alpha}}=\mathcal{L}^{\prime}_{\boldsymbol{\alpha_{1}}}+\mathcal{L}^{\prime}_{\boldsymbol{\alpha_{2}}} +\mathcal{L}^{\prime}_{\boldsymbol{\alpha_{3}}}=\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}+\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}-\frac{\rho \phi}{\epsilon_{0}}
\tag{036}
\end{equation} Now it's necessary to omit from $\:\mathcal{L}_{\boldsymbol{\beta_{1}}}\:$ , equation (025), the second term $\:\left( \boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}\right) \:$ , since it already appears in $\:\mathcal{L}^{\prime}_{\boldsymbol{\alpha_{2}}} \:$ , see the second of equations (035) above. So in place of (025) we have \begin{equation}
\mathcal{L}^{\prime}_{\boldsymbol{\beta_{1}}}\equiv \tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}
\tag{037}
\end{equation} while $\:\mathcal{L}_{\boldsymbol{\beta_{2}}},\mathcal{L}_{\boldsymbol{\beta_{3}}}\:$ remain unchanged as in equations
(027) and (029) \begin{align}
\mathcal{L}^{\prime}_{\boldsymbol{\beta_{2}}} &=\mathcal{L}_{\boldsymbol{\beta_{2}}}\equiv\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[ \frac{\partial \mathbf{A}}{\partial x_{k}}\boldsymbol{\cdot}\boldsymbol{\nabla}\mathrm{A}_{k}-\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]
\tag{038} \\
\mathcal{L}^{\prime}_{\boldsymbol{\beta_{3}}} &=\mathcal{L}_{\boldsymbol{\beta_{3}}}\equiv \frac{\mathbf{j} \boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}
\tag{039}
\end{align} In place of (030) \begin{align}
\mathcal{L}^{\prime}_{\boldsymbol{\beta}} & = \mathcal{L}^{\prime}_{\boldsymbol{\beta_{1}}}+\mathcal{L}^{\prime}_{\boldsymbol{\beta_{2}}} +\mathcal{L}^{\prime}_{\boldsymbol{\beta_{3}}}
\tag{040} \\
& = \tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[ \frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}-\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]+\frac{\mathbf{j} \boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}
\nonumber
\end{align} and finally for the new Lagrangian density we have in place of (031) \begin{align}
\mathcal{L}^{\prime}& = \mathcal{L}^{\prime}_{\boldsymbol{\alpha}} + \mathcal{L}^{\prime}_{\boldsymbol{\beta}}
\tag{041} \\
& = \tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2} +\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}} -\frac{\rho \phi}{\epsilon_{0}}
\nonumber\\
& + \tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[ \frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k} -\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]+\frac{\mathbf{j} \boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}
\nonumber
\end{align} Density $\:\mathcal{L}^{\prime}\:$ of (041) is obtained from density $\:\mathcal{L}\:$ of (031) if we omit the term $\:\left(\nabla \boldsymbol{\cdot} \mathbf{A}\right)\overset{\centerdot}{\phi}\:$ . So $\:\mathcal{L}^{\prime}\:$ is independent of $\:\overset{\centerdot}{\phi}$ . In the following equations the brace over the left 3 terms groups the part of the density $\:\mathcal{L}^{\prime}\:$ that essentially participates in the production of the electromagnetic equation (006) from the Lagrange equation with respect to $\:\phi \:$ , equation (017), while the brace under the right 4 terms groups the part of the density $\:\mathcal{L}^{\prime}\:$ that essentially participates in the production of the electromagnetic equations (008) from the Lagrange equations with respect to $\:\mathrm{A}_{1},\mathrm{A}_{2},\mathrm{A}_{3} \:$ , equation (022). \begin{equation*}
\mathcal{L}^{\prime}=\overbrace{\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}-\frac{\rho \phi}{\epsilon_{0}}+\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}}^{\text{with respect to }\phi}+\tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[\frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}-\Vert \boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]+\frac{\mathbf{j} \boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}
\end{equation*} \begin{equation*}
\mathcal{L}^{\prime}=\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}-\frac{\rho \phi}{\epsilon_{0}}+\underbrace{\boldsymbol{\nabla}\phi\boldsymbol{\cdot} \mathbf{\dot{A}}+\tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[\frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}-\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]+\frac{\mathbf{j}\boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}}_{\text{with respect to }\mathbf{A}}
\end{equation*} Note the common term $\:\left( \boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}\right)$ . Reordering the terms in the expression (041) of the density $\:\mathcal{L}^{\prime}\:$ we have \begin{equation}
\mathcal{L}^{\prime}=\underbrace{\tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}+\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}}_{\tfrac{1}{2}\left\Vert - \boldsymbol{\nabla}\phi -\frac{\partial \mathbf{A}}{\partial t}\right\Vert^{2}}-\tfrac{1}{2}c^{2}\underbrace{\sum^{k=3}_{k=1}\left[\Vert \boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}-\frac{\partial \mathbf{A}}{\partial x_{k}}\boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}\right]}_{\left\Vert \boldsymbol{\nabla} \boldsymbol{\times} \mathbf{A}\right\Vert^{2}}+\frac{1}{\epsilon_{0}}\left( -\rho \phi + \mathbf{j}\boldsymbol{\cdot} \mathbf{A}\right)
\tag{042}
\end{equation} that is \begin{equation}
\mathcal{L}^{\prime}=\tfrac{1}{2}\left|\!\left|- \boldsymbol{\nabla}\phi -\frac{\partial \mathbf{A}}{\partial t}\right|\!\right|^{2}-\tfrac{1}{2}c^{2}\left|\!\left| \boldsymbol{\nabla} \boldsymbol{\times} \mathbf{A}\right|\!\right|^{2}+\frac{1}{\epsilon_{0}}\left( -\rho \phi + \mathbf{j} \boldsymbol{\cdot} \mathbf{A}\right)
\tag{043}
\end{equation} or \begin{equation}
\mathcal{L}^{\prime}=\frac{\left|\!\left|\mathbf{E}\right|\!\right|^{2}-c^{2}\left|\!\left|\mathbf{B}\right|\!\right|^{2}}{2}+\frac{1}{\epsilon_{0}}\left( -\rho \phi + \mathbf{j}\boldsymbol{\cdot}\mathbf{A}\right)
\tag{044}
\end{equation} Now, in order for the density to have dimensions of energy per unit volume, we define $\:\mathcal{L}_{em}=\epsilon_{0}\mathcal{L}^{\prime} \:$ , so \begin{equation}
\boxed{\:
\mathcal{L}_{em}\:=\:\epsilon_{0}\cdot\dfrac{\left|\!\left|\mathbf{E}\right|\!\right|^{2}-c^{2}\left|\!\left|\mathbf{B}\right|\!\right|^{2}}{2}-\rho \phi + \mathbf{j} \boldsymbol{\cdot} \mathbf{A}
\:}
\tag{045}
\end{equation} having in mind that \begin{align}
\left\Vert\mathbf{E}\right\Vert^{2} & = \left\Vert - \boldsymbol{\nabla}\phi -\dfrac{\partial \mathbf{A}}{\partial t}\right\Vert^{2} = \left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\Vert \boldsymbol{\nabla}\phi \Vert^{2}+2\left(\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}\right)
\tag{046a}\\
&
\nonumber\\
\left\Vert\mathbf{B}\right\Vert^{2} & = \left\Vert\boldsymbol{\nabla} \boldsymbol{\times} \mathbf{A}\right\Vert^{2}=\sum^{k=3}_{k=1}\left[\Vert \boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}-\dfrac{\partial \mathbf{A}}{\partial x_{k}}\boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}\right]
\tag{046b}
\end{align} The scalar $\:\left(\left|\!\left|\mathbf{E}\right|\!\right|^{2}-c^{2}\left|\!\left|\mathbf{B}\right|\!\right|^{2}\right)\:$ is one of the two Lorentz invariants (2) of the field (the other is $\:\mathbf{E}\boldsymbol{\cdot}\mathbf{B}$ ), essentially equal to a constant times $\:\mathcal{E}_{\mu\nu}\mathcal{E}^{\mu\nu}\:$ , where $\:\mathcal{E}^{\mu\nu}\:$ is the antisymmetric field tensor (2). On the other hand, the scalar $\: \left(-\rho \phi + \mathbf{j} \boldsymbol{\cdot} \mathbf{A}\right)\:$ is essentially the inner product $\:J_{\mu}A^{\mu}\:$ in Minkowski space of two 4-vectors : the 4-current density $\:J^{\mu}=\left(c\rho,\mathbf{j}\right)\:$ and the 4-potential $\:A^{\mu}=\left(\phi/c,\mathbf{A}\right)\:$ , which is a Lorentz invariant scalar too. So the Lagrange density $\:\mathcal{L}_{em}\:$ in equation (045) is Lorentz invariant. (1) By a trial and error procedure I found the Lagrangian in a more difficult and complicated case: see my answer as user82794 here, Obtain the Lagrangian from the system of coupled equations. (2) Following W. Rindler in "Introduction to Special Relativity", ed. 1982, this tensor is derived in equation (38.15) \begin{equation}
\mathcal{E}_{\mu\nu}=
\begin{bmatrix}
0 & E_{1} & E_{2} & E_{3} \\
-E_{1} & 0 & -cB_{3} & cB_{2} \\
-E_{2} & cB_{3} & 0 & -cB_{1} \\
-E_{3} & -cB_{2} & cB_{1} & 0
\end{bmatrix}
\quad \text{so} \quad
\mathcal{E}^{\mu\nu}=
\begin{bmatrix}
0 & -E_{1} & - E_{2} & -E_{3} \\
E_{1} & 0 & -cB_{3} & cB_{2} \\
E_{2} & cB_{3} & 0 & -cB_{1} \\
E_{3} & -cB_{2} & cB_{1} & 0
\end{bmatrix}
\tag{38.15}
\end{equation} which by making the (duality) replacements $\:\mathbf{E}\to -c\mathbf{B}\:$ and $\:c\mathbf{B}\to \mathbf{E}\:$ yields \begin{equation}
\mathcal{B}_{\mu\nu}=
\begin{bmatrix}
0 & -cB_{1} & -cB_{2} & - cB_{3} \\
cB_{1} & 0 & -E_{3} & E_{2} \\
cB_{2} & E_{3} & 0 & -E_{1} \\
cB_{3} & -E_{2} & E_{1} & 0
\end{bmatrix}
\quad \text{so} \quad
\mathcal{B}^{\mu\nu}=
\begin{bmatrix}
0 & cB_{1} & cB_{2} & cB_{3} \\
-cB_{1} & 0 & -E_{3} & E_{2} \\
-cB_{2} & E_{3} & 0 & -E_{1} \\
-cB_{3} & -E_{2} & E_{1} & 0
\end{bmatrix}
\tag{39.05}
\end{equation} The two invariants of $\:\mathcal{E}^{\mu\nu}\:$ -immediately recognizable as such from their mode of formation - can be expressed as follows: \begin{align}
X & =\dfrac{1}{2}\mathcal{E}_{\mu\nu}\mathcal{E}^{\mu\nu}=-\dfrac{1}{2}\mathcal{B}_{\mu\nu}\mathcal{B}^{\mu\nu}=c^{2}\left|\!\left|\mathbf{B}\right|\!\right|^{2}-\left|\!\left|\mathbf{E}\right|\!\right|^{2}
\tag{39.06}\\
Y & =\dfrac{1}{4}\mathcal{B}_{\mu\nu}\mathcal{E}^{\mu\nu}=c\mathbf{B}\boldsymbol{\cdot}\mathbf{E}
\tag{39.07}
\end{align} | {
"source": [
"https://physics.stackexchange.com/questions/34241",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11408/"
]
} |
34,243 | In Quantum Mechanics, position is an observable, but time may not be. I think that time is simply a classical parameter associated with the act of measurement, but is there a time observable? And if such an observable exists, what is the corresponding time operator? | Abstract In the following we'll prove that a compatible Lagrangian density for the electromagnetic field in the presence of charges and currents is \begin{equation}
\mathcal{L}_{em}\:=\:\epsilon_{0}\cdot\dfrac{\left|\!\left|\mathbf{E}\right|\!\right|^{2}-c^{2}\left|\!\left|\mathbf{B}\right|\!\right|^{2}}{2}-\rho \phi + \mathbf{j} \boldsymbol{\cdot} \mathbf{A}
\tag{045}
\end{equation} that is, the Euler-Lagrange equations produced from this Lagrangian are the Maxwell equations for the electromagnetic field. This Lagrangian density is derived by a trial and error (1) procedure, not by guessing. 1. Introduction Maxwell's differential equations for the electromagnetic field in the presence of charges and currents are \begin{align}
\boldsymbol{\nabla} \boldsymbol{\times} \mathbf{E} & = -\frac{\partial \mathbf{B}}{\partial t}
\tag{001a}\\
\boldsymbol{\nabla} \boldsymbol{\times} \mathbf{B} & = \mu_{0}\mathbf{j}+\frac{1}{c^{2}}\frac{\partial \mathbf{E}}{\partial t}
\tag{001b}\\
\nabla \boldsymbol{\cdot} \mathbf{E} & = \frac{\rho}{\epsilon_{0}}
\tag{001c}\\
\nabla \boldsymbol{\cdot}\mathbf{B}& = 0
\tag{001d}
\end{align} where $\: \mathbf{E} =$ electric field intensity vector, $\:\mathbf{B}=$ magnetic-flux density vector, $\:\rho=$ electric charge density, $\:\mathbf{j} =$ electric current density vector.
All quantities are functions of the three space coordinates $\:\left( x_{1},x_{2},x_{3}\right) \equiv \left( x,y,z\right)\:$ and time $\:t \equiv x_{4}\:$ . From equation (001d) the magnetic-flux vector $\:\mathbf{B}\:$ may be expressed as the curl of a vector potential $\:\mathbf{A}\:$ \begin{equation}
\mathbf{B}=\boldsymbol{\nabla} \boldsymbol{\times} \mathbf{A}
\tag{002}
\end{equation} and from (002) equation (001a) yields \begin{equation}
\boldsymbol{\nabla} \boldsymbol{\times}\left(\mathbf{E}+\frac{\partial \mathbf{A}}{\partial t}\right) =\boldsymbol{0}
\tag{003}
\end{equation} so the term in parentheses may be expressed as the gradient of a scalar function \begin{equation*}
\mathbf{E}+\frac{\partial \mathbf{A}}{\partial t} =-\boldsymbol{\nabla}\phi
\end{equation*} that is \begin{equation}
\mathbf{E} =-\boldsymbol{\nabla}\phi -\frac{\partial \mathbf{A}}{\partial t}
\tag{004}
\end{equation} So the six scalar variables, the components of vectors $\:\mathbf{E}\:$ and $\:\mathbf{B}\:$ , can be expressed as functions of 4 scalar variables, the scalar potential $\:\phi\:$ and three components of vector potential $\:\mathbf{A}$ . Inserting the expressions of $\:\mathbf{E}\:$ and $\:\mathbf{B}\:$ , equations (002) and (004) respectively, in equations (001b) and (001c) we have \begin{equation}
\boldsymbol{\nabla} \boldsymbol{\times} \left(\boldsymbol{\nabla} \boldsymbol{\times} \mathbf{A}\right) =\mu_{0}\mathbf{j}+\frac{1}{c^{2}}\frac{\partial }{\partial t}\left(-\boldsymbol{\nabla}\phi -\frac{\partial \mathbf{A}}{\partial t}\right)
\tag{005}
\end{equation} and \begin{equation}
\boxed{\:
-\nabla^{2}\phi-\frac{\partial }{\partial t}\left(\nabla \boldsymbol{\cdot}\mathbf{A}\right) =\frac{\rho}{\epsilon_{0}}
\:}
\tag{006}
\end{equation} Given that \begin{equation}
\boldsymbol{\nabla} \boldsymbol{\times} \left( \boldsymbol{\nabla} \boldsymbol{\times} \mathbf{A}\right) =\boldsymbol{\nabla}\left(\nabla \boldsymbol{\cdot}\mathbf{A}\right)- \nabla^{2}\mathbf{A}
\tag{007}
\end{equation} equation (005) yields \begin{equation}
\boxed{\:
\frac{1}{c^{2}}\frac{\partial^{2}\mathbf{A}}{\partial t^{2}}-\nabla^{2}\mathbf{A}+ \boldsymbol{\nabla}\left(\nabla \boldsymbol{\cdot} \mathbf{A}+\frac{1}{c^{2}}\frac{\partial \phi}{\partial t}\right) =\mu_{0}\mathbf{j}
\:}
\tag{008}
\end{equation} 2. The Euler-Lagrange equations of EM Field Now, our main task is to find a Lagrangian density $\:\mathcal{L}\:$ , a function of the four ''field coordinates'' and their first-order derivatives \begin{equation}
\mathcal{L}=\mathcal{L}\left(\eta_{\jmath}, \overset{\centerdot}{\eta}_{\jmath}, \boldsymbol{\nabla}\eta_{\jmath}\right) \qquad \left(\jmath=1,2,3,4\right)
\tag{009}
\end{equation} such that the four scalar electromagnetic field equations (006) and (008) are derived from the Lagrange equations \begin{equation}
\frac{\partial }{\partial t}\left[\frac{\partial \mathcal{L}}{\partial \left(\dfrac{\partial \eta_{\jmath}}{\partial t}\right)}\right]+\sum_{k=1}^{k=3}\frac{\partial }{\partial x_{k}}\left[\frac{\partial \mathcal{L}}{\partial \left(\dfrac{\partial \eta_{\jmath}}{\partial x_{k}}\right)}\right]- \frac{\partial \mathcal{L}}{\partial \eta_{\jmath}}=0\:, \quad \left(\jmath=1,2,3,4\right)
\tag{010}
\end{equation} simplified in notation to \begin{equation}
\boxed{\:
\dfrac{\partial }{\partial t}\left(\dfrac{\partial \mathcal{L}}{\partial \overset{\centerdot}{\eta}_{\jmath}}\right) + \nabla \boldsymbol{\cdot}\left[\dfrac{\partial \mathcal{L}}{\partial \left(\boldsymbol{\nabla}\eta_{\jmath}\right)}\right]- \frac{\partial \mathcal{L}}{\partial \eta_{\jmath}}=0, \quad \left(\jmath=1,2,3,4\right)
\:}
\tag{011}
\end{equation} Here the Lagrangian density $\:\mathcal{L}\:$ is a function of the four ''field coordinates'' \begin{align}
\eta_{1}&=\mathrm{A}_1\left( x_{1},x_{2},x_{3},t\right)
\tag{012.1}\\
\eta_{2}&=\mathrm{A}_2\left( x_{1},x_{2},x_{3},t\right)
\tag{012.2}\\
\eta_{3}&=\mathrm{A}_3\left( x_{1},x_{2},x_{3},t\right)
\tag{012.3}\\
\eta_{4}&=\:\;\phi \left( x_{1},x_{2},x_{3},t\right)
\tag{012.4}
\end{align} their time derivatives \begin{align}
\overset{\centerdot}{\eta}_{1} & \equiv \dfrac{\partial \eta_{1}}{\partial t} =\dfrac{\partial \mathrm{A}_{1}}{\partial t}\equiv\overset{\centerdot}{\mathrm{A}}_{1}
\tag{013.1}\\
\overset{\centerdot}{\eta}_{2} & \equiv \dfrac{\partial \eta_{2}}{\partial t} =\dfrac{\partial \mathrm{A}_{2}}{\partial t}\equiv \overset{\centerdot}{\mathrm{A}}_{2}
\tag{013.2}\\
\overset{\centerdot}{\eta}_{3} & \equiv \dfrac{\partial \eta_{3}}{\partial t} =\dfrac{\partial \mathrm{A}_{3}}{\partial t}\equiv\overset{\centerdot}{\mathrm{A}}_{3}
\tag{013.3}\\
\overset{\centerdot}{\eta}_{4} & \equiv \dfrac{\partial \eta_{4}}{\partial t} =\dfrac{\partial \phi}{\partial t}\equiv\overset{\centerdot}{\phi}
\tag{013.4}
\end{align} and their gradients \begin{equation}
\begin{array}{cccc}
\boldsymbol{\nabla}\eta_{1}=\boldsymbol{\nabla}\mathrm{A}_{1} \:,\: & \boldsymbol{\nabla}\eta_{2}=\boldsymbol{\nabla}\mathrm{A}_{2} \:,\: & \boldsymbol{\nabla}\eta_{3}=\boldsymbol{\nabla}\mathrm{A}_{3} \:,\: & \boldsymbol{\nabla}\eta_{4}=\boldsymbol{\nabla}\phi
\end{array}
\tag{014}
\end{equation} We express equations (006) and (008) in forms that are similar to the Lagrange equations (011) \begin{equation}
\boxed{\:
\dfrac{\partial }{\partial t}\left(\nabla \boldsymbol{\cdot} \mathbf{A}\right)+\nabla \boldsymbol{\cdot} \left(\boldsymbol{\nabla}\phi \right) -\left(-\frac{\rho}{\epsilon_{0}}\right) =0
\:}
\tag{015}
\end{equation} and \begin{equation}
\boxed{\:
\dfrac{\partial}{\partial t}\left(\frac{\partial \mathrm{A}_{k}}{\partial t}+\frac{\partial \phi}{\partial x_{k}}\right)+\nabla \boldsymbol{\cdot} \left[c^{2}\left(\frac{\partial \mathbf{A}}{\partial x_{k}}- \boldsymbol{\nabla}\mathrm{A}_{k}\right)\right] -\frac{\mathrm{j}_{k}}{\epsilon_{0}}=0
\:}
\tag{016}
\end{equation} The Lagrange equation (011) for $\:\jmath=4\:$ , that is for $\:\eta_{4}=\phi \:$ , is \begin{equation}
\frac{\partial }{\partial t}\left(\frac{\partial \mathcal{L}}{\partial \overset{\centerdot}{\phi}}\right) + \nabla \boldsymbol{\cdot} \left[\frac{\partial \mathcal{L}}{\partial \left(\boldsymbol{\nabla}\phi\right)}\right]- \frac{\partial \mathcal{L}}{\partial \phi}=0
\tag{017}
\end{equation} Comparing equations (015) and (017), we note that the first could be derived from the second if \begin{equation}
\dfrac{\partial \mathcal{L}}{\partial \overset{\centerdot}{\phi}}=\nabla \boldsymbol{\cdot} \mathbf{A}\:, \qquad \dfrac{\partial \mathcal{L}}{\partial \left(\boldsymbol{\nabla}\phi\right)}=\boldsymbol{\nabla}\phi \:, \qquad \frac{\partial \mathcal{L}}{\partial \phi}=-\frac{\rho}{\epsilon_{0}}
\tag{018}
\end{equation} so that the Lagrangian density $\:\mathcal{L}\:$ must contain respectively the terms \begin{equation}
\mathcal{L}_{\boldsymbol{\alpha_{1}}}\equiv\left(\nabla \boldsymbol{\cdot} \mathbf{A}\right)\overset{\centerdot}{\phi}\:, \qquad \mathcal{L}_{\boldsymbol{\alpha_{2}}}\equiv\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}\:, \qquad \mathcal{L}_{\boldsymbol{\alpha_{3}}}\equiv-\frac{\rho \phi}{\epsilon_{0}}
\tag{019}
\end{equation} and consequently their sum \begin{equation}
\mathcal{L}_{\boldsymbol{\alpha}}=\mathcal{L}_{\boldsymbol{\alpha_{1}}}+\mathcal{L}_{\boldsymbol{\alpha_{2}}} +\mathcal{L}_{\boldsymbol{\alpha_{3}}}=\left(\nabla \boldsymbol{\cdot} \mathbf{A}\right)\overset{\centerdot}{\phi}+\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}-\frac{\rho \phi}{\epsilon_{0}}
\tag{020}
\end{equation} We suppose that an appropriate Lagrangian density $\:\mathcal{L}\:$ would be of the form \begin{equation}
\mathcal{L}=\mathcal{L}_{\boldsymbol{\alpha}}+\mathcal{L}_{\boldsymbol{\beta}}
\tag{021}
\end{equation} and since $\:\mathcal{L}_{\boldsymbol{\alpha}}\:$ produces equation (015), we expect that $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ , to be determined, will produce equations (016). This expectation would be justified if equations (015) and (016) were decoupled, for example if the first contained $\:\phi $ -terms only and the second $\:\mathbf{A} $ -terms only. But this is not the case here: $\:\mathcal{L}_{\boldsymbol{\alpha}}\:$ , since it contains $\:\mathbf{A} $ -terms, would participate in producing equations (016), and moreover $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ would participate in producing equation (015), so the two parts could mutually spoil the equations we expect to obtain. Nevertheless we follow a trial and error procedure, which will lead to the right answer, as we'll see in the following. Now, the Lagrange equations (011) for $\:\jmath=k=1,2,3\:$ , that is for $\:\eta_{k}=\mathrm{A}_{k} \:$ , are \begin{equation}
\frac{\partial }{\partial t}\left(\dfrac{\partial \mathcal{L}}{\partial \overset{\centerdot}{\mathrm{A}}_{k}}\right) + \nabla \boldsymbol{\cdot} \left[\dfrac{\partial \mathcal{L}}{\partial \left(\boldsymbol{\nabla}\mathrm{A}_{k}\right)}\right]- \frac{\partial \mathcal{L}}{\partial \mathrm{A}_{k}}=0
\tag{022}
\end{equation} Comparing equations (016) and (022), we note that the first could be derived from the second if \begin{equation}
\dfrac{\partial \mathcal{L}}{\partial \overset{\centerdot}{\mathrm{A}}_{k}}= \overset{\centerdot}{\mathrm{A}}_{k}+\frac{\partial \phi}{\partial x_{k}}\:, \quad \dfrac{\partial \mathcal{L}}{\partial \left(\boldsymbol{\nabla}\mathrm{A}_{k}\right)}=c^{2}\left(\frac{\partial \mathbf{A}}{\partial x_{k}}- \boldsymbol{\nabla}\mathrm{A}_{k}\right)\:, \quad \frac{\partial \mathcal{L}}{\partial \mathrm{A}}_{k}=\frac{\mathrm{j}_{k}}{\epsilon_{0}}
\tag{023}
\end{equation} From the first of equations (023) the $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ part of the Lagrange density $\:\mathcal{L}\:$ must contain the terms \begin{equation}
\frac{1}{2}\left\Vert \overset{\centerdot}{\mathrm{A}}_{k}\right\Vert^{2}+\frac{\partial \phi}{\partial x_{k}}\overset{\centerdot}{\mathrm{A}}_{k}\:, \quad k=1,2,3
\tag{024}
\end{equation} and so their sum with respect to $\:k\:$ \begin{equation}
\mathcal{L}_{\boldsymbol{\beta_{1}}}\equiv \tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}
\tag{025}
\end{equation} From the 2nd of equations (023) the $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ part of the Lagrange density $\:\mathcal{L}\:$ must contain the terms \begin{equation}
\tfrac{1}{2}c^{2}\left[\frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k} -\Vert \boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right] \:, \quad k=1,2,3
\tag{026}
\end{equation} and so their sum with respect to $\:k\:$ \begin{equation}
\mathcal{L}_{\boldsymbol{\beta_{2}}}\equiv\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[ \frac{\partial \mathbf{A}}{\partial x_{k}}\boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}-\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]
\tag{027}
\end{equation} From the 3rd of equations (023) the $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ part of the Lagrange density $\:\mathcal{L}\:$ must contain the terms \begin{equation}
\frac{\mathrm{j}_{k}\mathrm{A}_{k}}{\epsilon_{0}} \:, \quad k=1,2,3
\tag{028}
\end{equation} and so their sum with respect to $\:k\:$ \begin{equation}
\mathcal{L}_{\boldsymbol{\beta_{3}}}\equiv \frac{\mathbf{j} \boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}
\tag{029}
\end{equation} From equations (025), (027) and (029) the $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ part of the Lagrange density $\:\mathcal{L}\:$ is \begin{align}
\mathcal{L}_{\boldsymbol{\beta}} & = \mathcal{L}_{\boldsymbol{\beta_{1}}}+\mathcal{L}_{\boldsymbol{\beta_{2}}} +\mathcal{L}_{\boldsymbol{\beta_{3}}}
\tag{030}\\
& = \tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\boldsymbol{\nabla}\phi \boldsymbol{\cdot}\mathbf{\dot{A}}+\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[ \frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}-\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]+\frac{\mathbf{j} \boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}
\nonumber
\end{align} Finally, from the expressions (020) and (030) for the densities $\:\mathcal{L}_{\boldsymbol{\alpha}},\mathcal{L}_{\boldsymbol{\beta}}\:$ the Lagrange density $\:\mathcal{L}=\mathcal{L}_{\boldsymbol{\alpha}}+\mathcal{L}_{\boldsymbol{\beta}}\:$ is \begin{align}
\mathcal{L}& = \mathcal{L}_{\boldsymbol{\alpha}} + \mathcal{L}_{\boldsymbol{\beta}}
\tag{031}\\
& = \left(\nabla \boldsymbol{\cdot} \mathbf{A}\right)\overset{\centerdot}{\phi}+\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}-\frac{\rho \phi}{\epsilon_{0}}
\nonumber\\
& + \tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}+\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[ \frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}-\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]+\frac{\mathbf{j}\boldsymbol{\cdot}\mathbf{A}}{\epsilon_{0}}
\nonumber\\
& \text{(this is a wrong Lagrange density)}
\nonumber
\end{align} 3. Error-Trial-Final Success Insertion of this Lagrange density expression in the Lagrange equation with respect to $\:\phi \:$ , that is equation (017), doesn't yield equation (006) but \begin{equation}
-\nabla^{2}\phi-\frac{\partial }{\partial t}\left(2\nabla \boldsymbol{\cdot} \mathbf{A}\right) =\frac{\rho}{\epsilon_{0}}\:, \quad (\textbf{wrong})
\tag{032}
\end{equation} The appearance of an extra $\:\left( \nabla \boldsymbol{\cdot} \mathbf{A}\right) \:$ is due to the term $\:\left( \boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}\right) \:$ of $\:\mathcal{L}_{\boldsymbol{\beta}}\:$ and that's why the Lagrange density given by equation (031) is not an appropriate one. In order to resolve this problem we must look at (015), that is (006), from a different point of view as follows \begin{equation}
\nabla \boldsymbol{\cdot}\left(\boldsymbol{\nabla}\phi + \mathbf{\dot{A}}\right) -\left(-\frac{\rho}{\epsilon_{0}}\right) =0
\tag{033}
\end{equation} Comparing equations (033) and (017), we note that the first could be derived from the second if in place of (018) we have \begin{equation}
\dfrac{\partial \mathcal{L}}{\partial \overset{\centerdot}{\phi}}=0\:, \qquad \dfrac{\partial \mathcal{L}}{\partial \left(\boldsymbol{\nabla}\phi\right)}=\boldsymbol{\nabla}\phi + \mathbf{\dot{A}} \:, \qquad \frac{\partial \mathcal{L}}{\partial \phi}=-\frac{\rho}{\epsilon_{0}}
\tag{034}
\end{equation} so in place of (019) and (020) respectively the equations \begin{equation}
\mathcal{L}^{\prime}_{\boldsymbol{\alpha_{1}}}\equiv 0\:, \quad \mathcal{L}^{\prime}_{\boldsymbol{\alpha_{2}}}\equiv\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2} +\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}\:, \quad \mathcal{L}^{\prime}_{\boldsymbol{\alpha_{3}}}=\mathcal{L}_{\boldsymbol{\alpha_{3}}}\equiv-\frac{\rho \phi}{\epsilon_{0}}
\tag{035}
\end{equation} \begin{equation}
\mathcal{L}^{\prime}_{\boldsymbol{\alpha}}=\mathcal{L}^{\prime}_{\boldsymbol{\alpha_{1}}}+\mathcal{L}^{\prime}_{\boldsymbol{\alpha_{2}}} +\mathcal{L}^{\prime}_{\boldsymbol{\alpha_{3}}}=\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}+\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}-\frac{\rho \phi}{\epsilon_{0}}
\tag{036}
\end{equation} Now, it's necessary to omit from $\:\mathcal{L}_{\boldsymbol{\beta_{1}}}\:$ , equation (025), the second term $\:\left( \boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}\right) \:$ since it appears in $\:\mathcal{L}^{\prime}_{\boldsymbol{\alpha_{2}}} \:$ , see the second of above equations (035). So we have in place of (025) \begin{equation}
\mathcal{L}^{\prime}_{\boldsymbol{\beta_{1}}}\equiv \tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}
\tag{037}
\end{equation} while $\:\mathcal{L}_{\boldsymbol{\beta_{2}}},\mathcal{L}_{\boldsymbol{\beta_{3}}}\:$ remain unchanged as in equations
(027) and (029) \begin{align}
\mathcal{L}^{\prime}_{\boldsymbol{\beta_{2}}} &=\mathcal{L}_{\boldsymbol{\beta_{2}}}\equiv\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[ \frac{\partial \mathbf{A}}{\partial x_{k}}\boldsymbol{\cdot}\boldsymbol{\nabla}\mathrm{A}_{k}-\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]
\tag{038} \\
\mathcal{L}^{\prime}_{\boldsymbol{\beta_{3}}} &=\mathcal{L}_{\boldsymbol{\beta_{3}}}\equiv \frac{\mathbf{j} \boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}
\tag{039}
\end{align} In place of (030) \begin{align}
\mathcal{L}^{\prime}_{\boldsymbol{\beta}} & = \mathcal{L}^{\prime}_{\boldsymbol{\beta_{1}}}+\mathcal{L}^{\prime}_{\boldsymbol{\beta_{2}}} +\mathcal{L}^{\prime}_{\boldsymbol{\beta_{3}}}
\tag{040} \\
& = \tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[ \frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}-\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]+\frac{\mathbf{j} \boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}
\nonumber
\end{align} and finally for the new Lagrangian density we have in place of (031) \begin{align}
\mathcal{L}^{\prime}& = \mathcal{L}^{\prime}_{\boldsymbol{\alpha}} + \mathcal{L}^{\prime}_{\boldsymbol{\beta}}
\tag{041} \\
& = \tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2} +\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}} -\frac{\rho \phi}{\epsilon_{0}}
\nonumber\\
& + \tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[ \frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k} -\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]+\frac{\mathbf{j} \boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}
\nonumber
\end{align} Density $\:\mathcal{L}^{\prime}\:$ of (041) is obtained from density $\:\mathcal{L}\:$ of (031) if we omit the term $\:\left(\nabla \boldsymbol{\cdot} \mathbf{A}\right)\overset{\centerdot}{\phi}\:$ . So $\:\mathcal{L}^{\prime}\:$ is independent of $\:\overset{\centerdot}{\phi}$ . In the following equations the brace over the left 3 terms groups that part of the density $\:\mathcal{L}^{\prime}\:$ that essentially participates in the production of the electromagnetic equation (006) from the Lagrange equation with respect to $\:\phi \:$ , equation (017), while the brace under the right 4 terms groups that part of the density $\:\mathcal{L}^{\prime}\:$ that essentially participates in the production of the electromagnetic equations (008) from the Lagrange equations with respect to $\:\mathrm{A}_{1},\mathrm{A}_{2},\mathrm{A}_{3} \:$ , equation (022). \begin{equation*}
\mathcal{L}^{\prime}=\overbrace{\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}-\frac{\rho \phi}{\epsilon_{0}}+\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}}^{\text{with respect to }\phi}+\tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[\frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}-\Vert \boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]+\frac{\mathbf{j} \boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}
\end{equation*} \begin{equation*}
\mathcal{L}^{\prime}=\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}-\frac{\rho \phi}{\epsilon_{0}}+\underbrace{\boldsymbol{\nabla}\phi\boldsymbol{\cdot} \mathbf{\dot{A}}+\tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\tfrac{1}{2}c^{2}\sum^{k=3}_{k=1}\left[\frac{\partial \mathbf{A}}{\partial x_{k}} \boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}-\Vert\boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}\right]+\frac{\mathbf{j}\boldsymbol{\cdot} \mathbf{A}}{\epsilon_{0}}}_{\text{with respect to }\mathbf{A}}
\end{equation*} Note the common term $\:\left( \boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}\right)$ . Reordering the terms in the expression (041) of the density $\:\mathcal{L}^{\prime}\:$ we have \begin{equation}
\mathcal{L}^{\prime}=\underbrace{\tfrac{1}{2}\left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\tfrac{1}{2}\Vert \boldsymbol{\nabla}\phi \Vert^{2}+\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}}_{\tfrac{1}{2}\left\Vert - \boldsymbol{\nabla}\phi -\frac{\partial \mathbf{A}}{\partial t}\right\Vert^{2}}-\tfrac{1}{2}c^{2}\underbrace{\sum^{k=3}_{k=1}\left[\Vert \boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}-\frac{\partial \mathbf{A}}{\partial x_{k}}\boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}\right]}_{\left\Vert \boldsymbol{\nabla} \boldsymbol{\times} \mathbf{A}\right\Vert^{2}}+\frac{1}{\epsilon_{0}}\left( -\rho \phi + \mathbf{j}\boldsymbol{\cdot} \mathbf{A}\right)
\tag{042}
\end{equation} that is \begin{equation}
\mathcal{L}^{\prime}=\tfrac{1}{2}\left|\!\left|- \boldsymbol{\nabla}\phi -\frac{\partial \mathbf{A}}{\partial t}\right|\!\right|^{2}-\tfrac{1}{2}c^{2}\left|\!\left| \boldsymbol{\nabla} \boldsymbol{\times} \mathbf{A}\right|\!\right|^{2}+\frac{1}{\epsilon_{0}}\left( -\rho \phi + \mathbf{j} \boldsymbol{\cdot} \mathbf{A}\right)
\tag{043}
\end{equation} or \begin{equation}
\mathcal{L}^{\prime}=\frac{\left|\!\left|\mathbf{E}\right|\!\right|^{2}-c^{2}\left|\!\left|\mathbf{B}\right|\!\right|^{2}}{2}+\frac{1}{\epsilon_{0}}\left( -\rho \phi + \mathbf{j}\boldsymbol{\cdot}\mathbf{A}\right)
\tag{044}
\end{equation} Now, in order for the density $\:\mathcal{L}^{\prime}\:$ to have dimensions of energy per unit volume, we define $\:\mathcal{L}_{em}=\epsilon_{0}\mathcal{L}^{\prime} \:$ , so \begin{equation}
\boxed{\:
\mathcal{L}_{em}\:=\:\epsilon_{0}\cdot\dfrac{\left|\!\left|\mathbf{E}\right|\!\right|^{2}-c^{2}\left|\!\left|\mathbf{B}\right|\!\right|^{2}}{2}-\rho \phi + \mathbf{j} \boldsymbol{\cdot} \mathbf{A}
\:}
\tag{045}
\end{equation} having in mind that \begin{align}
\left\Vert\mathbf{E}\right\Vert^{2} & = \left\Vert - \boldsymbol{\nabla}\phi -\dfrac{\partial \mathbf{A}}{\partial t}\right\Vert^{2} = \left\Vert \mathbf{\dot{A}}\right\Vert^{2}+\Vert \boldsymbol{\nabla}\phi \Vert^{2}+2\left(\boldsymbol{\nabla}\phi \boldsymbol{\cdot} \mathbf{\dot{A}}\right)
\tag{046a}\\
&
\nonumber\\
\left\Vert\mathbf{B}\right\Vert^{2} & = \left\Vert\boldsymbol{\nabla} \boldsymbol{\times} \mathbf{A}\right\Vert^{2}=\sum^{k=3}_{k=1}\left[\Vert \boldsymbol{\nabla}\mathrm{A}_{k}\Vert^{2}-\dfrac{\partial \mathbf{A}}{\partial x_{k}}\boldsymbol{\cdot} \boldsymbol{\nabla}\mathrm{A}_{k}\right]
\tag{046b}
\end{align} The scalar $\:\left(\left|\!\left|\mathbf{E}\right|\!\right|^{2}-c^{2}\left|\!\left|\mathbf{B}\right|\!\right|^{2}\right)\:$ is one of the two Lorentz invariants (2) of the field (the other is $\:\mathbf{E}\boldsymbol{\cdot}\mathbf{B}$ ) essentially equal to a constant times $\:\mathcal{E}_{\mu\nu}\mathcal{E}^{\mu\nu}\:$ , where $\:\mathcal{E}^{\mu\nu}\:$ the antisymmetric field (2) tensor. On the other hand, the scalar $\: \left(-\rho \phi + \mathbf{j} \boldsymbol{\cdot} \mathbf{A}\right)\:$ is essentially the inner product $\:J_{\mu}A^{\mu}\:$ in Minkowski space of two 4-vectors : the 4-current density $\:J^{\mu}=\left(c\rho,\mathbf{j}\right)\:$ and the 4-potential $\:A^{\mu}=\left(\phi/c,\mathbf{A}\right)\:$ , a Lorentz invariant scalar too. So, the Lagrange density $\:\mathcal{L}_{em}\:$ in equation (045) is Lorentz invariant. (1) By a trial and error procedure I found the Lagrangian in a more difficult and complicated case : see my answer as user82794 here Obtain the Lagrangian from the system of coupled equation (2) Following W.Rindler in "Introduction to Special Relativity" Ed.1982, this tensor is derived in equation (38.15) \begin{equation}
\mathcal{E}_{\mu\nu}=
\begin{bmatrix}
0 & E_{1} & E_{2} & E_{3} \\
-E_{1} & 0 & -cB_{3} & cB_{2} \\
-E_{2} & cB_{3} & 0 & -cB_{1} \\
-E_{3} & -cB_{2} & cB_{1} & 0
\end{bmatrix}
\quad \text{so} \quad
\mathcal{E}^{\mu\nu}=
\begin{bmatrix}
0 & -E_{1} & - E_{2} & -E_{3} \\
E_{1} & 0 & -cB_{3} & cB_{2} \\
E_{2} & cB_{3} & 0 & -cB_{1} \\
E_{3} & -cB_{2} & cB_{1} & 0
\end{bmatrix}
\tag{38.15}
\end{equation} which by making the (duality) replacements $\:\mathbf{E}\to -c\mathbf{B}\:$ and $\:c\mathbf{B}\to \mathbf{E}\:$ yields \begin{equation}
\mathcal{B}_{\mu\nu}=
\begin{bmatrix}
0 & -cB_{1} & -cB_{2} & - cB_{3} \\
cB_{1} & 0 & -E_{3} & E_{2} \\
cB_{2} & E_{3} & 0 & -E_{1} \\
cB_{3} & -E_{2} & E_{1} & 0
\end{bmatrix}
\quad \text{so} \quad
\mathcal{B}^{\mu\nu}=
\begin{bmatrix}
0 & cB_{1} & cB_{2} & cB_{3} \\
-cB_{1} & 0 & -E_{3} & E_{2} \\
-cB_{2} & E_{3} & 0 & -E_{1} \\
-cB_{3} & -E_{2} & E_{1} & 0
\end{bmatrix}
\tag{39.05}
\end{equation} The two invariants of $\:\mathcal{E}^{\mu\nu}\:$ -immediately recognizable as such from their mode of formation - can be expressed as follows: \begin{align}
X & =\dfrac{1}{2}\mathcal{E}_{\mu\nu}\mathcal{E}^{\mu\nu}=-\dfrac{1}{2}\mathcal{B}_{\mu\nu}\mathcal{B}^{\mu\nu}=c^{2}\left|\!\left|\mathbf{B}\right|\!\right|^{2}-\left|\!\left|\mathbf{E}\right|\!\right|^{2}
\tag{39.06}\\
Y & =\dfrac{1}{4}\mathcal{B}_{\mu\nu}\mathcal{E}^{\mu\nu}=c\mathbf{B}\boldsymbol{\cdot}\mathbf{E}
\tag{39.07}
\end{align} | {
"source": [
"https://physics.stackexchange.com/questions/34243",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11235/"
]
} |
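As an aside (not part of the original answer), the boxed Lagrangian density (045) derived above can be cross-checked symbolically: its Euler-Lagrange equation with respect to $\phi$ should reproduce Gauss's law, equation (006). A sketch using sympy's `euler_equations`:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t, x, y, z = sp.symbols('t x y z')
eps0, c = sp.symbols('epsilon_0 c', positive=True)
X = (x, y, z)

phi = sp.Function('phi')(t, x, y, z)
A = [sp.Function(f'A{k}')(t, x, y, z) for k in range(3)]
rho = sp.Function('rho')(t, x, y, z)
J = [sp.Function(f'j{k}')(t, x, y, z) for k in range(3)]

# E = -grad(phi) - dA/dt and B = curl(A), as in (046a)-(046b)
E = [-phi.diff(X[k]) - A[k].diff(t) for k in range(3)]
B = [A[2].diff(y) - A[1].diff(z),
     A[0].diff(z) - A[2].diff(x),
     A[1].diff(x) - A[0].diff(y)]

# equation (045): L_em = eps0*(|E|^2 - c^2 |B|^2)/2 - rho*phi + j.A
L = (eps0*(sum(e**2 for e in E) - c**2*sum(b**2 for b in B))/2
     - rho*phi + sum(J[k]*A[k] for k in range(3)))

# Euler-Lagrange equation for phi (first entry, since phi is listed first)
el_phi = euler_equations(L, [phi] + A, [t, x, y, z])[0].lhs

# Gauss's law (006) in potential form: eps0 * div E - rho = 0
gauss = eps0*sum(E[k].diff(X[k]) for k in range(3)) - rho

# el_phi agrees with Gauss's law up to an overall sign convention
print(sp.simplify(el_phi - gauss), "/", sp.simplify(el_phi + gauss))
```

The remaining entries of the returned list are the equations for $\mathrm{A}_{1},\mathrm{A}_{2},\mathrm{A}_{3}$ , which can be compared with (008) in the same way.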
34,352 | Light is clearly affected by gravity, just think about a black hole, but light supposedly has no mass and gravity only affects objects with mass. On the other hand, if light does have mass, then doesn't mass become infinitely larger the closer to the speed of light an object travels? So this would result in light having an infinite mass, which is impossible. Any explanations? | In general relativity, gravity affects anything with energy . While light doesn't have rest mass, it still has energy --- and is thus affected by gravity. If you think of gravity as a distortion in space-time (a la general relativity), it doesn't matter what the secondary object is. As long as it exists, gravity affects it. | {
"source": [
"https://physics.stackexchange.com/questions/34352",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11449/"
]
} |
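The answer above can be made quantitative: in the weak-field limit, general relativity predicts that light grazing the Sun is deflected by $\theta = 4GM/(c^{2}b)$ , twice the naive Newtonian value, precisely because gravity couples to the light's energy. A quick numerical sketch (the constants are standard values; the formula is the textbook result, not something stated in the answer):

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2, Newton's constant
M_sun = 1.989e30   # kg, solar mass
c = 2.998e8        # m/s, speed of light
R_sun = 6.957e8    # m, impact parameter for a ray grazing the solar limb

theta = 4 * G * M_sun / (c**2 * R_sun)   # radians, weak-field GR deflection
theta_arcsec = math.degrees(theta) * 3600

print(f"grazing deflection: {theta_arcsec:.2f} arcsec")  # ~1.75, the 1919 Eddington value
```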
34,993 | [ Update: Thanks, everyone, for the wonderful replies! I learned something extremely interesting and relevant (namely, the basic way decoherence works in QFT), even though it wasn't what I thought I wanted to know when I asked the question. Partly inspired by wolfgang's answer below, I just asked a new question about Gambini et al.'s "Montevideo interpretation," which (if it worked as claimed) would provide a completely different sort of "gravitational decoherence."] This question is about very speculative technology, but it seems well-defined, and it's hard to imagine that physics.SE folks would have nothing of interest to say about it. For what follows, I'll assume that whatever the right quantum theory of gravity is, it's perfectly unitary, so that there's no problem at all creating superpositions over different configurations of the gravitational metric. I'll also assume that we live in de Sitter space. Suppose someone creates a superposition of the form (1) $\frac{\left|L\right\rangle+\left|R\right\rangle}{\sqrt{2}},$ where |L> represents a large mass on the left side of a box, and |R> represents that same mass on the right side of the box. And suppose this mass is large enough that the |L> and |R> states couple "detectably differently" to the gravitational field (but on the other hand, that all possible sources of decoherence other than gravity have been removed). Then by our assumptions, we ought to get gravity-induced decoherence . 
That is, the |L> state will get entangled with one "sphere of gravitational influence" spreading outwards from the box at the speed of light, and the |R> state will get entangled with a different such sphere, with the result being that someone who measures only the box will see just the mixed state (2) $\frac{\left|L\right\rangle\left\langle L\right|+\left|R\right\rangle\left\langle R\right|}{2}.$ My question is now the following: Is there any conceivable technology, consistent with known physics (and with our assumption of a dS space), that could reverse the decoherence and return the mixed state (2) to the pure state (1)? If so, how might it work? For example: if we'd had sufficient foresight, could we have surrounded the solar system with "gravity mirrors," which would reflect the outgoing spheres of gravitational influence back to the box from which they'd originated? Are exotic physical assumptions (like negative-energy matter) needed to make such mirrors work? The motivation, of course, is that if there's no such technology, then at least in dS space, we'd seem to have a phenomenon that we could justifiably call "true, in-principle irreversible decoherence," without having to postulate any Penrose-like "objective reduction" process, or indeed any new physics whatsoever. (And yes, I'm well aware that the AdS/CFT correspondence strongly suggests that this phenomenon, if it existed, would be specific to dS space and wouldn't work in AdS.) [Note: I was surprised that I couldn't find anyone asking this before, since whatever the answer, it must have occurred to lots of people! Vaguely-related questions: Is decoherence even possible in anti de Sitter space? , Do black holes play a role in quantum decoherence? 
] | If we do an interference experiment with a (charged) particle coupled to the electromagnetic field or a massive particle coupled to the gravitational field, we can see interference if no information gets stored in the environment about which path the particle followed (or at least, if the states of the environment corresponding to the two paths through the interferometer have a large overlap --- if the overlap is not 1 the visibility of the interference fringes is reduced). The particle is "dressed" by its electromagnetic or gravitational field, but that is not necessarily enough to leave a permanent record behind. For an electron, if it emits no photon during the experiment, the electromagnetic field stays in the vacuum state, and records no "which-way" information. So two possible paths followed by the electron can interfere. But if a single photon gets emitted, and the state of the photon allows us to identify the path taken with high success probability, then there is no interference. What actually happens in an experiment with electrons is kind of interesting. Since photons are massless they are easy to excite if they have long wavelength and hence low energy. Whenever an electron gets accelerated, many "soft" (i.e., long wavelength) photons get emitted. But if the acceleration is weak, the photons have such long wavelength that they provide little information concerning which path, and interference is possible. It is the same with gravitons, except that the probability of emitting a "hard" graviton (with short enough wavelength to distinguish the paths) is far, far smaller than for photons, and therefore gravitational decoherence is extremely weak. These soft photons (or gravitons) can be well described using classical electromagnetic (or gravitational) theory.
This helps one to appreciate how the intuitive picture --- the motion of the electron through the interferometer should perturb the electric field at long range --- is reconciled with the survival of interference. Yes, it's true that the electric field is affected by the electron's (noninertial) motion, but the very long wavelength radiation detected far away looks essentially the same for either path followed by the electron; by detecting this radiation we can distinguish the paths only with very poor resolution, i.e. hardly at all. In practice, loss of visibility in decoherence experiments usually occurs due to more mundane processes that cause "which-way" information to be recorded (e.g. the electron gets scattered by a stray atom, dust grain, or photon). Decoherence due to entanglement of the particle with its field (i.e. the emission of photons or gravitons that are not very soft) is always present at some level, but typically it is a small effect. | {
"source": [
"https://physics.stackexchange.com/questions/34993",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8940/"
]
} |
35,090 | See The Principle of Relativity here: The Principles of Mathematical Physics . This was written by Poincaré in 1904, a year before Einstein published his theory of relativity . It appears from this and other writings of Poincaré that Poincaré discovered the theory of special relativity before Einstein. So why does Einstein get the credit? | Poincaré was confused on several points. (See the discussion on Wikipedia regarding "mass energy equivalence".) He could never get the mechanical relations straight, since he could not figure out that $E=mc^2$. Einstein followed Poincaré closely in 1905; he was aware of Poincaré's work, but he derived the theory simply as a geometric symmetry, and made a complete system. Einstein did share the credit with Lorentz and Poincaré for special relativity for a while, probably one reason his Nobel prize did not mention relativity. Pauli in the Encyclopædia Britannica article famously credits Einstein alone for formulating the relativity principle, as did Lorentz. Poincaré was less accommodating. He would say "Einstein just assumed that which we were all trying to prove" (namely the principle of relativity). (I could not find a reference for this, and I might be misquoting. It is important, because it shows whether Poincaré was still trying to get relativity from Maxwell's equations, rather than making a new postulate—I don't know.) Special relativity was ripe for discovery in 1905, and Einstein wasn't the only one who could have done it, although he did do it best, and only he got the $E=mc^2$ without which nothing makes sense. Poincaré and Lorentz deserve at least 50% of the credit (as Einstein himself accepted), and Poincaré has most of the modern theory, so Einstein's sole completely original contribution is $E=mc^2$. | {
"source": [
"https://physics.stackexchange.com/questions/35090",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7870/"
]
} |
35,095 | I need to do some calculation to find out whether my design works. I may use oil/water/air in my pneumatic cylinder (or you can call it hydraulic cylinder). Assume I have just a cylinder and I put oil into the chamber to raise a weight. My question is how much of volume will reduce when the weight is x kg (Let assumes x = 20). Also what if I change the oil to water or air. From my knowledge air is easier to calculate, because I can use the ideal gas law. Am I right? Thanks | | {
"source": [
"https://physics.stackexchange.com/questions/35095",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10427/"
]
} |
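The hydraulic-cylinder question in the record above can be estimated with a bulk-modulus calculation: for a liquid, $\Delta V/V \approx \Delta P/K$ , while a gas column follows Boyle's law. A sketch (the piston area and the bulk moduli $K_{\text{oil}}\approx 1.5$ GPa and $K_{\text{water}}\approx 2.2$ GPa are assumed illustrative values, not from the original post):

```python
m = 20.0      # kg, the weight x from the question
g = 9.81
area = 1e-3   # m^2, assumed piston area (about a 36 mm bore)

dP = m * g / area   # extra pressure the fluid must support, ~2e5 Pa

K = {"oil": 1.5e9, "water": 2.2e9}   # Pa, typical bulk moduli (assumed values)
for fluid, k in K.items():
    print(f"{fluid:5}: dV/V = {dP/k:.1e}")   # ~1e-4: liquids barely compress

# air, treated as an isothermal ideal gas (Boyle's law): P1 V1 = P2 V2
P1 = 101325.0
print(f"air  : V2/V1 = {P1/(P1 + dP):.2f}")  # the gas column shrinks to roughly a third
```

So the questioner is right that air is the easy case via the gas law; for oil or water the volume change is negligible at these loads.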
35,139 | Basic facts: The world's deepest mine is 2.4 miles deep. Railguns can achieve a muzzle velocity of a projectile on the order of 7.5 km/s. The Earth's escape velocity is 11.2 km/s. It seems to me that a railgun-style launch device built into a deep shaft such as an abandoned mine could reasonably launch a vehicle into space. I have not run the calculations and I wouldn't doubt that there might be issues with high G's that limit the potential for astronauts on such a vehicle, but even still it seems like it would be cheaper to build such a launch device and place a powerplant nearby to run it than it is to build and fuel single-use rockets. So, what is the possibility of a railgun-assisted orbital launch? What am I missing here? Why hasn't this concept received more attention? | Ok David asked me to bring the rain. Here we go. Indeed it is very feasible and very efficient to use an electromagnetic accelerator to launch something into orbit, but first a look at our alternatives: Space Elevator : we don't have the tech Rockets : You spend most of the energy carrying the fuel, and the machinery is complicated, dangerous, and cannot be reused (no orbital launch vehicle has been 100% reusable; SpaceShipOne is suborbital, more on the distinction in a moment). Look at the SLS that NASA is developing: the specs aren't much better than the Saturn V's, and that was 50 years ago. The reason is that rocket fuel is exactly the same - there is only so much energy you can squeeze out of these reactions. If there is a breakthrough in rocket fuel that is one thing, but as there has been none and there is none on the horizon, rockets as an orbital launch vehicle are a dead-end technology whose pinnacle we have already reached.
Cannons : Acceleration by a pressure wave is limited to the speed of sound in the medium, so you cannot use any explosive, as you will be limited by this (gunpowder is around $2\text{ km/s}$ ; this is why battleship cannons have not increased in range over the last 100 years). Using hydrogen as the medium you can achieve velocities up to 11 km/s. This is the regime of 'light gas guns', and a company wants to use this to launch things into orbit. This requires high accelerations (something ridiculous like tens of thousands of $\mathrm{m/s^2}$ , i.e. thousands of $g$ ), which restricts you to very hardened electronics and material supply such as fuel and water. Maglev : Another company is planning on this ( http://www.startram.com/ ), but if you look at their proposal it requires superconducting loops running something like 200 MA, generating a magnetic field that would destroy all communications in several states; I find this unlikely to be constructed. Electromagnetic accelerator (railgun) : This is going to be awesome! There is no requirement of high accelerations (a railgun can operate at lower accelerations) and no limit on upper speed. See the following papers: "Low-Cost Launch System and Orbital Fuel Depot" and "Launch to Space with Electromagnetic Rail Gun" Some quick distinctions: there is suborbital and there is orbital launch. Suborbital can achieve quite large altitudes which are well into space; sounding rockets can go up to 400 miles, and space starts at 60 miles. The difference is whether you have enough tangential velocity to achieve orbit. For $1\text{ kg}$ at $200\text{ km}$ above the Earth, the energy to lift it to that height is roughly $mgh \approx 2\text{ MJ}$ , but the tangential speed required to stay in orbit follows from $m v^2 / r = G m M / r^2$ , yielding $KE = \tfrac{1}{2} m v^2 = \tfrac{1}{2} G m M / r \approx 30\text{ MJ}$ , so you need far more kinetic energy tangentially. To do anything useful you need to be orbital, so you don't want to aim your gun straight up; you want it at some gentle angle going up a mountain or something.
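The lift-versus-orbit comparison in the passage above can be redone without the constant-$g$ shortcut; a short sketch (standard values for $G$ and Earth's mass and radius; the circular-orbit formula is the one quoted above):

```python
G = 6.674e-11   # m^3 kg^-1 s^-2, Newton's constant
M = 5.972e24    # kg, Earth's mass
R = 6.371e6     # m, Earth's mean radius
m = 1.0         # kg, payload
h = 200e3       # m, target altitude
r = R + h

# energy to lift m from the surface to altitude h (exact, instead of m*g*h)
lift = G * M * m * (1/R - 1/r)

# circular-orbit kinetic energy at r, from m v^2 / r = G m M / r^2
ke_orbit = 0.5 * G * M * m / r

print(f"lift energy: {lift/1e6:.1f} MJ")      # ~1.9 MJ
print(f"orbital KE : {ke_orbit/1e6:.1f} MJ")  # ~30 MJ, the dominant cost
```

The tangential kinetic energy really is more than an order of magnitude larger than the lift energy, which is why the gun should fire at a shallow angle rather than straight up.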
The papers I cited all have the railgun going up a mountain, about a mile long, and launching water and cargo. That is because to achieve the $6\text{ km/s}+$ you need for orbital velocity you must accelerate the object from a standstill over the length of your track, and the shorter the track, the higher the acceleration. You will need about 100 miles of track to drop the accelerations to within the survival tolerances NASA has. Why would you want to do this? You just need to maintain the power systems and the rails, which are on the ground, so you can have crews on it the whole time. The entire thing is reusable, and can be reused many times a day. You can also have a standard size of object it launches, which opens a massive market of spacecraft producers: small companies that can't pay 20 million for a launch can now afford the 500,000 for a launch. The electric cost of a railgun launch drops to about \$3/kg, which means all the money from the launch goes to maintenance and capital costs, and once the gun is paid down prices can drop dramatically. It is the only way humanity currently has the technology for that can launch large quantities of material, and in the end it is all about mass launched. No one has considered a railgun many miles long because it sounds crazy right off the bat, so most proposals are for small high-acceleration railguns as in the papers above. The issue is that this limits what they can launch, and as soon as you do that no one is very much interested. Why is a long railgun crazy? In reality it isn't: the raw materials (aluminum rails, concrete tube, flywheels, and vacuum pumps) are all known and cheap. If they could build a railroad of iron 2000 miles long in the 1800s, why can't we build 150 miles of aluminum in the 2000s?
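The track-length claim above is just constant-acceleration kinematics, $v^{2}=2aL$ ; a sketch comparing a mile-scale cargo gun with a ~100-mile crewed one (the 6 km/s muzzle speed is taken from the answer; the track lengths are the round numbers it mentions):

```python
v = 6000.0   # m/s, roughly the muzzle speed needed for orbital insertion
g = 9.81

for miles in (1, 100):
    L = miles * 1609.34       # track length in metres
    a = v**2 / (2 * L)        # required constant acceleration from v^2 = 2 a L
    print(f"{miles:>4}-mile track: a = {a:8.0f} m/s^2 = {a/g:7.1f} g")
```

A one-mile track comes out above a thousand g (hardened cargo only), while ~100 miles brings it down to roughly ten g, which is the regime the answer describes as survivable.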
The question is one of money and willpower: someone needs to show that this will work, and not just write papers about it but get out there and do it, if we are ever to have a hope of getting off this rock as a species and not just as the 600 or so who have gone already. Also, the large companies and space agencies are not going to risk billions on a new project while there is technology which has been perfected and proven for the last 80 years that they could use. There are a lot of engineering challenges, some of which I and others have been working on in our spare time and have solved, and some of which are still open problems. I and several other scientists who are finishing/have recently finished their PhDs plan on pursuing this course (jeff ross and josh at solcorporation.com ; the website isn't up yet because I finished my PhD 5 days ago, but it is coming). CONCLUSIONS Yes it is possible: the tech is here, and it is economical and feasible to launch anything from cargo to people. It has not gotten a lot of attention because all the big boys use rockets already, and no one has proposed a railgun that can launch more than cargo. But it has caught the attention of some young scientists who are going to gun for this, so sit back and check the news in a few years. | {
"source": [
"https://physics.stackexchange.com/questions/35139",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5020/"
]
} |
35,674 | I was coding a physics simulation, and noticed that I was using discrete time. That is, there was an update mechanism advancing the simulation for a fixed amount of time repeatedly, emulating a changing system. I thought that was interesting, and now believe the real world must behave just like my program does. Is it actually advancing forward in tiny but discrete time intervals? | As we cannot resolve arbitrarily small time intervals, what is "really" the case cannot be decided. But in classical and quantum mechanics (i.e., in most of physics), time is treated as continuous. Physics would become very awkward if expressed in terms of a discrete time:
The discrete case is essentially intractable since analysis (the tool created by Newton, in a sense the father of modern physics) can no longer be applied. Edit: If time appears discrete (or continuous) at some level, it could still be continuous (or discrete) at higher resolution. This is due to general reasons that have nothing to do with time per se. I explain it by analogy: For example, line spectra look discrete, but upon higher resolution one sees that they have a line width with a physical meaning. Thus one cannot definitively resolve the question with finitely many observations of finite accuracy, no matter how contrived the experiment. | {
"source": [
"https://physics.stackexchange.com/questions/35674",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5686/"
]
} |
36,092 | This diagram from wikipedia shows the gravitational potential energy of the Sun-Earth two-body system, and demonstrates clearly the semi-stability of the $L_1$, $L_2$, and $L_3$ Lagrangian points. The blue arrows indicate lower potential energy, red higher - so any movement in the plane perpendicular to the masses would require energy and without it, an object there would settle. However, $L_4$ and $L_5$ are claimed to be stable, even though the direction arrows indicate they're at the top of a gravity well, and would fall into lower potential energy in any direction. What is it that makes these points stable if not gravity? What am I missing? | When you look at the dynamics in the rotating reference frame, there are 4 forces acting on the particle: the two gravitational pulls from the massive bodies, the centrifugal push away from the center of rotation (located between the massive objects) and the Coriolis force. The first three forces depend on the position of the particle, and can be derived from a potential (that also depends on the position), whose level curves are shown in the picture presented with the question. This potential has local maxima at L4 and L5. The Coriolis force depends on the velocity of the particle: it is perpendicular to it, contained in the plane of motion and proportional to the speed. It curves the motion of the particle to the right (if the massive bodies and the reference system rotate counterclockwise, which is what you see in our Solar System if you stand on the North pole of the Earth). If a particle placed at L4 tries to leave the point with a mild speed, the Coriolis force curves its trajectory. The trajectory is too curly to get anywhere. See the animation at http://demonstrations.wolfram.com/OrbitsAroundTheLagrangePointL4/ . Of course this doesn't prove that the particle will stay near L4 forever. I don't know a proof.
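The linearised stability referred to below is the standard Routh criterion for the circular restricted three-body problem: L4/L5 are linearly stable precisely when $27\mu(1-\mu) < 1$, where $\mu$ is the mass fraction of the lighter primary. A sketch:

```python
import math

def l4_linearly_stable(mu):
    """Routh criterion for L4/L5 in the circular restricted three-body
    problem: the linearised motion is stable iff 27*mu*(1 - mu) < 1,
    where mu = m2/(m1 + m2) is the mass parameter of the lighter body."""
    return 27 * mu * (1 - mu) < 1

mu_crit = (1 - math.sqrt(23 / 27)) / 2    # critical mass parameter, ~0.0385

print(l4_linearly_stable(0.0121))   # True  (Earth-Moon system)
print(l4_linearly_stable(0.5))      # False (equal masses)
```

This matches the statement that stability needs the primary to be sufficiently heavier than the secondary, and (as the answer notes) linear stability alone does not settle the full nonlinear problem.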
I've seen some computations that show that the dynamical equation linearised at L4 is stable if the mass ratio of the massive objects is sufficiently large, but this also is not enough to prove stability in the non-linearised problem. I would be convinced that the equilibrium is stable if I were shown that there exists a conserved quantity (depending on the position and speed) that has a strict local extremum at that point of phase space (position=L4, speed=0). The "energy" (potential discussed above + kinetic energy measured in our non-inertial reference system) is conserved, because the Coriolis force is perpendicular to the trajectory, so it doesn't perform work (in fact, in Lagrangian mechanics it is derived from a potential that depends on the position and speed of the particle). But this quantity doesn't have an extremum at our equilibrium point, because the potential has a local maximum at L4 and the kinetic term is minimum when the speed is 0. So I can't prove that the equilibrium is stable. | {
"source": [
"https://physics.stackexchange.com/questions/36092",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8850/"
]
} |
36,167 | A friend offered me a brain teaser to which the solution involves a $195$ pound man juggling two $3$-pound balls to traverse a bridge having a maximum capacity of only $200$ pounds. He explained that since the man only ever holds one $3$-pound object at a time, the maximum combined weight at any given moment is only $195 + 3=198$ pounds, and the bridge would hold. I corrected him by explaining that the acts of throwing up and catching the ball temporarily make you 'heavier' (an additional force is exerted by the ball to me and by me onto the bridge due to the change in momentum when throwing up or catching the ball), but admitted that gentle tosses/catches (less acceleration) might offer a situation in which the force on the bridge never reaches the combined weight of the man and both balls. Can the bridge withstand the man and his balls? | Suppose you throw the ball upwards at some speed $v$. Then the time it spends in the air is simply: $$ t_{\text{air}} = 2 \frac{v}{g} $$ where $g$ is the acceleration due to gravity. When you catch the ball you have it in your hand for a time $t_{\text{hand}}$ and during this time you have to apply enough acceleration to it to slow the ball from it's descent velocity of $v$ downwards and throw it back up with a velocity $v$ upwards: $$ t_{\text{hand}} = 2 \frac{v}{a - g} $$ Note that I've written the acceleration as $a - g$ because you have to apply at least an acceleration of $g$ to stop the ball accelerating downwards. The acceleration $a$ you have to apply is $g$ plus the extra acceleration to accelerate the ball upwards. You want the time in the hand to be as long as possible so you can use as little acceleration as possible. However $t_{\text{hand}}$ can't be greater than $t_{\text{air}}$ otherwise there would be some time during which you were holding both balls. If you want to make sure you are only ever holding one ball at a time the best you can do is make $t_{\text{hand}}$ = $t_{\text{air}}$. 
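Before doing the algebra, the two timing formulas can be tabulated numerically: gentler catches need more hand time, and only at $a \ge 2g$ does the required hand time fit within the flight time (a sketch with an illustrative toss speed):

```python
g = 9.8   # m/s^2
v = 2.0   # toss speed in m/s (illustrative; the conclusion holds for any v)

t_air = 2 * v / g                     # time one ball spends in the air
for a_over_g in (1.5, 2.0, 3.0):      # candidate accelerations applied by the hand
    a = a_over_g * g
    t_hand = 2 * v / (a - g)          # time needed in the hand at that acceleration
    fits = t_hand <= t_air            # can we avoid ever holding two balls?
    print(f"a = {a_over_g}g: t_hand = {t_hand:.3f} s vs t_air = {t_air:.3f} s -> {fits}")
```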
If we substitute the expressions for $t_{\text{hand}}$ and $t_{\text{air}}$ from above and set them equal we get: $$ 2 \frac{v}{g} = 2 \frac{v}{a - g} $$ which simplifies to: $$ a = 2g $$ So while you are holding one 3-pound ball you are applying an acceleration of $2g$ to it, and therefore the force you're applying to the ball equals the weight of $2 \times 3 = 6$ pounds. In other words, the force on the bridge while you're juggling the two balls (with the minimum possible force) is exactly the same as if you just walked across the bridge holding both balls (a total of $195 + 6 = 201$ pounds), and you're likely to get wet! | {
"source": [
"https://physics.stackexchange.com/questions/36167",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9749/"
]
} |
36,288 | I've been working on some projects lately where it would be very handy to know more about thermodynamics than I do, but sadly I never had a chance to take a proper thermodynamics course in college. Unfortunately, right now I don't have the time to work through a 500-600 page undergraduate text on the subject. Can anyone recommend a book/online resource/PDF that perhaps gives a (calculus based) broad overview of classical thermodynamics, say in something less than 200 pages? | Thermodynamics by Enrico Fermi seems to be what you're looking for. I bought it for less than ten dollars. It's about a hundred and fifty pages and starts from the axioms. And it's very well written. | {
"source": [
"https://physics.stackexchange.com/questions/36288",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12185/"
]
} |
36,289 | I'm somewhat confused. I want to simulate in real-time an intersection where cars have to turn left, right or go straight. What I have are 2 way points: One at the beginning of the intersection on the incoming street and the other at the end of the intersection on the outgoing street. As I know the next way point on the outgoing street, I know which direction the car should be pointing. How would I slow the car down to the optimal speed, calculate its steering angle and correct it in a time interval so that the car drives an optimal curve? A resource I have found, that seems quite good for this is the following paper . I just don't really understand the first part of the paper where the Circular track is calculated. At which point is the steering angle applied? | Thermodynamics by Enrico Fermi seems to be what you're looking for. I bought it for less than ten dollars. It's about a hundred and fifty pages and starts from the axioms. And it's very well written. | {
"source": [
"https://physics.stackexchange.com/questions/36289",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12194/"
]
} |
36,359 | When reading about the twistor uprising or trying to follow a corresponding Nima talk, it always annoys me that I have no clue about how twistor space, the twistor formalism, or twistor theory works. First of all, are these three terms some kind of synonyms or what is the relationship between them? Twistors are just a deep black gap in my education. I've read the Road to Reality but I just did not get it from the relevant chapter therein, maybe because I could not understand better the one or two chapters preceding it either ... :-/ So can somebody point me to a gentle, but nevertheless slightly technical source that explains twistors step by step (similar to a demystified book...) such that even I can get it, if something like this exists? Since I think I'd really have to "meditate" about it a bit, I'd prefer something written I can print out, but nevertheless I would appreciate video lectures or talks too. | :-)
The best gentle introduction to basic twistor theory that I know of is the book by Huggett and Tod. If you don't have access to that book and some other answers don't surface in the meantime I'm happy to write a few bits and pieces here, but will have to wait until the weekend. (I may be biased, but I think it's well worth learning, as the MHV amplitude applications are extremely interesting). Edit: Here are a few paragraphs to give a flavor of twistor theory: Twistor theory makes extensive use of Weyl spinors, which form representations of $SL(2;\mathbb{C})$ - the double cover of the (restricted) Lorentz group. These come in two varieties – unprimed spinors $\omega_A$ transforming according to the fundamental representation, and primed spinors $\omega_{A’}$ transforming according to the conjugate representation. (Note in much of the modern literature, primed and unprimed are denoted by dotted $\lambda_{\dot{a}}$ and undotted).
Spinor indices are raised and lowered using the antisymmetric spinor
$$\epsilon_{AB}=\epsilon_{A’B’}=\epsilon^{AB}= \epsilon^{A’B’} = \left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right)$$
Minkowski-space vectors $x^a$ can be put into correspondence with two-index unprimed/primed spinors by writing
$$x^{AA’} = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} x^0+x^1 & x^2+ix^3 \\ x^2-ix^3 & x^0-x^1 \end{array} \right)$$
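One consequence of this correspondence, useful as a sanity check (a sketch assuming numpy): the determinant of the matrix $x^{AA'}$ equals half the Minkowski norm $x^a x_a$, so null vectors correspond to singular (rank-one) matrices.

```python
import numpy as np

def spinor_matrix(x):
    """The two-index spinor x^{AA'} built from a Minkowski vector x = (x0, x1, x2, x3)."""
    x0, x1, x2, x3 = x
    return np.array([[x0 + x1, x2 + 1j * x3],
                     [x2 - 1j * x3, x0 - x1]]) / np.sqrt(2)

x = np.array([1.0, 0.3, -0.5, 0.2])
m = spinor_matrix(x)
minkowski_norm = x[0]**2 - x[1]**2 - x[2]**2 - x[3]**2

print(np.isclose(np.linalg.det(m).real, minkowski_norm / 2))  # True
```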
Now if we take an unprimed/primed spinor pair $(\omega^A, \pi_{A’})$, then the set of Minkowski vectors which satisfy
$$\omega^A=ix^{AA’}\pi_{A’} \ \ \ (1)$$
is a null line in Minkowski space provided we impose the reality condition
$$\omega^A{\bar{\pi}}_{A}+{\bar{\omega}}^{A’}\pi_{A’}=0$$
The pair of spinors is referred to as a twistor $Z^{\alpha} = (\omega^A, \pi_{A’})$. The space of such four-component objects is “twistor space” $\mathbb{T}$, upon which we define a Hermitian form via the conjugation operation
$${\bar{Z}}_0 = \bar{Z^2} = {\bar{\pi}}_0$$
$${\bar{Z}}_1 = \bar{Z^3} = {\bar{\pi}}_1$$
$${\bar{Z}}_2 = \bar{Z^0} = {\bar{\omega}}^{0’}$$
$${\bar{Z}}_3 = \bar{Z^1} = {\bar{\omega}}^{1’}$$ The reality condition above is then expressible as $Z^{\alpha}{\bar{Z}}_{\alpha}=0$ and twistors satisfying this condition are called null twistors. The locus of points in Minkowski space satisfying (1) is unchanged if we multiply the twistor $Z^{\alpha}$ by any non-zero complex number. In fact it proves extremely useful to impose this as an equivalence relation on $\mathbb{T}$ and work with its projective version $P\mathbb{T}$.
Projective null twistors, then, correspond to light rays in Minkowski space. The correspondence between (projective) twistor space and Minkowski space is made more complete if we attach to Minkowski space its conformal boundary (light cone at infinity) and if we complexify it. We are then dealing with complexified, compactified Minkowski space $\mathbb{C}M$ and twistors (we will always mean projective twistors) correspond to totally null two-planes (called alpha planes) in $\mathbb{C}M$. The alpha planes corresponding to null twistors (such objects live in a subspace of $P\mathbb{T}$ called $PN$) will intersect the real slice of $\mathbb{C}M$ in null rays. Conversely a point x in real Minkowski space defines a set of null rays – the ones defining the null cone at that point. There is a two-sphere’s worth of such rays (the celestial sphere), and the set of twistors defining these rays defines a subset of $PN$ having the topology of a two-sphere, but more importantly having the complex structure of a $\mathbb{C}P^1$, and known as a projective line (or just “line”). Figure 1 shows a point x in Minkowski space and the corresponding line $L_x$ in $PN$, and also a pair of twistors $Z$ and $W$ on $L_x$ and the null rays $\gamma_Z$ and $\gamma_W$ they correspond to. Now the fun starts when you consider functions on twistor space. Suppose we consider a function homogeneous of degree zero (i.e. $f(\lambda Z^{\alpha}) = f(Z^{\alpha}); \lambda \in \mathbb{C}^*$). We then define the field on spacetime:
$$\phi_{AB}(x) = \oint{\rho_x(\frac{\partial}{\partial \omega^A} \frac{\partial}{\partial \omega^B}f(\omega^A, \pi_{A’}))\pi_{C’}d\pi^{C’}}$$
where $\rho_x$ means “impose the restriction (1)”. To get a non trivial field, the function f needs to have singularities on twistor space, i.e it mustn’t be holomorphic everywhere. For example it can have poles. The contour used is on the projective line $L_x$ and avoids the singularities of f. The field defined in this way satisfies
$$\nabla^{AA'} \phi_{AB} = 0 \ \ \ (2)$$
Where $$\nabla_{AA’} = \frac{\partial}{\partial x^{AA’}}$$
We can decompose an antisymmetric electromagnetic field tensor into its anti self-dual and self-dual parts respectively as
$$ F_{ab} = F_{AA'BB'} = \phi_{AB}\epsilon_{A'B'} +{\tilde{\phi}}_{A'B'}\epsilon_{AB}$$
Then (2) represents the (source free) Maxwell equations (for anti self dual Maxwell fields). The correspondence between twistor functions and anti-self-dual solutions of the Maxwell equations is not unique. However, treating the twistor functions as representatives of certain sheaf cohomology classes does give a unique correspondence. Choosing twistor functions with other homogeneities gives rise to other types of field (symmetric spinors with other numbers of primed or unprimed indices satisfying equations similar to (2)). For example the equations for self dual Maxwell fields
$$\nabla^{AA'} \phi_{A’B’} = 0$$
are given by a (slightly different) contour integral involving twistor functions of homogeneity -4:
$$\phi_{A'B'}(x) = \oint{\rho_x(\pi_{A'}\pi_{B'}f(\omega^D, \pi_{D'}))\pi_{C'}d\pi^{C'}}$$ Other ways of using the twistor correspondence exist, for example a correspondence can be set up for fields on a real space with Euclidean signature. This programme led to the construction of self dual solutions of the Yang Mills equations on $S^4$ (the compactification of $\mathbb{R}^4$). In this case, the correspondence is between self dual Yang Mills fields on $S^4$ and holomorphic bundles on twistor space which are (holomorphically) trivial on projective lines in twistor space (and which have various other conditions depending on the structure group of the Yang Mills theory you’re interested in). Both twistor space and Minkowski space can be “thickened” by adding Grassmannian coordinates and in this way supersymmetric versions of the twistor correspondences of the type illustrated above can be given. This has been used in the treatment of Supersymmetric Yang Mills theory. | {
"source": [
"https://physics.stackexchange.com/questions/36359",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2751/"
]
} |
36,384 | Orthochronous Lorentz transforms are Lorentz transforms that satisfy the conditions (sign convention of Minkowskian metric $+---$ ) $$ \Lambda^0{}_0 \geq +1.$$ How to prove they form a subgroup of the Lorentz group? All books I read only give this result, but no derivation. Why is this condition $ \Lambda^0{}_0 \geq +1$ enough for a Lorentz transform to be orthochronous? The temporal component of a transformed vector is $$x'^0=\Lambda^0{}_0 x^0+\Lambda^0{}_1 x^1+\Lambda^0{}_2 x^2+\Lambda^0{}_3 x^3,$$ the positivity of $\Lambda^0{}_0$ alone does not seem at first glance sufficient for the preservation of the sign of the temporal component. And how to prove that all Lorentz transforms satisfying such simple conditions can be generated from $J_i,\ K_i$ ? For those who think that closure and invertibility are obvious, keep in mind that $$\left(\bar{\Lambda}\Lambda \right)^0{}_0\neq \bar{\Lambda}^0{}_0\Lambda^0{}_0,$$ but instead $$\left(\bar{\Lambda}\Lambda \right)^0{}_0= \bar{\Lambda}^0{}_0\Lambda^0{}_0+\bar{\Lambda}^0{}_1\Lambda^1{}_0+\bar{\Lambda}^0{}_2\Lambda^2{}_0+\bar{\Lambda}^0{}_3\Lambda^3{}_0.$$ | Let the Minkowski metric $\eta_{\mu\nu}$ in $d+1$ space-time dimensions be $$\eta_{\mu\nu}~=~{\rm diag}(1, -1, \ldots,-1).\tag{1}$$ Let the Lie group of Lorentz transformations be denoted as $O(1,d;\mathbb{R})=O(d,1;\mathbb{R})$ . A Lorentz matrix $\Lambda$ satisfies (in matrix notation) $$ \Lambda^t \eta \Lambda~=~ \eta. \tag{2}$$ Here the superscript " $t$ " denotes matrix transposition. Note that eq. (2) does not depend on whether we use east-coast or west-coast convention for the metric $\eta_{\mu\nu}$ . Let us decompose a Lorentz matrix $\Lambda$ into 4 blocks $$ \Lambda ~=~ \left[\begin{array}{cc}a & b^t \cr c &R \end{array} \right],\tag{3}$$ where $a=\Lambda^0{}_0$ is a real number; $b$ and $c$ are real $d\times 1$ column vectors; and $R$ is a real $d\times d$ matrix.
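The defining condition (2) and this block decomposition can be illustrated numerically with explicit boosts (a sketch assuming numpy; $d=3$):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # eq. (1) with d = 3

def boost_x(beta):
    """Pure Lorentz boost along x with velocity beta (in units of c)."""
    gam = 1 / np.sqrt(1 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gam
    L[0, 1] = L[1, 0] = gam * beta
    return L

L1, L2 = boost_x(0.6), boost_x(-0.8)
for L in (L1, L2, L1 @ L2):
    assert np.allclose(L.T @ eta @ L, eta)   # the Lorentz condition, eq. (2)
    assert L[0, 0] >= 1                      # orthochronous

# Block decomposition: a = L[0,0], c = rest of the first column.
# Exercise 1 below asserts |c|^2 = a^2 - 1:
a, c = L1[0, 0], L1[1:, 0]
print(np.isclose(c @ c, a**2 - 1))  # True
```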
Now define the set of orthochronous Lorentz transformations as $$ O^{+}(1,d;\mathbb{R})~:=~\{\Lambda\in O(1,d;\mathbb{R}) | \Lambda^0{}_0 > 0 \}.\tag{4}$$ The proof that this is a subgroup can be deduced from the following string of exercises. Exercise 1: Prove that $$ |c|^2~:= ~c^t c~ = ~a^2 -1.\tag{5}$$ Exercise 2: Deduce that $$ |a|~\geq~ 1.\tag{6}$$ Exercise 3: Use eq. (2) to prove that $$ \Lambda \eta^{-1} \Lambda^t~=~ \eta^{-1}. \tag{7}$$ Exercise 4: Prove that $$ |b|^2~:= ~b^t b~ = ~a^2 -1.\tag{8}$$ Next let us consider a product $$ \Lambda_3~:=~\Lambda_1\Lambda_2\tag{9}$$ of two Lorentz matrices $\Lambda_1$ and $\Lambda_2$. Exercise 5: Show that $$ b_1\cdot c_2~:=~b_1^t c_2~=~a_3-a_1a_2.\tag{10}$$ Exercise 6: Prove the double inequality $$ -\sqrt{a_1^2-1}\sqrt{a_2^2-1} ~\leq~ a_3-a_1a_2~\leq~ \sqrt{a_1^2-1}\sqrt{a_2^2-1},\tag{11}$$ which may compactly be written as $$| a_3-a_1a_2|~\leq~\sqrt{a_1^2-1}\sqrt{a_2^2-1}.\tag{12}$$ Exercise 7: Deduce from the double inequality (11) that $$ a_1\neq 0 ~\text{and}~ a_2\neq 0~\text{have same signs} \quad\Rightarrow\quad a_3>0. \tag{13}$$ $$ a_1 \neq 0~\text{and}~ a_2\neq 0~\text{have opposite signs} \quad\Rightarrow\quad a_3<0. \tag{14}$$ Exercise 8: Use eq. (13) to prove that $O^{+}(1,d;\mathbb{R})$ is stable/closed under the multiplication map. Exercise 9: Use eq. (14) to prove that $O^{+}(1,d;\mathbb{R})$ is stable/closed under the inversion map. Exercises 1-9 show that the set $O^{+}(1,d;\mathbb{R})$ of orthochronous Lorentz transformations forms a subgroup. $^{\dagger}$ References: S. Weinberg, Quantum Theory of Fields, Vol. 1, 1995; p. 57-58. $^{\dagger}$ A mathematician would probably say that eqs. (13) and (14) show that the map $$O(1,d;\mathbb{R})\quad \stackrel{\Phi}{\longrightarrow}\quad
\{\pm 1\}~\cong~\mathbb{Z}_2\tag{15}$$ given by $$\Phi(\Lambda)~:=~{\rm sgn}(\Lambda^0{}_0)\tag{16}$$ is a group homomorphism between the Lorentz group $O(1,d;\mathbb{R})$ and the cyclic group $\mathbb{Z}_2$, and the kernel $$ {\rm ker}(\Phi)~:=~\Phi^{-1}(1)~=~O^{+}(1,d;\mathbb{R}) \tag{17}$$ is always a normal subgroup. For a generalization to indefinite orthogonal groups $O(p,q;\mathbb{R})$, see this Phys.SE post. | {
"source": [
"https://physics.stackexchange.com/questions/36384",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3887/"
]
} |
37,577 | I know what the dual of a vector means (as a map to its field), and I am also
aware of the definition of the dual of a tensor as, $$F^{*ij} = \frac{1}{2} \epsilon^{ijkl} F_{kl}\tag{1}$$ I just don't understand how to connect this to the definition of the dual
of a vector. Or are they entirely different concepts? If they are different
then why use the word dual for it? I know this is kind of mathematical or maybe a stupid question, but I have a
problem with understanding the need for the dual field strength tensor
in relativistic ED. I mean, you could say that I am going to define another
tensor using eq. (1) and call it whatever we like. | The dual of a tensor you refer to is the Hodge dual, and has nothing to do with the dual of a vector. The word "dual" is used in too many different contexts, and in this case even the same $*$ symbol is used. One usually specifies "Hodge dual", or "Hodge star operator", to avoid confusion. Both these "duals" are isomorphisms between vector spaces endowed with inner product. The dual of a vector space $V$ is the vector space $V^*$ consisting of the linear functions (functionals) $f:V\to \mathbb R$ (or $\mathbb C$ if $V$ is a complex vector space). The dual of a vector $v\in V$ makes sense only if $V$ is endowed with an inner product $g$, and it is defined as $v^\flat\in V^*$, $v^\flat(u)=g(v,u)$. This dual is an isomorphism between the inner product vector space $(V,g_{ab})$ and its dual $(V^*,g^{ab})$. The Hodge dual is defined on totally antisymmetric tensors from $\otimes^k V$, that is, on $\wedge^k V$. It is a map $\wedge V\to \wedge V$, where $\wedge V=\oplus_{k=0}^n\wedge^k V$. It also requires the existence of an inner product $g$ on $V$. The inner product extends canonically to the entire $\wedge V$. The Hodge dual is defined as follows. We construct a basis on $V$, which is orthonormal with respect to the inner product $g$, say $e_1,\ldots,e_n$. Then, for each $k$, there is a basis of $\wedge^k V$ of the form $e_{i_1}\wedge\ldots \wedge e_{i_k}$. This basis is considered to be orthonormal too, and by this, $g$ defines an inner product on $\wedge^kV$. The Hodge dual is defined first as a map $\wedge^kV\to\wedge^{n-k}V$. Both spaces have the same dimension, which is $C^k_n$. The canonical isomorphism between them is defined first on the elements of the basis: $$*(e_{i_1}\wedge\ldots\wedge e_{i_k})=\epsilon e_{j_1}\wedge\ldots\wedge e_{j_{n-k}}.$$
Here, the indices $i_1,\ldots,i_k,j_1\ldots j_{n-k}$ are a permutation of the numbers between $1$ and $n$. $\epsilon$ is $+1$ if the permutation $i_1,\ldots,i_k,j_1\ldots j_{n-k}$ is even, and $-1$ otherwise. This defines the isomorphism uniquely. It extends uniquely to $\wedge V=\oplus_{k=0}^n\wedge^k V$, since this is a direct sum of vector spaces. Please note that the vectors also admit Hodge duals, but their duals are elements of $\wedge^{n-1}V$. Since we can consider that $\mathbb R=\wedge^0 V$, the Hodge dual of the scalar $1$ is the volume element $*1:=e_1\wedge\ldots\wedge e_n\in\wedge^n V$. In a basis, it is denoted by $\epsilon_{12\ldots n}$. Similarly, the Hodge dual can be defined on the space of exterior forms $\wedge V^*$. In the case of Lorentzian spacetimes of dimension $4$, the Hodge duality establishes isomorphisms between $\mathbb R$ and $\wedge^4 V$, between $V$ and $\wedge^3 V$, and between $\wedge^2 V$ and itself. An alternative way to construct the Hodge duality is via the Clifford algebra associated to $V$. In this case, there is an isomorphism (as vector spaces with inner product) between $\wedge V$ and $Cl(V)$. The Hodge dual translates to the Clifford algebra language as Clifford multiplication with the $n$-vector which corresponds to the volume element, $\gamma_1\cdot\gamma_2\cdot\ldots\cdot\gamma_n$. Back to the confusion of terminology. For a vector $v\in V$, the dual is a covector $v^\flat\in V^*$. The Hodge dual can be obtained by constructing an orthonormal basis starting from $v$, then taking the wedge product between the other elements, and dividing by the length of $v$. The result is from $\wedge^{n-1}V$, not from $V^*$. All these three spaces are isomorphic, in a canonical way, and also with $\wedge^{n-1}V^*$, by composing the two kinds of dualities. But the two dualities refer to totally distinct isomorphisms.
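For two-forms in four-dimensional Lorentzian signature the double Hodge dual is $-\mathrm{id}$, which is straightforward to verify numerically (a sketch assuming numpy, with the conventions $\epsilon_{0123}=+1$ and signature $+---$):

```python
import numpy as np
from itertools import permutations

eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Levi-Civita symbol with eps[0,1,2,3] = +1
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    sign = 1
    for i in range(4):              # parity from the number of inversions
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                sign = -sign
    eps[p] = sign

def hodge(F):
    """(*F)_{mn} = (1/2) eps_{mnrs} F^{rs}, indices raised with eta."""
    F_up = eta @ F @ eta            # F^{rs} for a diagonal metric
    return 0.5 * np.einsum('mnrs,rs->mn', eps, F_up)

F = np.zeros((4, 4))
F[0, 1], F[1, 0] = 1.0, -1.0        # a simple antisymmetric F (a pure E_x field, say)

print(np.allclose(hodge(hodge(F)), -F))  # True: the double dual is -1 on two-forms
```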
In connection with Electrodynamics (in Lorentzian spacetime), the Hodge dual of the electromagnetic tensor appears in Maxwell's equations: $$d F=0$$ and $$d *F=*J$$ where $J$ is the current $1$-form. These two equations contain the four Maxwell equations. Here, $*F$ is the Hodge dual of the electromagnetic tensor $F$, and $*J$ of the current $1$-form $J$ (which in turn is the "dual" in the other sense of the current vector). Added: To be closer to the question. The equation $$*F_{ij} = \frac{1}{2} \epsilon_{ij}{}^{kl} F_{kl}$$ represents the Hodge duality between $\wedge^2V$ and itself. But $$*F^{ij} = \frac{1}{2} \epsilon^{ijkl} F_{kl}\tag{1}$$ is a duality between $\wedge^2V$ and $\wedge^2V^*$. There is also the duality in the first sense, between $\wedge^2V$ and $\wedge^2V^*$ (extending that between $V$ and $V^*$). So, there are two distinct isomorphisms between the vector spaces with inner product $\wedge^2V$ and $\wedge^2V^*$. | {
"source": [
"https://physics.stackexchange.com/questions/37577",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7785/"
]
} |
37,662 | Why can't we see light? The thing which makes everything visible is itself invisible. Why is it so? | Because Maxwell's equations are linear. Equivalently there is no elementary photon-photon interaction. If there were, say, a quartic photon interaction then you would be able to see a beam of light directly instead of seeing its interaction with dust particles. | {
"source": [
"https://physics.stackexchange.com/questions/37662",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10811/"
]
} |
37,743 | Temperature conversion: $$273 + \text{degree Celsius} = \text{Kelvin}$$ Actually why is that $273$ ? How does one come up with this? My teacher mentioned Gann's law (not sure if this is the one) but I couldn't find anything related to this, which law should it be? | One Celsius (or Kelvin) degree as a temperature difference was defined as 1/100 of the temperature difference between the freezing point of water and boiling point of water. We call these points 0 °C and 100 °C, respectively. The number 100 arose because we're used to numbers that are powers of ten because we use the base-ten system. The Celsius degree is "one percent" of the temperature difference. There also exists the minimum temperature that one may achieve. In Kelvin degrees, it's sensible to call this minimum 0 K: that's a primary condition on the "natural" temperature scales such as the Kelvin scale. If one insists that one Kelvin degree is equally large as one Celsius degree as a unit of temperature difference, it just turns out that the melting point of ice has to be 273.15 K and the boiling point of water is 373.15 K. It's because the temperature difference between the boiling and freezing points is 2.7315 times smaller than the temperature difference between the minimum allowed temperature, the absolute zero, and the freezing point of water. This number 2.7315 can't be explained in simple words. It is a fact about water, a fact of Nature that may be derived from the laws of quantum mechanics. One has to "simulate" what the water molecules are doing all over this large interval of temperatures and this is just what comes out. We know it is so from experiments. | {
"source": [
"https://physics.stackexchange.com/questions/37743",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12344/"
]
} |
37,772 | Pretty simple question, but not an obvious answer, at least not to me. I mean you can't just place a dead fly on the wall and expect it to stay there, he will fall off due to gravity. At first I thought it might be friction, but that would require a normal force (i.e. perpendicular to the wall), and then I remembered spiders, geckos etc, they like to walk around on the ceiling. How is it possible? What kind of forces are involved? Would these creatures still be able to do it on a hypothetical surface which was perfectly flat? | See http://www.sciencephoto.com/media/100845/enlarge for an absolutely awesome picture of a fly's foot. It has two claws that can grip any irregularities. For smooth surfaces like glass it has a pad covered in tiny hairs, and each hair is coated in tiny oil drops. The capillary attraction of the oil drops holds the tiny hairs, and therefore the fly, to the surface. | {
"source": [
"https://physics.stackexchange.com/questions/37772",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5382/"
]
} |
37,881 | Recently, I was doing my homework and I found out that torque can be calculated using $\tau = rF$.
This means the units of torque are Newton meters. Work & Energy are also measured in Newton meters, which are Joules. However, torque isn't a measure of energy. I am really confused as to why it isn't measured in Joules. | The units for torque, as you stated, are Newton-meters. Although these are algebraically the same units as Joules, Joules are generally not appropriate units for torque. Why not? The simple answer is because $$W = \vec F \cdot \vec d$$ where $W$ is the work done, $\vec F$ is the force, $\vec d$ is the displacement, and $\cdot$ indicates the dot product. Torque, on the other hand, is defined as the cross product of $\vec r$ and $\vec F$ where $\vec r$ is the radius and $\vec F$ is the force. Essentially, dot products return scalars and cross products return vectors. If you think torque is measured in Joules, you might get confused and think it is energy, but it is not energy. It is a rotational analogue of a force. Per the knowledge of my teachers and past professors, professionals working with this prefer the units for torque to remain $N \ m$ (Newton meters) to note the distinction between torque and energy. Fun fact: alternative units for torque are Joules/radian, though not heavily used. | {
"source": [
"https://physics.stackexchange.com/questions/37881",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12393/"
]
} |
38,138 | I see that the Weyl transformation is $g_{ab} \to \Omega(x)g_{ab}$ under which the Ricci scalar is not invariant. I am a bit puzzled when the conformal transformation is defined as those coordinate transformations that effect the above metric transformation, i.e.
$x \to x' \implies g_{\mu \nu}(x) \to g'_{\mu \nu}(x') = \Omega(x)g_{\mu \nu }(x)$, but any covariant action is clearly invariant under coordinate transformations? I see that what we mean by a Weyl transformation is just changing the metric at a point by a scale factor $\Omega(x)$. So my question is why one needs to define these transformations via coordinate transforms. Is it the case that these two transformations are different things? In flat spacetime I understand that conformal transformations contain Lorentz transformations and a Lorentz-invariant theory is not necessarily invariant under conformal transformations. But in GR, or in a covariant theory, effecting a Weyl transformation via coordinate transformations is going to leave it invariant. Unless we restrict it to just rescaling the metric? I am really confused, please help. | The Weyl transformation and the conformal transformation are completely different things (although they are often discussed in similar contexts). A Weyl transformation isn't a coordinate transformation on the space or spacetime at all. It is a physical change of the metric, $g_{\mu\nu}(x)\to g_{\mu\nu}(x)\cdot \Omega(x)$. It is a transformation that changes the proper distances at each point by a factor and the factor may depend on the place – but not on the direction of the line whose proper distance we measure (because $\Omega$ is a scalar). Note that a Weyl transformation isn't a symmetry of the usual laws we know, like atomic physics or the Standard Model, because particles are associated with a preferred length scale so physics is not scale invariant. On the other hand, conformal transformations are a subset of coordinate transformations. They include isometries – the genuine geometric "symmetries" – as a subset. Isometries are those coordinate transformations $x\to x'$ that have the property that the metric tensor expressed as functions of $x'$ is the same as the metric tensor expressed as functions of $x$.
Conformal transformations are almost the same thing: but one only requires that these two tensors are equal functions up to a Weyl rescaling. For example, if you have a metric on the complex plane, $ds^2=dz^* dz$, then any holomorphic function, such as $z\to 1/z$, is conformally invariant because the angles are preserved. If you pick two infinitesimal arrows $dz_1$ and $dz_2$ starting from the same point $z$ and if you transform all the endpoints of the arrows to another place via the transformation $z\to 1/z$, then the angle between the final arrows will be the same. Consequently, the metric in terms of $z'=1/z$ will be still given by
$$ ds^2 = dz^* dz = d(1/z^{\prime *}) d (1/z') = \frac{1}{(z^{\prime *}z')^2} dz^{\prime *} dz' $$
which is the same metric up to the Weyl scaling by the fraction at the beginning. That's why this holomorphic transformation is conformal, angle-preserving. But a conformal transformation is a coordinate transformation, a diffeomorphism. The Weyl transformation is something else. It keeps the coordinates fixed but directly changes the values of some fields, especially the metric tensor, at each point by a scalar multiplicative factor. | {
"source": [
"https://physics.stackexchange.com/questions/38138",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12466/"
]
} |
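The angle-preservation claim for $z \to 1/z$ in the answer above can be verified with a quick finite-difference check; the base point and arrow sizes below are arbitrary:

```python
import cmath

def f(z):
    return 1 / z  # the holomorphic map discussed in the answer

z = 1 + 2j                     # arbitrary base point (away from the pole at 0)
dz1, dz2 = 1e-6, 1e-6j         # two tiny arrows starting at z

# Images of the arrows under the map; to first order both are multiplied
# by the same complex number f'(z), which rotates and scales them equally.
w1 = f(z + dz1) - f(z)
w2 = f(z + dz2) - f(z)

angle_before = cmath.phase(dz2 / dz1)   # pi/2
angle_after = cmath.phase(w2 / w1)
print(angle_before, angle_after)
```

The two angles agree to the accuracy of the finite-difference approximation, which is the statement that the map is conformal.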
38,151 | I've heard that special relativity makes the concept of magnetic fields irrelevant, replacing them with relativistic effects between charges moving in different velocity frames. Is this true? If so, how does this work? | Special relativity makes the existence of magnetic fields an inevitable consequence of the existence of electric fields. In the inertial system B moving relatively to the inertial system A, purely electric fields from A will look like a combination of electric and magnetic fields in B. According to relativity, both frames are equally fit to describe the phenomena and obey the same laws. So special relativity removes the independence of the concepts (independence of assumptions about the existence) of electricity and magnetism. If one of the two fields exists, the other field exists, too. They may be unified into an antisymmetric tensor, $F_{\mu\nu}$. However, what special relativity doesn't do is question the independence of values of the electric fields and magnetic fields. At each point of spacetime, there are 3 independent components of the electric field $\vec E$ and three independent components of the magnetic field $\vec B$: six independent components in total. That's true for relativistic electrodynamics much like the "pre-relativistic electrodynamics" because it is really the same theory! Magnets are different objects than electrically charged objects. It was true before relativity and it's true with relativity, too. It may be useful to notice that the situation of the electric and magnetic fields (and phenomena) is pretty much symmetrical. Special relativity doesn't really urge us to consider magnetic fields to be "less fundamental". Quite on the contrary, its Lorentz symmetry means that the electric and magnetic fields (and phenomena) are equally fundamental. That doesn't mean that we can't consider various formalisms and approximations that view magnetic fields – or all electromagnetic fields – as derived concepts, e.g. 
mere consequences of the motion of charged objects in spacetime. But such formalisms are not forced upon us by relativity. | {
"source": [
"https://physics.stackexchange.com/questions/38151",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7670/"
]
} |
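The point that a purely electric field in one frame shows up as a mix of electric and magnetic fields in another can be illustrated with the standard field-transformation formulas for a boost along $x$ (units with $c=1$; the field values and boost speed below are arbitrary):

```python
import math

def boost_fields(E, B, v):
    """Fields seen in a frame moving with speed v along x (units with c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    Ex, Ey, Ez = E
    Bx, By, Bz = B
    Ep = (Ex, g * (Ey - v * Bz), g * (Ez + v * By))
    Bp = (Bx, g * (By + v * Ez), g * (Bz - v * Ey))
    return Ep, Bp

def invariants(E, B):
    """The two Lorentz invariants E·B and E^2 - B^2."""
    dot = sum(e * b for e, b in zip(E, B))
    diff = sum(e * e for e in E) - sum(b * b for b in B)
    return dot, diff

# Purely electric field in frame A...
E, B = (0.0, 3.0, 0.0), (0.0, 0.0, 0.0)
Ep, Bp = boost_fields(E, B, 0.6)
print(Bp)                                      # ...acquires a B component in frame B
print(invariants(E, B), invariants(Ep, Bp))    # the invariants agree
```

The six components mix under the boost, but the two invariant combinations stay fixed, matching the $F_{\mu\nu}$ picture in the answer.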
38,286 | While deriving the Hamiltonian from the Lagrangian density, we use the formula
$$\mathcal{H} ~=~ \pi \dot{\phi} - \mathcal{L}.$$
But since we are considering space and time as parameters, why is the formula
$$\mathcal{H} ~=~ \pi_{\mu}\partial^{\mu} \phi - \mathcal{L}$$
not used? Are there any particular books or lecture notes dealing with these kinds of issues in theoretical physics? I would love to know about them. | Vladimir's answer has the right essence but it is also misleading, so let me clarify. The formula
$$ H = \sum_i p_i\dot q_i - L $$
relating the Hamiltonian and the Lagrangian is completely general. It holds in all theories that admit both Lagrangians and Hamiltonians, whether they're relativistic or not, whether or not they have any other symmetry aside from the Lorentz symmetry. When you have field theory, both the Hamiltonian and the Lagrangian may be written as spatial integrals of their densities.
$$ H = \int d^3x \, {\mathcal H}, \quad L = \int d^3x\, {\mathcal L} $$
Combining that with the first formula, we get the relationship
$$ \mathcal{H} = \sum_i\pi_i \dot{\phi_i} - \mathcal{L} $$
Now, you proposed a different formula and I guess that the reason why you proposed it is that it looks more Lorentz-invariant to you, as appropriate for Lorentz-invariant field theories. That's a nice motivation. However, what's wrong with your reasoning is the assumption that both the Hamiltonian density and the Lagrangian density are Lorentz-invariant. The Lagrangian density is a nice scalar, so it is Lorentz-invariant (the density at the origin, at least), because its integral is the Lorentz-invariant action, which should be stationary. The same is not true for the Hamiltonian and its density. The Hamiltonian is intrinsically linked to the time direction: it is the generator of the translations in time (the spatial counterparts of the Hamiltonian are the spatial components of the momentum); it is the energy, the 0th component of a 4-vector, $H\equiv p^0$. So the argument that this formula should be Lorentz-covariant is invalid, your proposed formula is wrong, and the right formula was justified at the beginning of my comment. | {
"source": [
"https://physics.stackexchange.com/questions/38286",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8449/"
]
} |
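The general formula $H=\sum_i p_i\dot q_i-L$ from the answer can be checked symbolically; the harmonic-oscillator Lagrangian below is an illustrative choice, not taken from the thread:

```python
import sympy as sp

m, k = sp.symbols('m k', positive=True)
q, qdot, p = sp.symbols('q qdot p')

# Lagrangian of a 1D harmonic oscillator (example system)
L = m * qdot**2 / 2 - k * q**2 / 2

p_expr = sp.diff(L, qdot)                        # canonical momentum: p = m*qdot
qdot_sol = sp.solve(sp.Eq(p, p_expr), qdot)[0]   # invert: qdot = p/m

# Legendre transform H = p*qdot - L, expressed in terms of p and q
H = sp.simplify(p * qdot_sol - L.subs(qdot, qdot_sol))
print(H)   # p**2/(2*m) + k*q**2/2
```

The result is the familiar kinetic-plus-potential energy, as expected from the Legendre transform.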
38,348 | Here is a question that's been bothering me since I was a sophomore in university, and that I probably should have asked before graduating:
In analytic (Lagrangian) mechanics, the derivation of the Euler-Lagrange equations from the principle of least action assumes that the start and end coordinates at the initial and final times are known. As a consequence, any variation on the physical path must vanish at its boundaries. This conveniently cancels out the contributions of the boundary terms after integration by parts, and setting the requirement for minimal action, we obtain the E.L. equations. This is all nice and dandy, but our intention is finding the location of a particle at a time in the future, which we do not know a priori; after we derive any equations of motion for a system, we solve them by applying initial values instead of boundary conditions. How are these two approaches consistent? | I) Initial value problems and boundary value problems are two different classes of questions that we can ask about Nature. To be concrete: an initial value problem could be to ask about the classical trajectory of a particle if the initial position $q_i$ and the initial velocity $v_i$ are given, while a boundary value problem could be to ask about the classical trajectory of a particle if the initial position $q_i$ and the final position $q_f$ are given (i.e. Dirichlet boundary conditions). II) For boundary value problems, there is no teleology, because we are not deriving a (100 percent certain deterministic) prediction about the final state, but instead we are merely stating that if the final state is such and such, then we can derive such and such. III) First let us discuss the classical case. Typically the evolution equations (also known as the equations of motion (eom), e.g. Newton's 2nd law) are known, and in particular they do not depend on whether we want to pose an initial value question or a boundary value question. Let us assume that the eom can be derived from an action principle.
(So if we happen to have forgotten the eom, we could always rederive them by doing the following side-calculation: Vary the action with fixed (but arbitrary) boundary values to determine the eom. The specific fixed values at the boundary don't matter, because we only want to be reminded about the eom, not to determine an actual solution, e.g. a trajectory.) IV) Next let us consider either an initial value problem or a boundary value problem that we would like to solve. Firstly, if we have an initial value problem, we can solve the eom directly with the given initial conditions. (It seems that this is where OP might want to set up a boundary value problem, but that would precisely be the side-calculation mentioned in section III, and it has nothing to do with the initial value problem at hand.) Secondly, if we have a boundary value problem, there are two possibilities: We could solve the eom directly with the given boundary conditions. We could set up a variational problem using the given boundary conditions. V) Finally, let us briefly mention the quantum case. If we tried to formulate the path integral $$\int Dq ~e^{\frac{i}{\hbar}S[q]}$$ as an initial value problem, we would face various problems: The concept of a classical path would be ill-defined. This is related to the fact that the functional derivative $$\frac{\delta S[q]}{\delta q(t)}$$
would be ill-defined, basically because we cannot apply the usual integration-by-parts trick when the (final) boundary terms do not vanish. To specify both the initial position $q_i$ and the initial velocity $v_i$ would violate the Heisenberg uncertainty principle. | {
"source": [
"https://physics.stackexchange.com/questions/38348",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12533/"
]
} |
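The distinction between the two problem classes can be made concrete with a tiny "shooting" computation, in which a boundary value problem is reduced to a search over initial conditions. The equation $\ddot q=-q$ and the numbers below are illustrative:

```python
import math

def q_at_T(v0, T=1.0, dt=1e-3):
    """Integrate q'' = -q with q(0) = 0, q'(0) = v0 (velocity Verlet)."""
    q, v = 0.0, v0
    for _ in range(int(round(T / dt))):
        a = -q
        q += v * dt + 0.5 * a * dt * dt
        v += 0.5 * (a + (-q)) * dt
    return q

# Boundary value problem: choose v0 so that q(T) = 1.5.
# Since the ODE is linear in v0, a single trial shot with v0 = 1 fixes the answer.
target = 1.5
v0 = target / q_at_T(1.0)
print(v0, target / math.sin(1.0))   # numeric vs exact v0 = target/sin(T)
```

The boundary data $(q(0), q(T))$ and the initial data $(q(0), \dot q(0))$ pick out the same trajectory; they are just two ways of labelling it.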
38,459 | What is the difference between Raman scattering and fluorescence? Both phenomena involve the emission of photons shifted in frequency relative to the incident light, because of some energetic transition between states that the system undergoes. As far as I can tell, fluorescence is caused by ionization of the atomic electrons while Raman scattering agitates phonon modes - but technically speaking, aren't they effectively the same? | I think that it is important to recognize the practical difference between Raman scattering and fluorescence. If the energy of the photon is resonant with some molecular transition (meaning that it is equal to the energy difference between the ground state and one of the excited states of the molecule), then the molecule can absorb this photon, undergoing an energy transition. Now, there are many things that can happen to the molecule in this excited state: it can lose some or all of its extra energy in a collision with another molecule, it can fall apart if this extra energy is large enough, or - if it avoids these things - it can emit the extra energy as a photon, which is called fluorescence. The energy of the emitted photon is not necessarily equal to the energy of the absorbed photon, as the molecule can end up in a different energy state (for example, having more vibrational energy). There is a reason that the Raman process is called scattering. It is a non-resonant process, so the energy of the photon is not important (although higher energies are more efficient in inducing Raman scattering). You can imagine the molecule as a small antenna which receives electromagnetic radiation and can re-radiate it. In most cases, the molecule will scatter exactly the same energy - this is called Rayleigh scattering. In a few cases, a small part of this energy is stored in molecular vibration, or, if the molecule is vibrationally excited, it can give this extra energy to the photon.
When this happens, the scattered light has a shift in wavelength and the process is called Raman scattering. In contrast to fluorescence, there is no excited state in Raman scattering; therefore the process is almost instantaneous, whereas fluorescence has a characteristic lifetime of nanoseconds. So, no matter how one plays with words, these two are very different processes and they totally deserve different names. | {
"source": [
"https://physics.stackexchange.com/questions/38459",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12533/"
]
} |
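Because a Stokes Raman event deposits one vibrational quantum in the molecule, the scattered wavelength follows from simple wavenumber arithmetic. The 532 nm excitation and 1000 cm⁻¹ shift below are made-up example numbers:

```python
lam_exc_nm = 532.0            # excitation wavelength, nm (example value)
shift_cm1 = 1000.0            # vibrational Raman shift, cm^-1 (example value)

nu_exc_cm1 = 1e7 / lam_exc_nm               # wavenumber in cm^-1 (1 nm = 1e-7 cm)
lam_stokes_nm = 1e7 / (nu_exc_cm1 - shift_cm1)
print(round(lam_stokes_nm, 1))              # about 561.9 nm
```

The shift in wavenumber is fixed by the vibration, not by the excitation energy, which is the non-resonant character described in the answer.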
38,578 | I have frequently seen this symbol used in advanced books in physics: $$\oint$$ What does the circle over the integral symbol mean? What kind of integral does it denote? | It's an integral over a closed line (e.g. a circle); see line integral. In particular, it is used in complex analysis for contour integrals (i.e. integrals along closed curves in the complex plane); see e.g. the example pointed out by Lubos. Also, it is used in real space, e.g. in electromagnetism, in Faraday's law of induction (part of the Maxwell equations, written in an integral form): $$\oint_{\partial \Sigma} \mathbf{E} \cdot d\boldsymbol{\ell} = - \int_{\Sigma} \frac{\partial \mathbf{B}}{\partial t} \cdot d\mathbf{A} $$
saying that the generated voltage (an integral of the electric field along a closed loop) equals minus the time derivative of the magnetic flux. | {
"source": [
"https://physics.stackexchange.com/questions/38578",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
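As a concrete instance of $\oint$, here is a numerical evaluation of the closed line integral of the planar field $\mathbf F=(-y,x)$ around the unit circle; the exact value is $2\pi$ (twice the enclosed area). The field is a standard textbook example, chosen only for illustration:

```python
import math

N = 1000
total = 0.0
for i in range(N):
    t0 = 2 * math.pi * i / N
    t1 = 2 * math.pi * (i + 1) / N
    x0, y0 = math.cos(t0), math.sin(t0)
    x1, y1 = math.cos(t1), math.sin(t1)
    xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
    # contribution of F·dl = -y dx + x dy along this small segment
    total += -ym * (x1 - x0) + xm * (y1 - y0)

print(total, 2 * math.pi)
```

The start and end points coincide, which is exactly what the circle on the integral sign indicates.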
38,874 | It might be just a simple definition problem, but I learned in class that a central force does not necessarily need to be conservative, and the German Wikipedia says so too. However, the English Wikipedia states differently in its articles, for example: "A central force is a conservative field, that is, it can always be expressed as the negative gradient of a potential." They use the argument that each central force can be expressed as a gradient of a (radially symmetric) potential. And since forces that are gradient fields are by definition conservative forces, central forces must be conservative. As far as I understand, a central force can have a (radially symmetric) potential, but this is not necessarily always the case. Update Sep 2017: The English Wikipedia has updated its text and now explicitly states: "Not all central force fields are conservative nor spherically symmetric. However, it can be shown that a central force is conservative if and only if it is spherically symmetric.[2]" | Depends on what you mean by 'central force'. If your central force is of the form ${\vec F} = f(r){\hat r}$ (the force points radially inward/outward and its magnitude depends only on the distance from the center), then it is easy to show that $\phi = - \int dr f(r)$ is a potential field for the force and generates the force. This is usually what I see people mean when they say "central force." If, however, you just mean that the force points radially inward/outward but can depend on the other coordinates, then you have ${\vec F} = f(r,\theta,\phi){\hat r}$, and you're going to run into problems finding the potential, because you need $f = - \frac{\partial V}{\partial r}$, but you will also need $\frac{\partial V}{\partial \theta} = \frac{\partial V}{\partial \phi} = 0$ to kill the non-radial components, and this will lead to contradictions.
It's logical that a field of this form is going to be nonconservative, because if the force is greater at $\theta = 0$ than it is at $\theta = \pi/2$, then you can do net work around a closed curve by moving outward from $r_{1}$ to $r_{2}$ at $\theta = 0$ (positive work), then, keeping $r_{2}$ constant, going from $\theta = 0$ to $\theta = \pi/2$ (zero work--radial force), going back to $r_{1}$ (less work than the first step), and returning to $\theta = 0$ (zero work). | {
"source": [
"https://physics.stackexchange.com/questions/38874",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8117/"
]
} |
39,165 | A week ago I asked people on this site what mathematical background was needed for understanding Quantum Physics, and most of you mentioned Linear Algebra, so I decided to conduct a self-study of Linear Algebra. Of course, I'm just 1 week in, but I have some questions. How is this going to be applicable to quantum physics? I have learned about matrices (addition, subtraction, multiplication and inversion) and about how to solve multiple equations with 3 unknowns using matrices, and now I am starting to learn about vectors. I am just 1 week in, so this is probably not even the tip of the iceberg, but I want to know how this is going to help me. Also, say I master Linear Algebra in general in half a year (I'm in high school but I'm extremely fast with maths), what other 'types' of math would I need to self-study before being able to understand rudimentary quantum physics mathematically? | Quantum mechanics "lives" in a Hilbert space, and Hilbert space is "just" an infinite-dimensional vector space, so that the vectors are actually functions. Then the mathematics of quantum mechanics is pretty much "just" linear operators in the Hilbert space:
Quantum mechanics      Linear algebra
-----------------      --------------
wave function          vector
linear operator        matrix
eigenstates            eigenvectors
physical system        Hilbert space
physical observable    Hermitian matrix | {
"source": [
"https://physics.stackexchange.com/questions/39165",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12729/"
]
} |
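The dictionary above becomes literal once you discretize: sampling the wave function on a grid turns it into a vector and the Hamiltonian into a matrix. A standard finite-difference sketch for a particle in a box on $(0,1)$, in units $\hbar=m=1$:

```python
import numpy as np

N = 200                      # number of interior grid points
dx = 1.0 / (N + 1)

# H = -(1/2) d^2/dx^2 as a tridiagonal matrix (Dirichlet walls at 0 and 1)
H = (np.diag(np.full(N, 1.0))
     + np.diag(np.full(N - 1, -0.5), 1)
     + np.diag(np.full(N - 1, -0.5), -1)) / dx**2

E = np.linalg.eigvalsh(H)    # energies = eigenvalues of a Hermitian matrix
print(E[0], np.pi**2 / 2)    # ground state vs the exact value pi^2/2
```

Diagonalizing a Hermitian matrix here plays exactly the role that "solving the Schrödinger equation" plays in the continuum.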
39,208 | What is the most essential reason that actually leads to quantization? I am reading the book on quantum mechanics by Griffiths. The quanta in the infinite potential well, e.g., arise due to the boundary conditions, and the quanta in the harmonic oscillator arise due to the commutation relations of the ladder operators, which give energy eigenvalues differing by a multiple of $\hbar\omega$. But what actually is the reason for the discreteness in quantum theory? Which postulate is responsible for that? I tried going backwards, but for me it somehow seems to come magically out of the mathematics. | If I'm only allowed to use one single word to give an oversimplified intuitive reason for the discreteness in quantum mechanics, I would choose the word 'compactness'. Examples: The finite number of states in a compact region of phase space. See e.g. this & this Phys.SE post. The discrete spectrum for Lie algebra generators of a compact Lie group, e.g. angular momentum operators. See also this Phys.SE post. On the other hand, the position space $\mathbb{R}^3$ in elementary non-relativistic quantum mechanics is not compact, in agreement with the fact that we can in principle find the point particle at any continuous position $\vec{r}\in\mathbb{R}^3$. See also this Phys.SE post. | {
"source": [
"https://physics.stackexchange.com/questions/39208",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
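The "compactness implies discreteness" point can be illustrated numerically: on a circle (a compact space) the operator $-d^2/d\theta^2$ with periodic boundary conditions has the discrete spectrum $n^2$. A small finite-difference sketch:

```python
import numpy as np

N = 400
dtheta = 2 * np.pi / N

# -d^2/dtheta^2 with periodic (wrap-around) boundary conditions
H = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dtheta**2
H[0, -1] = H[-1, 0] = -1 / dtheta**2

E = np.sort(np.linalg.eigvalsh(H))
print(E[:5])   # approximately [0, 1, 1, 4, 4]
```

On the non-compact real line the same operator would have a continuous spectrum, which is the contrast drawn in the answer.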
39,442 | Does anyone have a (semi-)intuitive explanation of why momentum is the Fourier transform variable of position? (By semi-intuitive I mean, I already have intuition on the Fourier transform between time/frequency domains in general, but I don't see why momentum would be the Fourier transform variable of position. E.g. I'd expect it to be a derivative instead.) | The other answers are all correct, but I want to give another outlook, which I believe is the most self-consistent one. Clearly, we have to start somewhere. My starting point is to define momentum as the quantity that is conserved as a consequence of translational symmetry. In quantum mechanics the translation operator for a single spin-less particle acts on the states as $$
(T_a \psi)(x)=\psi(x-a)
$$ and its generator is $^1$ $$
G=i\hbar\frac{d}{dx} \quad\iff\quad T_a=e^{iaG/\hbar}.
$$ If the system is translationally symmetric the generator $G$ is conserved, as follows from general arguments in quantum mechanics. We conventionally consider its opposite as the momentum operator, that is clearly conserved as well. Now, the eigenstates of the momentum operator are the states $\lvert p\rangle$ whose wavefunctions are $^2$ $$
\langle x\vert p\rangle=e^{ipx/\hbar}
$$ and therefore the probability amplitude for a generic state $\vert\psi\rangle$ to have momentum $p$ is $$
\langle p\vert\psi\rangle=\int\!dx\,\langle p\vert x\rangle\langle x\vert\psi\rangle =\int\!dx\,e^{-ipx/\hbar}\psi(x).
$$ This result says that the probability density of the momentum is the Fourier transform of the probability density of the position $^3$ . This is how position and momentum are related through the Fourier transform. useless nitpicking: $^1$ a way to see this is to compare the expansions \begin{align}
(T_a \psi)(x)
&= \psi(x-a) = \psi(x)-a\frac{d\psi}{dx}(x) + O(a^2)\\
&= (e^{iaG/\hbar}\psi)(x) = \psi(x) + \frac{ia}{\hbar}(G\psi)(x) + O(a^2)
\end{align} $^2$ this is because in position representation the eigenvalue equation for the momentum reads $$
-i\hbar\frac{d\phi_p}{dx}(x)=p\phi_p(x)
$$ that has the (unique) aforementioned solution (note that they are not normalizable so they are actually generalized eigenstates) $^3$ a bit loosely speaking, since the probabilities are their $|\cdot|^2$ | {
"source": [
"https://physics.stackexchange.com/questions/39442",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/853/"
]
} |
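The claim that $\langle p|\psi\rangle$ is the Fourier transform of $\psi(x)$ can be checked numerically: a wave packet built with mean momentum $p_0$ should have $|\langle p|\psi\rangle|$ peaked at $p_0$. The Gaussian packet and $p_0=3$ (with $\hbar=1$) are illustrative choices:

```python
import numpy as np

p0 = 3.0
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) * np.exp(1j * p0 * x)   # Gaussian packet with mean momentum p0

# direct quadrature of the Fourier integral phi(p) = ∫ e^{-ipx} psi(x) dx
p = np.linspace(0, 6, 601)
phi = np.array([np.sum(np.exp(-1j * pk * x) * psi) * dx for pk in p])

p_peak = p[np.argmax(np.abs(phi))]
print(p_peak)   # approximately 3.0
```

Multiplying the wave function by $e^{ip_0x/\hbar}$ shifts the momentum distribution by $p_0$, exactly as the plane-wave eigenfunctions $\langle x|p\rangle=e^{ipx/\hbar}$ suggest.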