39,476
I often read that a possible reason why the Nobel committee is copping out from tying the physics Nobel Prize to the Higgs could be, among other things, the fact that the spin of the new particle has not yet been definitively determined; it could still be 0 or 2. This makes me wonder: if the spin were (very, very surprisingly!) finally discovered to be 2, would this necessarily mean that the particle has to be a graviton? Or could there hypothetically be other spin-2 particles? If not, why not, and if there indeed exist other possibilities, what would they be?
There are theoretical arguments that a massless spin-2 particle has to be a graviton. The basic idea is that massless particles have to couple to conserved currents, and the only available one is the stress-energy tensor, which is the source for gravity. See this answer for more detail. However, the particle discovered at the LHC this year has a mass of 125 GeV, so none of these arguments apply. It would be a great surprise if this particle did not have spin 0, but it is theoretically possible. One can get massive spin-2 particles as bound states, or in theories with infinite towers of higher-spin particles.
{ "source": [ "https://physics.stackexchange.com/questions/39476", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2751/" ] }
39,602
Position, momentum, energy and other observables yield real-valued measurements. The Hilbert-space formalism accounts for this physical fact by associating observables with Hermitian ('self-adjoint') operators. The eigenvalues of the operator are the allowed values of the observable. Since Hermitian operators have a real spectrum, all is well. However, there are non-Hermitian operators with real eigenvalues, too. Consider the real triangular matrix: $$ \left( \begin{array}{ccc} 1 & 0 & 0 \\ 8 & 4 & 0 \\ 5 & 9 & 3 \end{array} \right) $$ Obviously this matrix isn't Hermitian, but it does have real eigenvalues, as can be easily verified. Why can't this matrix represent an observable in QM? What other properties do Hermitian matrices have, which (for example) triangular matrices lack, that makes them desirable for this purpose?
One problem with the given $3\times 3$ matrix example is that the eigenspaces are not orthogonal. Thus it doesn't make sense to say that one has with 100% certainty measured the system to be in some eigenspace but not in the others, because there may be a non-zero overlap with a different eigenspace. One may prove$^{1}$ that an operator is Hermitian if and only if it is diagonalizable in an orthonormal basis with real eigenvalues. See also this Phys.SE post. $^{1}$We will ignore subtleties with unbounded operators, domains, self-adjoint extensions, etc., in this answer.
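A quick numerical check of both claims (my own sketch, not part of the original answer; it assumes NumPy):

```python
import numpy as np

# The triangular matrix from the question: real eigenvalues (its diagonal
# entries), but eigenvectors that are not mutually orthogonal.
A = np.array([[1., 0., 0.],
              [8., 4., 0.],
              [5., 9., 3.]])
vals, vecs = np.linalg.eig(A)
print(vals)                              # 1, 4, 3 (possibly reordered), all real
print(np.vdot(vecs[:, 0], vecs[:, 1]))   # non-zero overlap between eigenvectors

# A Hermitian matrix, by contrast, diagonalizes in an orthonormal basis:
H = A + A.T
w, U = np.linalg.eigh(H)
print(np.allclose(U.conj().T @ U, np.eye(3)))   # True
```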
{ "source": [ "https://physics.stackexchange.com/questions/39602", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/10696/" ] }
40,746
I'm running into an annoying problem I am unable to resolve, although a friend has given me some guidance as to how the resolution might come about. Hopefully someone on here knows the answer. It is known that a superfunction (as a function of space-time and Grassmann coordinates) is to be viewed as an analytic series in the Grassmann variables which terminates; e.g. with two Grassmann coordinates $\theta$ and $\theta^*$, the expansion for the superfunction $F(x,\theta,\theta^*)$ is $$F(x,\theta,\theta^*)=f(x)+g(x)\theta+h(x)\theta^*+q(x)\theta^*\theta.$$ The product of two Grassmann-valued quantities is a commuting number, e.g. $\theta^*\theta$ is a commuting object. One confusion my friend cleared up for me is that this product need not be real or complex-valued, but rather some element of a 'ring' (I don't know what that really means, but whatever). Otherwise, from $(\theta^*\theta)(\theta^*\theta)=0$, I would conclude necessarily $\theta^*\theta=0$ unless that product is in that ring. But now I'm superconfused (excuse the pun). If the Dirac fields $\psi$ and $\bar\psi$ appearing in the QED Lagrangian $$\mathcal{L}=\bar\psi(i\gamma^\mu D_\mu-m)\psi-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$$ are anticommuting (Grassmann-valued) objects, whose product need not be real/complex-valued, then is the Lagrangian no longer a real-valued quantity, but rather one taking a value which belongs to my friend's ring? I refuse to believe that!!
A supernumber $z=z_B+z_S$ consists of a body $z_B$ (which always belongs to $\mathbb{C}$) and a soul $z_S$ (which only belongs to $\mathbb{C}$ if it is zero), cf. Refs. 1 and 2. A supernumber can carry definite Grassmann parity. In that case, it is either $$\text{Grassmann-even/bosonic/a $c$-number},$$ or $$\text{Grassmann-odd/fermionic/an $a$-number},$$ cf. Refs. 1 and 2.$^{\dagger}$ The letters $c$ and $a$ stand for commutative and anticommutative, respectively. One can define complex conjugation of supernumbers, and one can impose a reality condition on a supernumber, cf. Refs. 1-4. Hence one can talk about complex, real and imaginary supernumbers. Note that this does not mean that supernumbers belong to the set of ordinary complex numbers $\mathbb{C}$. E.g. a real Grassmann-even supernumber can still contain a non-zero soul. An observable/measurable quantity can only consist of ordinary numbers (belonging to $\mathbb{C}$). It does not make sense to measure a soul-valued output in an actual physical experiment. A soul is an indeterminate/variable, i.e. a placeholder, except it cannot be replaced by a number to give it a value. A value can only be achieved by integrating it out! In detail, a supernumber (that appears in a physics theory) is eventually (Berezin) integrated over the Grassmann-odd (fermionic) variables, say $\theta_1$, $\theta_2$, $\ldots$, $\theta_N$, and the coefficient of the fermionic top monomial $\theta_1\theta_2\cdots\theta_N$ is extracted to produce an ordinary number (in $\mathbb{C}$), which in principle can be measured. E.g. the Grassmann-odd (fermionic) variables $\psi(x,t)$ in the QED Lagrangian should eventually be integrated over in the path integral. References: 1. planetmath.org/supernumber. 2. Bryce DeWitt, Supermanifolds, Cambridge Univ. Press, 1992. 3. Pierre Deligne and John W. Morgan, Notes on Supersymmetry (following Joseph Bernstein). In Quantum Fields and Strings: A Course for Mathematicians, Vol. 1, American Mathematical Society (1999) 41-97. 4. V.S. Varadarajan, Supersymmetry for Mathematicians: An Introduction, Courant Lecture Notes 11, 2004. $^{\dagger}$ In this answer, the words bosonic (fermionic) will mean Grassmann-even (Grassmann-odd), respectively.
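To make the "integrate out the fermionic variables" step tangible, here is a toy implementation (entirely my own sketch; the dict representation and the function names are invented for illustration). A supernumber over two Grassmann generators $\theta_1,\theta_2$ is stored as a dict mapping a set of generator indices (a monomial) to its complex coefficient, so the expansion terminates automatically.

```python
def gmul(a, b):
    """Multiply two supernumbers, tracking anticommutation signs."""
    out = {}
    for sa, ca in a.items():
        for sb, cb in b.items():
            if sa & sb:                 # a repeated generator: theta*theta = 0
                continue
            # sign from moving each generator of sa past the smaller ones in sb
            sign = 1
            for i in sa:
                sign *= (-1) ** sum(1 for j in sb if j < i)
            key = frozenset(sa | sb)
            out[key] = out.get(key, 0) + sign * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def berezin(a, n):
    """Berezin integral over theta_1..theta_n: the top-monomial coefficient."""
    return a.get(frozenset(range(1, n + 1)), 0)

theta1, theta2 = {frozenset({1}): 1}, {frozenset({2}): 1}
print(gmul(theta1, theta1))   # {}: theta1^2 = 0
print(gmul(theta2, theta1))   # {frozenset({1, 2}): -1}: anticommutation
F = {frozenset(): 3.0, frozenset({1}): 2.0, frozenset({1, 2}): 5.0}
print(berezin(F, 2))          # 5.0: the ordinary, in-principle measurable number
```

Here the body of $F$ is 3.0 and everything else is soul; only the Berezin integral turns the soul into an ordinary number, exactly as described above.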
{ "source": [ "https://physics.stackexchange.com/questions/40746", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/10456/" ] }
40,763
When people are asked to match monochromatic violet light with an additive mix of basic colours, they (paradoxically) mix in red. In fact, the CIE 1931 color space chromaticity diagram shows this effect begins at about 510 nm (greenish-cyan), where people mix in no red. From that point on, the higher the frequency of the light source, the more red they mix in. This effect is reflected by the red curve of the CIE standard observer color matching functions, which has an additional bump in the area of blue light. However, that curve does not match the actual spectral sensitivity of red cones. So where does this additional perception of red at higher frequencies come from?
In the 19th century, the physicists Young and Helmholtz proposed a trichromatic theory of color, in which the eye was modeled as three filters with overlapping ranges. This is essentially a physical model of the pigments in the eye, and it predicts the response of the nerve cells at the retina. Helmholtz did related work on sound and timbre. Ca. 1950, Hering, Hurvich, and Jameson proposed significant modifications to the trichromatic theory, called opponent processing. This models a later stage in the processing of the signals, after the retinal response but before the more sophisticated stages of processing in the brain. Both the trichromatic model and opponent processing are needed in order to describe certain phenomena in human color perception. The complete theory can be modeled by two functions depending on wavelength. I'll call these $RG(\lambda)$ and $BY(\lambda)$. These functions are drawn here. They both oscillate between positive and negative values. For any given pure wavelength $\lambda$, the net result of pigment-filtering plus the later neurological processing produces these two numbers, which can be thought of as the final signals that go on to later processing in the brain. I'm calling them $RG$ and $BY$ for the following reasons. Let's pretend, for the sake of simplicity, that these functions oscillated between -1 and +1. Then the pair $(RG,BY)=(1,0)$ produces the sensation of red, (-1,0) is green, (0,1) is blue, and (0,-1) is yellow. There is various psychological evidence for this model, e.g., no color is perceived as reddish-green or yellowish-blue. Roughly speaking, what seems to be happening is that the eye-brain system is taking differences between signal levels of different cone cells. This sort of makes sense because, for example, the red and green pigments have response curves that overlap a lot, so if you want to place a pure-wavelength color on the spectrum, the difference between them is a more direct measure of what you want to know than the individual signals. The $RG$ function actually has two different peaks, one at the red end of the spectrum and one, surprisingly, at the blue end. This implies that by mixing blue and red, you can produce an $(RG,BY)$ pair similar to what you would have gotten with monochromatic violet. If you look at other sources, e.g., this one (figure 3.3), they seem to agree on the secondary short-wavelength peak of the $RG$ function, but the details of how the two functions are drawn at the short wavelengths are different and seem to make for a less convincing explanation of the observed perceptual similarity between violet and a red-blue mixture. I don't know if there is a valid reductionist explanation of the short-wavelength peak of the $RG$ function. Like a lot of things produced by evolution, it may basically be an accident that got frozen in. However, it's possible that it serves the evolutionary purpose of helping us to distinguish different shades of blue and violet. If the $RG$ function were simply zero over the whole short-wavelength end of the spectrum, then the $BY$ function would be the only information we'd get for those wavelengths. But the $BY$ function has a maximum, simply because the eye's sensitivity to light fades out as you get into the UV. Near this maximum, the ability of the $BY$ function to discriminate between colors becomes zero.
In the York University graph, it appears that the short-wavelength extrema of the $RG$ and $BY$ functions are offset from one another, which would allow some color discrimination in this region. The physical information being preserved by the $BY$ function would then be the difference in response between the blue and green cones. But the Briggs graphs don't appear to show any such offset of the extrema, so it's possible that the explanation I'm giving is a bogus "just-so story." There may be a good analogy here with sound. The sound spectrum is linear, but there is a psychological phenomenon of octave identification, which makes the spectrum "wrap around," so that frequencies $f$ and $2f$ are perceptually similar and can often be mistaken for one another even by trained musicians. Similarly, the predictive power of the "color wheel" model shows that to some approximation we can think of the trichromatic/opponent process model as resulting in a wrapping around of the visible segment of the EM spectrum into a circle. But in both cases, the wrap-around is only an approximation. In terms of pitch, $f$ and $2f$ are perceptually similar but not indistinguishable. For color, we have the 1976 CIELUV color diagram, which is a modification of the 1931 diagram meant to represent at least somewhat accurately the degree of perceptual similarity between different points based on the distance between them. The monochromatic spectrum constitutes part of the outer boundary of this diagram, and is more of a "V" than a circle; there is quite a large gap between monochromatic violet and monochromatic red. It is trivially true that any such diagram has a boundary that is a closed curve. If the diagram is not constrained to give any accurate depiction of the sizes of the perceptual differences between colors, then it can be distorted arbitrarily, and we can arbitrarily define it such that its boundary is a circle. In this sense, the success of the color wheel model is guaranteed, and it follows from nothing more than the fact that humans are trichromats, so that the color space is three-dimensional, and controlling for luminance produces a two-dimensional space. But this fails to explain why there is some degree of perceptual similarity between the red and violet ends of the monochromatic spectrum; for that you need the opponent processing model. There is also a slight variation in the absorbance of the pigment in the red cones at the blue end of the spectrum. I don't think this is sufficient to explain the perceptual similarity between violet and red, or the even closer similarity between violet and a mixture of red and blue light, i.e., I don't think you can explain these facts using only the trichromatic theory without opponent processing. The classic direct measurements of the filter curves of cone-cell pigments were done with cone cells from carp by Tomita ca. 1965, but AFAIK the only direct measurement using human cone cells was Bowmaker 1981. Bowmaker's red-cell absorbance curve has a very slight rise at short wavelengths, but it's not very pronounced at all. You will see various other curves on the internet, often without any attribution or explanation of where they came from, and some of these show a much more pronounced bump rather than Bowmaker's slight rise. Possibly some of these are from people using the CIE 1931 curves, which were never intended to be physical models of the actual human cone-cell pigments.
It should be clear, however, that the red and green pigments' curves must have some variation near the violet end of the spectrum. If they did not, then the dimensionality of the color space would be reduced there, and the human eye would be unable to distinguish different wavelengths in this region, which is contrary to fact. Bowmaker, "Visual pigments and colour vision in man and monkeys," J R Soc Med. 1981 May; 74(5): 348, freely accessible at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1438839/
{ "source": [ "https://physics.stackexchange.com/questions/40763", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2415/" ] }
40,886
Felix Baumgartner just completed his breathtaking free-fall skydiving jump from $120,000\,\text{feet} = 39\,\text{km}$ above the Earth, breaking the speed of sound in the process. I was wondering if the next step could be jumping from the International Space Station. The person would have to overcome the orbital velocity of the station, re-enter the Earth's atmosphere, and land on the Earth on their feet, assisted by a parachute. Would such a stunt be survivable by a human? Could the person wear a space suit like the one depicted in the movie "Sunshine"? Is such a space suit possible? UPDATE: I came across an interesting news article about a company that is working on a skydiving suit that would survive free fall from outer space. Read More Here
As other answers say, if someone just jumps off of the International Space Station (ISS), they would still be in orbit around the earth, since the ISS is traveling at 17,000 miles per hour (at an altitude of 258 miles). Instead of just jumping, imagine the astronaut had a jet pack that could cancel that speed of 17,000 miles per hour in a very short time (that would take 77 seconds at 10 Gs of deceleration). So, there the astronaut is, at 258 miles above the earth's surface, stationary and starting to accelerate at 1 G towards the earth. From the web I find that many meteors burn up at around 30 miles above the earth, where the atmosphere gets thick enough to decelerate the meteor due to the air compression in front of the meteor and air friction - this compression and friction also heats up the meteor and melts it. Note that this is approximately the height that Felix jumped from! How fast will the astronaut be going when he gets to 30 miles? The answer is he would be traveling at about 6000 miles per hour (assuming no air friction till he gets to 30 miles). Now, that is roughly 1/3 of orbital velocity, and when satellites de-orbit, they need extensive heat shielding to avoid being incinerated. So that is the first problem - an ordinary space suit would not protect the astronaut - he would need very significant heat shielding, such as a Mercury capsule used by America's first manned space program. So it would not be someone just "jumping off" the ISS for sure. For now, assume he is not burned up somehow. What about the G forces of deceleration? When satellites de-orbit, they have to carefully control the angle at which they are coming in - too shallow and they could skip off back into orbit, too steep and the heat load would be too high and the deceleration would also be too high to survive. But our astronaut is falling straight in - perpendicular to the atmosphere! This is just a guess, but if he has to decelerate from 6000 MPH to a terminal velocity of something like 600 MPH within about 5 miles or so, the G forces would be something like 30 Gs, so he would not survive, and there is no way to protect yourself from that many Gs. Felix started at 0 MPH at about that height, which is why he survived.
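The quoted figures check out with elementary kinematics (a back-of-envelope sketch assuming a constant surface value of $g$ and no drag above 30 miles, just as the answer does):

```python
g = 9.81                  # m/s^2
mph = 0.44704             # meters per second in one mile per hour
mile = 1609.34            # meters

v_orbit = 17000 * mph             # ISS orbital speed, ~7600 m/s
print(v_orbit / (10 * g))         # time to cancel it at 10 g: ~77 s

d = (258 - 30) * mile             # free fall from 258 miles down to 30 miles
v = (2 * g * d) ** 0.5
print(v / mph)                    # ~6000 mph, as stated

v1, v2 = 6000 * mph, 600 * mph    # decelerating over roughly 5 miles
a = (v1**2 - v2**2) / (2 * 5 * mile)
print(a / g)                      # ~45 g, the same order as the answer's guess
```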
{ "source": [ "https://physics.stackexchange.com/questions/40886", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/14079/" ] }
40,960
Is there a simple account of why technetium is unstable? From the Isotopes section of Wikipedia's article on Technetium: Technetium, with atomic number (denoted Z) 43, is the lowest-numbered element in the periodic table that is exclusively radioactive. The second-lightest, exclusively radioactive element, promethium, has an atomic number of 61. Atomic nuclei with an odd number of protons are less stable than those with even numbers, even when the total number of nucleons (protons + neutrons) is even. Odd-numbered elements therefore have fewer stable isotopes. It would seem that simply its atomic number is part of the reason why it is unstable, though this just pushes the mystery back one step for me: why are nuclei with even atomic number more stable? And why then are all of the elements from 45 through 59 stable — notably including silver (Z=47) and iodine (Z=53) — not to mention higher odd-proton nuclei such as gold (Z=79)? Even the most stable isotope of technetium has a half-life less than a hundredth that of uranium-235, which has a half-life of 703.8 Ma: The most stable radioactive isotopes are technetium-98 with a half-life of 4.2 million years (Ma), technetium-97 (half-life: 2.6 Ma) and technetium-99 (half-life: 211,000 years) [...] Technetium-99 (99Tc) is a major product of the fission of uranium-235 (235U), making it the most common and most readily available isotope of technetium. It's perhaps an unfair comparison, as uranium has an even atomic number (however that is supposed to help mitigate its instability); but it also has nearly twice the number of protons. This deepens the mystery for me. Even granted that Tc has no stable isotopes, how does it come to be so unstable that all of its isotopes are essentially absent naturally, compared for instance to uranium-235? (This question is a specific case of an earlier question on synthetic isotopes.)
This is really a comment, since I don't think there is an answer to your question, but it got a bit long to put in as a comment. If you Google for "Why is technetium unstable" you'll find the question has been asked many times in different forums, but I've never seen a satisfactory answer. The problem is that nuclear structure is much more complex than electronic structure and there are few simple rules. Actually the question isn't really "why is technetium unstable", but rather "why is technetium less stable than molybdenum and ruthenium", those being the major decay products. Presumably given enough computer time you could calculate the energies of these three nuclei, though whether that would really answer the "why" question is debatable. Response to comment: The two common (relatively) simple models of the nucleus are the liquid drop and the shell models. There is a reasonably basic description of the shell model here, and of the liquid drop model here (there's no special significance to this site other than after much Googling it seemed to give the best descriptions). However, if you look at the section of this web site on beta decay, at the end of paragraph 14.19.2 you'll find the statement: Because the theoretical stable line slopes towards the right in figure 14.49, only one of the two odd-even isotopes next to technetium-98 should be unstable, and the same for the ones next to promethium-146. However, the energy liberated in the decay of these odd-even nuclei is only a few hundred keV in each case, far below the level for which the von Weizsäcker formula is anywhere meaningful. For technetium and promethium, neither neighboring isotope is stable. This is a qualitative failure of the von Weizsäcker model. But it is rare; it happens only for these two out of the lowest 82 elements. So these models fail to explain why no isotopes of Tc are stable, even though they generally work pretty well. This just shows how hard the problem is.
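For readers who want to see the von Weizsäcker (liquid drop) formula from the quote in action, here is a minimal sketch (the coefficients are typical textbook values in MeV, not taken from the answer, and the output should be read only qualitatively):

```python
def semf_binding(Z, A):
    """Binding energy in MeV from the semi-empirical (von Weizsacker) formula."""
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    B = (aV * A - aS * A**(2 / 3) - aC * Z * (Z - 1) / A**(1 / 3)
         - aA * (A - 2 * Z)**2 / A)
    if Z % 2 == 0 and N % 2 == 0:
        B += aP / A**0.5      # even-even: extra pairing binding
    elif Z % 2 == 1 and N % 2 == 1:
        B -= aP / A**0.5      # odd-odd: pairing penalty
    return B

for A in (98, 99):
    for Z in (42, 43, 44):    # Mo, Tc, Ru isobars
        print(f"A={A} Z={Z}  B={semf_binding(Z, A):7.1f} MeV")
```

The formula makes odd-odd Tc-98 clearly less bound than its even-even neighbours Mo-98 and Ru-98, but along the odd-$A$ chain at $A=99$ the predicted differences are on the MeV scale and sensitive to the coefficients, which is exactly the regime where, as the quote says, the formula is not meaningful and fails qualitatively for technetium.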
{ "source": [ "https://physics.stackexchange.com/questions/40960", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4976/" ] }
41,138
The action is defined as $S = \int_{t_1}^{t_2}L \, dt$ where $L$ is the Lagrangian. I know that using the Euler-Lagrange equation, all sorts of formulas can be derived, but I remain unsure of the physical meaning of the action.
The Hamiltonian $H$ and Lagrangian $L$, which are rather abstract constructions in classical mechanics, get a very simple interpretation in relativistic quantum mechanics. Both are proportional to the number of phase changes per unit of time. The Hamiltonian runs over the time axis (the vertical axis in the drawing) while the Lagrangian runs over the trajectory of the moving particle, the $t'$-axis. The illustration shows the relativistic de Broglie wave in a Minkowski diagram. The triangle represents the relation between the Lagrangian and the Hamiltonian, which holds in both relativistic and non-relativistic physics: $$L ~=~pv-H$$ The Hamiltonian counts the phase changes per unit of time on the vertical axis, while the term $pv$ counts the phase changes per unit of time along the horizontal axis representing distance: $v$ is the distance traveled per unit of time, while $p$ is proportional to the phase changes per unit of distance, hence the term $pv$. The action can now be seen as being proportional to the total number of phase changes over the trajectory of the particle. The principle of least action is thus equivalent to the principle of least phase change. In the theory of special relativity the latter is equivalent to the principle of least proper time, since the 'proper time' as experienced by the particle is proportional to the number of phase changes over the trajectory. Hans
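As a consistency check of the relation between action and proper time (standard special-relativity algebra, not part of the original answer): for a free relativistic particle, $p=\gamma m v$ and $H=\gamma m c^2$, so

$$L~=~pv-H~=~\gamma m v^2-\gamma m c^2~=~-\frac{mc^2}{\gamma}, \qquad S~=~\int L\,dt~=~-mc^2\int\frac{dt}{\gamma}~=~-mc^2\int d\tau.$$

The action of a free particle is therefore proportional to the proper time $\int d\tau$ along the trajectory, and since the de Broglie phase accumulated by the particle is $mc^2\tau/\hbar$, stationary action, stationary proper time, and stationary phase are all the same condition.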
{ "source": [ "https://physics.stackexchange.com/questions/41138", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/14134/" ] }
41,352
If time is treated as a fourth dimension of spacetime, what is the relation between length and time units? Or in other words, how can I convert time units to length units, for instance seconds to meters?
The length of one second in meters is the distance travelled by light in one second. $1\ \mathrm s=c\times1\ \mathrm s= 299\,792\,458\ \mathrm m$ The reason we use the same units for time and distance is special relativity , whose foundation rests on the speed of light (in vacuum) being constant in all inertial frames of reference. Its universality allows us to use the same units for both time and distance.
{ "source": [ "https://physics.stackexchange.com/questions/41352", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/14250/" ] }
41,355
If an air cylinder is pushing two platens apart with a force of $100\: \mathrm{lbs}$, do the platens need to push back at $100\: \mathrm{lbs}$ or $50\: \mathrm{lbs}$ each to keep the cylinder from moving? Assume no friction and both platens are not fixed.
{ "source": [ "https://physics.stackexchange.com/questions/41355", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/14252/" ] }
41,360
Electrostatics basically means dealing with time-independent electric fields (produced by stationary charges). Now consider a neutral conductor. We know that upon putting a net negative charge on the conductor, the charge will very quickly spread over the surface until electrostatic equilibrium is reached. What does this exactly mean? Does it mean that electrons on the surface are not "moving" anymore and become stationary? So in this situation we have 2 kinds of electrons: 1) electrons inside the meat of the conductor, whose dynamics is described by quantum mechanics; 2) electrons on the surface, which are not moving in the classical sense?
{ "source": [ "https://physics.stackexchange.com/questions/41360", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4521/" ] }
41,363
I am trying to calculate the work done on an ideal gas in a piston setup where temperature is kept constant. I am given the volume, pressure and temperature. I know from Boyle's law that volume is inversely proportional to pressure, that is, $$V \propto \frac{1}{p},$$ and using this I can calculate the two volumes I need for the equation for the work done: $$\Delta W = - \int^{V_2}_{V_1} p(V)\,dV,$$ but what I do not understand is how to use this equation to calculate the work done. I think I am confused by the fact that I need to have $p(V)$, but I am not sure what this is. If you could help me to understand this, that would be great.
{ "source": [ "https://physics.stackexchange.com/questions/41363", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/10569/" ] }
41,373
I'm sure this question is a bit gauche for this site, but I'm just a mathematician trying to piece together some physical intuition. Question: Is the statistical interpretation of Quantum Mechanics still, in any sense, viable? Namely, is it completely ridiculous to regard the theory as follows: Every system corresponds to a Hilbert space, to each class of preparations of a system corresponds a state functional and to every class of measurement procedure there is a self-adjoint operator, and finally, a state functional evaluated at one of these self-adjoint operators yields the expected value of numerical outcomes of measurements from the class of measurement procedures, taken over the preparations represented by the state? I am aware of Bell's inequalities and the fact that the statistical interpretation can survive in the absence of locality, and I am aware of the recent work (2012) which establishes that the psi-epistemic picture of quantum mechanics is inconsistent with quantum predictions (so the quantum state must describe an actual underlying physical state and not just information about nature). Nevertheless, I would really like a short summary of the state of the art with regard to the statistical interpretation of QM, as against the agnostic (Copenhagen) interpretation of QM, at present. Is the statistical interpretation dead, and if it isn't... where precisely does it stand? An expert word on this from a physicist would be very, very much appreciated. Thanks, in advance. EDIT: I have changed the word "mean" to "expected" above, and have linked to the papers that spurred this question. Note, in particular, that the basic thing in question here is whether the statistical properties prescribed by QM can be applied to an individual quantum state, or necessarily to an ensemble of preparations. As an outsider, it seems silly to me to attach statistical properties to an individual state, as is discussed in my first link. Does the physics community share this opinion? EDIT: Emilio has further suggested that I replace the word "statistical" by "operational" in this question. Feel free to answer this question with such a substitution assumed (please indicate that you have done this, though).
The statistical interpretation of quantum mechanics is alive, healthy, and very robust against attacks. The statistical interpretation is precisely that part of the foundations of quantum mechanics where all physicists agree. In the foundations, everything beyond that is controversial. In particular, the Copenhagen interpretation implies the statistical interpretation, hence is fully compatible with it. Whether a state can be assigned to an individual quantum system is still regarded as controversial, although nowadays people work routinely with single quantum systems. The statistical interpretation is silent about properties of single systems, one of the reasons why it can be the common denominator of all interpretations. [Added May 2016:] Instead of interpreting expectations as a concept meaningful only for frequent repetition under similar conditions, my thermal interpretation of quantum mechanics interprets it for a single system in the following way, consistent with the practice of thermal statistical mechanics, with the Ehrenfest theorem in quantum mechanics, and with the obvious need to ascribe to particles created in the lab an approximate position even though it is not in a position eigenstate (which doesn't exist). The basic thermal interpretation rule says: Upon measuring a Hermitian operator $A$ , the measured result will be approximately $\bar A=\langle A\rangle$ with an uncertainty at least of the order of $\sigma_A=\sqrt{\langle(A−\bar A)^2\rangle}$ . If the measurement can be sufficiently often repeated (on an object with the same or sufficiently similar state) then $\sigma_A$ will be a lower bound on the standard deviation of the measurement results. Compared to the Born rule (which follows in special cases), this completely changes the ontology: The interpretation applies now to a single system, has a good classical limit for macroscopic observables, and obviates the quantum-classical Heisenberg cut. Thus the main problems in the interpretation of quantum mechanics are neatly resolved without the need to introduce a more fundamental classical description.
{ "source": [ "https://physics.stackexchange.com/questions/41373", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/14279/" ] }
41,680
How can light be called electromagnetic if it doesn't appear to be electric or magnetic? If I go out into the sunlight, magnets aren't affected (or don't seem to be). And there is no transfer of electric charge/electrons (as there is in AC/DC current in space). In particular, the photons (which light is supposed to be composed of) have no electric charge (nor do they have magnetic charge). I'm looking for an explanation that can be appreciated by the average non-physicist Joe.
Light is an oscillating electric and magnetic field, so it is electrical and magnetic. Later: re the edit to your question, I think there are two issues. Firstly the interaction with electric charge, and secondly the interaction with magnets. Light does not carry any charge itself, so it does not attract or repel charged particles like electrons. Instead light is an oscillating electric and magnetic field. If you take an electron and put it in a static electric field (e.g. around a Van de Graaff generator) then the electron feels a force due to the field and will move. This happens when an electron interacts with a light wave, but because the light wave is an oscillating field the electron moves to and fro and there is no net motion. If you could watch an electron as light passes by you'd see it start oscillating to and fro, but its net position wouldn't change. This is exactly what happens in your TV aerial. The light (i.e. radio frequency EM) causes electrons in the TV aerial to oscillate, and this oscillation generates an oscillating electric current. The voltage this generates is amplified by your TV. At the TV transmitter the same happens in reverse: an oscillating voltage is applied to the TV transmitter, the electrons oscillate in response and the oscillation generates an electromagnetic wave. So the process is oscillating electrons -> light -> oscillating electrons. I'm not entirely sure what you mean by "there is no transfer of electric charge/electrons (as there is in AC/DC current in space)". If the above doesn't satisfactorily explain what's going on, maybe you could expand on your question. And finally, on to the interaction with magnets. The big difference between electric and magnetic fields is that (as far as we know) there are no isolated magnetic charges. If there were isolated magnetic charges, e.g. if you could watch a magnetic monopole as a light wave passed by, then you'd see similar behaviour to an electron. But there aren't, so you don't.
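Here is a minimal simulation of that "to and fro" motion (my own sketch with illustrative field values, not part of the original answer): integrate $m\,dv/dt = qE_0\cos\omega t$ for a free electron starting at rest.

```python
import math

q, m = -1.602e-19, 9.109e-31       # electron charge (C) and mass (kg)
E0 = 1.0                           # field amplitude (V/m), illustrative
w = 2 * math.pi * 1e9              # 1 GHz angular frequency, illustrative

dt, x, v = 1e-13, 0.0, 0.0
max_x = 0.0
for n in range(20000):             # two full nanoseconds, i.e. two periods
    v += (q * E0 / m) * math.cos(w * n * dt) * dt
    x += v * dt
    max_x = max(max_x, abs(x))

print(f"max |x| = {max_x:.2e} m, final x = {x:.2e} m")
# Both stay at the nanometer scale: the electron jiggles but has no net drift.
```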
{ "source": [ "https://physics.stackexchange.com/questions/41680", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/735/" ] }
41,719
I asked this question of many people/professors without getting a sufficient answer: why in QM are Lebesgue spaces of second degree assumed to be the ones that correspond to the Hilbert vector space of state functions, and where does this arise from? And why a second-degree space with the following inner product: $\langle\phi|\psi\rangle =\int\phi^{*}\psi\,dx$, while there are many ways to define an inner product? In physics books this is always assumed as given and never explained. I also tried to read some abstract math books on these things, and found some concepts like "metric weight" that will be minimized in such spaces, but even so I don't really understand what is behind that. So why $L_2$? What is special about these spaces? How did physicists understand that those are the ones we need to use?
Here we will assume that OP is not questioning the fundamental physical principles/postulates/axioms of quantum mechanics, such as, e.g., the need to have a Hilbert space $H$ in the first place, etc; and that OP is only pondering the role of $L^2$-spaces (as opposed to, e.g., $L^1$-spaces). Let us for concreteness and simplicity consider the 3-dimensional position space $\mathbb{R}^3$. One uses the $L^2$-space $H=L^2(\mathbb{R}^3)$ as a Hilbert space for various reasons: To have a well-defined norm $$\tag{1} ||\psi||_p~:=~\left(\int d^3x ~ |\psi(x)|^p\right)^{\frac{1}{p}}, \qquad p~=~2.$$ [The norm (1) actually works for any $L^p$-space $L^p(\mathbb{R}^3)$ with $p\geq 1$.] To have a well-defined inner product/sesqui-linear form, $$\tag{2} \langle \phi, \psi\rangle ~:=~\int d^3x ~ \phi^*(x)\psi(x).$$ In particular, the integrand $\phi^*\psi$ should be integrable, i.e. i) Lebesgue measurable, and ii) the absolute-valued integrand should have a finite integral: $$\tag{3} \int d^3x ~ |\phi^*(x)\psi(x)|~<~\infty.$$ Proof of eq. (3): Notice the inequality $$\tag{4} (|\phi(x)|-|\psi(x)|)^2 \geq 0\qquad \Leftrightarrow\qquad 2|\phi(x)^*\psi(x)| \leq |\phi(x)|^2+|\psi(x)|^2,$$ so that the integrand $\phi^*\psi$ in the inner product (2) becomes integrable $$\tag{5} 2\int d^3x ~ |\phi^*(x)\psi(x)|~\stackrel{(1,4)}{\leq}~ ||\phi||^2_2+||\psi||^2_2~<~\infty, $$ because we demand that $\phi$ and $\psi$ are square integrable, i.e. that $\phi,\psi\in L^2(\mathbb{R}^3)$. Note in particular that eq. (3) does not hold in general for $\phi,\psi\in L^p(\mathbb{R}^3)$ with $p \neq 2$. To ensure that the normed vector space $H$ is complete. See also this Phys.SE answer. [This actually works for any $L^p$-space $L^p(\mathbb{R}^3)$ with $p\geq 1$.] To make sure that e.g. the set $C^{\infty}_c(\mathbb{R}^3)$ of infinitely many times differentiable functions with compact support is included in the space $H$. [This actually works for any $L^p$-space $L^p(\mathbb{R}^3)$ with $p\geq 1$.] Note that all the other $L^p$-spaces $L^p(\mathbb{R}^3)$ with $p\neq 2$ are not Hilbert spaces (although they are Banach spaces). This is related to the fact that the dual $L^p$-space is $L^p(\mathbb{R}^3)^*\cong L^q(\mathbb{R}^3)$ where $\frac{1}{p}+\frac{1}{q}=1$. Hence an $L^p$-space is only self-dual if $p=2$. Self-duality implies that there is an isomorphism between kets and bras. It is true that other Hilbert spaces (modeled over the position space $\mathbb{R}^3$) do exist, but they would typically rely on additional structure. (E.g., one could use another integration measure $d\mu$ than the Lebesgue measure $d^3x$.) In conclusion, the $L^2$-space $H=L^2(\mathbb{R}^3)$ is the simplest and most natural/canonical choice.
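To see concretely why $p=2$ is what makes the inner product (2) well defined, here is a standard one-dimensional counterexample (mine, not part of the original answer): let $\varphi(x)=|x|^{-2/3}$ for $0<|x|<1$ and $\varphi(x)=0$ otherwise. Then

$$\int_{\mathbb{R}}|\varphi(x)|\,dx~=~2\int_0^1 x^{-2/3}\,dx~=~6~<~\infty, \qquad \int_{\mathbb{R}}|\varphi^*(x)\varphi(x)|\,dx~=~2\int_0^1 x^{-4/3}\,dx~=~\infty,$$

so $\varphi\in L^1(\mathbb{R})$ but the would-be inner product $\langle\varphi,\varphi\rangle$ diverges; for $\phi,\psi\in L^2$ the estimate (5) rules this failure out.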
{ "source": [ "https://physics.stackexchange.com/questions/41719", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/12094/" ] }
41,721
I've been talking to a friend, and he said that it's impossible to travel at exactly the speed of sound. He argued that it's only possible to break through the sound barrier with enough acceleration, but that it's impossible to maintain a speed exactly equal to that of sound. Is it true? And if it's true, why?
It's true that at the speed of sound you will have a huge amount of drag. The reason is that the air in front of you has to move out of the way, and if you are moving at the speed of sound, the pressure wave that pushes the air out of the way is moving at exactly the same speed as you. So in the continuum mechanics limit, you can't push the air out of the way, and you might as well be plowing into a brick wall. But we don't live in a continuum mechanics universe, we live in a world made of atoms, and the atoms in a gas bounce off your airplane. At the speed of sound, you get a large but finite push-back, which is a barrier, and above this you still have to do the work of pushing a mass of air equal to your plane's cross section out of the way, now with ballistic particles. As you go faster, the amount of drag decreases, since the atomic collisions don't lead to a pile-up on the nose-cone. But if you look at Wikipedia's plot here, the maximum drag at the supersonic transition is only a factor of 2 or 3 higher than the drag at higher supersonic speed, so it is possible to travel at Mach 1; it is just not very fuel efficient.
{ "source": [ "https://physics.stackexchange.com/questions/41721", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/14387/" ] }
41,765
Conservation of information seems to be a deep physical principle. For instance, Unitarity is a key concept in Quantum Mechanics and Quantum Field Theory. We may wonder if there is an underlying symmetry, in some space, which may explain this conservation of information.
1) If you want a Noether theorem for information, there is no such thing. Trying to obtain it from a symmetry law via Noether's theorem can't work, simply because information is not a quantity that can be obtained, for instance, as the derivative of the Lagrangian with respect to some variable. Information is not a scalar, vector, tensor, spinor, etc. 2) Another way to obtain conservation laws can be found in quantum mechanics. The observables that commute with the Hamiltonian are conserved. Again, you don't have an observable, in the sense of quantum mechanics, for information. Trying to obtain conservation of information from commutation with the Hamiltonian can't work, because there is no observable (Hermitian operator on the Hilbert space) associated to information. Information is not the eigenvalue of such an operator. 3) The only way, which also is the simplest and the most direct, is the following: to have information conservation, when you reverse the evolution laws, you have to obtain evolution laws that are deterministic. This ensures conservation of information; in fact, the two are equivalent. In particular, most classical laws are deterministic and reversible. Also, in quantum mechanics, unitary evolution is reversible, giving you the conservation of information. I don't say that the evolution laws have to be deterministic, or that they have to be invariant under time reversal. Just that, when you apply time reversal, the evolution equations you obtain (which are allowed to be different than the original ones) are deterministic. The simplest way to think about this is by using dynamical systems. Trajectories in phase space are not allowed to merge, because if they merge, the information about what the trajectory was before merging is lost. They are allowed to branch, because you can still go back and see what any previous state was. Branching breaks determinism, but not preservation of information. Old information is preserved at branching, but, as WetSavannaAnimal mentioned, new information is added. Therefore, if we want strict conservation, we should forbid both merging and branching, and in this case determinism is required.
{ "source": [ "https://physics.stackexchange.com/questions/41765", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/6316/" ] }
41,779
What makes it a good idea to use RMS rather than peak values of current and voltage when we talk about or compute with AC signals?
Attempts to find an average value of AC would directly give you the answer zero... Hence, RMS values are used. They help to find the effective value of AC (voltage or current). The RMS is a mathematical quantity (used in many fields of math) that lets us compare alternating and direct currents (or voltages). In other words (as an example), the RMS value of an alternating current is the direct current which, when passed through a resistor for a given period of time, would produce the same heat as the alternating current passed through the same resistor for the same time. Practically, we use the RMS value for all kinds of AC appliances; the same applies to alternating voltage. We take the RMS because AC is a variable quantity (consecutive positives and negatives): we need the mean of the squares, and then the square root of that mean. For a sinusoid with peak value $I_0$, the mean of the squared current is $I_0^2/2$, so the RMS value is $I_0/\sqrt{2}$. It's example time: (I think you didn't ask for the derivation of RMS.) Consider that both bulbs are giving out an equal level of brightness, so they're losing the same amount of heat (regardless of AC or DC). In order to relate the two, we have nothing better to use than the RMS value. The direct voltage for one bulb is 115 V, while the alternating voltage for the other has a peak of 170 V; both give the same power output. Hence, $V_{rms}=\frac{V_{peak}}{\sqrt{2}}=\frac{170\ \mathrm V}{\sqrt 2}\approx 120\ \mathrm V \approx V_{dc}$ (the actual RMS is 120 V; as I can't find a good image, I'm approximating 120 V by 115 V). To further clarify your doubt regarding the peak value, it's simply similar to finding the distance between two points $(x_1,y_1)$ and $(x_2,y_2)$ in the Cartesian system (sum of squares, then the root): $$d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}$$
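A numerical sanity check of the $1/\sqrt 2$ factor (a sketch assuming NumPy; the 170 V value matches the example above):

```python
import numpy as np

V0 = 170.0                                   # peak voltage
t = np.linspace(0, 1, 100_000, endpoint=False)
v = V0 * np.sin(2 * np.pi * 60 * t)          # one second of a 60 Hz signal

print(v.mean())                              # ~0: a plain average tells you nothing
print(np.sqrt((v**2).mean()))                # ~120.2, the effective (RMS) value
print(V0 / np.sqrt(2))                       # 120.2...: peak / sqrt(2)
```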
{ "source": [ "https://physics.stackexchange.com/questions/41779", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/14414/" ] }
41,816
The Mini 4WD's tires aren't full of air, and it can run. Tanks don't have air-filled tires either. So the question is: why do the tires of real cars on the road need to be filled with air? What is the idea behind it?
You don't want to lose energy – not only because of energy efficiency but mainly because of the desire to achieve high speeds and reduce the deterioration of the wheels – when the wheels are changing their shape due to the pressure caused by the weight of the vehicle. If you want to squeeze the wheels' rubber by a centimeter, you need a substantially greater force if the tires are filled with air whose pressure is many atmospheres than if you have "just the rubber". So inflated wheels at 4 atmospheres guarantee that the wheels are more rigid and the ride is smoother. This advantage becomes more important at higher speeds because the wheel could get deformed many times a second at higher speeds. And it becomes more important for greater vehicles (relative to toys) because their mass grows as $R^3$ with the radius $R$ while the area of the wheel is expected to grow as $R^2$ only. So the pressure – force per unit area of the wheels – becomes greater for greater vehicles. Not an issue for toys, but large vehicles need to have wheels that are resilient under much higher pressures. On the contrary, the need for high pressure in the wheels decreases if the area of the wheels is large (the wheels are "wide") and if the car is relatively light. So Formula One cars only use about 1 atmosphere in the wheels. Also, mountain bikes (thick wheels, bumpy roads, expected low speeds) often have 2 atmospheres only, while racing bikes (thin wheels, smooth roads, expected high speeds) may be pumped to 15 atmospheres. Tanks are a different issue because the rubber is relatively very thin (relative to "whole wheels made out of rubber") and it is the metal "right beneath" the rubber, rather than the rubber itself, that resists the pressure. Moreover, their speed is lower and they are going through non-uniform terrain, in which the adjustment of the shape of the wheels or their counterparts may be a good thing. Another question, kind of opposite to the original one, would be why the wheels are not just made out of metal. Well, that would be an uneasy ride and the metal would be damaged very quickly, too. One needs some "spare room" so that the tires' thickness may change by a few centimeters if it's really needed. On the other hand, one doesn't want the tires to change shape too easily. A layer of air at high pressure is a great and simple answer to both conditions.
{ "source": [ "https://physics.stackexchange.com/questions/41816", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/8607/" ] }
41,858
Waves are generally classified as either transverse or longitudinal depending on the way the propagated quantity is oriented with respect to the direction of propagation. Then what is a gravitational wave? It doesn't make sense to me that a disturbance in the curvature of spacetime has a "direction", so I would say they're neither, much like a wave packet in quantum mechanics.
Gravitational waves are transverse waves, but they are not dipole transverse waves like most electromagnetic waves; they are quadrupole waves. They simultaneously squeeze and stretch matter in two perpendicular directions. Gravitational waves definitely propagate in a given direction, but the effect that they have on matter is completely perpendicular to the direction of motion. Below is a picture of what the metric of a passing wave does to space (the wave is traveling perpendicular to the screen). If you imagine a free particle sitting at each grid intersection point, the particle would move sinusoidally right along with the grid: This diagram is from this paper.
{ "source": [ "https://physics.stackexchange.com/questions/41858", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/14454/" ] }
43,068
I learned about the coronal discharge, and the common explanation is that the electric field is strong where the radius of curvature is small. But I haven't found anything yet that explains why electrons like to crowd at the peaks and escape from the holes. My intuition suggests electrons should try to distribute themselves over the surface as uniformly as they can, but they don't. Why?
Electrons just don't like each other, a point captured by the phrase that "like charges repel." So, imagine a gymnasium full of students pretending to be electrons, staying as far away from others as possible. Anyone near the center of the crowd will feel badly pressed and will try to work their way towards the edge of the gym, where at least one side will no longer have fellow students milling about. The result? Most of the students will gravitate towards the edge of the gym and hover there, to take advantage of that lack of other students on the wall side of the gym. Now imagine a narrow corridor leading out of the gym. Even better! Students in that corridor will only have fellow students on their left and right. Now imagine the very end of that corridor, a sort of point. Even better! Now, the student who finds that spot will benefit from having only one student nearby. But somewhat ironically, that same effect will cause other students to pack themselves into the long, narrow corridor more tightly, since pretty much anywhere in the corridor makes them less exposed to the full set of students than being in the gym does. This is the kind of effect that makes edges, wires, and points more attractive to electrons, which similarly just don't want other electrons too nearby. The electric field gradient is the rate at which the electric field falls off, and it is strongest on such edges and lines and points. You can use the gym analogy to see why that is. Imagine the mutual disdain of the students for each other as behaving like spooky spiky hair that extends ghost-like for many meters out from each student. The strands extend easily through ordinary bricks and such, but like Star Wars light sabers they absolutely refuse to move through each other. What happens? For students lined up against a straight wall, the spooky spikes push against each other and wind up extending almost straight through the gym wall. The gradient in that case is actually quite small, since you end up with about the same number of spooky spikes per square meter far outside the gym wall as right up against it. But what about the opposite case of that one student at the very end of the long, narrow dead-end corridor? Her spooky spikes are free to expand outward almost like a giant ball, very quickly becoming quite sparse even a few meters beyond the corridor. That's a very steep gradient, and with electrons it's what leads to all sorts of interesting effects. One effect in particular that I should note is that because the only repulsion that the student at the end of the long corridor feels is from other electrons in that corridor, her desire to move away from that corridor becomes far more directed and acute. She wants to escape! And if there is any weakness in the wall at the end of that corridor, she will succeed, escaping out into free space. And others will then follow! This is why electrons can escape from very sharp points, even at room temperature. The repulsion of electrons for each other is so strong that even the strong binding force of metals may fail if the electrons manage to find a point sufficiently isolated from the main body of free electrons in the metal. Finally, I should point out that these two perspectives -- mutual avoidance and spooky spikes -- are really just two ways of describing the same thing, which is the way the repulsion of electrons falls off with distance.
Calculus provides the machinery needed to make precise predictions from such models, but it's still important to keep in mind that the mechanisms by which such effects occur are by themselves not nearly as exotic as you might think. They have real analogies in events as simple as students following "electron rules" in a gymnasium.
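A standard back-of-the-envelope version of this picture (textbook material, not part of the original answer): connect two conducting spheres of radii $r_1$ and $r_2$ by a long thin wire, so that they settle at the same potential $V$. Then

$$V=\frac{kQ_1}{r_1}=\frac{kQ_2}{r_2}\quad\Rightarrow\quad \frac{\sigma_1}{\sigma_2}=\frac{Q_1/4\pi r_1^2}{Q_2/4\pi r_2^2}=\frac{r_2}{r_1},\qquad E_{\text{surface}}=\frac{\sigma}{\varepsilon_0}\propto\frac{1}{r},$$

so the smaller the radius of curvature, the larger the surface charge density and surface field: the quantitative counterpart of the lone student at the end of the narrow corridor.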
{ "source": [ "https://physics.stackexchange.com/questions/43068", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/7743/" ] }
43,069
I have started to study quantum mechanics. I know linear algebra, functional analysis, calculus, and so on, but at this moment I have a problem with the Dirac bra-ket formalism. Namely, I have a problem with the "translation" from the ordinary linear algebra "language" to this formalism. For a better understanding of the problem, I'll give some definitions which I use: 1) Assume that $\mid u\rangle$ is a vector in a Hilbert vector space $V$. The bra $\langle v \mid$ is a vector of the dual vector space $V^{*}$ $\left( \langle v \mid: V \longrightarrow C \right)$, defined by $\langle v \mid u \rangle=g\left(\mid v\rangle, \mid u\rangle \right) $, where $g: V \times V \longrightarrow C$ is the metric on $V$. 2) $A$ is a linear operator on $V$. Consider the bilinear form $(\quad)$: $\left(f,x\right )=f(x)$. In this notation we can define the adjoint operator $A^{*}$ on $V^{*}$: $(f,Ax)=(A^*f,x)$. I tried to understand the two following expressions: There is the expression $\langle v \mid A \mid u \rangle$. In my textbook the following phrase was written: "Operator $A$ acts on kets from the left and on bras from the right". But according to the definitions that I use, the adjoint operator $A^{*}$ acts on $V^{*}$. But in this case operator $A$ acts on $V^{*}$. I don't quite get it. One possible solution which I see is that this is just notation for the following: $\langle v \mid A \mid u \rangle=(v,Au)=(A^{*}v,u)= \langle A^{*}v \mid u \rangle; \quad A^*\langle v \mid :=\langle v \mid A$. The second way is to use the isomorphism between $V$ and $V^{*}$, and then operator $A$ is able to act on $V$ and $V^{*}$ (dual correspondence). The third way is that we use a matrix representation everywhere; in the expression $\langle v \mid A \mid u \rangle$ we multiply the row vector $v$ by the matrix of the operator $A$ and then by the column vector $u$. Then this expression is absolutely clear, because matrix multiplication is associative. I have the same difficulties with the expression $(A \mid v \rangle)^{*}=\langle v \mid A^ \dagger$. Could you explain it too? I would be happy if you could say which way is right; and if all of my suggestions are wrong, please tell me the right one.
The wording used in your textbook was sloppy. $A$ acts as $A^*$ on a bra, as $\langle u\rvert A\lvert v\rangle:=\langle u\lvert Av\rangle$ is the same as $\langle u\rvert A\lvert v\rangle=\langle A^*u\lvert v\rangle$, by definition of the adjoint. The latter formula also shows that $\langle A^*u\rvert=\langle u\rvert A$. Everything becomes very simple in linear algebra terms when interpreting a ket as a column vector, the corresponding bra as the conjugate transposed row vector, an operator as a square matrix, and the adjoint as the conjugate transpose. This is indeed the special case where the Hilbert space is $C^n$.
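A small numerical illustration of that last paragraph (my own sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=3) + 1j * rng.normal(size=3)     # ket |u>
v = rng.normal(size=3) + 1j * rng.normal(size=3)     # ket |v>
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

A_dag = A.conj().T                 # adjoint = conjugate transpose
lhs = u.conj() @ (A @ v)           # <u|Av>
rhs = (A_dag @ u).conj() @ v       # <A*u|v>
print(np.allclose(lhs, rhs))       # True: the two readings of <u|A|v> agree
```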
{ "source": [ "https://physics.stackexchange.com/questions/43069", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/14494/" ] }
43,195
What is the difference between the weight of an object and the mass of an object?
Weight is the force with which gravity pulls on a mass. Maybe the simplest way to explain the difference is that on the Moon or on Mars, your weight is reduced because gravity is weaker there, but your mass is still the same.
{ "source": [ "https://physics.stackexchange.com/questions/43195", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/15576/" ] }
43,515
Are there any comprehensive texts that discuss QM using the notion of rigged Hilbert spaces? It would be nice if there were a text that went through the standard QM examples using this structure.
I don't know of any books which use this language exclusively, but the basic idea is pretty straightforward: All Hilbert spaces are isomorphic (if their dimensions match). This would present conceptual problems in quantum mechanics if we ever talked about the Hilbert space alone; how could we distinguish them? But it's OK because we are actually interested in a Hilbert space $\mathcal{H}$ equipped with an algebra of operators $\mathcal{A}$. For example, the real difference between $\mathcal{H} = L^2(\mathbb{R})$ and $\mathcal{H} = L^2(\mathbb{R}^3)$: When we talk about the former, we're talking about $L^2(\mathbb{R})$ with the natural action of the 1d Heisenberg algebra $\mathcal{A}_1$ (generated by $P$ and $Q$ such that $[Q,P] = i\hbar$). When we talk about the latter, we're talking about the Hilbert space with the natural action of the 3d Heisenberg algebra $\mathcal{A}_3$. Neither algebra actually acts on the entirety of $\mathcal{H}$. $Q\psi$, defined by $(Q\psi)(x)= x\psi(x)$, doesn't necessarily lie in $L^2$. Likewise, the action of the differentiation operator $P = -i\hbar \frac{\partial}{\partial x}$ on a vector $v \in \mathcal{H}$ isn't defined if $v$ is not a differentiable function. And $P^2$ is only defined on twice-differentiable functions. However, there are some functions on which the action of any power $P^nQ^m$ is defined: If $v$ and all of its derivatives vanish faster at infinity than any polynomial, the action of any element of $\mathcal{A}_1$ is defined. Likewise, $\mathcal{A}_3$ really acts on the set $\mathcal{S}$ of functions in $L^2(\mathbb{R}^3)$ whose partial derivatives all vanish fast enough at infinity. In general, if you have a Hilbert space and an algebra $\mathcal{A}$ of operators with continuous spectrum, there's a maximal subspace $\mathcal{S} \subset \mathcal{H}$ on which $\mathcal{A}$ acts. This is the subspace of $v \in \mathcal{H}$ for which $av$ is defined and $||a v|| < \infty$ for any $a \in \mathcal{A}$. It is called the space of smooth vectors for $\mathcal{A}$. (Exercise: $\mathcal{S}$ is dense in $\mathcal{H}$.) $\mathcal{S}$ gets a topology from being a subspace of $\mathcal{H}$, but it actually has a much stronger topology from the family of seminorms $v \mapsto ||a v||$ (for $a \in \mathcal{A}$). This topology makes it a nuclear vector space. Given $\mathcal{S} \subset \mathcal{H}$, you can construct the space $\mathcal{S}^* \supset \mathcal{H}$ of continuous (wrt the nuclear topology) complex-linear functionals on $\mathcal{S}$. (Here we are using the Riesz representation theorem to identify $\mathcal{H}$ with its dual $\mathcal{H}^*$.) This space should be thought of as the space of bras, in the Dirac bra-ket sense. The bra $\langle x |$ is the linear function which maps $\psi \in \mathcal{S}$ to $\psi(x) =\langle x | \psi \rangle $, aka, the Dirac delta function $\delta_x$ with support at $x$. (The space of kets is the conjugate space, consisting of conjugate-linear functionals on $\mathcal{S}$. The ket $|x \rangle$ maps a state $\psi \in \mathcal{S}$ to $\psi^*(x) = \langle \psi| x\rangle$.) This space $\mathcal{S}$ is worth considering because it gives rigorous meaning to the idea that elements of $\mathcal{A}$ with continuous spectrum have eigenvectors, and that you can expand some states in these eigenbases. The elements of the algebra $\mathcal{A}$ can't have eigenvectors in $\mathcal{H}$ if they have continuous spectrum. But they do have eigenvectors in the space of bras.
The definition is a standard extension-by-duality trick: the action of $a \in \mathcal{A}$ on $v \in \mathcal{S}^*$ is defined by $(av)(\psi) := v(a^\dagger\psi)$, and $v$ is an eigenvector of $a$ with eigenvalue $\lambda$ if $(av)(\psi) = \lambda\, v(\psi)$ for all $\psi \in \mathcal{S}$. (Exercise: $\langle x|$ is the eigenbra with eigenvalue $x$ of the position operator $Q$.) The triplet $(\mathcal{S}, \mathcal{H}, \mathcal{S}^*)$ is a rigged Hilbert space. The language of rigged Hilbert spaces was invented to capture the ideas I've outlined above: the smooth vectors of an algebra of operators with continuous spectrum, and the dual vector space where the eigenbases of these operators live. The language actually matches the physics very nicely -- especially the bra-ket formalism -- but it provides a level of precision that's not really necessary for most calculations (e.g., with floating point arithmetic).
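As a quick worked version of the exercise (using the duality definition, with $Q$ self-adjoint so that $Q^\dagger = Q$): $$ (Q\,\langle x|)(\psi) = \langle x|(Q\psi) = (Q\psi)(x) = x\,\psi(x) = x\,\langle x|(\psi), $$ so $\langle x|$ is indeed an eigenbra of $Q$ with eigenvalue $x$. It is an eigenvector living in $\mathcal{S}^*$ rather than in $\mathcal{H}$, since $\delta_x \notin L^2(\mathbb{R})$.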
{ "source": [ "https://physics.stackexchange.com/questions/43515", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/15717/" ] }
43,714
Consider a theory of one complex scalar field with the following Lagrangian. $$ \mathcal{L}=\partial _\mu \phi ^*\partial ^\mu \phi +\mu ^2\phi ^*\phi -\frac{\lambda}{2}(\phi ^*\phi )^2. $$ The potential is $$ V(\phi )=-\mu ^2|\phi |^2+\frac{\lambda}{2}|\phi|^4. $$ The classical stable minimum of this potential is given by $\phi =\frac{\mu}{\sqrt{\lambda}}e^{i\theta}=:v$ for any $\theta \in \mathbb{R}$. We then define a new field $\psi: =\phi -|v|$, rewrite the Lagrangian in terms of the new field $\psi$, et voilà: out pops the mass term $$ 2|v|^2\lambda \psi ^*\psi . $$ People usually explain that this shows the existence of a field whose quanta are particles of mass $|v|\sqrt{2\lambda}$. In this sense, the original field $\phi$ has acquired a mass. I don't buy it. I don't see where this argument has actually used the fact that $v$ is the vacuum expectation value of the field. Yes, it is natural to expand about the classical minima, but why not do something stupid instead and define a new field $\psi :=\phi -7$. Once again, after rewriting the Lagrangian in terms of the new field, you should find that it has indeed acquired a mass term. Playing this trick over and over, by picking a number different from $7$, you should be able to find a mass term with any mass you like. Obviously, this doesn't make any physical sense. There is something special about the substitution $\psi :=\phi -|v|$. There must be more to it than a tedious algebraic manipulation, but what is it? Why does nature possess particles of the mass that this substitution yields, as opposed to any other substitution? On another note, what exactly does this have to do with the global $U(1)$ symmetry present? It seems that the only thing that has played a role so far is that the vacuum expectation value of the field is non-zero, and yet I've always seen this mass generation presented alongside symmetry breaking. What precisely is the relationship between the two?
As is easily checked, fields linear in creation and annihilation operators (and hence amenable to a particle interpretation) have zero vacuum expectation value. Thus the $\phi$ field with its nonvanishing vacuum expectation value cannot be given a particle interpretation. But the field $\psi=\phi-v$ has such an interpretation as its vacuum expectation value is zero. This works only if $v$ is the vacuum expectation value of $\phi$. Note that the quadratic term of the original field $\phi$ has the wrong sign to be a mass term (it is tachyonic around $\phi=0$); it is the shifted field $\psi$ that has acquired a genuine mass term. The 1-loop approximation to a quantum field theory is given by the saddle-point approximation of the functional integral. For that you have to expand around a stationary point, and for stability reasons this stationary point has to be a local minimizer. If the local minimum is not global, the vacuum state is metastable only; so one usually expands around the global minimizer. A mass term breaks the scaling symmetry of a previously scale-invariant theory. It may or may not break other symmetries. In the above case, the symmetry $\phi\to-\phi$ of the action is broken in the stable vacuum.
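To see this explicitly in the example from the question (a short check of the algebra; $c$ is an arbitrary real shift, $\phi = c + \psi$): $$ V = V(c) + \left(\lambda c^3 - \mu^2 c\right)(\psi + \psi^*) + \left(\lambda c^2 - \mu^2\right)\psi^*\psi + \frac{\lambda c^2}{2}(\psi + \psi^*)^2 + O(\psi^3). $$ The term linear in $\psi$ vanishes only for $c^2 = \mu^2/\lambda$ (or at the unstable point $c=0$). For any other shift, such as $c=7$, the linear term means that $\psi$ has a nonzero vacuum expectation value, so one is not expanding around a stationary point and the quadratic coefficient cannot be read off as a physical mass. Note also that at $c = |v|$ the surviving mass term $\frac{\lambda v^2}{2}(\psi + \psi^*)^2$ affects only the real part of $\psi$; the phase direction stays massless. That massless mode is the Goldstone boson of the spontaneously broken global $U(1)$, which addresses the second part of the question.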
{ "source": [ "https://physics.stackexchange.com/questions/43714", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3397/" ] }
43,722
When I eat hot pizza or a melted cheese sandwich, the cheese feels a lot hotter than the crust or bread: in particular, the cheese might scald the roof of my mouth, but the crust will not. Is this my imagination, or because the crust cools a little faster than the cheese, so has already cooled a bit by the time I eat it, or because the cheese cools a little faster than the crust, so transfers heat to the roof of my mouth a bit more, or what?
Two reasons: the cheese has a higher specific heat capacity than the crust; the cheese has a higher thermal conductivity than the crust. When you cool a given weight of cheese or crust from the oven temperature to your mouth temperature, the amount of heat it gives up depends on its specific heat. So the cheese, with its high specific heat, gives up more heat than the crust and hence heats your mouth more. The cheese also conducts heat better so it can deliver the heat to your mouth more quickly, and again this makes your mouth hotter. All of which is fine, but actually it may simply be that the crust cools faster than the cheese while the pizza is sitting on your plate. I can't say if this is the case because I've never done the experiment!
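For a rough feel of the first effect, here is a back-of-the-envelope comparison; the material constants below are assumed, illustrative values, not measured properties of any particular pizza:

```python
# Illustrative, assumed numbers only; real cheese and crust vary a lot.
c_cheese = 2.9e3   # J/(kg K), assumed specific heat of hot, wet melted cheese
c_crust = 1.8e3    # J/(kg K), assumed specific heat of dry crust
m = 0.010          # kg of food in contact with the roof of your mouth
dT = 60.0          # K, cooling from a ~95 C pizza to a ~35 C mouth

for name, c in (("cheese", c_cheese), ("crust", c_crust)):
    print(f"{name}: gives up about {m * c * dT:.0f} J while cooling")  # 1740 J vs 1080 J
```

The thermal conductivity then sets how quickly that heat is actually delivered to your palate.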
{ "source": [ "https://physics.stackexchange.com/questions/43722", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/902/" ] }
43,733
In QFT a polynomial (of degree >2) in the fields is said to be an interaction term, e.g. $\lambda\phi^4$. Question: Is it possible to give an interpretation to terms like $\frac{1}{\phi^n}$ (for $n\in\mathbb{N}$)? Cheers
{ "source": [ "https://physics.stackexchange.com/questions/43733", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/10335/" ] }
43,813
Most people know the famous equation: $$E=mc^2$$ What was Einstein's line of reasoning behind this equation, which helped us discover so much about our world?
You can find the shortest and easiest derivation of this result in the paper where it was released by Einstein himself (what better reference can you find?) in 1905. It is not the main paper of Special Relativity, but a short document he added shortly afterwards. A. Einstein, Ist die Trägheit eines Körpers von seinem Energieinhalt Abhängig?, Annalen der Physik 18 (1905) 639 . A PDF file of the English translation Does the Inertia of a Body Depend upon its Energy-Content? is available here . (hattip: user53209.) It is a delightful document to read. There are no dramatic references to huge power release nor anything similar. He simply states after the derivation "If a body gives the energy away $L$ in form of radiation, then its mass decreases in an amount $L/V^{2}$ (...) the mass of a body is a measure for its energy content (...) One can not exclude the possibility that, with the bodies whose energy content changes rapidly, for example radium salts, a proof of the theory will be found (...) If the theory adjusts to the facts, then the radiation transports inertia between emitters and absorbers." (Note that Einstein wrote $V$ for the speed of light.) Google for that short paper and see the derivation yourself; it is very easy. The Minkowski four-dimensional spacetime had not yet been incorporated into special relativity, so the equations are formally very simple, easy to follow with little mathematical training.
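For readers who want the skeleton of the argument before opening the paper (paraphrased in modern notation, with $c$ in place of Einstein's $V$): a body at rest emits two light pulses of total energy $L$ in opposite directions, so it stays at rest. Viewed from a frame moving at speed $v$, the emitted radiation carries the energy $L/\sqrt{1-v^2/c^2}$, so the body's kinetic energy must drop by $$ K_0 - K_1 = \frac{L}{\sqrt{1-v^2/c^2}} - L \approx \frac{1}{2}\frac{L}{c^2}v^2, $$ and comparing with $K = \frac{1}{2}mv^2$ gives $\Delta m = L/c^2$.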
{ "source": [ "https://physics.stackexchange.com/questions/43813", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/15576/" ] }
43,853
I have been wondering about the axiom of choice and how it relates to physics. In particular, I was wondering how many (if any) experimentally-verified physical theories require the axiom of choice (or well-ordering) and if any theories actually require constructability. As a math student, I have always been told the axiom of choice is invoked because of the beautiful results that transpire from its assumption. Do any mainstream physical theories require AoC or constructability? If so, how do they require AoC or constructability?
No, nothing in physics depends on the validity of the axiom of choice because physics deals with the explanation of observable phenomena. Infinite collections of sets – and they're what the axiom of choice is about – are obviously not observable (we only observe a finite number of objects), so experimental physics may say nothing about the validity of the axiom of choice. If it could say something, it would be very paradoxical, because the axiom of choice is about pure maths, and moreover one may prove that set theory with the axiom of choice and set theory with its negation are equiconsistent. Theoretical physics is no different because it deals with various well-defined, "constructible" objects such as spaces of real or complex functions or functionals. For a physicist, just like for an open-minded evidence-based mathematician, the axiom of choice is a matter of personal preferences and "beliefs". A physicist could say that any non-constructible object, like a particular selected "set of elements" postulated to exist by the axiom of choice, is "unphysical". In mathematics, the axiom of choice may simplify some proofs but if I were deciding, I would choose a stronger framework in which the axiom of choice is invalid. A particular advantage of this choice is that one can't prove the existence of unmeasurable sets in the Lebesgue theory of measure. Consequently, one may add a very convenient and elegant extra axiom that all subsets of real numbers are measurable – an advantage that physicists are more likely to appreciate because they use measures often, even if they don't speak about them.
{ "source": [ "https://physics.stackexchange.com/questions/43853", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/9920/" ] }
44,017
Could it be possible that the mass of the proton can be calculated by a series of integer sequences? Or is this just a curiosity? Edit September 18, 2019 --- The most recent mass of the proton has diverged from this summation. It's a curiosity! $$\sum_{m=1}^{\infty } \frac{1}{(m^2+1)_{2m}}=$$ NSum[1/Pochhammer[m^2+1,2m], {m,1,\[Infinity]}, WorkingPrecision -> 50] Edit first eight digits match as of 2016. Question at math.SE First seven digits match the proton's mass in kilograms. $1.6726218229590580987863882056891582636342622102204\times10^{-27}$ $1.672621\times10^{-27}$ - from OEIS revised 11/15/12 $1.672621777\times10^{-27}$ - from Wikipedia What's to say that sometime in the future, the proton's mass won't be made more accurate by adding $4.5\times10^{-35}$ to the current number? Edit to explain motivation Whenever I get a result I don't recognize, I look it up on OEIS. I found this number. I posted on Mathematica.SE with the intention of asking for advice on how to prove that it converges. That would make this number a constant. If this is a "fluke" or the result of "small numbers," it's still worth exploring. Edit: It does converge. Final Thoughts $f_{p}=0.16726218229590580987863882056891582636342622102204$ is the 0-dimensional value of a fractal known as the Hilbert Curve. To get the minimal 3-dimensional value: $f \times 10^{((dimension+1)!)}$ where $0\le dimension \le 3$. This results in the value for a $1\times 1\times 1$ cube (coincidentally, the definition of the gram.) To get kilograms: $f \times 10^{((dimension+1)!+3)}$. I posit that the fractalness is the stabilizing influence on the proton. Coda I agree with everyone that I have been wrong-headed about the importance of this constant. I have posted the constant on OEIS A219733. Thanks for your patience.
To formalize dushya's comment as an answer: Since the kilogram is an arbitrary, man-made unit, the actual numerical value of the proton mass in kilograms is meaningless (i.e. it's as good as its value in pounds, ounces, stones, solar masses, $\textrm{MT}/c^2$, etc.). The true fundamental constants of nature are dimensionless: they have the same value in every unit system. Thus dimensional constants like $c$, $\hbar$, $G$, and indeed $m_p$ and $m_e$, are not very meaningful and can be set to $1$ with a judicious choice of units (which is done quite often). True fundamental constants are often ratios of dimensional quantities such as the fine structure constant, $$\alpha=\frac{e^2/4\pi\epsilon_0}{\hbar c},$$ which quantifies how strong, on a quantum scale, the electromagnetic interaction is. In terms of mass, the constants you'd like to predict are things like the ratio $m_p/m_e\approx 1800$, and so on. Given that, the formula you have found is just a fluke: a consequence of the fact that we chose as our basic unit of mass the mass of a cube of water whose sides measure one hundred-millionth of a quarter of a meridian. EDIT, given the long comment thread: @Fred, let me try and rephrase this a bit to see if I can bring out the arbitrariness we're talking about well up to the surface. The real number you have discovered is the inverse of the one you posted: $$\frac{10^{26}}{\sum_{m=1}^\infty \frac{1}{(m^2+1)_{2m}}}\approx 5.978638 \times 10^{26},$$ which appears to approximate within experimental error the number of protons and neutrons that will fit - at sea level and at "room" temperature - a cubical box about yea big on a side containing that particular common chemical that you find in drinking fountains, kitchen sinks, lakes, and even falling out of the sky (on Earth) rather often. Since the proton really is quite fundamental, any stabilizing influence of the fractalness needs to account for the size of the Earth, its predominant climate a hundred years ago, the abundance of water in it, and the detailed chemical state of the brains of a number of mainly French gentlemen that sat down a while ago to try and make unit systems (which are always arbitrary) at least simple to work with.
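As an aside on the numerics only (this says nothing about the kilogram coincidence): the sum converges extremely fast, since each term is bounded by $(m^2+1)^{-2m}$, and it is easy to reproduce, e.g. with mpmath:

```python
from mpmath import mp, nsum, rf, inf

mp.dps = 30  # working precision in decimal digits
s = nsum(lambda m: 1 / rf(m**2 + 1, 2*m), [1, inf])  # rf = rising factorial (Pochhammer)
print(s)  # 0.167262182295905809878638820569...
```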
{ "source": [ "https://physics.stackexchange.com/questions/44017", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/15901/" ] }
44,196
I am a student studying Mathematics with no prior knowledge of Physics whatsoever except for very simple equations. I would like to ask, due to my experience with Mathematics: Is there a set of axioms to which it adheres? In Mathematics, we have given sets of axioms, and we build up equations from these sets. How does one come up with seemingly simple equations that describe physical processes in nature? I mean, it's not like you can see an apple falling and intuitively come up with an equation for motion... Is there something to build up hypotheses from, and how are they proven, if the only way of verifying the truth is to do it experimentally? Is Physics rigorous?
No, physics is not rigorous in the sense of mathematics. There are standards of rigor for experiments, but that is a different kind of thing entirely. That is not to say that physicists just wave their hands in their arguments [only sometimes ;) ], but rather that it does not come even close to a formal axiomatized foundation like in mathematics. Here's an excerpt from R. Feynman's lecture The Relation of Mathematics and Physics , available on YouTube, which is also present in his book, Character of Physical Law (Ch. 2): There are two kinds of ways of looking at mathematics, which for the purposes of this lecture, I will call the Babylonian tradition and the Greek tradition. In Babylonian schools in mathematics, the student would learn something by doing a large number of examples until he caught on to the general rule. Also, a large amount of geometry was known... and some degree of argument was available to go from one thing to another. ... But Euclid discovered that there was a way in which all the theorems of geometry could be ordered from a set of axioms that were particularly simple... The Babylonian attitude... is that you have to know all the various theorems and many of the connections in between, but you never really realized that it could all come up from a bunch of axioms... [E]ven in mathematics, you can start in different places. ... The mathematical tradition of today is to start with some particular ones which are chosen by some kind of convention to be axioms and then to build up the structure from there. ... The method of starting from axioms is not efficient in obtaining the theorems. ... In physics we need the Babylonian methods, and not the Euclidean or Greek method. The rest of the lecture is also interesting and I recommend it. He goes on (with an example of deriving conservation of angular momentum from Newton's law of gravitation and having it generalized): We can deduce (often) from one part of physics, like the law of gravitation, a principle which turns out to be much more valid than the derivation. This doesn't happen in mathematics, that the theorems come out in places where they're not supposed to be.
{ "source": [ "https://physics.stackexchange.com/questions/44196", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/15963/" ] }
44,732
String Theory is formulated in 10 or 11 (or 26?) dimensions where it is assumed that all of the space dimensions except for 3 (large) space dimensions and 1 time dimension are a compact manifold with very small dimensions. The question is what is assumed about the curvature of the large 3 space and 1 time dimensions? If these dimensions are assumed to be flat, then how is String Theory ever able to reproduce the equations of General Relativity which require curved space time in the presence of mass-energy (of course, the actual source term for General Relativity is the stress energy tensor). On the other hand if String Theory is formulated on a general curved space-time with an unknown metric (usually signified by $g_{\mu\nu}$) how do the equations of General Relativity that put constraints on $g_{\mu\nu}$ arise from string theory? It is well known that General Relativity requires a spin-2 massless particle as the "force mediation" particle (similar to the photon as the spin-1 massless force mediation particle of electromagnetism). It is also well known that String Theory can accommodate the purported spin-2 massless particle as the oscillation of a closed string. But how does this graviton particle relate to the curvature of the large dimensions of space-time? I am aware that " How does String Theory predict Gravity? " is somewhat similar to this question, but I do not think it actually contains an answer to this question so please don't mark it as a duplicate question. I would especially appreciate an answer that could be understood by a non-theoretical ("String Theory") physicist - hopefully the answer would be at a level higher than a popular non-mathematical explanation. In other words, assume the reader of the answer understands General Relativity and particle physics, but not String Theory. Update from Comment to Clarify: If you start with flat space then $g_{\mu\nu}$ isn't the metric tensor since you assumed flat space. If you start with arbitrarily curved space why would and how could you prove that the components of the graviton give you the metric tensor? I am interested in the strongly curved space case since that is where GR differs the most from Newtonian gravity. In flat space you could sort of consider weak Newtonian gravity to be the result of the exchange of massless spin-2 particles. But strong gravity needs actual space curvature to be equivalent to GR.
String theory may be considered as a framework to calculate scattering amplitudes (or other physically meaningful, gauge-invariant quantities) around a flat background; or any curved background (possibly equipped with nonzero values of other fields) that solves the equations of motion. The curvature of spacetime is physically equivalent to a coherent state (condensate) of closed strings whose internal degrees of freedom are found in the graviton eigenstates and whose zero modes and polarizations describe the detailed profile $g_{\mu\nu}(X^\alpha)$. Einstein's equations arise as equations for the vanishing of the beta-functions – derivatives of the (continuously infinitely many) world sheet coupling constants $g_{\mu\nu}(X^\alpha)$ with respect to the world sheet renormalization scale – which is needed for the scaling conformal symmetry of the world sheet (including the quantum corrections), a part of the gauge symmetry constraints of the world sheet theory. Equivalently, one may realize that the closed strings are quanta of a field and calculate their interactions in an effective action from their scattering amplitudes at any fixed background. The answer is, once again, that the low-energy action is the action of general relativity; and the diffeomorphism symmetry is actually exact. It is not a surprise that the two methods produce the same answer; it is guaranteed by the state-operator correspondence, a mathematical fact about conformal field theories (such as the theory on the string world sheet). The relationship between the spacetime curvature and the graviton mode of the closed string is that the former is the condensate of the latter. They're the same thing. They're provably the same thing. Adding closed string excitations to a background is the only way to change the geometry (and curvature) of this background. (This is true for all of other physical properties; everything is made out of strings in string theory.) On the contrary, when we add closed strings in the graviton mode to a state of the spacetime, their effect on other gravitons and all other particles is physically indistinguishable from a modification of the background geometry. Adjustment of the number and state of closed strings in the graviton mode is the right and only way to change the background geometry. See also http://motls.blogspot.cz/2007/05/why-are-there-gravitons-in-string.html?m=1 Let me be a more mathematical here. The world sheet theory in a general background is given by the action $$ S = \int d^2\sigma\,g_{\mu\nu}(X^\alpha(\sigma)) \partial_\alpha X^\mu(\sigma)\partial^\alpha X^\nu(\sigma) $$ It is a modified Klein-Gordon action for 10 (superstring) or 26 (bosonic string theory) scalar fields in 1+1 dimensions. The functions $g_{\mu\nu}(X^\alpha)$ define the detailed theory; they play the role of the coupling constants. The world sheet metric may always be (locally) put to the flat form, by a combination of the 2D diffeomorphisms and Weyl scalings. Now, the scattering amplitudes in (perturbative) string theory are calculated as $$ A = \int {\mathcal D} h_{\alpha\beta}\cdots \exp(-S)\prod_{i=1}^n \int d^2\sigma V_i $$ We integrate over all metrics on the world sheet, add the usual $\exp(-S)$ dependence on the world sheet action (Euclideanized, to make it mathematically convenient by a continuation), and insert $n$ "vertex operators" $V_i$, integrated over the world sheet, corresponding to the external states. 
The key thing for your question is that the vertex operator for a graviton has the form $$V_{\rm graviton} = \epsilon_{\mu\nu}\partial_\alpha X^\mu (\sigma)\partial^\alpha X^\nu(\sigma)\cdot \exp(ik\cdot X(\sigma)).$$ The exponential, the plane wave, represents (the basis for) the most general dependence of the wave function on the spacetime, $\epsilon$ is the polarization tensor, and each of the two $\partial_\alpha X^\mu(\sigma)$ factors arises from one excitation $\alpha_{-1}^\mu$ of the closed string (or with a tilde) above the tachyonic ground state. (It's similar for the superstring but the tachyon is removed from the physical spectrum.) Because of these two derivatives of $X^\mu$, the vertex operator has the same form as the world sheet Lagrangian (kinetic term) itself, with a more general background metric. So if we insert this graviton into a scattering process (in a coherent state, so that it is exponentiated), it has exactly the same effect as if we modify the integrand by changing the factor $\exp(-S)$ by modifying the "background metric" coupling constants that $S$ depends upon. So the addition of the closed string external states to the scattering process is equivalent to not adding them but starting with a modified classical background. Whether we include the factor into $\exp(-S)$ or into $\prod V_i$ is a matter of bookkeeping – it is the question which part of the fields is considered background and which part is a perturbation of the background. However, the dynamics of string theory is background-independent in this sense. The total space of possible states, and their evolution, is independent of our choice of the background. By adding perturbations, in this case physical gravitons, we may always change any allowed background to any other allowed background. We always need some vertex operators $V_i$, in order to build the "Fock space" of possible states with particles – not all states are "coherent", after all. However, you could try to realize the opposite extreme attitude, namely to move "all the factors", including those from $\exp(-S)$, from the action part to the vertex operators. Such a formulation of string theory would have no classical background, just the string interactions. It's somewhat singular but it's possible to formulate string theory in this way, at least in the cubic string field theory (for open strings). It's called the "background-independent formulation of the string field theory": instead of the general $\int\Psi*Q\Psi+\Psi*\Psi*\Psi$ quadratic-and-cubic action, we may take the action of string field theory to be just $\int\Psi*\Psi*\Psi$ and the quadratic term (with all the kinetic terms that know about the background spacetime geometry) may be generated if the string field $\Psi$ has a vacuum condensate. Well, it's a sort of a singular one, an excitation of the "identity string field", but at least formally, it's possible: the whole spacetime may be generated purely out of stringy interactions (the cubic term), with no background geometry to start with.
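To connect the beta-function statement above to the Einstein equations explicitly, one may quote the standard one-loop result (stated here without derivation; conventions vary between references): $$ \beta^g_{\mu\nu} = \alpha'\, R_{\mu\nu} + O(\alpha'^2), $$ so demanding exact conformal invariance on the world sheet, $\beta^g_{\mu\nu} = 0$, imposes $R_{\mu\nu} = 0$: the vacuum Einstein equations, corrected at higher orders in $\alpha'$ and supplemented by source terms once the dilaton and the other background fields are switched on.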
{ "source": [ "https://physics.stackexchange.com/questions/44732", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5563/" ] }
45,053
What are some elegant ways to calculate $$[\hat{a}^{M},\hat{a}^{\dagger N}]\qquad\text{with} \qquad[\hat{a},\hat{a}^{\dagger}]=1,$$ other than brute force calculation? ( EDIT ) I got the same result as Qmechanic. I think Prathyush's suggestion should be equivalent to my correspondence up to a canonical transformation. Here is my calculation: $\begin{array}{c} \mbox{representation of }\left(\hat{a},\hat{a}^{\dagger}\right)\mbox{ on polynomial space }span\left\{ \frac{x^{n}}{\sqrt{n!}}\right\} _{n\ge0}\\ \hat{a}\left[f\left(x\right)\right]=\frac{d}{dx}f\left(x\right)\;;\;\hat{a}^{\dagger}\left[f\left(x\right)\right]=xf\left(x\right)\;;\;\left[\hat{a},\hat{a}^{\dagger}\right]\left[f\left(x\right)\right]=id\left[f\left(x\right)\right]\\ \left|0\right\rangle \sim 1\;;\;\left|n\right\rangle \sim x^{n}/\sqrt{n!} \end{array}$ $\begin{array}{c} \mbox{calculate the normal ordering }\left[\hat{a}^{M},\hat{a}^{\dagger}{}^{N}\right]\mbox{:}\\ \sim\left[\frac{d^{M}}{dx^{M}},x^{N}\right]=\frac{d^{M}}{dx^{M}}\left(x^{N}\star\right)-x^{N}\frac{d^{M}}{dx^{M}}\left(\star\right)\\ \sim\left\{ \overset{min\left\{ M,N\right\} }{\underset{k=0}{\sum}}\frac{N!}{\left(N-k\right)!}C_{M}^{k}\left(\hat{a}^{\dagger}\right)^{N-k}\left(\hat{a}\right)^{M-k}\right\} -\left(\hat{a}^{\dagger}\right)^{N}\left(\hat{a}\right)^{M}\\ \end{array}$ One comment on 02-12-2012: The representation I was using is actually related to the Bargmann representation, with the inner product for the Hilbert space (polynomials) being: $$\left\langle f\left(x\right),g\left(x\right)\right\rangle :=\int dxe^{-x^{2}}\overline{f\left(x\right)}g\left(x\right)\,,x\in\mathbb{R}\,,\, f,g\in\mathbb{C}\left[x\right]$$
The standard way is to use generating functions (in this case a la coherent states). Usually one would like the resulting formula to be normal-ordered . Recall the following version $$\tag{1} e^Ae^B~=~e^{[A,B]}e^Be^A$$ of the Baker-Campbell-Hausdorff formula . The formula (1) holds if the commutator $[A,B]$ commutes with both the operators $A$ and $B$. Put $A=\alpha a $ and $B=\beta a^{\dagger}$, where $\alpha,\beta\in\mathbb{C}$. Let $[a, a^{\dagger}]=\hbar {\bf 1}$, so that the commutator $[A,B]=\alpha\beta\hbar {\bf 1}$ is a $c$-number. Now Taylor-expand the exponential factors in eq. (1). For fixed orders $n,m\in \mathbb{N}_0$, consider terms in eq. (1) proportional to $\alpha^n\beta^m$. Deduce that the the antinormal-ordered operator $a^n(a^{\dagger})^m$ can be normal-ordered as $$\tag{2} a^n(a^{\dagger})^m~=~\sum_{k=0}^{\min(n,m)} \frac{n!m!\hbar^k}{(n-k)!(m-k)! k!}(a^{\dagger})^{m-k}a^{n-k}. $$ Finally, deduce that the normal-ordered commutator is $$\tag{3} [a^n,(a^{\dagger})^m]~=~\sum_{k=1}^{\min(n,m)} \frac{n!m!\hbar^k}{(n-k)!(m-k)! k!}(a^{\dagger})^{m-k}a^{n-k}. $$
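Eq. (3) is easy to check numerically by truncating the Fock space; here is a minimal sketch (with $\hbar = 1$; the cutoff $N$ and the test orders $n, m$ are arbitrary choices):

```python
import numpy as np
from math import factorial

N = 40                                      # Fock-space cutoff (assumed large enough)
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)  # annihilation operator: <n-1|a|n> = sqrt(n)
ad = a.T                                    # creation operator (a is real here)
mpow = np.linalg.matrix_power

def normal_ordered_commutator(n, m):
    """Right-hand side of eq. (3) with hbar = 1."""
    out = np.zeros((N, N))
    for k in range(1, min(n, m) + 1):
        c = factorial(n) * factorial(m) // (factorial(n - k) * factorial(m - k) * factorial(k))
        out += c * mpow(ad, m - k) @ mpow(a, n - k)
    return out

n, m = 3, 2
lhs = mpow(a, n) @ mpow(ad, m) - mpow(ad, m) @ mpow(a, n)
good = slice(0, N - n - m)  # stay away from the truncation edge
print(np.allclose(lhs[good, good], normal_ordered_commutator(n, m)[good, good]))  # True
```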
{ "source": [ "https://physics.stackexchange.com/questions/45053", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/10209/" ] }
45,514
I simply know that a single free quark does not exist. What is the reason that we cannot get a free quark? And if we can't get a free quark, then what is a "single top quark"?
A free quark is like the free end of a rubber band. If you want to make the ends of a rubber band free you have to pull them apart; however, the farther apart you pull them the more energy you have to put in. If you wanted to make the ends of the rubber band truly free you'd have to make the separation between them infinite, and that would require infinite energy. What actually happens is that the rubber band snaps and you get four ends instead of the two you started with. Similarly, if you take two quarks and try and pull them apart the force between them is approximately independent of distance, so to pull them apart to infinity would take infinite energy. What actually happens is that at some distance the energy stored in the field between them gets high enough to create more quarks, and instead of two separated quarks you get two pairs of quarks. This doesn't happen when you pull apart a proton and electron because the force between them falls according to the inverse square law. The difference between the electron/proton pair and a pair of quarks is that the force between the quarks doesn't fall according to the inverse square law. Instead at sufficiently long distances it becomes roughly constant. I don't think this is fully understood (it certainly isn't fully understood by me :-), but it's thought to be because the lines of force in the quark-quark field represent virtual gluons, and gluons attract each other. This means the lines of force collect together to form a flux tube. By contrast the electron-proton force is transmitted by virtual photons and photons do not attract each other. Finally, top quarks are usually produced as a top anti-top pair. It is possible to create a single top quark, but it's always paired with a quark of a different type so you aren't creating a free quark.
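To attach one formula to the rubber-band picture (the number below is the commonly quoted lattice-QCD ballpark, used here only to set the scale): at large separations the quark-antiquark potential grows linearly, $$ V(r) \approx \sigma r, \qquad \sigma \approx 1\ \mathrm{GeV/fm}, $$ so stretching the flux tube by a couple of femtometres already stores enough energy to create a light quark-antiquark pair out of the vacuum, and the "rubber band" snaps instead of yielding free ends.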
{ "source": [ "https://physics.stackexchange.com/questions/45514", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/12502/" ] }
45,578
I was watching a YouTube video the other day where an economist said that he challenged his physics professor on this question back when he was in school. His professor said each scenario is the same, while he said that they are different, and he said he supplied a proof showing otherwise. He didn't say whether or not the cars are the same mass, but I assumed they were. To state it more clearly, in the first instance each car is traveling at 50mph in the opposite direction and they collide with each other. In the second scenario, a car travels at 100 mph and crashes into a brick wall. Which one is "worse"? When I first heard it, I thought, "of course they're the same!" But then I took a step back and thought about it again. It seems like in the first scenario the total energy of the system is the KE of the two cars, or $\frac{1}{2}mv^2 + \frac{1}{2}mv^2 = mv^2$. In the second scenario, it's the KE of the car plus wall, which is $\frac{1}{2}m(2v)^2 + 0 = 2mv^2$. So the car crashing into the wall has to absorb (and dissipate via heat) twice as much energy, so crashing into the wall is in fact worse. Is this correct? To clarify, I'm not concerned with the difference between a wall and a car, and I don't think that's what the question is getting at. Imagine instead that in the second scenario, a car is crashing at 100mph into the same car sitting there at 0mph (with its brakes on, of course). The first scenario is the same, two of the same cars going 50mph in opposite directions collide. Are those two situations identical? PS: This scenario is also covered in an episode of MythBusters.
I don't think any of the other answers have made the following point clear enough, so I am going to give it a try. Both scenarios are very similar before the collision, but they differ greatly afterwards... From a stationary reference, you see the cars driving towards each other at 50mph, but of course if you choose a reference frame moving with the first car, then the second will be headed toward it at 100 mph. How is this different from the wall scenario? Well, from a stationary reference frame, after the crash both cars remain at rest, so the kinetic energy dissipated is $2\times \frac{1}{2}mv^2$. From the reference frame moving with the first car, the kinetic energy before the crash is $\frac{1}{2}m(2v)^2=4\times\frac{1}{2}mv^2$, but after the crash the cars do not remain at rest, but keep moving in the direction of the second car at half the speed. So of course the kinetic energy after the crash is $2\times\frac{1}{2}mv^2$, and the total kinetic energy lost in the crash is the same as when considering a stationary reference frame. In the car against a wall, you do have the full dissipation of a kinetic energy of $4\times\frac{1}{2}mv^2$.
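Here is the same bookkeeping as a quick numerical sketch (the mass and speed are illustrative values):

```python
# Kinetic-energy bookkeeping for the head-on crash, in two reference frames.
m, v = 1000.0, 22.35  # kg and m/s (22.35 m/s is roughly 50 mph)

# Ground frame: cars at +v and -v before the crash, both at rest afterwards.
ke_before = 0.5 * m * v**2 + 0.5 * m * v**2
ke_after = 0.0

# Frame riding along with car 1: car 2 approaches at 2v; afterwards the wreck
# moves at v in this frame, since it is at rest in the ground frame.
ke_before_moving = 0.5 * m * (2 * v)**2
ke_after_moving = 2 * (0.5 * m * v**2)

print(ke_before - ke_after)                # ~5.0e5 J dissipated
print(ke_before_moving - ke_after_moving)  # the same ~5.0e5 J
```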
{ "source": [ "https://physics.stackexchange.com/questions/45578", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/15684/" ] }
45,644
It is generally assumed that there is no limit on how many bosons are allowed to occupy the same quantum mechanical state. However, almost every boson encountered in every-day physics is not a fundamental particle (with the photon being the most prominent exception). They are instead composed of a number of fermions, which can not occupy the same state. Is it possible for more than one of these composite bosons to be in the same state even though their constituents are not allowed to be in the same state? If the answer is "yes", how does this not contradict the more fundamental viewpoint considering fermions?
This is a nice puzzle--- but the answer is simple: the composite bosons can occupy the same state when the state is spatially delocalized on a scale larger than the scale of the wavefunction of the fermions inside, but they feel a repulsive force which prevents them from being at the same spatial point, so that they cannot sit at the same point at the same time. The potential energy of this force is always greater than the excitation energy of the composite system, so if you force the bosons to sit at the same point, you will excite one of them, so that the composing fermions are no longer in the same state, and the two particles become distinguishable. The scale for this effective repulsion is the decay-length of the wavefunction of the composing fermions, and this repulsion is what leads matter to feel hard. The reason you haven't heard this is somewhat political--- there are people who say that the exclusion principle is not the cause of the repulsive contact forces in ordinary matter, that this force is electrostatic, and despite this being ridiculously false, nobody wants to get into the mud and argue with them. So people don't explain the fermionic exclusion principle forces properly. If you have a two-fermion composite which is net bosonic, like an H atom with a proton nucleus and spin-polarized electron, when you bring the H-atoms close, the energy of the electronic ground state is the effective Hamiltonian potential energy for the nuclei. When the nuclei are close enough so that the electronic wavefunctions have appreciable overlap, you get a strong repulsion. You can see that this repulsion is pure Pauli, because if the electrons have opposite spins, you don't get repulsion at short distances, you get attraction, and the result is that you form an H2 molecule of the two H atoms. You can see this exclusion force emerge in an exactly solvable toy model. Consider a 1d line with two attractive unit delta function potentials at positions a and -a, each with a fermion attached in the ground state. Each one has an independent ground state wavefunction that has the shape $exp(-|x|)$, but when the two are together at separation 2a, the two states are deformed, and the ground state energy for the fermions goes up. The effect is quadratic in the separation, because the ground state (one fermion) goes down in energy, and the first excited state goes up in energy, and to leading order in perturbations, the two are cancelling when both states are occupied. To next leading order, the effect is positive potential energy, a repulsion. This potential is the effective potential of the two delta functions when you make them dynamical instead of fixed. The maximum value of the repulsive potential in this model is exactly where the model breaks down, which is at a=1. At this point, the ground state is exp(2x) to the left of -1, constant between the two delta functions, then exp(-2x) to the right of 1, with energy -2, and the first excited state is constant to the left of -1, a straight line from -1 to 1, and constant past 1, with energy 0. The result is a net energy of -1 unit. This is half the binding energy of the two separated delta functions, which is -2. This effect is the exclusion repulsion, and it reconciles the fermionic substructure with the net bosonic behavior of the particle.
You can only see the substructure when the wavefunction of the boson is concentrated enough to have appreciable overlap on the scale of the composing fermion wavefunctions, and this is why you need high energies to probe the compositeness of the Higgs (or for that matter, the alpha particle). To get the wavefunctions to sit at the same point to this accuracy, you need to localize them at high energy.
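The exclusion repulsion in the toy model can also be checked numerically. The sketch below uses its own conventions ($\hbar^2/2m = 1$, with the well strength chosen so that a single isolated well binds one particle with energy $-1$), which differ from the units used above, but it shows the same effect: as the wells approach, the total energy of the two singly occupied lowest levels rises above the separated-well value of $-2$.

```python
# Two fermions in two attractive delta wells at x = +a and x = -a.
# Conventions (assumed here): hbar^2/2m = 1; each well alone binds one
# particle with kappa = 1, i.e. energy E = -kappa^2 = -1.
import numpy as np
from scipy.optimize import brentq

def total_energy(a):
    # Bound states satisfy kappa*(1 + tanh(kappa*a)) = 2 (even state)
    #                  and kappa*(1 + coth(kappa*a)) = 2 (odd state).
    k_even = brentq(lambda k: k * (1 + np.tanh(k * a)) - 2, 1e-9, 2.0)
    k_odd = brentq(lambda k: k * (1 + 1 / np.tanh(k * a)) - 2, 1e-9, 2.0)
    return -(k_even**2 + k_odd**2)  # one fermion in each of the two lowest levels

for a in (3.0, 1.5, 1.0, 0.7):
    print(a, total_energy(a))  # climbs above -2 as the wells get closer
```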
{ "source": [ "https://physics.stackexchange.com/questions/45644", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4206/" ] }
45,653
Given Newton's third law, why is there motion at all? Should not all forces even themselves out, so that nothing moves at all? When I push a table using my finger, the table applies the same force onto my finger as my finger does on the table, just in the opposing direction; nothing happens except that I feel the opposing force. But why can I push a box on a table by applying a force ( $F=ma$ ) on one side, obviously outbalancing the force the box exerts on my finger and at the same time outbalancing the friction between the box and the table? I obviously have greater mass and acceleration than, for example, the matchbox on the table, and thus I can move it, but shouldn't the third law prevent that from ever happening? Shouldn't the matchbox just accommodate said force, applying the same force to me in the opposing direction?
I think it's a great question, and enjoyed it very much when I grappled with it myself. Here's a picture of some of the forces in this scenario.$^\dagger$ The ones that are the same colour as each other are pairs of equal magnitude, opposite direction forces from Newton's third law. (W and R are of equal magnitude in opposite directions, but they're acting on the same object - that's Newton's first law in action.) While $F_{matchbox}$ does press back on my finger with an equal magnitude to $F_{finger}$, it's no match for $F_{muscles}$ (even though I've not been to the gym in years). At the matchbox, the forward force from my finger overcomes the friction force from the table. Each object has an imbalance of forces giving rise to acceleration leftwards. The point of the diagram is to make clear that the third law makes matched pairs of forces that act on different objects. Equilibrium from Newton's first or second law is about the resultant force at a single object. $\dagger$ (Sorry that the finger doesn't actually touch the matchbox in the diagram. If it had, I wouldn't have had space for the important safety notice on the matches. I wouldn't want any children to be harmed because of a misplaced force arrow. Come to think of it, the dagger on this footnote looks a bit sharp.)
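To put illustrative numbers on the matchbox: the push and the friction both act on the matchbox, while the third-law partner of the push acts on the finger, so nothing cancels on the matchbox itself.

```python
# All values below are illustrative assumptions.
m_box = 0.02        # kg, matchbox
F_finger = 0.50     # N, finger pushing on matchbox
F_friction = 0.08   # N, table friction acting on matchbox

a_box = (F_finger - F_friction) / m_box
print(f"matchbox acceleration: {a_box:.1f} m/s^2")  # 21.0 m/s^2, so it moves

# The third-law partner of F_finger is the matchbox pushing back on the
# finger with 0.50 N: a force on the finger, balanced there by the muscles.
```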
{ "source": [ "https://physics.stackexchange.com/questions/45653", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/16458/" ] }
45,759
My girlfriend and I were watching Cosmos , and something Carl Sagan said got us wondering what the farthest-away visible star is. Obviously "visible to the naked eye" is a fuzzy concept that might have many defensible answers, but hopefully not too many. To make the question a little more interesting, let's restrict to individually distinguishable stars; otherwise the answer is pretty clearly some Local Group galaxy, and there aren't many of them to check. The closest thing to a reasonable answer we came up with was this wikipedia list of stars that are more luminous than any closer star. The farthest-away star on that list with a plausibly visible apparent magnitude is Eta Carinae (7500 ly away, magnitude 4.55). However, there are several reasons why I'm not willing to consider this definitively answered: It's a wikipedia article, and a poorly sourced one at that. So I don't entirely trust it. It sorts stars by bolometric luminosity rather than visual luminosity, so perhaps there's some farther-away star whose spectrum is better-centered in the visible range. The farthest-away visible star isn't actually guaranteed to be on a list of that sort, even assuming the other two points are cleared up. Perhaps the farthest-away visible star is only barely visible, and there's some star both closer and absolutely brighter than it which makes the list. Given all these points, is it actually the case that Eta Carinae is the farthest-away visible star, or is there some visible star that's farther from us?
Following the questions raised by Rob Jeffries, I have completely rewritten my answer: What is the farthest-away star visible to the naked eye? Indeed, this question has many defensible answers. It is not just the concept " visible to the naked eye " that is fuzzy. The stars we see are seldom single objects, but rather binary and multiple star systems. Do we allow doubles and multiples? The wording of your question (" let's restrict to individually distinguishable stars ") suggests excluding multiple star systems. But the candidate farthest visible star put forward ( Eta Carinae ) is exactly that: a multiple. And like most highly luminous stars Eta Carinae is also a variable star. Should variable stars be allowed? Should we allow variable stars that in the recent past were visible to the human eye, but currently are not? If so, do we also allow cataclysmic variable stars? Do we allow novae and supernovae? Apart from all these ambiguities, as stressed by Rob Jeffries in the comments below, there is also the issue of (often considerable) uncertainty in cosmic distances. How do we handle these uncertainties? Let's first define what we mean by " visible to the naked eye ". Which stars are visible to your naked eye depends on the light pollution of the site you are observing from and the atmospheric conditions (and obviously also on your eyesight). A so-called " magnitude 6 sky " is often taken as the standard for a good dark site with no light pollution. The threshold stars you can see in such a night sky have apparent magnitude 6. So we can eliminate a key ambiguity by changing the question into "which star brighter than 6th magnitude is farthest away?". According to this article : "The farthest star we can see with our naked eye is V762 Cas in Cassiopeia at 16,308 light-years away. Its brightness is magnitude 5.8 or just above the 6th magnitude limit." This answer puts forward a variable star, but clearly excludes supernovae as that would have resulted in much larger distances (more about that later). Rob questions the apparent five-digit accuracy in this answer. A bit of research reveals that the distance figure is derived from the central value in the measured parallax of 0.22 +/- 0.59 mas (milli-arcseconds). This means that we have no more than a 50% confidence that the distance is indeed 16 kly (kilo lightyear) or more. We should not blindly accept a 50% confidence level. Rather, we should agree on a confidence level that is deemed sufficiently strict for the intended purpose of selecting the most distant star. Yet another ambiguity to resolve! I propose to use the one standard deviation upper range of the parallax measurement (in the case of V762 Cas 0.22 + 0.59 = 0.81 mas) to derive distances. This gives us an estimated distance of 4.0 kly with a confidence of about 85% that the actual distance is at least this value. (As Rob points out, a more recent parallax measurement for V762 Cas results in 1.18 +/- 0.45 mas. If we combine both parallax measurements to derive a chi-square estimate of the actual distance, we arrive at a value compatible with 4 kly.) This results in the conclusion that the often quoted V762 Cas (see e.g. here and here ) is unlikely to be the most distant naked-eye-visible star. For instance, HIP 107418 , put forward by Rob as candidate most distant star, has a lower one standard deviation upper range of parallax of 0.62 mas, corresponding to an 85% confidence distance of 5.3 kly.
I do not have the means to analyze extensive star databases, but offer this candidate most distant naked-eye-visible star: AH Sco, with a one standard deviation upper range of parallax of 0.48 mas, leading to an 85% confidence that its distance exceeds 6.8 kly. Finally, what answer do we arrive at if we allow for a broader range of variable stars, including supernovae? I propose SN 1885A at a distance of 2.6 million light years (!) as the most distant single star that was once (almost 130 years ago) visible to the naked eye.
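For reference, the parallax-to-distance conversion behind these numbers is just $d[\mathrm{pc}] = 1/p[\mathrm{arcsec}]$, with $1\ \mathrm{pc} \approx 3.26$ light-years; a quick check:

```python
# Convert a parallax in milliarcseconds to a distance in kilo-light-years.
def distance_kly(parallax_mas):
    parsec = 1.0 / (parallax_mas * 1e-3)  # milliarcseconds -> arcseconds
    return parsec * 3.26 / 1000.0         # parsecs -> kly

print(distance_kly(0.22 + 0.59))  # V762 Cas, 1-sigma upper parallax: ~4.0 kly
print(distance_kly(0.62))         # HIP 107418: ~5.3 kly
print(distance_kly(0.48))         # AH Sco: ~6.8 kly
```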
{ "source": [ "https://physics.stackexchange.com/questions/45759", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/16493/" ] }
46,015
Imagine you're teaching a first course on quantum mechanics in which your students are well-versed in classical mechanics, but have never seen any quantum before. How would you motivate the subject and convince your students that in fact classical mechanics cannot explain the real world and that quantum mechanics, given your knowledge of classical mechanics, is the most obvious alternative to try? If you sit down and think about it, the idea that the state of a system, instead of being specified by the finitely many particles' position and momentum, is now described by an element of some abstract (rigged) Hilbert space and that the observables correspond to self-adjoint operators on the space of states is not at all obvious. Why should this be the case, or at least, why might we expect this to be the case? Then there is the issue of measurement, which is even more difficult to motivate. In the usual formulation of quantum mechanics, we assume that, given a state $|\psi \rangle$ and an observable $A$, the probability of measuring a value between $a$ and $a+da$ is given by $|\langle a|\psi \rangle |^2da$ (and furthermore, if $a$ is not an eigenvalue of $A$, then the probability of measuring a value in this interval is $0$). How would you convince your students that this had to be the case? I have thought about this question of motivation for a couple of years now, and so far, the only answers I've come up with are incomplete, not entirely satisfactory, and seem to be much more non-trivial than I feel they should be. So, what do you guys think? Can you motivate the usual formulation of quantum mechanics using only classical mechanics and minimal appeal to experimental results? Note that, at some point, you will have to make reference to experiment. After all, this is the reason why we needed to develop quantum mechanics. In principle, we could just say "The Born Rule is true because it's experimentally verified.", but I find this particularly unsatisfying. I think we can do better. Thus, I would ask that when you do invoke the results of an experiment, you do so to only justify fundamental truths, by which I mean something that cannot itself just be explained in terms of more theory. You might say that my conjecture is that the Born Rule is not a fundamental truth in this sense, but can instead be explained by more fundamental theory, which itself is justified via experiment. Edit : To clarify, I will try to make use of a much simpler example. In an ideal gas, if you fix the volume, then the temperature is proportional to pressure. So we may ask "Why?". You could say "Well, because experiment.", or alternatively you could say "It is a trivial corollary of the ideal gas law.". If you choose the latter, you can then ask why that is true. Once again, you can just say "Because experiment." or you could try to prove it using more fundamental physical truths (using the kinetic theory of gases, for example). The objective, then, is to come up with the most fundamental physical truths, prove everything else we know in terms of those, and then verify the fundamental physical truths via experiment. And in this particular case, the objective is to do this with quantum mechanics.
I am late to this party here, but I can maybe advertize something pretty close to a derivation of quantum mechanics from pairing classical mechanics with its natural mathematical context, namely with Lie theory . I haven't had a chance yet to try the following on first-year students, but I am pretty confident that with just a tad more pedagogical guidance thrown in as need be, the following should make for a rather satisfactory motivation for any student with a little bit of mathematical/theoretical physics inclination. For more along the following lines see at nLab:quantization . Quantization of course was and is motivated by experiment, hence by observation of the observable universe: it just so happens that quantum mechanics and quantum field theory correctly account for experimental observations, where classical mechanics and classical field theory gives no answer or incorrect answers. A historically important example is the phenomenon called the “ultraviolet catastrophe”, a paradox predicted by classical statistical mechanics which is not observed in nature, and which is corrected by quantum mechanics. But one may also ask, independently of experimental input, if there are good formal mathematical reasons and motivations to pass from classical mechanics to quantum mechanics. Could one have been led to quantum mechanics by just pondering the mathematical formalism of classical mechanics? (Hence more precisely: is there a natural Synthetic Quantum Field Theory?) The following spells out an argument to this extent. It will work for readers with a background in modern mathematics, notably in Lie theory, and with an understanding of the formalization of classical/prequantum mechanics in terms of symplectic geometry. So to briefly recall, a system of classical mechanics/prequantum mechanics is a phase space, formalized as a symplectic manifold $(X,ω)$ . A symplectic manifold is in particular a Poisson manifold, which means that the algebra of functions on phase space $X$ , hence the algebra of classical observables, is canonically equipped with a compatible Lie bracket: the Poisson bracket. This Lie bracket is what controls dynamics in classical mechanics. For instance if $H\in C^{∞}(X)$ is the function on phase space which is interpreted as assigning to each configuration of the system its energy – the Hamiltonian function – then the Poisson bracket with $H$ yields the infinitesimal time evolution of the system: the differential equation famous as Hamilton's equations. To take notice of here is the infinitesimal nature of the Poisson bracket. Generally, whenever one has a Lie algebra $\mathfrak{g}$ , then it is to be regarded as the infinitesimal approximation to a globally defined object, the corresponding Lie group (or generally smooth group) $G$ . One also says that $G$ is a Lie integration of $\mathfrak{g}$ and that $\mathfrak{g}$ is the Lie differentiation of $G$ . Therefore a natural question to ask is: Since the observables in classical mechanics form a Lie algebra under Poisson bracket, what then is the corresponding Lie group? The answer to this is of course “well known” in the literature, in the sense that there are relevant monographs which state the answer. But, maybe surprisingly, the answer to this question is not (at time of this writing) a widely advertized fact that would have found its way into the basic educational textbooks. 
The answer is that this Lie group which integrates the Poisson bracket is the “quantomorphism group”, an object that seamlessly leads to the quantum mechanics of the system. Before we say this in more detail, we need a brief technical aside: of course Lie integration is not quite unique. There may be different global Lie group objects with the same Lie algebra. The simplest example of this is already the one of central importance for the issue of quantization, namely the Lie integration of the abelian line Lie algebra $\mathbb{R}$ . This has essentially two different Lie groups associated with it: the simply connected translation group, which is just $\mathbb{R}$ itself again, equipped with its canonical additive abelian group structure, and the discrete quotient of this by the group of integers, which is the circle group $$ U(1) = \mathbb{R}/\mathbb{Z} \,. $$ Notice that it is the discrete and hence “quantized” nature of the integers that makes the real line become a circle here. This is not entirely a coincidence of terminology, but can be traced back to the heart of what is “quantized” about quantum mechanics. Namely one finds that the Poisson bracket Lie algebra $\mathfrak{poiss}(X,ω)$ of the classical observables on phase space is (for X a connected manifold) a Lie algebra extension of the Lie algebra $\mathfrak{ham}(X)$ of Hamiltonian vector fields on $X$ by the line Lie algebra: $$ \mathbb{R} \longrightarrow \mathfrak{poiss}(X,\omega) \longrightarrow \mathfrak{ham}(X) \,. $$ This means that under Lie integration the Poisson bracket turns into a central extension of the group of Hamiltonian symplectomorphisms of $(X,ω)$ . And either it is the fairly trivial non-compact extension by $\mathbb{R}$ , or it is the interesting central extension by the circle group $U(1)$ . For this non-trivial Lie integration to exist, $(X,ω)$ needs to satisfy a quantization condition which says that it admits a prequantum line bundle. If so, then this $U(1)$ -central extension of the group $Ham(X,\omega)$ of Hamiltonian symplectomorphisms exists and is called… the quantomorphism group $QuantMorph(X,\omega)$ : $$ U(1) \longrightarrow QuantMorph(X,\omega) \longrightarrow Ham(X,\omega) \,. $$ While important, for some reason this group is not very well known, which is striking, because there is a small subgroup of it which is famous in quantum mechanics: the Heisenberg group. More exactly, whenever $(X,\omega)$ itself has a compatible group structure, notably if $(X,\omega)$ is just a symplectic vector space (regarded as a group under addition of vectors), then we may ask for the subgroup of the quantomorphism group which covers the (left) action of phase space $(X,\omega)$ on itself. This is the corresponding Heisenberg group $Heis(X,\omega)$ , which in turn is a $U(1)$ -central extension of the group $X$ itself: $$ U(1) \longrightarrow Heis(X,\omega) \longrightarrow X \,. $$ At this point it is worthwhile to pause for a second and note how the hallmark of quantum mechanics has appeared as if out of nowhere from just applying Lie integration to the Lie algebraic structures in classical mechanics: if we think of Lie integrating $\mathbb{R}$ to the interesting circle group $U(1)$ instead of to the uninteresting translation group $\mathbb{R}$ , then the name of its canonical basis element $1 \in \mathbb{R}$ is canonically “i”, the imaginary unit. 
Therefore one often writes the above central extension instead as follows: $$ i \mathbb{R} \longrightarrow \mathfrak{poiss}(X,\omega) \longrightarrow \mathfrak{ham}(X,\omega) $$ in order to amplify this. But now consider the simple special case where $(X,\omega)=(\mathbb{R}^{2},dp∧dq)$ is the 2-dimensional symplectic vector space which is for instance the phase space of the particle propagating on the line. Then a canonical set of generators for the corresponding Poisson bracket Lie algebra consists of the linear functions p and q of classical mechanics textbook fame, together with the constant function. Under the above Lie theoretic identification, this constant function is the canonical basis element of $i\mathbb{R}$ , hence purely Lie theoretically it is to be called “i”. With this notation then the Poisson bracket, written in the form that makes its Lie integration manifest, indeed reads $$ [q,p] = i \,. $$ Since the choice of basis element of $i\mathbb{R}$ is arbitrary, we may rescale here the i by any non-vanishing real number without changing this statement. If we write “ℏ” for this element, then the Poisson bracket instead reads $$ [q,p] = i \hbar \,. $$ This is of course the hallmark equation for quantum physics, if we interpret ℏ here indeed as Planck's constant. We see it arise here by nothing but considering the non-trivial (the interesting, the non-simply connected) Lie integration of the Poisson bracket. This is only the beginning of the story of quantization, naturally understood and indeed “derived” from applying Lie theory to classical mechanics. From here the story continues. It is called the story of geometric quantization. We close this motivation section here with some brief outlook. The quantomorphism group which is the non-trivial Lie integration of the Poisson bracket is naturally constructed as follows: given the symplectic form $ω$ , it is natural to ask if it is the curvature 2-form of a $U(1)$ -principal connection $∇$ on a complex line bundle $L$ over $X$ (this is directly analogous to Dirac charge quantization when instead of a symplectic form on phase space we consider the field strength 2-form of electromagnetism on spacetime). If so, such a connection $(L,∇)$ is called a prequantum line bundle of the phase space $(X,ω)$ . The quantomorphism group is simply the automorphism group of the prequantum line bundle, covering diffeomorphisms of the phase space (the Hamiltonian symplectomorphisms mentioned above). As such, the quantomorphism group naturally acts on the space of sections of $L$ . Such a section is like a wavefunction, except that it depends on all of phase space instead of just on the “canonical coordinates”. For purely abstract mathematical reasons (which we won’t discuss here, but see at motivic quantization for more) it is indeed natural to choose a “polarization” of phase space into canonical coordinates and canonical momenta and consider only those sections of the prequantum line bundle which depend on just the former. These are the actual wavefunctions of quantum mechanics, hence the quantum states. And the subgroup of the quantomorphism group which preserves these polarized sections is the group of exponentiated quantum observables. For instance in the simple case mentioned before where $(X,ω)$ is the 2-dimensional symplectic vector space, this is the Heisenberg group with its famous action by multiplication and differentiation operators on the space of complex-valued functions on the real line. 
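To make that last remark concrete, here is a minimal numerical sketch (not part of the text above, with all numbers chosen arbitrarily): realize $q$ as multiplication by $x$ and $p$ as $-i\hbar\,d/dx$ on a finite grid, and check that the commutator $[q,p]$ acts as $i\hbar$ on a smooth test function. The grid size, spacing, and the choice $\hbar = 1$ are illustrative assumptions; the finite grid introduces discretization error, which is why the check is only approximate.

```python
import numpy as np

hbar = 1.0
n = 400
x = np.linspace(-10.0, 10.0, n)
dx = x[1] - x[0]

Q = np.diag(x)  # position operator: multiplication by x

# momentum operator from a central-difference derivative matrix
D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2.0 * dx)
P = -1j * hbar * D

psi = np.exp(-x**2 / 2.0)        # smooth (Gaussian) test function
lhs = (Q @ P - P @ Q) @ psi      # [q, p] applied to psi
rhs = 1j * hbar * psi

# agreement to discretization accuracy, away from the grid boundary
print(np.max(np.abs(lhs[10:-10] - rhs[10:-10])))   # ~1e-3
```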
For more along these lines see at nLab:quantization .
{ "source": [ "https://physics.stackexchange.com/questions/46015", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3397/" ] }
46,237
I often hear about the wave-particle duality, and how particles exhibit properties of both particles and waves. However, I wonder, is this actually a duality? At the most fundamental level, we 'know' that everything is made up of particles, whether those are photons, electrons, or maybe even strings. That light, for example, also shows wave-like properties: why does that even matter? Don't we know that everything is made up of particles? In other words, wasn't Young wrong and Newton right, instead of them both being right?
Duality is the relationship between two entities that are claimed to be fundamentally equally important or legitimate as features of the underlying object. The precise definition of a "duality" depends on the context. For example, in string theory, a duality relates two seemingly inequivalent descriptions of a physical system whose physical consequences, when studied absolutely exactly, are absolutely identical. The wave-particle duality (or dualism) isn't far from this "extreme" form of duality. It indeed says that objects such as photons (and electromagnetic waves composed of them) and electrons exhibit both wave and particle properties, and that these are equally natural, possible, and important. In fact, we may say that there are two equivalent descriptions of particles – in the position basis and the momentum basis. The former corresponds to the particle paradigm, the latter corresponds to the wave paradigm because waves with well-defined wavelengths are represented by simple objects. It's certainly not true that Young was wrong and Newton was right. Up to the 20th century, it seemed obvious that Young was more right than Newton because light indisputably exhibits wave properties, as seen in Young's experiments and interference and diffraction phenomena in general. The same wave phenomena apply to electrons that are also behaving as waves in many contexts. In fact, the state-of-the-art "theory of almost everything" is called quantum field theory and it's based on fields as fundamental objects while particles are just their quantized excitations. A field may have waves on it and quantum mechanics just says that for a fixed frequency $f$, the energy carried in the wave must be a multiple of $E=hf$. The integer counting the multiple is interpreted as the number of particles but the objects are more fundamentally waves. One may also adopt a perspective or description in which particles look more elementary and the wave phenomena are just a secondary property of them. None of these two approaches is wrong; none of them is "qualitatively more accurate" than the other. They're really equally valid and equally legitimate – and mathematically equivalent, when described correctly – which is why the word "duality" or "complementarity" is so appropriate.
{ "source": [ "https://physics.stackexchange.com/questions/46237", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/14445/" ] }
46,318
I just had this idea of orbiting a planet just by jumping and then flying along in its orbit, kind of like Superman. So, would it be theoretically possible, and would there be a chance of such a small body holding together and remaining in one piece?
Let's assume the mass of the person plus spacesuit to be $m_1 = 100\,\mathrm{kg}$. Asteroid density: $\rho = 2\,\mathrm{g/cm^3}$ (source), that is $2000\,\mathrm{kg/m^3}$. 15 km/hour is a good common run; that's roughly $v = 4\,\mathrm{m/s}$. The orbital height is negligible compared to the radius, so assume 0 over the surface. Linear to angular velocity (1): $$ \omega = {v \over r } $$ Centripetal force (2): $$ F = m r \omega ^2 $$ Gravity force (3): $$ F= G \frac{m_1 m_2}{r^2} $$ Volume of a sphere (4): $$ V = \frac{4}{3}\pi r^3 $$ Mass of a sphere (5): $$ m_2 = V \rho = \frac{4}{3}\pi r^3 \rho $$ Combining (1), (2), (3) and reducing: $$ { m_1 r v^2 \over r^2 } = G { m_1 m_2 \over r^2 } $$ $$ r v^2 = G m_2 $$ Combining with (5): $$ r v^2 = G \frac{4}{3}\pi r^3 \rho $$ $$ r^2 = \frac{v^2}{\rho G \frac{4}{3}\pi} $$ $$ r = v \left({\frac{4}{3}\pi G \rho}\right)^{-{1 \over 2}} $$ Substituting values: $$ r = 4 \left({1.33333 \times 3.14159 \times 6.67300\times 10^{-11} \times 2000}\right)^{-{1 \over 2}} $$ That computes to roughly 5.3 kilometers. More interestingly, the radius is directly proportional to the velocity: $$ r[\mathrm{m}] = 1337\,[\mathrm{s}] \cdot v[\mathrm{m/s}] = 371.4\,[\mathrm{m \cdot h/km}] \cdot v[\mathrm{km/h}] = 597\,[\mathrm{m \cdot h/mile}] \cdot v[\mathrm{mph}] $$ So, a good walk on a 2 km radius asteroid will get you orbiting. Something to fit your bill would be Cruithne , a viable target for a space mission thanks to a very friendly orbit. Note: while at rest on Cruithne, the astronaut matching the $m_1 = 100$ kg above would be pulled down with a force of 4.5 N. That is like weighing about 450 g or 1 lb on Earth.
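A quick numeric check of the arithmetic above, using the same assumed inputs (4 m/s run, 2000 kg/m³ density):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho = 2000.0         # assumed asteroid density, kg/m^3
v = 4.0              # assumed running speed, m/s

# r = v / sqrt((4/3) * pi * G * rho), from combining equations (1)-(5)
k = 1.0 / math.sqrt(4.0 / 3.0 * math.pi * G * rho)
print(f"r/v = {k:.0f} s")                # ~1337 s, the constant in the answer
print(f"orbit radius = {k * v:.0f} m")   # ~5350 m, i.e. roughly 5.3 km
```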
{ "source": [ "https://physics.stackexchange.com/questions/46318", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/6761/" ] }
46,573
This is a follow-up to an intriguing question last year about tension in string theory . What are the strings in string theory composed of? I am serious. Strings made of matter are complex objects that require a highly specific form of long-chain inter-atomic bonding (mostly carbon based) that would be difficult to implement if the physics parameters of our universe were tweaked even a tiny bit. That bonding gets even more complicated when you add in elasticity. The vibration modes of a real string are the non-obvious emergent outcome of a complex interplay of mass, angular momentum, various conservation laws, and convenient linearities inherent in our form of spacetime. In short, a matter-based vibrating real string is the outcome of the interplay of most of the more important physics rules of our universe. Its composition -- what it is made of -- is particularly complex. Real strings are composed of a statistically unlikely form of long-chain bonding, which in turn depends on the rather unlikely properties that emerge from highly complex multiparticle entities called atoms. So how does string theory handle all of this? What are the strings in string theory made of, and what is it about this substance that makes string theories simple in comparison to the emergent and non-obvious complexities required to produce string-like vibrations in real, matter-based strings? Addendum 2012-12-28 (all new as of 2012-12-29): OK, I'm trying to go back to my original question after some apt complaints that my addendum yesterday had morphed it into an entirely new question. But I don't want to trash the great responses that addendum produced, so I'm trying to walk the razor's edge by creating an entirely new addendum that I hope expands on the intent of my question without changing it in any fundamental way. Here goes: The simplest answer to my question is that strings are pure mathematical abstractions, and so need no further explanation. All of the initial answers were variants of that answer. I truly did not expect that to happen! While such answers are sincere and certainly well-intended, I suspect that most people reading my original question will find them a bit disappointing and almost certainly not terribly insightful. They will be hoping for more, and here's why. While most of modern mathematical physics arguably is derived from materials analogies, early wave analogies tended towards placing waves within homogeneous and isotropic "water like" or "air like" media, e.g. the aether of the late 1800s. Over time and with no small amount of insight, these early analogies were transformed into sets of equations that increasingly removed the need for physical media analogies. The history of Maxwell's equations and then SR is a gorgeous example. That one nicely demonstrates the remarkable progress of the associated physics theories away from using physical media, and towards more universal mathematical constructs. In those cases I understand immediately why the outcomes are considered "fundamental." After all, they started out with clunky material-science analogies, and then managed over time to strip away the encumbering analogies, leaving for us shiny little nuggets of pure math that to this day are gorgeous to behold. 
Now in the more recent case of string theory, here's where I think the rub is for most of us who are not immersed in it on a daily basis: The very word "string" invokes the image of a vibrating entity that is a good deal more complicated and specific than some isotropic wave medium. For one thing the word string invokes (perhaps incorrectly) an image of an object localized in space. That is, the vibrations are taking place not within some isotropic field located throughout space, but within some entity located in some very specific region of space. Strings in string theory also seem to possess a rather complicated and certainly non-trivial suite of materials-like properties such as length, rigidity, tension, and I'm sure others (e.g. some analog of angular momentum?). So, again trying to keep to my original question: Can someone explain what a string in string theory is made of in a way that provides some insight into why such an unusually object-like "medium of vibration" was selected as the basis for building all of the surrounding mathematics of string theory? From one excellent comment (you know who you are!), I can even give an example of the kind of answer I was hoping for. Paraphrasing, the comment was this: "Strings vibrate in ways that are immediately reminiscent of the harmonic oscillators that have proven so useful analytically in wave and quantum theory." Now I like that style of answer a lot! For one thing, anyone who has read Feynman's section on such oscillators in his lectures will immediately get the idea. Based on that, my own understanding of the origins of strings has now shifted to something far more specific and "connectable" to historical physics, which is this: Making tuning forks smaller and smaller has been shown repeatedly in the history of physics to provide an exceptionally powerful analytical method for analyzing how various types of vibrations propagate and interact. So, why not take this idea to the logical limit and make space itself into what amounts to a huge field of very small, tuning-fork-like harmonic oscillators? Now that I can at least understand as an argument for why strings "resonated" well with a lot of physicists as an interesting approach to unifying physics. Addendum 2018-03-28: The Answer (no kidding!) This year for the first time I submitted an essay, Fundamental as Fewer Bits , to the annual FQXi foundational questions essay contest . In the essay I propose that Kolmogorov complexity provides a more automated, less human-biased way to apply Occam's Razor to physics theories, literally by trying to find the least-bits representation of the Kolmogorov sense of program-like data compression. (My thanks to Garrett Lisi for noticing the connection to Occam's Razor; I had not thought of my essay that way.) The contest, which this year goes on until May 1, 2018, proved to be much more interesting and interactive than I had anticipated. In the course of looking at other essays I dove into the details of how string theory originated. I was amazed to find out that the concept has some very solid experimental data behind it... at a scale about $10^{20}$ times larger than the one at which it is now described! As it turns out, string theory originated in some extremely interesting 1960s and 1970s experimental research on hadrons . A hadron is any particle composed of quarks , and includes both two-quark bosonic mesons and three-quark fermionic baryons such as protons , neutrons , and the more exotic $\Lambda$ particles . 
Being composed of quarks, all hadrons are of course bound together by the strong force , and therein lies the real, experimentally meaningful Answer to the question of what strings are composed of: All real, experimentally meaningful strings are composed of the strong force. It seems that most (perhaps all) hadrons have excited states in which their spins are augmented in increments of 2. For example, both the proton and neutron normally have a spin of $\frac{1}{2}$, but both also have higher-spin states of e.g. $\frac{5}{2}=\frac{1}{2}+2$ and $\frac{9}{2}=\frac{1}{2}+4$. These higher-spin states also have higher masses. Amazingly, when all the possible states are plotted in a spin-versus-mass-squared graph, the result is a beautiful set of straight lines with even spacing between the 2-spin additions. These lovely and highly unexpected lines are called Regge trajectories , and they are the true origins of string theory. Theoretical analyses showed that these remarkable regularities could be explained by assuming them to be the stationary vibration modes of a string. In fact, if you think in terms of how a skip rope can have one, two, or even more loops in it when handled by an expert, you are not too far off the mark. At the time there was hope that these remarkable string-like vibration modes might lead to a deeper understanding of both fundamental and composite particles. However, quantum chromodynamics (QCD) instead began to dominate, while Regge trajectories continued to pose theoretical problems. It looked like the end for hadron-level strings and string vibrations, despite the truly remarkable and still unexplained regularities seen in Regge trajectories. Then something very strange happened, an event that to my way of thinking was one of the least rational and most bizarre events in the entire history of physics. I call it the Deep Dive . It has features that I would more typically associate with the ancient and fascinating history of religious revelation and the founding of new religions than I would with scientific analysis. While they were not the only people involved, in 1974 physicists Scherk and Schwarz wrote a conventional-looking paper, Dual Models for Non-Hadrons , with an extremely unconventional conclusion tucked away inside. The conclusion was this: Because the two-spin increments of hadron strings bore several mathematical resemblances to the proposed properties of spin-2 gravitons (the still-hypothetical quantized particles of gravity), they were in some way one and the same thing , and the concept of string vibrations should therefore be moved from out of hadrons and into the domain of quantum gravity. This enormous leap of faith was the origin of what we now call "string theory". There was of course a "tiny" problem, in the most literal sense of the word: This abrupt leap from very real, experimentally meaningful string-like vibration modes in hadrons to gravitons plunged the needed size scales down to the experimentally inaccessible Planck foam level. This was a drop of about 20 orders of magnitude, with a comparable increase in the energy levels needed to access the proposed vibrations. Even worse, all of the severe constraints on vibration modes imposed by hadron "architectures" were instantly removed by this proposed drop, allowing the number of vibration modes of the now-abstract strings composed of a now-abstract substance (maybe mass-energy?) to explode into at least $10^{500}$ possible vacuum states . 
I explore all of this a bit more -- actually hmm, less than I just did here -- in a mini-essay attached to my FQXi essay discussion. In that mini-essay I argue that It's Time to Get Back to Real String Theory . That is, there remains to this day a very real and extraordinarily interesting data puzzle in the very existence of Regge trajectories. This is a mystery that still needs to be resolved! This data is another example of how spin is a remarkably deep and fundamental concept, one for which physics still seems to be missing some kind of critical piece or pieces. Regarding the question "What are the strings in string theory made of?", the answer could not be any clearer: For the real, experimentally meaningful strings found in hadron research in the 1960s and 1970s, they are a function of the strong force, constrained in interesting and limiting ways by the quarks that enable a string-like topology to exist in the first place. This is all very real, very meaningful physics. For the Planck-level strings that were proposed essentially by revelation, that is, by a leap of faith from experimentally meaningful physics down 20 orders of magnitude to the inaccessible level of the Planck foam, based on no more than a superficial mathematical resemblance, and with utter abandonment of any of the original tight constraints on both substance (the strong force) and vibration modes (the "topologies" of mesons and baryons), the epistemological nature of the Deep Dive now also allows me to provide a more logically precise and self-consistent answer: The substance of which Planck-level strings are composed is exactly the same as the substance that angels use to bind themselves to each other while dancing on the head of a pin. If you think that's an unfair comparison for a scientific discussion, no problem: Just state exactly what scientific experiment should be performed to prove that Planck-scale strings are not composed of the same substance that angels use to bind themselves to each other while dancing on the head of a pin. If string theory is truly science, and if the half-billion dollars of critically important research funding that has been spent on it over four decades has not been a complete waste of money, then defining a simple test to prove that Planck-level string theory is more than just an untestable religious revelation gussied up with loads of equations should be no problem at all. Right?
OP wrote(v4): [...] Strings in string theory also seem to possess a rather complicated and certainly non-trivial suite of materials-like properties such as length, rigidity, tension, and I'm sure others (e.g. some analog of angular momentum?). [...] Well, the relativistic string should not be confused with the non-relativistic material string; compare e.g. chapters 6 and 4 in Ref. 1, respectively. In contrast, the relativistic string is e.g. required to be world-sheet reparametrization-invariant, i.e. the world-sheet coordinates are no longer physical/material labels of the string, but merely unphysical gauge degrees of freedom. Moreover, in principle, all dimensionless continuous constants in string theory may be calculated from any stabilized string vacuum, see e.g. this Phys.SE answer by Lubos Motl. OP wrote(v1): What are strings made of? One answer is that it is only meaningful to answer this question if the answer has physical consequences. Popularly speaking, string theory is supposed to be the innermost Russian doll of modern physics, and there are no more dolls inside in terms of which we could explain it. However, we may be able to find equivalent formulations. For instance, Thorn has proposed in Ref. 2 that strings are made of point-like objects that he calls string bits. More precisely, he has shown that this string bit formulation is mathematically equivalent to the light-cone formulation of string theory; first in the bosonic string and later in the superstring. The corresponding formulas are indeed quadratic a la harmonic oscillators (cf. a comment by anna v) with the twist that the "Newtonian mass" of the string bit oscillators is given by light-cone $P^+$ momentum. Thorn was inspired by fishnet Feynman diagrams (think triangularized world-sheets), which were discussed in Refs. 3 and 4. However, the string bit formulation does not really answer the question What are strings made of?; it merely adds a dual description. References: B. Zwiebach, A first course in String Theory. C.B. Thorn, Reformulating String Theory with the 1/N Expansion, in Sakharov Memorial Lectures in Physics, Ed. L. V. Keldysh and V. Ya. Fainberg, Nova Science Publishers Inc., Commack, New York, 1992; arXiv:hep-th/9405069 . H.B. Nielsen and P. Olesen, Phys. Lett. 32B (1970) 203. B. Sakita and M.A. Virasoro, Phys. Rev. Lett. 24 (1970) 1146.
{ "source": [ "https://physics.stackexchange.com/questions/46573", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/7670/" ] }
46,643
If photons are spin-1 bosons, then doesn't quantum mechanics imply that the allowed values for the z-component of spin (in units of $\hbar$) are -1, 0, and 1? Why then in practice do we only use the $\pm 1$ states? I have been told that this is directly related to the two polarizations of the photon. This seems to be more of a classical argument, however, arising from the fact that Maxwell's equations do not permit longitudinal EM waves in a vacuum. I have also heard that it is related to the fact that photons have no rest mass, although I understand far less of this reasoning. What I'm looking for are elaborations on these two arguments (if I have them correct), and perhaps an argument as to how these two are equivalent (if one exists).
I can't improve on KDN's answer, but given Todd's comments this is an attempt to rephrase KDN's answer in layman's terms. A system is only in an eigenstate of spin around an axis if a rotation about the axis doesn't change the system. Take $z$ to be the direction of travel; then for a spin-1 system the $S_z = 0$ state would be symmetric under a rotation about an axis normal to the direction of travel. But this can only be the case if the momentum is zero, i.e. in the rest frame. If the system has a non-zero momentum, any rotation will change the direction of the momentum, so it won't leave the system unchanged. For a massive particle we can always find a rest frame, but for a massless particle there is no rest frame and therefore it is impossible to find a spin eigenfunction about any axis other than along the direction of travel. This applies to all massless particles, e.g. gravitons also have only two spin states.
{ "source": [ "https://physics.stackexchange.com/questions/46643", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/16785/" ] }
47,084
It seems paradoxical that the strengths of so many phenomena (Newtonian gravity, the Coulomb force) are calculable from the inverse square of the distance. However, since volume is determined by three dimensions and presumably these phenomena have to travel through all three, how is it possible that their strengths are governed by the inverse of the distance squared? The gravitational force and the intensity of light are merely 4 times weaker at 2 times the distance, but the volume of a sphere between the two is 8 times larger. Since presumably these phenomena would affect all objects in a spherical shell surrounding the source with equal intensity, they travel in all three dimensions. How come these laws do not obey an inverse-cube relationship while traveling through space?
This is not paradoxical, and it is not necessary for any physical phenomenon to obey any particular law a priori. Some phenomena do have to obey inverse-square laws (such as, particularly, the light intensity from a point source) but they are relatively limited (more on them below). Even worse, gravity and electricity don't even follow this in general! For the latter, it is only point charges in the electrostatic regime that obey an inverse-square law. For more complicated systems you will have magnetic interactions as well as corrections that depend on the shape of the charge distributions. If the systems are (globally) neutral, there will still be electrostatic interactions which will fall off as the inverse cube or faster! The van der Waals forces between molecules, for instance, are electrostatic in origin but go down as $1/r^6$. It is for systems with a conserved flux that the inverse-square law must hold, at least at large distances. If a point light source emits a fixed amount of energy per unit time, then this energy must go through every imaginary spherical surface we think up. Since their area goes up as $r^2$, the power per unit area (a.k.a. the irradiance ) must go down as $1/r^2$. In a simplified picture, this is also true for the electrostatic force, where it is the flow of virtual photons that must be conserved.
{ "source": [ "https://physics.stackexchange.com/questions/47084", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/16982/" ] }
47,091
I am confused about phase-volume contraction in dissipative systems. Please help me catch the flaw in my understanding. From a macroscopic point of view I understand that a dynamic system tends to go to an equilibrium state or a limit cycle if it's not chaotic. But now trying to understand it in phase space: 1) Consider a system in complete thermodynamic equilibrium. It is a non-dissipative system (it cannot dissipate anymore). Therefore it could be in any microstate allowed by constraints (e.g., conserved quantities) and by the Liouville theorem the probability density does not change. Therefore isn't the phase space volume accessible to this system the whole phase space (allowed by constraints)? 2) Now consider a non-equilibrium system. Its location in phase space has some peaked probability distribution based on the initial conditions, i.e., it occupies a small phase-space volume, and when it approaches equilibrium the probability of finding it becomes uniform and spreads over the complete phase space. Doesn't that mean the phase volume has expanded?
{ "source": [ "https://physics.stackexchange.com/questions/47091", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4952/" ] }
47,092
I have a device like this one: http://www.youtube.com/watch?v=kA_hw_lY-OY (All of them work the same.) I want to make it spin without touching it with my hands. I tried using magnets of several kinds, and tried to put them in many places around it, but it didn't make it spin; it just made it vibrate a little. I guess there is a point at which the "pen" will spin and keep spinning. Basically, I would prefer to make it rotate not with magnets, but with an electric current. Most likely I will use an AC generator, which also makes sense: I thought that at some point the poles would be the same, and then the pen would spin a little bit, and then the poles would be different, but alternating current changes the poles each time, so the poles will always be the same. My question is: at what point should it happen?
{ "source": [ "https://physics.stackexchange.com/questions/47092", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/13007/" ] }
47,105
Given a light pulse in vacuum containing a single photon with an energy $E=h\nu$, what is the peak value of the electric / magnetic field?
The electric and magnetic fields of a single photon in a box are in fact very important and interesting. If you fix the size of the box, then yes, you can define the peak magnetic or electric field value. It's a concept that comes up in cavity QED, and was important to Serge Haroche's Nobel Prize this year (along with a number of other researchers). In that experiment, his group measured the electric field of single photons, and of a few photons, trapped in a cavity. It's a very popular field right now. However, to have a well defined energy, you need to specify a volume. In a laser, you find an electric field for a flux of photons ( n photons per unit time), but if you confine the photon to a box you get an electric field per photon. I'll show you the second calculation because it's more interesting. Put a single photon in a box of volume $V$. The energy of the photon is $\hbar \omega$ (or $\frac{3}{2} \hbar \omega$, if you count the zero-point energy, but for this rough calculation let's ignore that). Now, equate that to the classical energy of a magnetic and electric field in a box of volume $V$: $$\hbar \omega = \frac{\epsilon_0}{2} |\vec E|^2 V + \frac{1}{2\mu_0} |\vec B|^2 V = \frac{1}{2} \epsilon_0 E_\textrm{peak}^2 V$$ There is an extra factor of $1/2$ because, typically, we're considering a standing wave. Also, I've set the magnetic and electric contributions to be equal, as should be true for light in vacuum. An interesting and related problem is the effect of a single photon on a single atom contained in the box, where the energy of the atom is $U = -\vec d \cdot \vec E$. If this sounds interesting, look up strong coupling regime , vacuum Rabi splitting , or cavity quantum electrodynamics . Incidentally, the electric field fluctuations of photons (or lack thereof!) in vacuum are responsible for the Lamb shift, a small but measurable shift in energies of the hydrogen atom.
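A numeric sketch of this estimate, solving $\hbar\omega = \tfrac{1}{2}\epsilon_0 E_\mathrm{peak}^2 V$ for $E_\mathrm{peak}$. The wavelength and mode volume below are made-up illustrative numbers, not the parameters of any particular experiment:

```python
import math

hbar = 1.054571817e-34   # J s
eps0 = 8.8541878128e-12  # F/m
c = 2.99792458e8         # m/s

lam = 780e-9             # assumed optical wavelength, m
omega = 2.0 * math.pi * c / lam
V = (10e-6) ** 3         # assumed (10 micrometre)^3 cavity volume, m^3

E_peak = math.sqrt(2.0 * hbar * omega / (eps0 * V))
print(f"E_peak ~ {E_peak:.2e} V/m")   # ~7.6e3 V/m for these choices
```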
{ "source": [ "https://physics.stackexchange.com/questions/47105", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/16995/" ] }
47,253
Why does nature always prefer low energy and maximum entropy? I've just learned electrostatics and I still have no idea why like charges repel each other. http://in.answers.yahoo.com/question/index?qid=20061106060503AAkbIfa I don't quite understand why $U$ has to be a minimum. Can someone explain?
Nature has no preferences, and therefore entropy tends to increase. Sounds paradoxical? The point is that each microscopic state (describing the exact position and velocity of each atom in the system) is equally likely. However, what we typically observe is not a micro state, but a coarse-grained description corresponding to incredibly many micro states. Certain macro states correspond to far fewer micro states than other macro states. As nature has no preference for any of these micro states, the latter macro states are far more likely to occur. The evolution to ever more likely macro states (until the most likely macro state, the equilibrium state, is reached) is called the second law of thermodynamics. The decrease of potential energy is the consequence of the first (energy conservation) and second (evolution to more likely macro states) laws of thermodynamics. As macro states with a lot of energy stored in heat (random thermal motion) contain many more micro states and are therefore much more likely, energy tends to get transferred from potential energy to thermal energy. This is observed as a decrease in potential energy.
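A toy illustration of this counting argument (a sketch with made-up numbers, not part of the original answer): put $N$ distinguishable particles in a box and, for each macro state "$n$ particles in the left half", count the micro states $\binom{N}{n}$. With no preference among micro states, the near-even splits dominate overwhelmingly.

```python
from math import comb

N = 100
total = 2 ** N
for n in (0, 10, 25, 50):
    w = comb(N, n)   # multiplicity of the macro state "n on the left"
    print(f"n = {n:3d}: {w:.3e} micro states, probability {w / total:.3e}")

# n = 50 is ~1e29 times more likely than n = 0; this lopsidedness is what
# drives the evolution to ever more likely macro states described above.
```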
{ "source": [ "https://physics.stackexchange.com/questions/47253", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/16378/" ] }
47,368
Someone once incorrectly told me that, given the speed of light is the speed limit of the universe, aliens would have to live for hundreds of years if they are to travel distances of hundreds of light years to reach Earth. In a "special relativistic" and non-expanding universe however, this is not the case. As velocity approaches the speed of light, say $v = 0.999c$, then we have $$\gamma = \frac{1}{\sqrt{1-\frac{(0.999c)^2}{c^2}}} = \frac{1}{\sqrt{1-\frac{0.998001c^2}{c^2}}} = 22.37$$ Let us assume that an alien wishes to travel 100 light years from his planet to Earth. If the alien is travelling at $v = 0.999c$, he will observe the distance between his planet and the Earth to contract, and will measure the contracted distance to be: $$\text{Distance} = \frac{100 \; \mathrm{ly}}{\gamma} = \frac{100 \; \mathrm{ly}}{22.37} = 4.47 \; \text{light years}$$ The alien will be able to travel this distance in a time of: $$\text{Time} = \text{distance}/\text{speed} = 4.47/0.999 = 4.47 \; \text{years}$$ It is easy to show that as the alien's speed increases, the time taken to travel the 100 light year distance approaches 0. It can thus be shown that thanks to length contraction and time dilation of special relativity, all parts of a special relativistic universe are accessible to an observer with a finite lifetime. We however don't live in a purely special relativistic universe. We live in an expanding universe. Given the universe is expanding, are some parts of the universe no longer theoretically accessible to observers with finite lifetimes?
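A quick check of the arithmetic in this question (a sketch; units are light years and years, with $c = 1$):

```python
import math

beta = 0.999
gamma = 1.0 / math.sqrt(1.0 - beta**2)
d_rest = 100.0                     # distance in the planets' frame, ly
d_contracted = d_rest / gamma      # distance in the traveller's frame, ly
t_onboard = d_contracted / beta    # proper time for the trip, yr

print(f"gamma = {gamma:.2f}")                            # ~22.37
print(f"contracted distance = {d_contracted:.2f} ly")    # ~4.47
print(f"trip time on board = {t_onboard:.2f} yr")        # ~4.47
```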
Your question can be translated into "if we sent a powerful omnidirectional light pulse from Earth into space right now, would there be galaxies that never see this light pulse?" The answer is "yes" . Due to the accelerated expansion of the universe, as described by the lambda-CDM model, only galaxies currently less than about 16 billion light years (the difference between the cosmological event horizon and the current distance to the particle horizon) away from us will at some time observe the light pulse. A nice visual representation of this can be found in figure 1 of this publication .
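For a sense of where a number like this comes from, here is a hedged sketch of the proper distance to the cosmological event horizon today in flat Lambda-CDM, $d_{EH} = \frac{c}{H_0}\int_1^\infty \frac{da}{a^2 E(a)}$ with $E(a)=\sqrt{\Omega_m a^{-3}+\Omega_\Lambda}$. The parameter values are round illustrative numbers, not a fit:

```python
import math
from scipy.integrate import quad

c = 2.99792458e5          # km/s
H0 = 70.0                 # assumed Hubble constant, km/s/Mpc
Om, OL = 0.3, 0.7         # assumed matter / dark-energy fractions

E = lambda a: math.sqrt(Om / a**3 + OL)
integral, _ = quad(lambda a: 1.0 / (a**2 * E(a)), 1.0, math.inf)

d_mpc = (c / H0) * integral      # proper distance today, Mpc
d_gly = d_mpc * 3.2616e-3        # 1 Mpc ~ 3.2616e6 ly
print(f"event horizon today ~ {d_gly:.1f} Gly")   # ~16 Gly
```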
{ "source": [ "https://physics.stackexchange.com/questions/47368", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/12350/" ] }
47,379
The gravitational force on your body, called your weight, pushes you down onto the floor. $$W=mg$$ So, what is the weight equation in general relativity?
Start with the Schwarzschild metric $$ds^2 = (1-\frac{r_S}{r})c^2dt^2-(1-\frac{r_S}{r})^{-1}dr^2-r^2d\Omega^2 $$ where $$r_S=\frac{2GM}{c^2} $$ A particle at rest at radius $r$ and angular parameters zero from the centre of mass has worldline $$ x^{\mu}=(t, r, 0, 0)$$ Its four-velocity is thus $$ u^{\mu}=\frac{dx^{\mu}}{d\tau}=((1-\frac{r_S}{r})^{-\frac{1}{2}}, 0, 0, 0)$$ Its four-acceleration is $$a^{\mu}= \frac{du^{\mu}}{d\tau}+\Gamma^{\mu}_{\alpha \beta}u^{\alpha}u^{\beta} $$ After looking up the Christoffel symbols, because I'm lazy, I get $$ a^{\mu} = (0, \frac{c^2r_S}{2r^2}, 0, 0)$$ So the Lorentz norm squared of the four-acceleration is $$g_{\mu \nu}a^{\mu}a^{\nu}= \frac{c^4r_S^2}{4r^4(1-\frac{r_S}{r})}=\frac{G^2M^2}{r^4(1-\frac{2GM}{c^2r})}$$ Now the proper acceleration of an object at time t is the acceleration relative to an observer in free fall, who is momentarily at rest w.r.t. the object at time t. The free fall guy is the one who is not accelerating - the object held at rest at radius r is the one who is accelerating. As we've shown, the acceleration of the object held at rest is $$\frac{GM}{r^2}\frac{1}{\sqrt{1-\frac{2GM}{c^2r}}} $$ So if you want to define a force, it would be $$F=ma=\frac{GMm}{r^2}\frac{1}{\sqrt{1-\frac{2GM}{c^2r}}} $$ As $c\rightarrow \infty$ we recover the Newtonian definition, but nobody bothers phrasing it in these terms.
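A numeric sketch of the final formula, with round illustrative values (the neutron-star numbers are assumptions, not data):

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s

def proper_acceleration(M, r):
    """GM/r^2 divided by sqrt(1 - 2GM/(c^2 r)), per the derivation above."""
    newton = G * M / r**2
    return newton / math.sqrt(1.0 - 2.0 * G * M / (c**2 * r)), newton

for name, M, r in [("Earth surface", 5.97e24, 6.371e6),
                   ("neutron star", 2.8e30, 1.2e4)]:
    a_gr, a_newton = proper_acceleration(M, r)
    print(f"{name}: Newtonian {a_newton:.4g} m/s^2, GR {a_gr:.4g} m/s^2")

# For Earth the correction is ~1 part in 10^9; for the (assumed) neutron
# star the square-root factor matters and the two values differ by ~25%.
```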
{ "source": [ "https://physics.stackexchange.com/questions/47379", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
47,919
I've consulted several books for the explanation of why $$\nabla _{\mu}g_{\alpha \beta} = 0,$$ and hence for how to derive the relation between the metric tensor and the affine connection $\Gamma ^{\sigma}_{\mu \beta} $ $$\Gamma ^{\gamma} _{\beta \mu} = \frac{1}{2} g^{\alpha \gamma}(\partial _{\mu}g_{\alpha \beta} + \partial _{\beta} g_{\alpha \mu} - \partial _{\alpha}g_{\beta \mu}).$$ But I'm getting nowhere. Maybe I have to go deeper into the concepts of manifolds.
The connection is chosen so that the covariant derivative of the metric is zero. The vanishing covariant metric derivative is not a consequence of using "any" connection; it's a condition that allows us to choose a specific connection $\Gamma^{\sigma}_{\mu \beta}$. You could in principle have connections for which $\nabla_{\mu}g_{\alpha \beta}$ did not vanish. But we specifically want a connection for which this condition is true because we want a parallel transport operation which preserves angles and lengths.
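A small symbolic check of this statement (a sketch using the unit 2-sphere as an example): build the connection from the formula quoted in the question and verify $\nabla_\mu g_{\alpha\beta} = 0$.

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])   # unit 2-sphere metric
ginv = g.inv()
n = 2

def Gamma(c, a, b):
    # (1/2) g^{cd} (d_a g_{db} + d_b g_{da} - d_d g_{ab})
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[c, d]
        * (sp.diff(g[d, b], x[a]) + sp.diff(g[d, a], x[b]) - sp.diff(g[a, b], x[d]))
        for d in range(n)))

# nabla_m g_ab = d_m g_ab - Gamma^s_{ma} g_sb - Gamma^s_{mb} g_as
for m in range(n):
    for a in range(n):
        for b in range(n):
            cov = sp.diff(g[a, b], x[m]) \
                - sum(Gamma(s, m, a) * g[s, b] for s in range(n)) \
                - sum(Gamma(s, m, b) * g[a, s] for s in range(n))
            assert sp.simplify(cov) == 0

print("nabla g = 0 holds for the Levi-Civita connection of the 2-sphere")
```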
{ "source": [ "https://physics.stackexchange.com/questions/47919", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/8491/" ] }
47,934
In Griffiths' Intro to QM [1] he gives the eigenfunctions of the Hermitian operator $\hat{x}=x$ as being $$g_{\lambda}\left(x\right)~=~B_{\lambda}\delta\left(x-\lambda\right)$$ (cf. last formula on p. 101). He then says that these eigenfunctions are not square integrable because $$\int_{-\infty}^{\infty}g_{\lambda}\left(x\right)^{*}g_{\lambda}\left(x\right)dx ~=~\left|B_{\lambda}\right|^{2}\int_{-\infty}^{\infty}\delta\left(x-\lambda\right)\delta\left(x-\lambda\right)dx ~=~\left|B_{\lambda}\right|^{2}\delta\left(\lambda-\lambda\right) ~\rightarrow~\infty$$ (cf. second formula on p. 102). My question is, how does he arrive at the final term, more specifically, where does the $\delta\left(\lambda-\lambda\right)$ bit come from? My total knowledge of the Dirac delta function was gleaned earlier on in Griffiths and extends to just about understanding $$\tag{2.95}\int_{-\infty}^{\infty}f\left(x\right)\delta\left(x-a\right)dx~=~f\left(a\right)$$ (cf. second formula on p. 53). References: D.J. Griffiths, Introduction to Quantum Mechanics, (1995) p. 101-102.
Well, the Dirac delta function $\delta(x)$ is a distribution , also known as a generalized function. One can e.g. represent $\delta(x)$ as a limit of a rectangular peak with unit area, width $\epsilon$, and height $1/\epsilon$; i.e. $$\tag{1} \delta(x) ~=~ \lim_{\epsilon\to 0^+}\delta_{\epsilon}(x), $$ $$\tag{2} \delta_{\epsilon}(x)~:=~\frac{1}{\epsilon} \theta(\frac{\epsilon}{2}-|x|) ~=~\left\{ \begin{array}{ccc} \frac{1}{\epsilon}&\text{for}& |x|<\frac{\epsilon}{2}, \\ \frac{1}{2\epsilon}&\text{for}& |x|=\frac{\epsilon}{2}, \\ 0&\text{for} & |x|>\frac{\epsilon}{2}, \end{array} \right. $$ where $\theta$ denotes the Heaviside step function with $\theta(0)=\frac{1}{2}$ . The product $\delta(x)^2$ of the two Dirac delta distributions does strictly speaking not$^1$ make mathematical sense, but for physical purposes, let us try to evaluate the integral of the square of the regularized delta function $$\tag{3} \int_{\mathbb{R}}\! dx ~\delta_{\epsilon}(x)^2 ~=~\epsilon\cdot\frac{1}{\epsilon}\cdot\frac{1}{\epsilon} ~=~\frac{1}{\epsilon} ~\to~ \infty \quad \text{for} \quad \epsilon~\to~ 0^+. $$ The limit is infinite, as Griffiths claims. It should be stressed that in the conventional mathematical theory of distributions, eq. (2.95) is a priori only defined if $f$ is a smooth test-function. In particular, it is not mathematically rigorous to use eq. (2.95) (with $f$ substituted with a distribution) to justify the meaning of the integral of the square of the Dirac delta distribution. Needless to say, if one blindly inserts distributions in formulas for smooth functions, it is easy to arrive at all kinds of contradictions! For instance, $$ \frac{1}{3}~=~ \left[\frac{\theta(x)^3}{3}\right]^{x=\infty}_{x=-\infty}~=~\int_{\mathbb{R}} \!dx \frac{d}{dx} \frac{\theta(x)^3}{3} $$ $$\tag{4} ~=~\int_{\mathbb{R}} \!dx ~ \theta(x)^2\delta(x) ~\stackrel{(2.95)}=~ \theta(0)^2~=~\frac{1}{4}.\qquad \text{(Wrong!)} $$ -- $^1$ We ignore Colombeau theory . See also this mathoverflow post.
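A numeric sketch of eqs. (2)-(3): with the rectangular regularization, the integral of $\delta_\epsilon$ stays 1 while the integral of $\delta_\epsilon^2$ grows like $1/\epsilon$.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 2_000_001)
dx = x[1] - x[0]

for eps in (1.0, 0.1, 0.01, 0.001):
    delta = np.where(np.abs(x) < eps / 2.0, 1.0 / eps, 0.0)
    area = np.sum(delta) * dx          # -> 1 for every eps (unit area)
    square = np.sum(delta**2) * dx     # -> 1/eps, diverging as eps -> 0+
    print(f"eps = {eps:6.3f}: area {area:.3f}, squared integral {square:.1f}")
```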
{ "source": [ "https://physics.stackexchange.com/questions/47934", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4075/" ] }
48,019
In all thermodynamics texts that I have seen, expressions such as $\operatorname{ln}T$ and $\operatorname{ln}S$ are used, where $T$ is temperature and $S$ is entropy, and also with other thermodynamic quantities such as volume $V$ etc. But I have always thought that this is incorrect because the arguments $x$ in expessions such as $\operatorname{ln}x$ and $e^x$ ought to be dimensionless. Indeed at undergraduate level I always tried to rewrite these expressions in the form $\operatorname{ln}\frac{T}{T_0}$. So is it correct to use expressions such as $\operatorname{ln}T$ at some level?
You are absolutely right about the dimensional analysis. The use of $ \ln T $ etc. is always a shorthand for $ \ln \left(\frac{T}{T_0}\right) $ which is okay to use if for some reason you don't care about $ T_0 $, i.e. because it cancels out or you are interested in the asymptotic behaviour only. In any expression where you have to take derivatives to get observable quantities (partition function, generating functional etc.), it's okay to leave off the scale: $$ \mathrm{d} \ln \left(\frac{T}{T_0}\right) = T^{-1} \mathrm{d}T $$ independent of $ T_0 $. So: it's a lazy shorthand - the kind of thing much beloved by physicists. :)
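A tiny symbolic check of the last point, showing that the derivative of $\ln(T/T_0)$ carries no memory of $T_0$:

```python
import sympy as sp

T, T0 = sp.symbols('T T_0', positive=True)
print(sp.diff(sp.log(T / T0), T))   # 1/T, with no T_0 anywhere
```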
{ "source": [ "https://physics.stackexchange.com/questions/48019", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/17324/" ] }
48,211
My train of thought was the following: The Earth orbiting the Sun is at times 5 million kilometers closer to it than at others, but this is almost irrelevant to the seasons. Instead, the temperature difference between seasons is due to the attack angle of the rays, so basically the amount of atmosphere they have to pass through . Actually, it makes sense: heat comes from the photons that collide with the surface of the earth (and a bit with the atmosphere) and get reflected, and there's nothing between the earth and the sun that would make a photon lose energy over a 5 million km journey through vacuum. Or is there? (Note I'm not wondering about the possible loss of energy related to the redshift of the expanding universe.) Which made me wonder… So why then are the planets closer to the sun warmer? It seems silly, the closer you are to a heat source, the warmer it feels, but that's because of the dispersion of the heat in the medium, right? If there's no medium, what dissipates the energy?
The reason being closer to a heat source makes you warmer is the inverse square law . Think of it this way: If you have a $1~\mathrm{m}^2$ piece of material facing the Sun and located at Mercury's orbit, it will be quite hot. What does the shadow of this square look like at Earth's orbit (about $2.5$ times further away than Mercury)? Well, it will be $2.5$ times bigger in both directions, covering about $6~\mathrm{m}^2$. So the same amount of power can be delivered either to $1~\mathrm{m}^2$ on Mercury or to $6~\mathrm{m}^2$ on Earth. Every square meter of Earth gets about $6$ times less Solar power than every square meter on Mercury. The light is not losing energy to the surrounding medium, even if the medium exists.
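A numeric sketch of the shadow argument, using the answer's ~2.5x distance ratio (the luminosity and orbital radii are standard round values):

```python
import math

L_sun = 3.828e26     # solar luminosity, W
au = 1.496e11        # astronomical unit, m

for name, r_au in [("Mercury", 0.39), ("Earth", 1.0)]:
    r = r_au * au
    irradiance = L_sun / (4.0 * math.pi * r**2)   # W/m^2 over a full sphere
    print(f"{name}: {irradiance:.0f} W/m^2")

# ratio ~ (1 / 0.39)^2 ~ 6.6: each square metre at Mercury's orbit receives
# roughly six times the power of one at Earth's orbit, as argued above.
```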
{ "source": [ "https://physics.stackexchange.com/questions/48211", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/17399/" ] }
48,224
Is it possible that there's a color that our eye couldn't see? Like all of us are color blind to it. If there is, is it possible to detect/identify it?
As mentioned in a number of other answers, there are three different color receptors in a typical person's eye. They respond to different wavelengths of light, as can be seen in the standard cone-response diagram (the figure originally shown here, from Wikimedia). The $x$-axis of that diagram is wavelength in nanometers, and the three curves represent the three receptors' response at those wavelengths. Any incoming light will affect each of these to a certain degree. Thus the range of theoretically perceivable colors is basically the set of all different triplets of response values for these receptors. (Think "blue is at 25%, red is at 97.3%, green is at 12%.") When all three are firing near full strength, the result is something like white. If the blue receptor is firing and red and green are basically off, well then you see blue. There are two important points to make, though. First, one often sees reference to a connection between wavelength and color. Indeed, you cannot see any wavelengths outside approximately 400 to 700 nanometers . [Note that other animals have different ranges: Bees can see into the ultraviolet (below 400 nanometers), while some snakes can "see" into the infrared (above 700 nanometers).] Be careful not to take this connection too far, however. In particular, there is more to color than a single wavelength. For instance, light could be hitting your eye with two overlaid wavelengths - one of which resonates with the green receptor very well, and the other of which resonates particularly well with the blue. The resulting perception is likely to be a teal that simply cannot be reproduced with a single wavelength . This is exactly analogous to sound, where a monochromatic "pure" pitch will never, at any frequency, sound like a trumpet or a viola - those instruments' timbres are defined by the varying strengths of the overtones. In other words, "all the colors of the rainbow" does not encompass all colors. The other point is that there are valid combinations of receptor stimulation levels that cannot be achieved by any combination of wavelengths . This is partly due to how your receptors' ranges are not separate. Note for instance how the "red" (L) and "green" (M) receptors are actually quite close. It is hard to stimulate one without the other. You can never, for example, get "100% green, 0% red and blue" as a signal from your eye to your brain. Such theoretical colors that cannot be reproduced with any source of light are called imaginary colors . Supposedly, you can actually see some imaginary colors by first saturating one or more receptors (say by looking at nothing but lots of pure green for a few minutes), thus wearing them out, and then looking at another source of light. The response you get won't be quite the same as you normally would with that light source, since some of your receptors are not up to full capacity. (I have not had too much luck with this experiment myself, but perhaps you may fare better.) Finally, regarding detection : When it comes to light, all it is scientifically is different wavelengths of electromagnetic radiation. We have spectrometers for pretty much every wavelength out there, well beyond visible. Thus you can always tell the exact composition of some light ("12% in the 550-553 nanometer range, 80% evenly distributed between 600 and 700 nanometers, 8% focused at 350 nanometers," for instance). We don't need to rely on our eyes' physiology.
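A hedged toy model of the "mixed wavelengths" point (this is not real colorimetry; the Gaussian peak positions and common width are rough illustrative assumptions): overlaying two wavelengths gives an (S, M, L) triplet whose ratios no single wavelength reproduces.

```python
import numpy as np

peaks = np.array([445.0, 540.0, 565.0])   # assumed S, M, L peaks, nm
width = 40.0                              # assumed common width, nm

def smL(lam):
    """Toy cone-response triplet for monochromatic light at lam (nm)."""
    return np.exp(-((lam - peaks) / width) ** 2)

mix = smL(470.0) + smL(540.0)         # a "teal" two-wavelength mixture
mix_dir = mix / np.linalg.norm(mix)   # compare ratios, not brightness

lams = np.arange(400.0, 701.0)
dirs = np.array([smL(l) / np.linalg.norm(smL(l)) for l in lams])
err = np.linalg.norm(dirs - mix_dir, axis=1)
print(f"closest single wavelength: {lams[np.argmin(err)]:.0f} nm, "
      f"residual {err.min():.3f}")    # residual stays well above zero
```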
{ "source": [ "https://physics.stackexchange.com/questions/48224", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/17410/" ] }
48,328
Minutephysics has a popular YouTube video called "How to break the speed of light". In the video it states that if you flick your wrist while pointing a laser that reaches the moon, the spot of light on the moon will travel 20 times the speed of light. Now don't get me wrong, I do like their videos, just this one seemed a bit fishy to me. At first I thought it all practically made sense, then I realised something... In my mind, I would think that light particles (photons) travel from the laser to the moon and bounce off the moon and back to your eye (it doesn't just stay there, in place, so you can't move it around). Now, what he is stating is that if you flick your wrist these photons that have travelled to the moon will move along with your wrist. Wouldn't these photons be bouncing off of other objects or still travelling to the moon by the time you flick your wrist? i.e. dissipating, therefore new photons will be travelling to the moon (from the laser directly). For example: let's say you point the laser at the moon, and once it reaches the moon, you wait a couple of seconds and then flick your wrist. The laser that you have flicked will emit photons in every direction that your wrist was in, correct? i.e. the photons would shoot out in a straight line (unless disrupted) continuously, with your wrist having no effect on the speed of the photons. So back to the question, is this video wrong?
The photons move at the speed of light in a straight line from the laser to the moon and back. The spot on the moon can move faster than light. There is no law against that. The spot is not a physical object, just an image. When you turn your wrist nothing happens to the photons which are already on the way to the moon - they continue on the same trajectory. But new photons are emitted in the new direction of your laser. It's like waving a garden hose back and forth.
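The arithmetic behind the video's figure, with assumed flick parameters (90 degrees in a tenth of a second; only the image moves, no photon does):

```python
import math

c = 2.998e8           # m/s
d_moon = 3.84e8       # Earth-Moon distance, m
angle = math.pi / 2   # assumed 90-degree flick
t_flick = 0.1         # assumed flick duration, s

spot_speed = (angle / t_flick) * d_moon   # angular rate times lever arm
print(f"spot speed ~ {spot_speed:.2e} m/s = {spot_speed / c:.0f} c")  # ~20 c
```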
{ "source": [ "https://physics.stackexchange.com/questions/48328", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/16067/" ] }
48,334
According to the Dirac equation we can write, \begin{equation} \left(i\gamma^\mu( \partial_\mu +ie A_\mu)- m \right)\psi(x,t) = 0 \end{equation} We seek an equation where $e\rightarrow -e$ and which relates the new wave function to $\psi(x,t)$. Now taking the complex conjugate of this equation we get \begin{equation} \left[-i(\gamma^\mu)^* \partial_\mu -e(\gamma^\mu)^* A_\mu - m \right] \psi^*(x,t) = 0 \end{equation} Suppose we can identify a matrix $\tilde{U}$, with $1 = \tilde{U}^{-1} \tilde{U}$, such that \begin{equation} \tilde{U} (\gamma^\mu)^* ( \tilde{U} )^{-1} = -\gamma^\mu. \end{equation} I want to know why and how we arrive at the last two equations. More precisely, I want to understand the details and significance of the last two equations.
The Dirac equation for a particle with charge $e$ is $$ \left[\gamma^\mu (i\partial_\mu - e A_\mu) - m \right] \psi = 0 $$ We want to know if we can construct a spinor $\psi^c$ with the opposite charge from $\psi$. This would obey the equation $$ \left[\gamma^\mu (i\partial_\mu + e A_\mu) - m \right] \psi^c = 0 $$ If you know about gauge transformations $$ \psi \rightarrow \exp\left( i e \phi\right) \psi $$ (together with the compensating transformation for $A_\mu$, which we don't need here), this suggests that complex conjugation is the thing to do: $$ \psi^\star \rightarrow \exp\left( i (-e) \phi\right) \psi^\star $$ So it looks like $\psi^\star$ has the opposite charge. Let's take the complex conjugate of the Dirac equation: $$ \left[-\gamma^{\mu\star} (i\partial_\mu + e A_\mu) - m \right] \psi^\star = 0 $$ Unfortunately this isn't what we want. But remember that spinors and $\gamma$ matrices are only defined up to a change of basis $\psi \rightarrow S \psi$ and $\gamma^\mu \rightarrow S \gamma^\mu S^{-1}$. Possibly we can find a change of basis that brings the Dirac equation into the form we want. Introduce an invertible matrix $C$ by multiplying on the left and inserting $ 1 = C^{-1}C $ (note that $C$ is the more common notation for your $\tilde{U}$): $$ \begin{array}{lcl} 0 &= & C \left[-\gamma^{\mu\star} (i\partial_\mu + e A_\mu) - m \right] C^{-1} C\psi^\star \\ &= & \left[-C\gamma^{\mu\star}C^{-1} (i\partial_\mu + e A_\mu) - m \right] C\psi^\star \end{array}$$ Note that if we can find a $C$ which obeys $-C\gamma^{\mu\star}C^{-1} = \gamma^\mu$ then $C\psi^\star$ makes a perfectly good candidate for $\psi^c$! It turns out that one can indeed construct $C$ satisfying the condition and define charge conjugation as $$ \psi \rightarrow \psi^c = C\psi^\star $$ You can see this more explicitly in terms of two component spinors in the Weyl basis: $$ \psi = \left( \begin{matrix} \chi_\alpha \\ \eta^{\dagger}_{\dot{\alpha}} \end{matrix} \right) $$ (the notation follows the tome on the subject ). The charge conjugate spinor in this representation is $$ \psi^c = \left( \begin{matrix} \eta_\alpha \\ \chi^{\dagger}_{\dot{\alpha}} \end{matrix} \right) $$ So charge conjugation is $$ \eta \leftrightarrow \chi $$ This representation explicitly brings out the two oppositely charged components of the Dirac spinor, $\eta$ and $\chi$, and shows that charge conjugation acts by swapping them. To recap: we want to define a charge conjugation operation so that given a $\psi$ with some electric charge $e$, we can get a $\psi^c$ with charge $-e$. Complex conjugating the Dirac equation gets us there, but the resulting spinor $\psi^\star$ is in a different spinor basis so the Dirac equation is not in standard form. We introduce a change of basis $C$ to get the Dirac equation back in standard form. The necessary conditions for this to work are that $C$ is invertible (otherwise it wouldn't be a change of basis and bad things would happen) and $-C\gamma^{\mu\star}C^{-1} = \gamma^\mu$.
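For concreteness: in the Dirac representation of the $\gamma$ matrices one workable choice is $C=i\gamma^2$ (the phase convention varies between textbooks). A short numerical check of the defining condition $-C\gamma^{\mu\star}C^{-1}=\gamma^\mu$, sketched with hand-built matrices:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Dirac representation: gamma^0 = diag(1,1,-1,-1), gamma^i off-diagonal
g = [block(I2, Z2, Z2, -I2),
     block(Z2, sx, -sx, Z2),
     block(Z2, sy, -sy, Z2),
     block(Z2, sz, -sz, Z2)]

C = 1j * g[2]                 # candidate charge-conjugation matrix
Cinv = np.linalg.inv(C)
for mu, gm in enumerate(g):
    assert np.allclose(-C @ gm.conj() @ Cinv, gm), mu
print("-C gamma* C^-1 = gamma holds for mu = 0..3")
```

The same check passes in other bases with the appropriate $C$, consistent with $C$ being basis-dependent even though the charge-conjugation operation itself is not.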
{ "source": [ "https://physics.stackexchange.com/questions/48334", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
48,335
http://www.youtube.com/watch?v=UrN99RELqwo (Video Title: "Keshe Foundation Promo Intro Video (english with multiple subtitles) === PLEASE SHARE ===") In this video they claim that they can create free energy using plasma reactors. Is this fake? M.T. Keshe said that several entities, individuals, groups, and governments, have been able to replicate the energy technology phenomenon by referring to the books and patents [citation needed - old link outdated]. One scientist allegedly could power three towns with his device.
Yes, the claims in the video are totally absurd from the viewpoint of science. It's enough to listen for roughly 70 seconds to be sure that the narrator doesn't have the slightest clue about physics and the remaining 302 seconds make this fact even more self-evident. I won't try to answer the question whether the authors of the video realize that what they present is nonsense and they're deliberately cheating the viewers or they don't realize because they believe their own stuff, due to their totally inadequate education or intelligence. First of all, one can't create "free energy" out of nothing as it violates the energy conservation law, something we've known to hold for several centuries and something that has only been strengthened by all the subsequent research and evidence that was accumulated. The energy conservation law holds because the laws of physics don't change if we move everything in the direction of time – if we "repeat the same experiments later". This relationship between symmetries and conservation laws was found by Emmy Noether, one of the greatest female mathematicians of all time. So one has to change something about the matter – like ignite fusion – to get energy out of it. People work on fusion and indeed, plasma is there, but these people don't have any new technology of fusion and they don't even claim to have it. Without fusion, one can't be continuously extracting energy out of plasma. (Incidentally, the term " free energy " has a very particular meaning in physics and it has nothing to do with the colloquiual understanding of "energy for free".) Second, the video clip confuses electromagnetism with gravity. It wants to emulate some gravitational influences in the Cosmos but it apparently fails to realize that gravity is only important for celestial bodies because they're large. Gravity will never play any important role in a man-made engine or lab because gravity between systems that may be "constructed" by the men is negligibly weak. By their character, magnitude, ability to change sign, and other features, gravity and electromagnetism dramatically differ and they only get unified in conditions that can't be achieved experimentally. The narrator's suggestion that gravity and electromagnetism are almost the same thing and should be considered together is preposterous and only proves that they don't have a clue about basic concepts of physics such as electromagnetism, gravity, and the differences between them. Third, the video presents a preposterous picture of "types of matter" which are divided into a triangle of "normal matter, antimatter, dark matter". This is nonsense. Antimatter is always a dual relationship (although antimatter to some particles may be identical with these particles – in some cases, e.g. photons, there is no difference between matter and antimatter). Dark matter isn't any "third type" that could be added to matter or antimatter. Dark matter is just a particular type of matter (or antimatter: probably the kind of matter that is identical to its antimatter) but it's nonsensical to create a special category for dark matter because we should do the same thing for electrons or other things. Dark matter is composed of a particle species analogous to electron, top quark, photon, or other elementary particles. It's not "qualitatively different" from them and it is not a "third type of negation" that could extend antimatter. Fourth, they make lots of trivial, small, but important technical errors in their discussion of pretty much everything. 
For example, they omit antineutrinos in the decay of the neutron. Their claims that human cells depend on magnetism etc. are fiction, too. Fifth, it's nonsensical that one can create plasma by a similar electric circuit in a Coke bottle with home-available voltage. One needs lots of energy to create plasma, even more energy to create a plasma that has a chance to constantly generate energy (via fusion), and it is a hugely difficult technological task to sustain a body of plasma at least for a second. It's certain they can't solve this difficult task – that is almost mastered but not quite mastered by the tokamak rings. After all, a Coke bottle is a completely wrong shape for the task of maintaining a plasma. A toroidal ring or a small vessel with the radiation coming from all directions seem to be among the rare clever techniques to stabilize plasma, a very difficult task you can't expect to solve in your kitchen. There are lots of other nonsenses in the video – combined with lots of kitschy clichés about renewable energies and similar physically vacuous or problematic stuff – but I suspect that a sensible viewer has stopped the video several minutes ago. The video is addressed to viewers who lack the basic high school education in physics and it has clearly found thousands of them.
{ "source": [ "https://physics.stackexchange.com/questions/48335", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1266/" ] }
48,349
The Lagrangian $\mathcal L = -\frac14 F^{\mu\nu} F_{\mu\nu}$ with $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ results in the four-potential's equation of motion $$ \underbrace{\partial^\mu \partial_\mu}_{\equiv \square} A^\nu - \partial^\nu\partial_\mu A^\mu = 0\quad(1)$$ which for the Lorenz gauge $\partial_\mu A^\mu=0$ yields the classical wave equation $$ \square A^\nu = 0\quad(2)$$ Since $\square = -\hat P^2$ the Lorenz-gauged field must be a massless field due to Wigner's Classification . However, physics should be gauge-invariant, while the additional degree of freedom of a gauge transformation $A^\mu \to A^\mu + \partial^\mu\phi$ allows for an arbitrary scalar gauge field for which $(1)$ does not imply any restriction on $\square\partial^\mu\phi$. In particular, $\square\partial^\mu\phi = -m^2\partial^\mu\phi$, i.e. a massive gauge boson, seems possible, rendering the vector field itself massive. While there is no physical interaction, this still seems quite odd, so how can this be fixed?
The resolution is that the Lorenz gauge does not exhaust the gauge freedom, and the residual freedom is smaller than the transformation you wrote down. If $A^\mu$ satisfies $\partial_\mu A^\mu = 0$ and you demand that the transformed field $A^\mu + \partial^\mu\phi$ satisfy it as well, the gauge function is constrained by $\square\phi = 0$. Consequently $\square\,\partial^\mu\phi = \partial^\mu\square\phi = 0$: every residual pure-gauge mode automatically obeys the massless wave equation, and $\square\partial^\mu\phi = -m^2\partial^\mu\phi$ with $m\neq 0$ is simply not an admissible residual transformation within the Lorenz gauge. If instead you leave the gauge unfixed, then $(2)$ no longer applies and you must use the gauge-invariant equation $(1)$. Acting with $(1)$ on a pure-gauge configuration $A^\mu = \partial^\mu\phi$ gives $\square\partial^\nu\phi - \partial^\nu\square\phi = 0$ identically, for any $\phi$ whatsoever, since partial derivatives commute. Pure-gauge pieces therefore drop out of the equation of motion entirely: they carry no dynamics, massive or otherwise, and no mass term for them can ever be generated by $(1)$. The physical content of the field is carried by the two transverse polarizations, which $(2)$ correctly identifies as massless; the apparent "massive mode" lives entirely in the unphysical gauge direction, so nothing needs to be fixed.
{ "source": [ "https://physics.stackexchange.com/questions/48349", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/97/" ] }
48,357
The equation for the photon propagator is straightforward $$ D_{ij} = \langle 0 |T \{ A_{i}(x')A_{j}(x) \}|0 \rangle $$ However, $A_{i}(x)$ is gauge-dependent and therefore unphysical (in the arguable sense). Then, since the propagator is dependent on the vector potential, the propagator is unphysical. Sadly, my whole understanding of what amplitudes mean may be skewed, but I would assume the probability amplitude for a photon to propagate between $x$ and $x'$ is something we would want to be gauge-independent. Edit: I guess I wasn't clear enough. By computing the probability amplitude for a process, we obtain a complex number that, when multiplied by its complex conjugate, gives a probability for such a process to occur (when normalized). Here, the physical process is propagation, and the probability is $|\langle 0 |T \{ A_{i}(x')A_{j}(x) \}|0 \rangle|^2$. However, this probability is gauge dependent, and hence, the usual physical interpretation of $|\langle 0 |T \{ A_{i}(x')A_{j}(x) \}|0 \rangle|^2$ is questionable to me. Where has my interpretation gone astray?
Your interpretation goes astray at the last step: the two-point function of $A_\mu$ is not the probability amplitude for a physical, measurable process, so there is no requirement that it (or its modulus squared) be gauge invariant. $A_\mu(x)$ is not a gauge-invariant operator - it is not an observable - and correlation functions of gauge-dependent operators are themselves gauge dependent. The propagator is a building block of perturbation theory, an internal line in diagrams, not an S-matrix element. What must be (and is) gauge independent are the physical quantities assembled from it: cross sections, decay rates and, more generally, S-matrix elements between physical states. There the gauge-dependent part of the photon propagator - in a general covariant gauge, the piece proportional to $k_\mu k_\nu$ - is always contracted with conserved currents, and the Ward identity $k_\mu\mathcal M^\mu(k) = 0$ guarantees that it drops out of anything measurable. You can see the same point nonperturbatively: correlation functions of gauge-invariant operators, such as $\langle 0|T\{F_{\mu\nu}(x')F_{\rho\sigma}(x)\}|0\rangle$ or Wilson loops, are gauge independent, and those are the objects that carry direct physical meaning.
{ "source": [ "https://physics.stackexchange.com/questions/48357", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/7167/" ] }
48,574
I was reading this article from NASA -- it's NASA -- and literally found myself perplexed. The article describes the discovery that black holes emit a "note" that has physical ramifications on the detritus around it. Sept. 9, 2003: Astronomers using NASA’s Chandra X-ray Observatory have found, for the first time, sound waves from a supermassive black hole. The “note” is the deepest ever detected from any object in our Universe. The tremendous amounts of energy carried by these sound waves may solve a longstanding problem in astrophysics. The black hole resides in the Perseus cluster of galaxies located 250 million light years from Earth. In 2002, astronomers obtained a deep Chandra observation that shows ripples in the gas filling the cluster. These ripples are evidence for sound waves that have traveled hundreds of thousands of light years away from the cluster’s central black hole. “The Perseus sound waves are much more than just an interesting form of black hole acoustics,” says Steve Allen, of the Institute of Astronomy and a co-investigator in the research. “These sound waves may be the key in figuring out how galaxy clusters, the largest structures in the Universe, grow.” Except: Black holes are so massive that light, which is faster than sound, can't escape. Sound can't travel in space (space has too much, well, space) It's a b-flat? So: How can a black hole produce sound if light can't escape it?
I'm not going to address the production mechanism, 1 just the nature of the "sound" in this case. What you think of as the hard vacuum of outer space could just as well be seen as a very, very , very diffuse, somewhat ionized gas. That gas can support sound waves as long as the wavelength is considerably longer than the mean free path of the atoms on the gas. As for the tone, there is a simple relationship between the tone of the same name in different octaves, so once they know the dominant frequency they can figure its place on the scale. 1 Though it won't be happening inside the event horizon -- which is where "not even light can escape" holds -- but in the region around the hole proper where it accumulates gas and dust and the magnetic fields from the hole play merry havoc with the ionized components of the accumulated stuff.
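To see how a measured ripple period turns into a note name, here is a rough sketch using the figure usually quoted for Perseus, a period of roughly ten million years (treat the numbers as order-of-magnitude):

```python
import math

period_yr = 9.6e6                  # quoted ripple period, ~10 Myr
f = 1.0 / (period_yr * 3.156e7)    # frequency in Hz

b_flat = 233.08                    # B-flat just below middle C, Hz
print(f, "Hz")
print(math.log2(b_flat / f), "octaves below that B-flat")  # ~56-57
```

which is how one arrives at the widely reported "B-flat, 57 octaves below middle C".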
{ "source": [ "https://physics.stackexchange.com/questions/48574", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/6410/" ] }
49,745
For a system of electric charges $q_i$, at positions $\mathbf{r}_i$, with a nonzero net charge $Q=\sum_i q_i$, one can define a "centre of charge" in the obvious way as $$ \mathbf{r}_c=\frac{1}{Q}\sum_i q_i\mathbf{r}_i. $$ This concept is definitely not as useful as one might naively hope, but it does have a physical significance as the position of the origin that sets the system's dipole moment (and therefore the dipolar term in a multipolar expansion) to zero. That means, then, that the monopole approximation is far better in the far field than normally: the electrostatic potential goes as $$ \Phi(\mathbf{r})=\frac{Q}{|\mathbf{r}-\mathbf{r}_c|}+O\left(\frac{1}{|\mathbf{r}-\mathbf{r}_c|^{3}}\right) $$ where the subleading term is of order $1/r^3$, instead of $1/r^2$ as usual. For a neutral system with $Q=0$, however, the concept of centre of charge is meaningless and there is no monopole term in the expansion. The relevant concept is then the dipole moment, $$ \mathbf{d}=\sum_i q_i(\mathbf{r}_i-\mathbf{r}_0), $$ which is independent of the position $\mathbf{r}_0$ of the origin. However, this means that the relative importance of the subleading term in the multipole expansion is higher than above: $$ \Phi(\mathbf{r})=\frac{\mathbf{d}\cdot(\mathbf{r}-\mathbf{r}_0)}{|\mathbf{r}-\mathbf{r}_0|^{3}}+O\left(\frac{1}{|\mathbf{r}-\mathbf{r}_0|^{3}}\right) $$ My question is: for a neutral system, is it possible to find a suitable position $\mathbf{r}_0$ for the origin that will set the subleading, quadrupole term to zero? Since this entails a system of five equations (linear in $\mathbf{r}_0$ when $Q=0$), I suspect this is impossible in general geometries. If this is the case, which geometries allow for vanishing quadrupole moments and which ones don't? In the cases where one can do this, does this position have a special name? More generally, if all multipole moments up to some $l\geq0$ are zero, (when) can the subleading term be made to vanish?
Sometimes you can: The obvious example is a purely dipolar charge which has been displaced from the origin, such as a dipolar gaussian $$ \rho(\mathbf r) =p_z(z-z_0)\frac{e^{-(\mathbf{r}-\mathbf{r}_0)^2/2\sigma^2}}{\sigma^5(2\pi)^{3/2}} . $$ This system is neutral, and it has a nonzero dipole moment $p_z=\int z\rho(\mathbf r)\mathrm d\mathbf r$ along the $z$ axis. It also has a nonzero quadrupole moment; for example, $$ Q_{zz} =\int \left(z^2-\frac{r^2}{3}\right)\rho(\mathbf r)\mathrm d\mathbf r =\frac{4}{3}p_z z_0. $$ However, this is obviously trivially fixable by putting the origin at $\mathbf r_0$, in which case the charge density becomes purely dipolar and all other multipole moments will vanish. This is obviously a slightly contrived example, but it shows that it is indeed possible, in general, for charge densities where the quadrupole moments are originally nonzero but can nevertheless be transformed away by a suitable translation. Sometimes you can't: Unfortunately, this is not always the case. As a counterexample, consider the charge density $$ \rho(\mathbf r) =\left( \frac{p_x x}{\sigma^2} + \frac{Q_{xx}(x^2-y^2)}{2\sigma^4} \right) \frac{e^{-\mathbf{r}^2/2\sigma^2}}{\sigma^3(2\pi)^{3/2}}. $$ Here $\rho(\mathbf r)$ is neutral, it has a nonvanishing dipole moment $\mathbf p=\int \mathbf r\rho(\mathbf r)\mathrm d\mathbf r=p_x\hat{\mathbf x}$, and it has nonzero quadrupole moments $$ Q_{xx} =-Q_{yy} =\int \left(x^2-\frac{r^2}{3}\right)\rho(\mathbf r)\mathrm d\mathbf r $$ along the $x$ and $y$ axes. (Of course, $Q_{zz}$ and all off-diagonal elements are zero.) Here you have essentially one quadrupole moment to cancel out, and one nonzero dipole moment component with which to do so, so if all you're doing is counting linearly independent components then it looks like you have a good chance. Unfortunately, this doesn't work. To be more specific, take a passive approach where you keep $\rho(\mathbf r)$ as it is and you calculate the new multipole moments about a different origin, $$ Q'_{ij} =\int \left((x_i-x_i^{(0)})(x_j-x_j^{(0)})-\delta_{ij}\frac{(\mathbf r-\mathbf r_0)^2}{3}\right)\rho(\mathbf r)\mathrm d\mathbf r. $$ In these conditions the moments transform as follows: \begin{align} Q'_{xx} & = Q_{xx} -\frac{4}{3} p_x x_0, \\ Q'_{yy} & = -Q_{xx} +\frac{2}{3} p_x x_0,\\ Q'_{zz} & =\frac{2}{3} p_x x_0. \end{align} It is therefore possible to choose the new origin's $x$ coordinate $x_0$ such that $Q'_{xx}$ vanishes, but it is patently impossible to make all the components vanish. This completes the proof: some systems are not susceptible to this sort of simplification, and the subleading term in the multipole expansion is always $1/r$ behind the leading term instead of being reducible to $1/r^2$. As to how this looks, here are some plots of our counterexample $\rho(\mathbf r)$ as the parameters go from purely dipolar to purely quadrupolar and back: None of the displayed distributions (with the exception of the extremes) can be brought to dipole + hexadecapole form via a translation. The image link should lead to an interactive version of this animation, but it's rather sluggish so give it some time. And you can tell the difference: OK, so the result is sometimes true and sometimes not, which does very little to help us understand what's going on here. So, let's have a closer look and characterize the cases where it is and isn't possible to transform away the quadrupole components, and that will help us understand why some work and some don't.
The trick here (and here I thank my undergraduate supervisor, Eduardo Nahmad-Achar, for the crucial suggestion) is to boil down the quadrupole moment tensor to its essential components. $Q_{ij}$ is an intimidating quantity, but looks more complicated than it is. In particular, we're defining it as $$ Q_{ij}=\int \left(x_ix_j -\delta_{ij}\frac{r^2}{3}\right) \rho(\mathbf r)\mathrm d \mathbf r, $$ so it is easy to see that it is a second-rank tensor with respect to rotations, and that it is symmetric and traceless (in the sense that $\sum_iQ_{ii}=0$). The symmetry takes the number of independent components down from nine to six, and the tracelessness from there to five, but in fact you can go lower: because the tensor is symmetric, it can always be diagonalized, so the off-diagonal components can be set to zero with a suitably aligned set of axes. The tracelessness then reduces $Q_{ij}$ to only two independent components in this frame. So, let's start by setting up a general framework for our transformed quadrupole moments. Using the passive transformation from before, and expanding the square, we get \begin{align} Q'_{ij} & = \int \left( \left(x_i-x_i^{(0)}\right)\left(x_j-x_j^{(0)}\right)-\delta_{ij}\frac{(\mathbf r-\mathbf r_0)^2}{3} \right)\rho(\mathbf r)\mathrm d\mathbf r \\ & = \int \left( x_ix_j-x_ix_j^{(0)}-x_jx_i^{(0)}+x_i^{(0)}x_j^{(0)}-\delta_{ij}\frac{r^2 - 2\mathbf r\cdot\mathbf r_0+r_0^2}{3} \right)\rho(\mathbf r)\mathrm d\mathbf r \\ & = Q_{ij} -x_j^{(0)}p_i-x_i^{(0)}p_j + \frac23 \delta_{ij}\mathbf{r}_0\cdot\mathbf p , \end{align} where the constant terms in $x_i^{(0)}x_j^{(0)}$ and in $\frac13\delta_{ij} r_0^2$ cancel out because the system is neutral. This system splits into two types of equations: three with $i\neq j$, and three with $i=j$, which look relatively different: \begin{align} Q'_{ij} &= Q_{ij}- p_ix_j^{(0)}-p_jx_i^{(0)}\text { for }i\neq j,\text{ and}\\ Q'_{ii} &= Q_{ii}-2\left( p_ix_i^{(0)}-\frac13 \mathbf p\cdot\mathbf r_0 \right). \end{align} Here the goal is to have the primed moments vanish, and without loss of generality we've put ourselves in a frame where the off-diagonal moments, $Q_{ij}$ for $i\neq j$, vanish to begin with. That means, then, that our equations really read \begin{align} p_ix_j^{(0)}+p_jx_i^{(0)} & = 0\text { for }i\neq j,\text{ and}\\ \left( p_ix_i^{(0)}-\frac13 \mathbf p\cdot\mathbf r_0 \right) & = \frac12 Q_{ii}. \end{align} These two sets are rather different, and it's better to tackle them separately. Let's start, then, with the diagonal set of equations, \begin{align} \left(p_i\hat{\mathbf e}_i -\frac13 \mathbf p\right)\cdot \mathbf r_0 = \frac12 Q_{ii}. \end{align} which looks nice enough. However, there's a problem with this set, in that it has a built-in linear dependence that comes from the innate tracelessness of our original quadrupole moments. This becomes somewhat clearer when expressed in coordinate form as \begin{align} \frac23 p_x x^{(0)} - \frac13 p_y y^{(0)} - \frac13 p_z z^{(0)} & = \frac12 Q_{xx} \\ -\frac13 p_x x^{(0)} + \frac23 p_y y^{(0)} - \frac13 p_z z^{(0)} & = \frac12 Q_{yy} \\ -\frac13 p_x x^{(0)} - \frac13 p_y y^{(0)} + \frac23 p_z z^{(0)} & = \frac12 Q_{zz}, \end{align} in that if you add all the three equations you get identically zero; that is, any equation is (the negative of) the sum of the other two. 
The solution space of each equation in this set is a plane in $\mathbf r_0$ space, and we are guaranteed that if two of the planes intersect then the other plane must also contain that intersection. In addition, we also know that sometimes these equations can fail to have solutions and that therefore the planes can be parallel. Indeed, we see this in our 'sometimes you can't' example, where the equations reduce to \begin{align} \frac23 p_x x^{(0)} & = \frac12 Q_{xx} \\ -\frac13 p_x x^{(0)} & = - \frac12 Q_{xx} \\ -\frac13 p_x x^{(0)} & =0, \end{align} which is not a consistent set of equations, and describes three parallel planes normal to the $x$ axis. On the other hand, since the system is linearly dependent then we also know that if a solution does exist, then it will have a degeneracy of at least dimension $1$ along the common line of intersection of the planes. Finding the direction of this line tells us a lot about the planes, and we can get it directly as the cross product of any two normal vectors, so, for definiteness, \begin{align} \mathbf n & = \left(p_1\hat{\mathbf e}_1 -\frac13 \mathbf p\right) \times \left(p_2\hat{\mathbf e}_2 -\frac13 \mathbf p\right) \\ & = p_1p_2\hat{\mathbf e}_3 -\frac13p_1\hat{\mathbf e}_1\times(p_2\hat{\mathbf e}_2+p_3\hat{\mathbf e}_3) -\frac13p_2(p_1\hat{\mathbf e}_1+p_3\hat{\mathbf e}_3)\times\hat{\mathbf e}_2 \\ & = p_1p_2\hat{\mathbf e}_3 -\frac13(p_1p_2\hat{\mathbf e}_3-p_1p_3\hat{\mathbf e}_2) -\frac13(p_1p_2\hat{\mathbf e}_3-p_2p_3\hat{\mathbf e}_1) \\ & = \frac13\left[\vphantom{\sum} p_1p_2\hat{\mathbf e}_3+p_2p_3\hat{\mathbf e}_1+p_3p_1\hat{\mathbf e}_2 \right]. \end{align} This is a really cool result, and as a bonus we know that we get the same vector for the pairs $(2,3)$ and $(3,1)$ of planes (and $-\mathbf n$ of the reversed pairs). This then gives us a first criterion: if, in an eigenframe of the original quadrupole moment, any two components of the dipole moment are nonzero, then the planes describing the equations for the vanishing of the diagonal components will intersect in a line. If only one of these components is nonzero, then the planes are parallel, and there will only be solutions if all the planes coincide. Having said all this, the analysis above is not enough to solve the problem, because we still need to follow up on what happens with the off-diagonal moments, which were originally zero because of the frame we chose but need to remain at zero. Those equations read \begin{align} \phantom{p_i x^{(0)}} + p_z y^{(0)} + p_y z^{(0)} & = 0 \\ p_z x^{(0)} \phantom{+p_i x^{(0)}} + p_x z^{(0)} & = 0 \\ p_y x^{(0)} + p_x y^{(0)} \phantom{+p_i x^{(0)}} & = 0, \end{align} and being a homogeneous system they're somewhat easier to handle: zero is always a solution, and they only have nonzero solutions (which we do want, or our translation by $\mathbf r_0$ doesn't do anything) if their determinant $$ \det\begin{pmatrix} 0 & p_z & p_y \\ p_z & 0 & p_x \\ p_y & p_x & 0 \end{pmatrix} =2p_xp_yp_z $$ vanishes. That puts us in a bit of a conundrum, though, because we wanted to have nonvanishing dipole components in this frame, which means that (as we really should have expected) the system is riding the edge of solvability. We have, therefore, two conflicting demands on how many components of the dipole are allowed to vanish in this frame.
They can't all be nonzero (or the determinant above kills the off-diagonal equations) and they can't all be zero (by initial hypothesis of nonzero $\mathbf p$), so that leaves two distinct cases. Case 1: exactly one component of the dipole vanishes. Choosing that component as $p_z$, our equations read in component form \begin{align} \frac23 p_x x^{(0)} - \frac13 p_y y^{(0)} & = \frac12 Q_{xx} \\ -\frac13 p_x x^{(0)} + \frac23 p_y y^{(0)} & = \frac12 Q_{yy}, \end{align} and \begin{align} \phantom{p_i x^{(0)} + 0 y^{(0)} + } p_y z^{(0)} & = 0 \\ \phantom{ 0 x^{(0)} +p_i x^{(0)} + } p_x z^{(0)} & = 0 \\ p_y x^{(0)} + p_x y^{(0)} \phantom{+p_i x^{(0)}} & = 0; \end{align} and from the latter we get that $z^{(0)}=0$. To get the other two components, we first solve the first set of equations, which are now guaranteed a unique solution, \begin{align} \begin{pmatrix} x^{(0)} \\ y^{(0)} \end{pmatrix} & = \frac32 \begin{pmatrix} 2 p_x & - p_y \\ - p_x & 2 p_y \end{pmatrix}^{-1} \begin{pmatrix} Q_{xx} \\ Q_{yy}\end{pmatrix} \\ & = \frac{3/2}{3p_xp_y} \begin{pmatrix} 2 p_y & p_y \\ p_x & 2 p_x \end{pmatrix} \begin{pmatrix} Q_{xx} \\ Q_{yy}\end{pmatrix} \\ & = \frac{1}{2p_xp_y} \begin{pmatrix} p_y(2Q_{xx} + Q_{yy}) \\ p_x(Q_{xx} + 2 Q_{yy}) \end{pmatrix} , \end{align} and then comes the final test - whether this solution satisfies the final equation, \begin{align} 0 &= p_y x^{(0)}+ p_x y^{(0)} = \frac{1}{2p_xp_y} \begin{pmatrix} p_y & p_x\end{pmatrix} \begin{pmatrix} p_y(2Q_{xx} + Q_{yy}) \\ p_x(Q_{xx} + 2 Q_{yy}) \end{pmatrix} \\ & = \frac{1}{2p_xp_y} \bigg[ p_y^2(2Q_{xx} + Q_{yy}) + p_x^2(Q_{xx} + 2 Q_{yy}) \bigg]. \end{align} Since both $p_x$ and $p_y$ need to be nonzero here, both of the coefficients are forced to vanish, and since that reads in matrix form as $$ \begin{pmatrix} 2 & 1 \\ 1 & 2\end{pmatrix} \begin{pmatrix} Q_{xx} \\ Q_{yy} \end{pmatrix}=0, $$ with a nonsingular constant matrix, then we're forced to conclude that $Q_{xx}=0=Q_{yy}$ and that therefore $Q_{zz}=-Q_{xx}-Q_{yy}$ also vanishes. That is, this option doesn't work: it requires that our original quadrupoles be zero to begin with, and the whole thing collapses to zero. Case 2: exactly two components of the dipole vanish. Putting our only remaining component along $z$, our equations read \begin{align} - \frac13 p_z z^{(0)} & = \frac12 Q_{xx} \\ - \frac13 p_z z^{(0)} & = \frac12 Q_{yy} \\ + \frac23 p_z z^{(0)} & = \frac12 Q_{zz}, \end{align} and \begin{align} \phantom{p_i x^{(0)}} + p_z y^{(0)} \phantom{+ p_y z^{(0)}}& = 0 \\ p_z x^{(0)} \phantom{ + p_i x^{(0)} + p_x z^{(0)} } & = 0. \end{align} Here the second set requires that we remain on the symmetry axis of the dipole, i.e. $x^{(0)}=y^{(0)}=0$, and the first set gives us a nonzero displacement $z^{(0)}$ along that axis - together with a strong (but not fully stifling) constraint on the quadrupole moments, namely, that $$ Q_{xx} = Q_{yy} = -\frac12 Q_{zz}. $$ In other words, the quadrupole moment needs to be zonal ( as opposed to tesseral or sectoral ) and it needs to have full axial symmetry. This means that our initial 'sometimes you can' example was completely representative and it exhausted, in its essence, all the possible scenarios.
To phrase the theorem in full, then, we have the following. Summary: Given a neutral charge distribution with nonzero dipole and quadrupole moments, it is possible to choose an origin about which all quadrupole moments $Q'_{ij}$ vanish if and only if the quadrupole moment tensor is cylindrically symmetric - i.e., if and only if two of its eigenvalues are equal - and the dipole moment lies along its axis of symmetry. This is really restrictive (particularly compared to the absolutely-no-qualms case one order above), but hey, that's life. This answer supersedes two previous, incorrect versions, which are accessible via the revision history.
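A quick numerical check of the "sometimes you can" case with two point charges on the $z$ axis (a physical dipole centred at $z_0$): the quadrupole about the origin obeys $Q_{zz}=\tfrac43 p_z z_0$ as derived above, and vanishes about the shifted origin. A minimal sketch:

```python
import numpy as np

q, d, z0 = 1.0, 0.1, 2.0
charges = [(+q, np.array([0.0, 0.0, z0 + d / 2])),
           (-q, np.array([0.0, 0.0, z0 - d / 2]))]

def moments(origin):
    p = sum(qi * (r - origin) for qi, r in charges)
    Q = np.zeros((3, 3))
    for qi, r in charges:
        x = r - origin
        Q += qi * (np.outer(x, x) - np.eye(3) * (x @ x) / 3)
    return p, Q

p, Q = moments(np.zeros(3))
print(Q[2, 2], 4 / 3 * p[2] * z0)          # equal, as derived above
_, Qc = moments(np.array([0.0, 0.0, z0]))
print(np.allclose(Qc, 0))                  # True: quadrupole transformed away
```

Repeating the exercise with the dipole moment off the quadrupole's symmetry axis reproduces the "sometimes you can't" obstruction.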
{ "source": [ "https://physics.stackexchange.com/questions/49745", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/8563/" ] }
50,075
My physics teacher was reluctant to define the Lagrangian as kinetic energy minus potential energy because he said that there were cases where a system's Lagrangian did not take this form. Are you aware of any such examples? Update: Here I'm of course assuming that $T$ and $U$ stand for the kinetic and the potential energy, respectively. Also: adding a total time derivative term to the Lagrangian, or scaling the Lagrangian with a non-zero multiplicative constant, does not change the Euler-Lagrange equations, as Dilaton and dmckee point out in the comments. Needless to say, I'm not interested in such trivial modifications (1&2).
For a relativistic free particle you would think that the Lagrangian would be like $$ \tag{1} L ~=~ T ~=~ E-E_0~=~(\gamma -1)m_0c^2. \qquad(\leftarrow\text{Turns out to be wrong!}) $$ This is not the case! Instead it is $$ \tag{2} L ~=~ -\gamma^{-1}m_0c^2. $$ These two functions look alike but are not the same. This choice (2) of kinetic term gives a canonical momentum $$p~:=~\frac{\partial L}{\partial v}~=~\gamma m_0v,$$ as it should be.
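A quick symbolic check that (2) yields both the stated canonical momentum and, at low speed, the familiar $T=\frac12 m_0v^2$ up to the constant rest-energy term:

```python
import sympy as sp

v, c, m = sp.symbols('v c m', positive=True)
L = -m * c**2 * sp.sqrt(1 - v**2 / c**2)   # L = -gamma^{-1} m0 c^2

print(sp.simplify(sp.diff(L, v)))          # m*v/sqrt(1 - v**2/c**2) = gamma*m0*v
print(sp.series(L, v, 0, 4))               # -c**2*m + m*v**2/2 + O(v**4)
```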
{ "source": [ "https://physics.stackexchange.com/questions/50075", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/16620/" ] }
50,080
A question from an example from an MIT Classical Mechanics lecture on work. Here's the given definition for gravitational potential energy ( ~32:00 ): "The gravitational potential energy at a point $P$ is the work that I, Walter Lewin, [the lecturer] have to do to bring that mass from $\infty$ to that point $P$. My force is always the same as gravity with a minus sign. " This diagram is given at ~40:00 : It is unclear to me why W. Lewin's force $F_{WL}$ is pointing towards $\infty$ (+ $r$ direction) if he is measuring the force to bring the object from $\infty \to P$. Furthermore, the work required by WL is given as $\int_{\infty}^{R} \frac{mMG}{r^2} dr $. Since $\infty \to R$ is the $-r$ direction, shouldn't this integral have a minus sign?
There are two separate points here. First, the direction of the force: to bring the mass in from $\infty$ quasi-statically (with no net acceleration), Lewin must at every instant exactly balance gravity. Gravity on the mass points toward the planet, in the $-r$ direction, so his force $F_{WL}=+\frac{mMG}{r^2}\hat{r}$ points outward, toward $\infty$: he is holding the mass back as it creeps inward, not pushing it along its direction of motion. The direction of travel and the direction of the applied force need not coincide, and when they are opposite, the work done is negative. Second, the sign of the integral: the direction of motion is already encoded in the limits of integration, so no extra minus sign should be inserted by hand. With $d\vec r = \hat r\,dr$, $$W_{WL} = \int_{\infty}^{R}\vec F_{WL}\cdot d\vec r = \int_{\infty}^{R}\frac{mMG}{r^2}\,dr = \left[-\frac{mMG}{r}\right]_{\infty}^{R} = -\frac{mMG}{R},$$ which is negative, exactly as it should be: gravity does positive work on the infalling mass, so Lewin does negative work, and the result reproduces the gravitational potential energy $U(R)=-\frac{mMG}{R}$. Inserting another minus sign "because the motion is in the $-r$ direction" would count the direction of motion twice.
{ "source": [ "https://physics.stackexchange.com/questions/50080", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/18723/" ] }
50,107
What is the difference between center of mass and center of gravity ? These terms seem to be used interchangeably. Is there a difference between them for non-moving object on Earth, or moving objects for that matter?
The difference is that the centre of mass is the weighted average of location with respect to mass, whereas the centre of gravity is the weighted average of location with respect to mass times local $g$. If $g$ cannot be assumed constant over the whole of the body (perhaps because the body is very tall), they might (and generally will) have different values. I don't see an immediate connection with movement though.
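As a sketch of how far apart the two points can sit, take the weighted-average definitions above and a tall uniform rod standing on an idealized, spherically symmetric Earth (the numbers are illustrative):

```python
import numpy as np

R = 6.371e6                                # Earth's radius, m
r = np.linspace(R, R + 1.0e6, 100001)      # rod from the surface, 1000 km tall
dm = np.ones_like(r)                       # uniform mass per unit length
g = 1.0 / r**2                             # local g, up to constant factors

com = (dm * r).sum() / dm.sum()
cog = (dm * g * r).sum() / (dm * g).sum()
print((com - cog) / 1e3, "km")             # cog sits ~24 km below com
```

For laboratory-sized objects the difference is utterly negligible, which is why the two terms get used interchangeably.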
{ "source": [ "https://physics.stackexchange.com/questions/50107", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/927/" ] }
50,142
I have heard from here that stable orbits (ones that require a large amount of force to push them significantly out of their elliptical path) can only exist in three spatial dimensions because gravity would operate differently in a two or four dimensional space. Why is this?
Specifically what that is referring to is the 'inverse-square law' nature of the gravitational force, i.e. the force of gravity is inversely proportional to the square of the distance: $F_g \propto \frac{1}{d^2}$. If you expand this concept to that of general power-law forces (e.g. when you're thinking about the virial theorem ), you can write: $F \propto d^a$. Stable orbits are only possible for a few special values of the exponent '$a$': in particular, and more specifically 'closed' [1], stable orbits only occur for $a = -2$ (the inverse-square law) and $a = 1$ ( Hooke's law ). This is called ' Bertrand's Theorem '. Now, what does that have to do with spatial dimensions? Well, it turns out that in a more accurate description of gravity (in particular, general relativity ) the exponent of the power-law ends up being one less than the dimension of the space. For example, if space were 2-dimensional, then the force would look like $F \propto \frac{1}{d}$, and there would be no closed orbits. Note also that $a \leq -3$ (and thus 4 or more spatial dimensions) is unconditionally unstable, as per @nervxxx's answer below. [1] A 'closed' orbit is one in which the particle returns to its previous position in phase space (i.e. its orbit repeats itself).
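To see where the $a=-3$ boundary comes from, one can sketch the standard effective-potential argument for an attractive power-law force $F=-kr^a$ (with $k>0$ and $a\neq-1$): $$V_{\text{eff}}(r)=\frac{k\,r^{a+1}}{a+1}+\frac{L^2}{2mr^2},\qquad V_{\text{eff}}'(r_0)=0\;\Longrightarrow\;k\,r_0^{a}=\frac{L^2}{m\,r_0^{3}},$$ $$V_{\text{eff}}''(r_0)=k\,a\,r_0^{a-1}+\frac{3L^2}{m\,r_0^{4}}=k\,(a+3)\,r_0^{a-1},$$ so a circular orbit is stable against small radial perturbations only for $a>-3$; Bertrand's theorem then singles out $a=-2$ and $a=1$ as the cases where those stable orbits are also closed.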
{ "source": [ "https://physics.stackexchange.com/questions/50142", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/18815/" ] }
51,220
I'm having problems understanding the following situation. Suppose two 1-tonne cars are going with the same orientations but opposite senses, each 50 km/h with respect to the road. Then the total energy is $$\begin{eqnarray}E=E_1+E_2&=&\frac{1\mathrm t\times(50\mathrm{km}/\mathrm h)^2}2+\frac{1\mathrm t\times(50\mathrm{km}/\mathrm h)^2}2\\&=&1\mathrm t\times(50\mathrm{km}/\mathrm h)^2\\&=&2500\frac{\mathrm t\times\mathrm{km}^2}{\mathrm h^2}.\end{eqnarray}$$ Now if we look at it from the point of view of one of the cars, then the total energy is $$\begin{eqnarray}E=E_1+E_2&=&\frac{1\mathrm t\times(0\mathrm{km}/\mathrm h)^2}2+\frac{1\mathrm t\times(100\mathrm{km}/\mathrm h)^2}2\\&=&\frac{1\mathrm t\times(100\mathrm{km}/\mathrm h)^2}2\\&=&5000\frac{\mathrm t\times\mathrm{km}^2}{\mathrm h^2}.\end{eqnarray}.$$ I know that kinetic energy is supposed to change when I change the frame of reference. But I understand that then there must be some other kind of energy to make up for it so that the energy in the system stays unchanged. But I don't see any other kind of energy here. I only see two total energies of the same system that seem to be different. Could you explain this to me? Please note that while I don't understand any physics, I do understand college level mathematics, so if necessary please use it. (I doubt anything more than high school maths should be needed here, but I want to say this just in case.)
You have successfully discovered that the kinetic energy depends on the reference frame. That is actually true. What is amazing, however, is that while the value of the energy is frame DEpendent, once you've chosen a reference frame, the law of conservation of energy itself is NOT reference frame-dependent -- every reference frame will observe a constant energy, even if the exact number they measure is different. So, when you balance your conservation of energy equation in the two frames, you'll find different numbers for the total energy, but you will also see that the energy before and after an elastic collision will be that same number. So, let's derive the conservation of energy in two reference frames. I'm going to model an elastic collision between two particles. In the first reference frame, I am going to assume that the second particle is stationary, and we have: $$\begin{align} \frac{1}{2}m_{1}v_{i}^{2} + \frac{1}{2}m_{2}0^{2} &= \frac{1}{2}m_{1}v_{1}^{2} + \frac{1}{2}m_{2}v_{2}^{2}\\ m_{1}v_{i}^2 &= m_{1}v_{1}^{2} + m_{2}v_{2}^{2} \end{align}$$ to save myself time and energy, I'm going to call $\frac{m_{2}}{m_{1}} = R$, and we have: $$v_{i}^{2} = v_{1}^{2} + Rv_{2}^{2}$$ Now, what happens if we shift to a different reference frame, moving to the right with speed v? This is essentially the same thing as subtracting $v$ from all of these terms. We thus have: $$\begin{align} (v_{i}-v)^{2} + R(-v)^{2} &= (v_{1}-v)^{2} + R(v_{2}-v)^{2}\\ v_{i}^{2} -2v_{i}v + v^{2} + Rv^{2} &= v_{1}^{2} - 2 vv_{1} + v^{2} + Rv_{2}^{2}-2Rv_{2}v + Rv^{2}\\ v_{i}^{2} -2v_{i}v &= v_{1}^{2}- 2vv_{1} + Rv_{2}^{2}-2Rv_{2}v\\ v_{i}^{2} &= v_{1}^{2} + Rv_{2}^{2} + 2v(v_{i} - v_{1} - R v_{2}) \end{align}$$ So, what gives? It looks like the first equation, except we have this extra $2v(v_{i} - v_{1} - R v_{2})$ term? Well, remember that momentum has to be conserved too. In our first frame, we have the conservation of momentum equation (remember that the second particle has initial velocity zero): $$\begin{align} m_{1}v_{i} + m_{2}(0) &= m_{1}v_{1} + m_{2}v_{2}\\ v_{i} &= v_{1} + Rv_{2}\\ v_{i} - v_{1} - Rv_{2} &=0 \end{align}$$ And there you go! If momentum is conserved in our first frame, then apparently energy is conserved in all frames!
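Applying this to your two cars, which have equal masses and so simply exchange velocities in an elastic head-on collision, here is a quick numeric check that each frame sees its own constant energy:

```python
m = 1000.0                       # kg, each car
v1, v2 = 50 / 3.6, -50 / 3.6     # road frame, m/s

def KE(u1, u2):
    return 0.5 * m * u1**2 + 0.5 * m * u2**2

w1, w2 = v2, v1                  # equal-mass elastic collision: swap velocities

for boost, name in [(0.0, "road frame"), (v1, "frame of car 1")]:
    before = KE(v1 - boost, v2 - boost)
    after = KE(w1 - boost, w2 - boost)
    print(name, round(before), round(after))   # frames disagree on the number,
                                               # but each sees it conserved
```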
{ "source": [ "https://physics.stackexchange.com/questions/51220", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/19853/" ] }
51,226
Say I have a hollow cylinder and I wanted to strap it down to the bed of a truck. I would tension the strap on one end, and it would exert a force on the cylinder. My intuition tells me that the strap would crush the hollow cylinder down toward the truck bed, but when I think about it, there are inward forces perpendicular to the truck bed caused by the straps on the cylinder as well. Is this correct thinking, or are all of the forces only vertical?
Your intuition is correct: the forces are not only vertical. A tensioned strap wrapped over a curved surface pushes on that surface everywhere along the contact arc, and the push is always normal to the surface, i.e. directed toward the local center of curvature. You can see this by considering a small element of strap subtending an angle $d\theta$: the tension vectors at its two ends have equal magnitude $T$ but directions differing by $d\theta$, so they do not cancel and leave a net inward resultant $T\,d\theta$. Dividing by the arc length $ds = R\,d\theta$ gives a normal force per unit length of contact $$\frac{dN}{ds} = \frac{T}{R}$$ for a cylinder of radius $R$ (this is the same geometry that underlies the capstan equation). At the top of the cylinder that force is essentially vertical, pressing the cylinder into the truck bed; out toward the sides, where the strap leaves the surface, it acquires a large horizontal component squeezing the cylinder inward. The vertical components are what hold the load down; the horizontal components are what can ovalize or crush a thin-walled hollow cylinder if the strap is over-tensioned, exactly as your intuition suggested.
{ "source": [ "https://physics.stackexchange.com/questions/51226", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/19854/" ] }
51,231
I am having a hard time trying to understand why the radiated power per unit area $P$ of a black body is given by $$P=\frac{c}{4} u$$ in terms of the energy density $u$ and the velocity of light. I know there is a derivation in HyperPhysics, but I did not find it particularly convincing. I cannot understand the physical significance of the parameter $\theta$ and what is its relationship with the total energy per unit time provided by a unit area. Could someone explain it better to me?
The factor of $c/4$ is purely geometric, and $\theta$ is the angle between the direction of a given photon and the outward normal of the surface (or hole) through which the radiation escapes. Inside the cavity the radiation is isotropic: the energy density $u$ is carried by photons moving equally in all directions. Of these, only the half moving toward the surface contribute to the outgoing flux, and each contributes not $c$ but only the normal component of its velocity, $c\cos\theta$, because a photon travelling at a grazing angle carries its energy almost parallel to the surface rather than through it. The emitted power per unit area is therefore the energy density times the average of $c\cos\theta$ over the outgoing hemisphere, weighted by solid angle: $$P = u\,\frac{1}{4\pi}\int_0^{2\pi}\!d\varphi\int_0^{\pi/2}\! c\cos\theta\,\sin\theta\,d\theta = u\,\frac{c}{2}\int_0^{\pi/2}\!\cos\theta\sin\theta\,d\theta = \frac{c}{4}\,u.$$ One factor of $\tfrac12$ comes from only half the photons moving outward, and the other $\tfrac12$ from the solid-angle average of $\cos\theta$ over that hemisphere.
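A short Monte Carlo check of the geometric factor, if you want to see the $1/4$ appear without doing the integral:

```python
import numpy as np

rng = np.random.default_rng(0)
cos_theta = rng.uniform(-1.0, 1.0, 10**6)    # isotropic: cos(theta) is uniform
print(np.clip(cos_theta, 0.0, None).mean())  # -> ~0.25, so P = (c/4) u
```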
{ "source": [ "https://physics.stackexchange.com/questions/51231", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/19857/" ] }
51,503
I understand that the Bernoulli effect is a flawed explanation for the cause of lift, and does not cause much at all, but how much ? Is there any experimental data on the force caused by the Bernoulli effect? Maybe implicitly through data of the pressure difference between the top and underside of an aeroplane's wings. After that, I assume I could (crudely approximating the pressure to be acting perpendicularly to the flight direction) use $\Delta P A$ to work out the net force on the plane. Perhaps there is another way to quantitatively analyse the extent to which the Bernoulli effect causes lift. Edit: see this short cartoon (content similar to Mike Dunlavey's answer).
There's no problem with the Bernoulli effect, only with the way it's understood and explained. It's usually explained with mistakes, like the need for an asymmetrical airfoil and equal flow time above and below, and without mentioning the need to deflect the direction of airflow. Here's the best light-math explanation I've seen . Also study this section that directly answers your question. EDIT: It is easy to find wrong pictures like this: as opposed to a correct one like this (from the link above): So the answer to your question is: all of the lift depends on the Bernoulli principle, because speed and pressure are in trade-off, but the physics needs to be correctly understood.
{ "source": [ "https://physics.stackexchange.com/questions/51503", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/11657/" ] }
51,838
I believe the purpose of a tuning fork is to produce a single pure frequency of vibration. How do two coupled vibrating prongs isolate a single frequency? Is it possible to produce the same effect using only 1 prong? Can a single prong not generate a pure frequency? Does the addition of more prongs produce a "more pure" frequency? The two-prong system only supports a single standing-wave mode; why is that?
I am by no means an expert in tuning fork design, but here are some physical considerations: Different designs may have different "purities," but don't take this too far. It is certainly possible to tune to something not a pure tone; after all, orchestras usually tune to instruments, not tuning forks. Whatever mode(s) you want to excite, you don't want to damp with your hand. Imagine a single bar. If you struck it in free space, a good deal of the power would go into the lowest frequency mode, which would involve motion at both ends. However, clamping a resonator at an antinode is the best way to damp it - all the energy would go into your hand. A fork, on the other hand, has a natural bending mode that will not couple very well to a clamp in the middle.
{ "source": [ "https://physics.stackexchange.com/questions/51838", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/8866/" ] }
51,994
I am a bit confused by the concepts of active and passive transformations. In all the courses I am doing at the moment we do transformations of the form: $$\phi(x) \rightarrow\phi'(x') = \phi(x)$$ and $$\partial_{\mu}\phi(x) \rightarrow \partial_{\nu'} \phi'(x') = \frac{\partial x^{\alpha}}{\partial x^{\prime \nu}} \partial_{\alpha}\phi(x)$$ This is all perfectly clear to me. However, I am reading Peskin and Schroeder at the moment, and they adopt an "active" point of view (their words), such that the above transformations are: $$\phi(x) \rightarrow\phi'(x) = \phi(\Lambda^{-1}x)$$ and $$\partial_{\mu}\phi(x)\rightarrow\partial _{\mu}(\phi(\Lambda^{-1}x)) = (\Lambda^{-1})^{\nu}_{\mu}(\partial_{\nu}\phi)(\Lambda^{-1}x).$$ I don't understand how to interpret this and especially how to derive the second equation.
What you wrote down is the same as what Peskin writes. To see this, notice that if we write the "transformed" position $x'$ as $x' = \Lambda x$, then your first equation can be written as $\phi'(\Lambda x) = \phi(x)$ but this is equivalent to $\phi'(x) = \phi(\Lambda^{-1} x)$ which is the same as the first Peskin equation you wrote down. Your second equation and Peskin's second equation are equivalent. You can show this by using the definition of $x'$ plus the chain rule for partial differentiation. I can add details if you'd like, but I think it's a good exercise to figure out. Active v. Passive The convention in which we define $\phi'(x) = \phi(\Lambda^{-1} x)$ is the active convention because the transformed field value at the transformed point is that same as the non-transformed field value at the non-transformed point, so it's as if we have kept our coordinate system fixed and transformed the field configuration. To get intuition for this, imagine a temperature field $T$ in a 2D laboratory, and imagine keeping the laboratory fixed, but rotating the entire temperature field counterclockwise by a rotation $R$ to obtain a temperature field $T'$. Then (drawing a picture helps) the new temperature field evaluated at a counterclockwise rotated point $R x$ should be the same as the old temperature field evaluated at the non-rotated point $x$, namely $T'(R x) = T(x)$ which is the same as $T'(x) = T(R^{-1} x)$ The passive convention is the one in which we define $\phi'(x) = \phi(\Lambda x)$ and has the interpretation of transforming the coordinates while keeping the field configuration fixed. Try using the temperature analogy to understand this.
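Addendum (the chain-rule step left as an exercise above, spelled out): write $y = \Lambda^{-1}x$, i.e. $y^{\nu} = (\Lambda^{-1})^{\nu}{}_{\mu}x^{\mu}$. Since this map is linear, $\partial y^{\nu}/\partial x^{\mu} = (\Lambda^{-1})^{\nu}{}_{\mu}$, so the chain rule gives $$\partial_{\mu}\big(\phi(\Lambda^{-1}x)\big) = \frac{\partial y^{\nu}}{\partial x^{\mu}}\,(\partial_{\nu}\phi)(y) = (\Lambda^{-1})^{\nu}{}_{\mu}\,(\partial_{\nu}\phi)(\Lambda^{-1}x),$$ which is exactly Peskin's second equation.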
{ "source": [ "https://physics.stackexchange.com/questions/51994", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20155/" ] }
52,012
How can I tell if $A$ and $\exp(B)$ commute? For $[A, B]$ it's simply $AB-BA$ and for $[\exp(A), \exp(B)]$ I think it'd be $\exp(A)\exp(B) - \exp(B)\exp(A) = \exp(A+B) - \exp(B+A) = 0$. Update: it's not generally true. Is there a 'simple' way to find $[A, \exp(B)]$? Or is this one of those problems where, if you encounter them at all, you are probably doing something wrong? The example I am encountering is $[\vec{S}, \exp(S_z)]$.
If OP wants to evaluate $[A,e^B]$ in terms of $[A,B]$, there is a formula $$\tag{1} [A,e^B] ~=~\int_0^1 \! ds~ e^{(1-s)B} [A,B] e^{sB}. $$ Proof of eq.(1): The identity (1) follows by setting $t=1$ in the following identity $$\tag{2} e^{-tB} [A,e^{tB}] ~=~ \int_0^t\!ds~e^{-sB}[A,B]e^{sB} .$$ To prove equation (2), first note that (2) is trivially true for $t=0$. Secondly, note that a differentiation wrt. $t$ on both sides of (2) produces the same expression $$\tag{3} e^{-tB}[A,B]e^{tB},$$ where we use the fact that $$\tag{4}\frac{d}{dt}e^{tB}~=~Be^{tB}~=~e^{tB}B.$$ So the two sides of eq.(2) must be equal. Remark: See also this related Phys.SE post. (It is related because $[A, \cdot]$ acts as a linear derivation.)
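For the skeptical, eq. (1) is easy to verify numerically with random matrices. A sketch (it assumes SciPy's `expm` and `simpson`, i.e. SciPy 1.6 or later; the matrix size and grid resolution are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import simpson

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

lhs = A @ expm(B) - expm(B) @ A               # [A, e^B]

s = np.linspace(0.0, 1.0, 201)
comm = A @ B - B @ A
integrand = np.array([expm((1 - si) * B) @ comm @ expm(si * B) for si in s])
rhs = simpson(integrand, x=s, axis=0)         # right-hand side of eq. (1)

print(np.max(np.abs(lhs - rhs)))              # tiny (quadrature error only)
```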
{ "source": [ "https://physics.stackexchange.com/questions/52012", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20164/" ] }
52,249
Let's say I have two planets that are one hundred thousand lightyears away from each other. I and my immortal friend on the other planet want to communicate, with a strong laser and a tachyon communication device. I record a message on the tachyon communication device and release the message at exactly the same time as I activate the laser, both of which are directed to the other planet which is one hundred thousand lightyears away. Say it is the year 0 for both of us at the time I did this. If tachyons existed, then the message would arrive to my friend before the photons in the laser. It would arrive, say, a year earlier. From my vantage point, that message will arrive to her at year 99,999; the same would be true for my friend's vantage point. However, she will only see the laser at year 100,000. So since she got the message at year 99,999, she immediately sends me a reply back going through the same procedure as I did. She records a message and releases it at the same time as the laser. The tachyons will arrive a year earlier than the laser, so for me, I will receive the message at year 199,998. I will receive the laser, however, at year 199,999. It seems to me that communication this way does not violate causality. I will still have received the message after I had sent it. If tachyons truly violated causality, though, I realize it should arrive at year -1 for her, and so she can reply to me at year -2, which would mess me up by year 0 as I will ask her how she knew I was planning on sending her a message before I sent it. I could send her a different message, which she would end up receiving at year -1, and will end up confusing her as she would have received one message asking her out, and the other asking her how she knew I was asking her out. She then decides I am crazy and sends me a message at year -2 that she does not want to date me, and so she will have both turned me down and entertained me before I have even asked her out. On the other hand, let's go back to year 0 and add a third device to our list: an Alcubierre drive. After I send out the message and the laser, I get impatient and do not feel like waiting 99,999 years, so I get on my Alcubierre drive spaceship and arrive on her planet at the same year 0. My friend is not in her office, so I leave a note to her also immortal secretary saying I dropped by and that she should expect a message for her in year 99,999. I then get back on my Alcubierre drive and land back on my planet, still on year 0. Meanwhile, the tachyons and photons I sent out are still racing to arrive to her. By year 99,999, she receives the message just as I Alcubierre drive back to her, and I pick her up for dinner. But the point of my question is, it seems to me that just going faster than light, if that alone was what you had, would not violate causality. It must be something else. I understand time dilation and that things with mass cannot travel at the speed of light, but using the Alcubierre drive, hypothetically speaking, I was still able to outpace the photons while also having mass. It still did not produce causality problems. Alcubierre drives are also valid solutions to GR.
It seems circular to me to say that what makes traveling faster than light violate causality is that it violates causality (if faster-than-light communication were divorced from causality problems, then the causality problem would cause itself -- thereby violating causality and, hence, we would scrap it and conclude that there is no causality problem after all). What is it that I am missing? If someone could help me out, that would be excellent. I've been itching to ask my friend out for a few millennia now. :)
(There's a couple of these questions kicking around, but I didn't see anyone give the "two boosted copies" answer. Generically, I'd say that's the right answer, since it gives an actual causality violation.) In your scenario, the two planets remain a hundred thousand light years apart. The fact is, you won't get any actual causality violations with FTL that way. The trouble comes if the two planets are moving away from each other. So, let's say that your warp drive travels at ten times the speed of light. Except if the two endpoints of the trip are moving, then what does that mean? Ten times the speed of light relative to which end ? Let's say Tralfamadore is moving at a steady 20% of $c$ (the speed of light), away from Earth. (So, Earth is moving at a steady 20% of $c$ away from Tralfamadore.) If I leave Tralfamadore (in the direction of Earth) and I am travelling at anything less than 20% of $c$ relative to Tralfamadore, then I am still moving away from Earth. I'll never get home. Let's say instead I am travelling at 60% of $c$ relative to Tralfamadore. I will catch up to Earth. Relative to Earth, how fast am I approaching? You might guess the answer is 40% of $c$, but it's 45.45%. Generally, the velocity subtraction formula of relativity is: $$w = (u-v)/(1-uv/c^2)$$ Let's say instead I am travelling at 100% of $c$ relative to Tralfamadore. Plug $u=c, v=0.2c$ into the formula and get $w=c$. Relative to Earth, I am approaching at 100% of $c$! The speed of light is the same for everyone. So finally, let's say instead I am using your warp drive to travel at 1000% of $c$ relative to Tralfamadore. Relative to Earth, I am approaching at -980% of $c$. In Earth's reference frame, I will arrive on Earth before I leave Tralfamadore. Now you may say this in itself isn't a causality violation, because we've applied Earth's calendar to Tralfamadore. And that's true, but I'll make a round trip: In the futuristic Earth year of 3000, Tralfamadore is 98,000 light years away, and receding at 20% of $c$. I leave Earth at 1000% of $c$, relative to Earth . In Earth year 13000 Tralfamadore is 100,000 light years away, and I catch up to it. I turn around and leave Tralfamadore at 1000% of $c$, relative to Tralfamadore . In Earth year 2796, I arrive home. Earth's calendar certainly applies to Earth, and I arrived home two centuries before I left. No two ways about it, I'm a time traveller! There is nothing special about ten times the speed of light. Given a warp drive that moves a certain amount faster than light, you can make the above time machine using two endpoints that are moving apart a certain amount slower than light, provided that the warp drive can move faster than light relative to either end. This time machine works for any form of FTL: tachyons, warp drives, wormholes, what have you.
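The arithmetic of the round trip can be checked directly with Lorentz transformations. A sketch in units of years and light years with $c=1$, taking $t=0$ at Earth year 3000 (the coordinate choices are mine; the speeds are the answer's):

```python
import numpy as np

v = 0.2                        # Tralfamadore's recession speed (c = 1)
g = 1.0 / np.sqrt(1 - v**2)    # Lorentz gamma

def boost(t, x):
    """Event from Earth's frame into Tralfamadore's frame (moving at +v)."""
    return g * (t - v * x), g * (x - v * t)

def unboost(t, x):
    return g * (t + v * x), g * (x + v * t)

# Leg 1 (Earth frame): leave Earth at (0, 0) at 10c; Tralfamadore starts
# 98,000 ly away and recedes at 0.2c.  Catch-up time and place:
t1 = 98_000 / (10 - 0.2)       # = 10,000 yr -> Earth year 13000
x1 = 10 * t1                   # = 100,000 ly

# Leg 2: in Tralfamadore's frame, head home at 10c; Earth's worldline
# there is x' = -v t'.  Intersect the two straight lines:
tp, xp = boost(t1, x1)
t_meet = (xp + 10 * tp) / (10 - v)
x_meet = -v * t_meet
t_home, _ = unboost(t_meet, x_meet)
print(t_home)                  # ~ -204: arrival at Earth year ~2796
```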
{ "source": [ "https://physics.stackexchange.com/questions/52249", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20032/" ] }
52,250
The Wigner's friend thought experiment can be used to understand non-realism in quantum mechanics. For anyone not familiar, the thought experiment involves two researchers observing an experiment at different times, let's say it's an electron spin. The question is, how should the second researcher to observe the experiment treat the system between his observation and his friend's observation? In a non-realist interpretation we would say that the second researcher should continue to time evolve the system with the Schrodinger equation until he observes the system himself, but now he must include his friend as part of the quantum system as well. So, the two observers see the collapse of the wavefunctions at different times, which is fine because the time of wavefunction collapse is not given by a linear Hermitian operator so they need not agree about it. My question concerns what happens if the additional time evolution that the second researcher observes changes the probability. For instance, say there is a 50% probability of spin up when the first researcher observes the experiment. Can this probability change from the second observer's perspective in the time between observations? If it can, how do we explain the fact that if they repeat the experiment 100 times the first researcher would expect to see spin up 50 times and the second researcher would expect to see something else. If it can't, what is the purpose of the additional unitary time evolution that the second researcher uses to describe the situation, couldn't he just use a description where the wavefunction collapses when the first researcher observes the experiment and get the same answer?
{ "source": [ "https://physics.stackexchange.com/questions/52250", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20277/" ] }
52,273
Suppose I have a caliper that is infinitely precise. Also suppose that this caliper returns not a number, but rather whether the precise length is rational or irrational. If I were to use this caliper to measure any small object, would the caliper ever return an irrational number, or would the true dimensions of physical objects be constrained to rational numbers?
The set of irrational numbers densely fills the number line; in fact, the rational numbers have measure zero. Even assuming that quantum mechanics doesn't disable the premise of your question, the probability that you will randomly pick an irrational number out of a hat of all numbers is exactly 1, not merely close to it. So the question should be "is it possible to have an object with rational length?"
{ "source": [ "https://physics.stackexchange.com/questions/52273", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5620/" ] }
52,452
What is the motivation for including the compactness and semi-simplicity assumptions on the groups that one gauges to obtain Yang-Mills theories? I'd think that these hypotheses lead to physically "nice" theories in some way, but I've never, even from a computational perspective, really given these assumptions much thought.
As Lubos Motl and twistor59 explain, a necessary condition for unitarity is that the Yang-Mills (YM) gauge group $G$ with corresponding Lie algebra $g$ should be real and have a positive (semi)definite associative/invariant bilinear form $\kappa: g\times g \to \mathbb{R}$, cf. the kinetic part of the Yang-Mills action. The bilinear form $\kappa$ is often chosen to be (proportional to) the Killing form, but that need not be the case. If $\kappa$ is degenerate, this will induce additional zeromodes/gauge-symmetries, which will have to be gauge-fixed, thereby effectively diminishing the gauge group $G$ to a smaller subgroup, where the corresponding (restriction of) $\kappa$ is non-degenerate. When $G$ is semi-simple, the corresponding Killing form is non-degenerate. But $G$ does not have to be semi-simple. Recall e.g. that $U(1)$ by definition is not a simple Lie group. Its Killing form is identically zero. Nevertheless, we have the following YM-type theories: QED with $G=U(1)$. The Glashow-Weinberg-Salam model for electroweak interaction with $G=U(1)\times SU(2)$.
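As a small illustration of the Killing-form remarks, one can compute it directly from structure constants. A sketch (the index convention $K_{ab} = f_{acd}f_{bdc}$ is one common choice; only the sign conventions are mine):

```python
import numpy as np

# su(2): [T_a, T_b] = eps_abc T_c, so f_abc = eps_abc.
f = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[a, b, c], f[b, a, c] = 1.0, -1.0

K = np.einsum('acd,bdc->ab', f, f)
print(K)   # -2 * identity: nondegenerate and definite -> su(2) is compact
# For u(1) all structure constants vanish, so its Killing form is
# identically zero -- degenerate, exactly as stated in the answer.
```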
{ "source": [ "https://physics.stackexchange.com/questions/52452", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/19976/" ] }
52,479
I was reading about WiTricity ( http://en.wikipedia.org/wiki/WiTricity ), a technology developed by MIT to wirelessly transmit electricity through resonance, and I have this question: Given the phenomenon of resonant inductive coupling, which wikipedia defines as: the near field wireless transmission of electrical energy between two coils that are tuned to resonate at the same frequency. http://en.wikipedia.org/wiki/Resonant_inductive_coupling And the Schumann resonances of the earth (~7.83 Hz, see wikipedia), would it be theoretically possible to create a coil that resonates at the same frequency or one of its harmonics (7.83, 14.3, 20.8, 27.3 and 33.8 Hz) to generate electricity? I have a feeling that these wavelengths may be too big to capture via resonance (they are as large as the circumference of the earth if I understand it correctly), so alternatively would it be possible to create a coil that resonates with one of the EM waves that the sun sends our way?
{ "source": [ "https://physics.stackexchange.com/questions/52479", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20314/" ] }
52,628
I wonder if there ever could be a star (really small) which may orbit around a planet (really big)?
One thing to keep in mind is that objects that are bound gravitationally actually revolve around each other about a point called the barycenter. The fact that the earth looks like it's revolving around the sun is because the sun is much more massive and its radius is large enough that it encompasses the barycenter. This is a similar situation with the Earth and Moon. If there were three bodies, where two bodies were of similar size (like a binary star system plus a massive planet), then an analysis of three-body systems shows that there are stable configurations where the objects will be in very complicated orbits where it would be difficult to say one orbits the other. Update: The short answer is yes, it is possible when you look at the complete dynamical system, for the reasons stated above. More evidence of this can be found in the study of regular star orbits where very complicated orbits are possible and can be stable. Currently the cutoff for classification of a planet and a brown dwarf is 13 Jupiter masses, which is arbitrary to some degree. The lightest main sequence stars have a mass of 75 Jupiters. This will put the barycenter well outside the radius of either body for binary systems. A quick check of the two-body system using the equation: $$R = \dfrac{1}{m_1 + m_2}(m_1r_1 + m_2r_2)$$ Setting $m_1 = 75$, $r_1 = 1$, $m_2 = 13$, $r_2 = 2$ gives: $$\dfrac{75 + 26}{75+13} = 1.147$$ Indicating a barycenter at roughly $\dfrac{1}{7}$ the distance between the objects. More bodies will cause more complicated orbits, where again, it would be difficult to say which object orbits which. It should be noted that if the system was composed of 3 objects, 2 of which had similar mass, it would be possible to develop a system that appears to have two larger objects orbiting a third smaller object. A quick check reveals: $$R = \dfrac{1}{m_1 + m_2 + m_3}(m_1r_1 + m_2r_2+ m_3r_3)$$ Setting $m_1 = 75$, $r_1 = 1$, $m_2 = 13$, $r_2 = 2$, $m_3 = 75$, $r_3 = 3$ gives: $$\dfrac{75 + 26 + 225}{75+13+75} = 2$$ Whether such an orbit system is realizable when you consider the full dynamics of a natural system is debatable, but I am not aware of a specific proof that would rule it out. UPDATE: It should be noted that there are new periodic solutions to 3-body problems when the objects have the same mass.
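The barycenter arithmetic above packages neatly into a one-line function; this reproduces both numbers:

```python
def barycenter(masses, positions):
    """Mass-weighted average position, R = sum(m_i r_i) / sum(m_i)."""
    return sum(m * r for m, r in zip(masses, positions)) / sum(masses)

# Two bodies: 75-Jupiter star at r=1, 13-Jupiter companion at r=2.
print(barycenter([75, 13], [1, 2]))         # 1.1477... -> outside both bodies

# Three bodies: two 75-Jupiter objects flanking a 13-Jupiter one.
print(barycenter([75, 13, 75], [1, 2, 3]))  # 2.0 -> right at the small body
```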
{ "source": [ "https://physics.stackexchange.com/questions/52628", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20423/" ] }
52,943
If photon energies are continuous and atomic energy levels are discrete, how can atoms absorb photons? The probability of a photon having just the right amount of energy for an atomic transition is $0$.
That's a really good question! There are three cases, the third of which is the most fundamental and most interesting. The first case is incomplete absorption, such as a gamma ray knocking loose a few electrons as it passes. In that case the differences are taken care of locally and fairly trivially by allocating energy, momentum and spin appropriately between the parts that were hit and the remaining photon. The second case is flexible absorption, which is when the target receptor is sufficiently large and complex to absorb whatever the difference is between the light that was emitted and the atomic-level receptors of the target. A good example of this kind of flexible absorption is the opsin proteins in the retina of your eye. These proteins are sufficiently large and complex that, like pitcher's mitts in baseball, the molecule as a whole can absorb the mismatched energy, momentum, and spin of any photon that falls within a certain rather broad range of frequencies and polarizations. So, it is some variant of this category that takes care of the flexibility needed in most forms of photon absorption. The third and most curious case happens when you start looking at the quantum side of the question. Because of quantum mechanics, no photon has a truly exact location, energy, momentum, or polarization (or spin, basically its angular momentum). A photon that has traveled for a long time through interstellar space, for example, does pretty well on the precision of its frequency (energy and momentum), but is frankly all over the place in terms of where it could wind up in space. Nonetheless, it still has some residual uncertainty in its frequency, even after a long trip. As with the local flexibility of receptors such as the opsins, this bit of quantum frequency uncertainty also allows some leeway in whether or not a photon will be absorbed by an electron in an atom. The wave function description of the photon in that case allows it to behave like any of a small number of close but distinct frequencies, one of which will be selected when it arrives even at a flexible receptor such as an opsin. However, this final form of flexibility is a bit strange. If energy (which is the same as its frequency for a photon in space) is absolutely conserved, isn't this bit of ambiguity in how the photon is "registered" with an opsin protein in your eye going to cause a slight deviation somewhere in the total energy of the universe? For example, what if the original atom that emitted the photon ended up in its energy accounting books as having emitted the photon at the lower end of the likely envelope, but the atoms in your opsin protein interpreted it as having energy in the upper end of that envelope? If that happens, hasn't your eye in that case just created a tiny bit of energy that did not exist in the universe as a whole before, and so violated energy conservation by just a tiny bit? The answer is intriguing and not at all understandable from a classical viewpoint. While quantum uncertainty does allow a certain degree of freedom that makes absorption possible over a range of frequencies, it does so at a cost to our usual concepts of locality. Specifically, every such event "entangles" the emission and absorption of the photon into a single quantum event, no matter how separated the events may appear to be in ordinary clock time. When I say "entangle" I mean the word in exactly the same somewhat mysterious way it is used by people describing quantum computation.
Entanglement is a bit of physics that crosses ordinary boundaries of space and time in very odd ways, but some aspect of it is always involved in quantum events. How odd? Well, if you live in the Northern hemisphere try this some night: Figure out where the Andromeda Galaxy is, and go out and look at it. So: did you see it? If so, you just ensured that the frequencies (energies) of every photon you saw are now exactly balanced in the unforgiving conservation books of the universe with the formerly uncertain energies of photon emission events that took place roughly 2.5 million years ago. This balancing in a quite real sense did not occur until you took a look at the Andromeda galaxy and forced those photons to give up their former uncertainty. That's how all entanglement works: The wave function remains open and uncertain until a firm detection occurs, then suddenly and frankly rather magically, everything balances out. And all this time you thought there was nothing particularly strange about ordinary light-based vision, yes? Notice, however, that this third entanglement-enabled form of photon reception flexibility only works within the constraints of the photon's wave function. That observation suggests an experiment that is closely related to your original question, which is this: What if you could make the wave function of the photon so tightly and narrowly defined that the slop enabled by entanglement no longer applies? Would your question about "zero probability" then apply, at least at the limit of a wave function with no uncertainty at all in it? The answer is yes.
While it takes some work to set it up and some pretty unusual effects to test it, in the end it really is vanishingly unlikely to get an exact match between the frequency emitted by one atom (or nucleus of an atom) and the frequency expectations of the absorbing atom. It is only through three compensating factors -- incomplete absorption, locally forgiving absorption, and quantum entanglement -- that you get the high levels of "practical" photon absorption that make the world as we know it possible.
{ "source": [ "https://physics.stackexchange.com/questions/52943", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/11625/" ] }
53,012
In John Preskill's review of monopoles he states on p. 471 Nowadays, we have another way of understanding why electric charge is quantized. Charge is quantized if the electromagnetic $U(l)_{\rm em}$ gauge group is compact. But $U(l)_{\rm em}$ is automatically compact in a unified gauge theory in which $U(l)_{\rm em}$ is embedded in a nonabelian semisimple group. [Note that the standard Weinberg-Salam-Glashow (35) model is not "unified" according to this criterion.] The implication of the third sentence is that, in some circumstances, the $U(1)_{\rm em}$ gauge group may not be compact. How could this be? Since $U(1)$ as a differentiable manifold is diffeomorphic to $S^1$ isn't it automatically always compact? The following paragraph: In other words, in a unified gauge theory, the electric charge operator obeys nontrivial commutation relations with other operators in the theory. Just as the angular momentum algebra requires the eigenvalues of $J_z$ to be integer multiples of $\frac{\hbar}{2}$, the commutation relations satisfied by the electric charge operator require its eigenvalues to be integer multiples of a fundamental unit. This conclusion holds even if the symmetries generated by the charges that fail to commute with electric charge are spontaneously broken. is OK, but I don't follow what that has to do with the compactness of $U(1)$.
By the "noncompact $U(1)$ group", we mean a group that is isomorphic to $({\mathbb R},+)$. In other words, the elements of $U(1)$ are formally $\exp(i\phi)$ but the identification $\phi\sim \phi+2\pi k$ isn't imposed. When it's not imposed, it also means that the dual variable ("momentum") to $\phi$, the charge, isn't quantized. One may allow fields with arbitrary continuous charges $Q$ that transform by the factor $\exp(iQ\phi)$. It's still legitimate to call this a version of a $U(1)$ group because the Lie algebra of the group is still the same, ${\mathfrak u}(1)$. In the second part of the question, where I am not 100% sure what you don't understand about the quote, you probably want to explain why compactness is related to quantization? It's because the charge $Q$ is what determines how the phase $\phi$ of a complex field is changing under gauge transformations. If we say that the gauge transformation multiplying fields by $\exp(iQ\phi)$ is equivalent for $\phi$ and $\phi+2\pi$, it's equivalent to saying that $Q$ is integer-valued because the identity $\exp(iQ\phi)=\exp(iQ(\phi+2\pi))$ holds iff $Q\in{\mathbb Z}$. It's the same logic as the quantization of momentum on compact spaces or angular momentum from wave functions that depend on the spherical coordinates. He is explaining that the embedding of the $Q$ into a non-Abelian group pretty much implies that $Q$ is embedded into an $SU(2)$ group inside the non-Abelian group, and then the $Q$ is quantized for the same mathematical reason why $J_z$ is quantized. I would only repeat his explanation because it seems utterly complete and comprehensible to me. Note that the quantization of $Q$ holds even if the $SU(2)$ is spontaneously broken to a $U(1)$. After all, we see such a thing in the electroweak theory. The group theory still works for the spontaneously broken $SU(2)$ group.
{ "source": [ "https://physics.stackexchange.com/questions/53012", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3099/" ] }
53,148
In relativistic quantum field theories (QFT), $$[\phi(x),\phi^\dagger(y)] = 0 \;\;\mathrm{if}\;\; (x-y)^2<0\,.$$ On the other hand, even for space-like separation $$\phi(x)\phi^\dagger(y)\ne0\,.$$ Many texts (e.g. Peskin and Schroeder) promise that this condition ensures causality. Why isn't the matrix element $\langle\psi| \phi(x)\phi^\dagger(y)|\psi\rangle$ of physical interest? What is stopping me from cooking up an experiment that can measure $|\langle\psi| \phi(x)\phi^\dagger(y)|\psi\rangle|^2$ ? What is wrong with interpreting $\langle\psi| \phi(x)\phi^\dagger(y)|\psi\rangle \ne 0$ as the (rather small) amplitude that I can transmit information faster than the speed of light?
Recall that commuting observables in quantum mechanics are simultaneously observable. If I have observables A and B, and they commute, I can measure A and then B and the results will be the same as if I measured B and then A (if you insist on being precise, then by the same I mean in a statistical sense where I take averages over many identical experiments). If they don't commute, the results will not be the same: measuring A and then B will produce different results than measuring B and then A. So if I only have access to A and my friend only has access to B, by measuring A several times I can determine whether or not my friend has been measuring B or not. Thus it is crucial that if A and B do not commute, they are not spacelike separated. Or to remove the double negatives, it is crucial that A and B must commute if they are spacelike separated. Otherwise I can tell by doing measurements of A whether or not my friend is measuring B, even though light could not have reached me from B. Then with the magic of a lorentzian spacetime I could end up traveling to my friend and arriving before he observed B and stop him from making the observation. The correlation function you wrote down, the one without the commutator, is indeed nonzero. This represents the fact that values of the field at different points in space are correlated with one another. This is completely fine, after all there are events that are common to both in their past light cone, if you go back far enough. They have not had completely independent histories. B U T the point is that these correlations did not arise because you made measurements. You cannot access these correlations by doing local experiments at a fixed spacetime point, you can only see these correlations by measuring field values at spatial location x and then comparing notes with your friend who measured field values at spatial location y. You can only compare notes when you have had time to travel to get close to each other. The vanishing commutator guarantees that your measurements at x did not affect her measurements at y. It is dangerous to think of fields as creating particles at spacetime locations, because you can't localize a relativistic particle in space to a greater precision than its compton wavelength. If you are thinking of fields in position space it is better to think of what you are measuring as a field and not think of particles at all. (Actually I should say that I don't think you could actually learn that your friend was measuring B at y by only doing measurements at A. But the state of the field would change, and the evolution of the field would be acausal. I think this is a somewhat technical point, the main idea is that you don't want to be able to affect what the field is going OVER THERE outside the light cone by doing measurements RIGHT HERE because you get into trouble with causality)
{ "source": [ "https://physics.stackexchange.com/questions/53148", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/12957/" ] }
53,318
While studying the Dirac equation, I came across this enigmatic passage on p. 551 in From Classical to Quantum Mechanics by G. Esposito, G. Marmo, G. Sudarshan regarding the $\gamma$ matrices: $$\tag{16.1.2} (\gamma^0)^2 = I , (\gamma^j)^2 = -I \ (j=1,2,3) $$ $$\tag{16.1.3} \gamma^0\gamma^j + \gamma^j \gamma^0 = 0 $$ $$\tag{16.1.4} \gamma^j \gamma^k + \gamma^k \gamma^j = 0, \ j\neq k$$ In looking for solutions of these equations in terms of matrices, one finds that they must have as order a multiple of 4, and that there exists a solution of order 4. Obviously the word order here means dimension. In my QM classes the lecturer referenced chapter 5 from Advanced Quantum Mechanics by F. Schwabl, especially as regards the dimension of Dirac $\gamma$ matrices. However there it is stated only that, since the number of positive and negative eigenvalues of $\alpha$ and $\beta^k$ must be equal, $n$ is even. Moreover, $n=2$ is not sufficient, so $n=4$ is the smallest possible dimension in which it is possible to realize the desired algebraic structure. While I got that the smallest dimension is 4, I fail to find any argument to reject the possibility that $n=6$ could be a solution. I also checked this Phys.SE post, but I didn't find it helpful at all. Can anyone help me?
Let us generalize from four space-time dimensions to a $d$ -dimensional Clifford algebra $C$ . Define $$ p~:=~[\frac{d}{2}], \tag{1}$$ where $[\cdot]$ denotes the integer part . OP's question then becomes Why must the dimension $n$ of a finite dimensional representation $V$ be a multiple of $2^p$ ? Proof: If $C\subseteq {\rm End}(V)$ and $V$ are both real, we may complexify, so we may from now on assume that they are both complex. Then the signature of $C$ is irrelevant, and hence we might as well assume positive signature. In other words, we assume that we are given $n\times n$ matrices $\gamma_{1}, \ldots, \gamma_{d}$ , that satisfy $$ \{\gamma_{\mu}, \gamma_{\nu}\}_+~=~2\delta_{\mu\nu}{\bf 1}, \qquad \mu,\nu~\in~\{1,\ldots, d\}.\tag{2} $$ We may define $$ \gamma_{\mu\nu}~:=~ \frac{1}{2}[\gamma_{\mu}, \gamma_{\nu}]_- ~=~-\gamma_{\nu\mu}, \qquad \mu,\nu~\in~\{1,\ldots, d\}. \tag{3}$$ In particular, define $p$ elements $$ H_1, \ldots, H_p,\tag{4} $$ as $$ H_r ~:=~i\gamma_{r,p+r}, \qquad r~\in~\{1,\ldots, p\}.\tag{5} $$ Note that the elements $H_1,\ldots, H_p$ , (and $\gamma_d$ if $d$ is odd), are a set of mutually commuting involutions $$ [H_r,H_s]_- ~=~0, \qquad r,s~\in~\{1,\ldots, p\},\tag{6} $$ $$ H_r^2 ~=~{\bf 1}, \qquad r~\in~\{1,\ldots, p\}.\tag{7} $$ Therefore, according to Lie's Theorem , then $H_1,\ldots, H_p$ , (and $\gamma_d$ if $d$ is odd), must have a common eigenvector $v$ . Since $H_1,\ldots, H_p$ are involutions, their eigenvalues are $\pm 1$ . In other words, $$H_1 v~=~(-1)^{j_1} v, \quad \ldots, \quad H_p v~=~(-1)^{j_p} v,\tag{8} $$ where $$ j_1,\ldots, j_p~\in ~\{0,1\} \tag{9}$$ are either zero or one. Apply next the $p$ first gamma matrices $$ \gamma^{1}, \gamma^{2}, \ldots, \gamma^{p}, \tag{10} $$ to the common eigenvector $v$ , so that $$ v_{(k_1,\ldots, k_p)}~:=~ \gamma_{1}^{k_1}\gamma_{2}^{k_2}\cdots\gamma_{p}^{k_p} v, \tag{11} $$ where the indices $$ k_1,\ldots, k_p~\in ~\{0,1\} \tag{12} $$ are either zero or one. Next note that $$ [H_r,\gamma_s]_-~=~0 \quad \text{if}\quad r~\neq~ s \mod p \tag{13} $$ and $$ \{H_r,\gamma_r\}_+~=~0. \tag{14} $$ It is straightforward to check that the $2^p$ vectors $v_{(k_1,\ldots, k_p)}$ also are common eigenvectors for $H_1,\ldots, H_p$ . In detail, $$ H_r v_{(k_1,\ldots, k_p)}~=~(-1)^{k_r+j_r}v_{(k_1,\ldots, k_p)}.\tag{15}$$ Note that each eigenvector $v_{(k_1,\ldots, k_p)}$ has a unique pattern of eigenvalues for the tuple $(H_1,\ldots, H_p)$ , so the $2^p$ vectors $v_{(k_1,\ldots, k_p)}$ must be linearly independent. Since $$ \gamma_{p+r}~=~ i H_r \gamma_r, \qquad r~\in~\{1,\ldots, p\}, \tag{16} $$ we see that $$ W~:=~{\rm span}_{\mathbb{C}} \left\{ v_{(k_1,\ldots, k_p)} \mid k_1,\ldots, k_p~\in ~\{0,1\} \right\} \tag{17} $$ is an invariant subspace $W\subseteq V$ for $C$ . This shows that that any irreducible complex representation of a complex $d$ -dimensional Clifford algebra is $2^p$ -dimensional. Finally, we believe (but did not check) that a finite dimensional representation $V$ of a complex Clifford algebra is always completely reducible, i.e. a finite sum of irreducible representations, and hence the dimension $n$ of $V$ must be a multiple of $2^p$ . $\Box$
{ "source": [ "https://physics.stackexchange.com/questions/53318", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/10642/" ] }
53,534
I'm trying to see that the invariance of the Klein–Gordon field implies that the Fourier coefficients $a(\mathbf{k})$ transform like scalars: $a'(\Lambda\mathbf{k})=a(\mathbf{k}).$ Starting from the mode expansion of the field $$\phi'(x)=\phi(\Lambda^{-1}x)= \int \frac{d^{3}k}{(2\pi)^{3}2E_{k}} \left( e^{-ik\cdot \Lambda^{-1}x}a(\mathbf{k}) +e^{ik\cdot \Lambda^{-1}x}b^{*}(\mathbf{k}) \right),$$ it's easy to see that it equals $$\int \frac{d^{3}k}{(2\pi)^{3}2E_{k}} \left( e^{-i(\Lambda k)\cdot x}a(\mathbf{k}) +e^{i(\Lambda k)\cdot x}b^{*}(\mathbf{k}) \right).$$ when using the property $\Lambda^{-1}=\eta\Lambda^{T}\eta$. Then doing a change of variable $\tilde{k}=\Lambda k$ and considering orthochronous transformations so that the Jacobian is 1, then I get the wanted result ($a'(\Lambda\mathbf{k})=a(\mathbf{k})$) when comparing with the original mode expansion. However, this is not quite right as I would have to justify that $E_\tilde{k}=E_k$ but I can't see how.
The important insight is that it's actually the whole combination $$ \frac{d^3 k}{2(2\pi)^3 E_\mathbf k}, \qquad E_\mathbf{k} = \sqrt{\mathbf k^2 + m^2} $$ that forms a Lorentz-invariant measure. To see this, note that if we define $k= (k^0, \mathbf k)$ and use the identity $$ \delta(f(x)) = \sum_{\{x_i:f(x_i) = 0\}} \frac{\delta(x-x_i)}{|f'(x_i)|} $$ then we get $$ \delta(k^2 - m^2)=\frac{\delta(k^0 - \sqrt{\mathbf k^2+m^2})}{2\sqrt{\mathbf k^2+m^2}} + \frac{\delta(k^0 + \sqrt{\mathbf k^2+m^2})}{2\sqrt{\mathbf k^2+m^2}} $$ so the original measure can be rewritten as $$ \frac{d^3 k}{2(2\pi)^3 E_\mathbf k}=\frac{d^3k\,d k^0}{2(2\pi)^3 k^0}\delta(k^0 - \sqrt{\mathbf k^2+m^2}) = \frac{d^4k}{(2\pi)^3}\delta(k^2-m^2)\theta(k^0) $$ which is manifestly Lorentz invariant for proper, orthochronous Lorentz transformations. The rest of your manipulations go through unscathed, and you get the result you want! Hope that helps! Cheers!
{ "source": [ "https://physics.stackexchange.com/questions/53534", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20335/" ] }
53,648
I am not a physicist; I am a software engineer. While trying to fall asleep recently, I started thinking about the following. There are many explanations online of how any object stays in orbit. The explanation boil down to a balance of the object's tangential force with centripetal force. But suppose something upsets this balance by a miniscule amount -- say, a meteorite or a spaceship crashing into Earth. Doesn't this start a positive feedback process to break Earth out of orbit? Suppose the meteorite crashes such that the Earth is briefly forced toward the sun. (The meteorite contributes to the centripetal force.) Now Earth is just a smidgen closer to the sun. Due to the equation $F=G \frac{m_1 m_2}{d^2}$ that everyone learns in high school physics, the sun's centripetal force acting on Earth in turn increases! That pulls Earth even closer to the sun, increasing centripetal force yet more, and so on. A similar argument applies to briefly forcing Earth away from the sun. Empirically, I want to say that there's a buffer, such that if the balance of forces is disrupted by less than X%, we remain in orbit. But I cannot justify any buffer from the equation above. So, what am I missing? How can objects in orbit suffer minor perturbations in the balance of tangential and centripetal force and yet remain in orbit, when it appears to me that any perturbation starts a positive feedback loop?
An orbit is stable because of conservation of angular momentum. Suppose we start with an object in an exactly circular orbit and slow it down slightly. That means it is moving at less than orbital velocity so it starts to fall inwards. However as its distance to the Sun decreases the tangential component of its velocity has to increase to conserve angular momentum. So as the object nears the Sun it moves faster and faster, and at its closest approach to the Sun it is moving at well above orbital velocity so it starts to move outwards again. You end up with an elliptical orbit: (this diagram shamelessly cribbed from Google images) It's actually very difficult to get something orbiting a star to fall into it, because you have to reduce the tangential velocity to zero. At the distance of the Earth from the Sun the orbital velocity is 108,000 km/h. You would have to slow the Earth by this amount to make it fall into the Sun, and fortunately no meteorite is likely to do that. On a side note, NASA recently sent the Messenger spaceship to study Mercury, and getting the ship to Mercury was hard because of the need to shed all that orbital velocity. Even though Mercury is a lot closer to the Sun than the Earth is you can't just fall there. Messenger had to use several gravity assists to shed enough speed to allow it to orbit Mercury.
{ "source": [ "https://physics.stackexchange.com/questions/53648", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20841/" ] }
53,682
After a while, a ball point pen doesn't write very well anymore. It will write for a little distance, then leave a gap, then maybe write in little streaks, then maybe write properly again. It seems to be worse with older pens, but I have observed this with new pens right out of the box too. Experiments I have done: Take the cartridge out and look at the amount of ink. There is still plenty. Inspect the ball with a jewler's loupe, no obvious damage, everything looks smooth and clean. Stored new pens unused tip-down to eliminate gravity slowly pulling the ink from the ball and leaving a air pocket. Some of the pens exhibit the symptom even when used the first time with the cap never removed before and stored this way for a year. Stuck a wire in the open end of the ink reservoir to see if maybe the end dried to a hard plug so that new ink couldn't move down as it was removed from the reservoir by writing. I have never found anything hard, and observed the same symptoms even after "stirring" the top of the reservoir with a wire a little. When a pen stops writing, shake it hard, like resetting a fever thermometer. That seems to help for a brief while, but so does just waiting a few seconds, so I'm not sure the shaking is relevant. Stored a pen ball-down in a glass of water overnight. The thought was if the ink just above the ball had dried, maybe this would re-constitute it. Some ink clearly dissolved in the water since it was colored, but once the pen was started up again there was no apparent change to the symptoms. Cold seems to exaggerate the symptoms, but warming to body temperature doesn't fix them. This is not just a single pen or a single model. I have bunch of different pens of different models that do this. I'm curious, what causes this? Added: I have done some more experimenting, and it seems Emile Jetzer was right. The cause seems to be that the ink is so viscous that new ink doesn't flow down to replace what is removed via the ball fast enough. Two experiments support this hypothesis: A pen will write again after a while by just letting it sit ball-down, but the time is significantly decreased when you shake it, like you would resetting a fever thermometer. Some stick pens are sealed except for a small air hole at the top. Putting lips around the top of the pen and applying pressure as if you were trying to blow into it resets the writing action quickly. Even better, I can write with such pens much longer than they would normally go by holding my mouth over the top and applying constant air pressure. So, I think the mystery is solved. Probably ink in the reservoir dries out slowly over time by losing water vapor from the top. That makes the entire ink more visous, which explains why old but unused pens also exhibit this symptom. The next experiment is to take such a pen and add a little water at the top of the ink reservoir, then let it stand for a week and see what difference that makes. Added 2: I added a little bit of water at the end of the ink reservoir in one of the problem pens. I did this by using a small flexible tube (plastic insulation stripped from #22 wire) to put some water right at the end of the ink without a bubble between the ink and the water. At first, there was no change to the symptom. After about 2 weeks, the pen worked significantly better. This pen had about 1 1/2 inches of ink in the reservoir, so it apparently took that long for the water to diffuse down to the ball end. 
I think that this and the other tests conclusively prove that the problem is the ink drying out over time, which makes it more viscous, which prevents it from flowing down to the ball just by gravity as fast as the ball is capable of removing ink.
I would guess it's in part because of the viscosity of the ink. That would explain why the effect is more seen when it's colder. I don't know how doable it is, but you could try filling an ink cartridge with ink used for fountain pens, which is typically less viscous. You might get blotches of ink, but my guess is you won't get dry strokes. So maybe ink manufacturers used easy-flowing ink in ballpoints at first, but then saw it flowed too easily, and made more viscous inks. But this is speculative.
{ "source": [ "https://physics.stackexchange.com/questions/53682", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20848/" ] }
53,802
In non-relativistic QM, the $\Delta E$ in the time-energy uncertainty principle is the limiting standard deviation of the set of energy measurements of $n$ identically prepared systems as $n$ goes to infinity. What does the $\Delta t$ mean, since $t$ is not even an observable?
Let a quantum system with Hamiltonian $H$ be given. Suppose the system occupies a pure state $|\psi(t)\rangle$ determined by the Hamiltonian evolution. For any observable $\Omega$ we use the shorthand $$ \langle \Omega \rangle = \langle \psi(t)|\Omega|\psi(t)\rangle. $$ One can show that (see eq. 3.72 in Griffiths QM) $$ \sigma_H\sigma_\Omega\geq\frac{\hbar}{2}\left|\frac{d\langle \Omega\rangle}{dt}\right| $$ where $\sigma_H$ and $\sigma_\Omega$ are standard deviations $$ \sigma_H^2 = \langle H^2\rangle-\langle H\rangle^2, \qquad \sigma_\Omega^2 = \langle \Omega^2\rangle-\langle \Omega\rangle^2 $$ and angled brackets mean expectation in $|\psi(t)\rangle$ . It follows that if we define $$ \Delta E = \sigma_H, \qquad \Delta t = \frac{\sigma_\Omega}{|d\langle\Omega\rangle/dt|} $$ then we obtain the desired uncertainty relation $$ \Delta E \Delta t \geq \frac{\hbar}{2} $$ It remains to interpret the quantity $\Delta t$ . It tells you the approximate amount of time it takes for the expectation value of an observable to change by a standard deviation provided the system is in a pure state. To see this, note that if $\Delta t$ is small, then in a time $\Delta t$ we have $$ |\Delta\langle\Omega\rangle| =\left|\int_t^{t+\Delta t} \frac{d\langle \Omega\rangle}{dt}\,dt\right| \approx \left|\frac{d\langle \Omega\rangle}{dt}\Delta t\right| = \left|\frac{d\langle \Omega\rangle}{dt}\right|\Delta t = \sigma_\Omega $$
{ "source": [ "https://physics.stackexchange.com/questions/53802", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20906/" ] }
53,913
It is generally agreed upon that electromagnetic waves from an emitter do not have to connect to a receiver, but how can we be sure this is a fact? The problem is that we can never observe non-received EM-Waves, because if we observe them the instrument of observation becomes a receiver. Electromagnetic waves have changing electric and magnetic fields and are both electric and magnetic. Electric current connects like from an anode to a cathode. Magnetic fields illustrated by flux lines connect from one magnetic pole to another, and no non-connecting flux lines are observed. So electric currents connect and magnetic fields connect, so why doesn’t the electromagnetic wave also always connect to a receiver? A receiver which could be a plasma particle, a planet, a star and anything else which can absorb EM-radiation. There is one big problem. If a photon has to be emitted in the direction of a future receiver, the photon must know where a future receiver will be. So this conflicts with our view on causality, or a cause creating an effect. And as the emitter doesn’t know where the receiver will be some time in the future, it can't emit an EM-wave against it. But how can we know that the causality principle is always valid without exceptions? There seems to be reasons for questioning the universal validity of the causality principle: Information does not have a mass and may then not be restricted by the speed of light, so the causality principle may not always hold for massless particles/waves. When something travels with the speed of light, it will experience that distance become zero. If there is no distance, there is a full connection and a continuous electromagnetic wave between the emitter and receiver. Again, using the photon as a reference frame is not something relativistic physicists seem to like. Maxwell's electromagnetic wave equation has a simple and an advanced solution. The advanced solution is usually discarded because the effect happens before the cause. But in Wheeler–Feynman absorber theory the advanced solution is used because it works. See this link for more information: http://en.wikipedia.org/wiki/Wheeler%E2%80%93Feynman_absorber_theory The field of quantum mechanics is discussing many different causality problems. Like the observation of a particle might decide where the particle will be in time and space. Relevant to this discussion is the question of what triggers the atom to emit light: Over the last hundred years, physicists have discovered systems that change from one state to another without any apparent physical “trigger.” These systems are described by quantum mechanics. The simplest such system is the hydrogen atom. It’s just an electron bound to a proton. Two particles – that’s about as simple as you can get. According to QM, the electron can occupy one of a discrete set of energy levels. The electron can be excited to a higher energy level by absorbing a photon… When the electron drops from a higher energy level to a lower level, it emits a photon: a quantum of light… Quantum mechanics describes this process beautifully, but it only predicts the average time the electron will stay in the higher energy level. It doesn’t give any clue as to the specific time the electron will drop to the lower level. 
More precisely, the transition rate (the probability of a transition per unit time) is constant: it doesn’t matter how long it has been since the atom was excited, the transition rate stays the same… When you first encounter this, you can’t quite wrap your brain around it. Surely there must be some internal mechanism, some kind of clock, that ticks along and finally “goes off,” causing the transition! But no such mechanism has ever been found. QM has had an unexcelled record of accurate predictions, without any need for such a mechanism…” – George Mason University physicist Robert Oerter So is the excited atom a random generator, or is it something external that triggers the release of a photon? It seems like it’s something external, and this external trigger might be the unphysical connection to a future receiver described by the advanced solution to Maxwell’s equation of electromagnetic radiation. So it seems to me that we currently can’t be sure whether a photon is always emitted toward a receiver, or emitted randomly in any direction into space. But this question might be one of the most important questions ever asked, because if an electromagnetic wave is always connected to a receiver the implications are vast. It could shed light on the discussion of many topics. It might change our view of time and space. It might not only be the past pushing the present forward, but the future pulling on the present, making a syntropy which will create order out of chaos, and describe the marvelous universe we live in. Even the view of the present itself as a sharp line between the past and the future could be questioned. Time itself might not be totally linear, and the future may change the past. To avoid paradoxes with time travel we have to allow a number of parallel universes, as suggested by American physicist Hugh Everett, who formulated the idea of their existence to explain the theory that every possible outcome of every choice we have actually does happen. But before we can fully dive into all these fascinating questions, we have to solve this question: Does an electromagnetic wave always have to connect to a receiver? This hypothetical question might seem purely philosophical, but it is not. And it might even be confirmed by observations. We can’t directly observe non-received photons, but we might indirectly observe the existence or nonexistence of these photons. Any answer or suggestions are most welcome.
Richard Feynman's PhD thesis was about just this topic, if I am understanding your question rightly. Here is an earlier question about Feynman's thesis that addresses some of the fascinating issues involved with this. At the suggestion of his thesis adviser John Wheeler, Feynman explained photon emission as a two-way interaction in which the regular photon is emitted and follows the "retarded" solutions to Maxwell's equations. "Meanwhile" (in some rather abstract sense of the word indeed) a target atom or particle in the distant future emits its own photon, but a very special one that travels backwards in time -- a type of solution to Maxwell's equations that had been recognized since Maxwell's time but had been ignored. These solutions were called the "advanced" solutions. This advanced photon travels back in time and "just happens" to arrive at the source at the exact instant when the regular photon is emitted, causing the emitting atom to be kicked backwards a tiny bit. Amazingly, Wheeler and Feynman were able to write a series of papers showing that despite how mind-boggling this scenario sounded, it did not result in violations of causality, and it did provide a highly effective model of electron-photon interactions. From this start, and with some important changes, Feynman eventually produced his Feynman-diagram explanation of quantum electrodynamics, or QED. The curious time relationships continue in Feynman's QED, where for example a positron or anti-electron simply becomes an ordinary electron traveling backwards in time. Staying fully consistent with his own ideas, Feynman himself described photon interactions as always having an emission and a reception event, no matter how far apart those events occur in ordinary time. In his view, if you shone a flashlight into deep space, the photons could not even be emitted until they found their "partner" advanced photon emission events somewhere in the distant future. The proof of it is in the very slight push back on your hand that happens when you shine the light, that kick coming from the advanced photons arriving from that distant point in the future and nudging the electrons in your flashlight filament.
{ "source": [ "https://physics.stackexchange.com/questions/53913", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20954/" ] }
53,916
I've looked through about 20 different explanations, from the most basic to the most complex, and yet I still don't understand this basic concept. Perhaps someone can help me. I don't understand the difference between the electric and magnetic force components in electromagnetism. I understand that an electric field is created by electrons and protons. This force is attractive to particles carrying opposite charge and repulsive to like-charge particles. So then you get moving electrons and all of a sudden you have a "magnetic" field. I understand that the concept of a magnetic field is only relative to your frame of reference, but there's no ACTUAL inherent magnetic force created, is there? Isn't magnetism just a term we use to refer to the outcomes we observe when you take a regular electric field and move it relative to some object? Electrons tend to be in states where their net charge is offset by an equivalent number of protons, thus there is no observable net charge on nearby bodies. If an electron current is moving through a wire, would this create fluctuating degrees of local net charge? If that's the case, is magnetism just what happens when electron movement creates a net charge that has an impact on other objects? If this is correct, does magnetism always involve a net charge created by electron movement? If my statement in #2 is true, then what exactly are the observable differences between an electric field and a magnetic field? Assuming #3 is correct, then the net positive or negative force created would be attractive or repulsive to magnets because they have localized net charges in their poles, correct? Whereas a standard electric field doesn't imply a net force, and thus it wouldn't be attractive or repulsive? A magnetic field would also be attractive or repulsive to some metals because of the special freedom of movement that their electrons have? If I could take any object with a net charge (i.e. a magnet), even if it's sitting still and not moving, isn't that an example of a magnetic field? I just generally don't understand why moving electrons create magnetism (unless I was correct in my net charge hypothesis) and I don't understand the exact difference between electrostatic and magnetic fields.
So then you get moving electrons and all of a sudden you have a "magnetic" field. But at the same time, if you take a magnetic dipole (a magnet as we know it) and move it around, you will all of a sudden get an electric field. It was a great step forward in the history of physics when these two observations were combined in one electromagnetic theory in Maxwell's equations. Changing electric fields generate magnetic fields and changing magnetic fields generate electric fields. The only difference between the two lies in the elementary quantum of the field. The electric field is sourced by a pole (a single charge), while the magnetic field is dipole in nature; magnetic monopoles, though acceptable in the theories, have not been found. Electric dipoles exist in symmetry with the magnetic dipoles. (The original answer shows two figures here: electric dipole field lines and magnetic dipole field lines.) but there's no ACTUAL inherent magnetic force created, is there? There is symmetry in electric and magnetic forces. (The next is number 2 in the question.) Isn't magnetism just a term we use to refer to the outcomes we observe when you take a regular electric field and move it relative to some object? Historically, magnetism was observed in ancient times in minerals coming from Magnesia, a region in Asia Minor; hence the name. Nothing to do with any obvious moving electric fields. After Maxwell's equations and the discovery of the atomic nature of matter, the small magnetic dipoles within the magnetic materials that build up permanent magnets were discovered. Electrons tend to be in states where their net charge is offset by an equivalent number of protons, thus there is no observable net charge on nearby bodies. If an electron current is moving through a wire, would this create fluctuating degrees of local net charge? If that's the case, is magnetism just what happens when electron movement creates a net charge that has an impact on other objects? If this is correct, does magnetism always involve a net charge created by electron movement? No. See the answer to 2. Changing magnetic fields create electric fields and vice versa. No net charges involved. If my statement in #2 is true, then what exactly are the observable differences between an electric field and a magnetic field? Assuming #3 is correct, then the net positive or negative force created would be attractive or repulsive to magnets because they have localized net charges in their poles, correct? Whereas a standard electric field doesn't imply a net force, and thus it wouldn't be attractive or repulsive? A magnetic field would also be attractive or repulsive to some metals because of the special freedom of movement that their electrons have? No. A magnetic field interacts to first order with the magnetic dipole fields of atoms. Some atoms have strong ones, some have none. A moving magnetic field generates an electric field, which will interact with the electrons in a current. If I could take any object with a net charge (i.e. a magnet), even if it's sitting still and not moving, isn't that an example of a magnetic field? A magnet usually has zero electric charge, unless deliberately charged by a battery or whatnot. It has a magnetic dipole, which will interact with magnetic fields directly. See the link above. 
I just generally don't understand why moving electrons create magnetism (unless I was correct in my net charge hypothesis) and I don't understand the exact difference between electrostatic and magnetic fields. It is an observational fact, an experimental fact, on which both classical and quantum electromagnetic theory are based. Facts are to be accepted, and the mathematics of the theories fitting the facts allows predictions and manipulations which, in the case of electromagnetism, are very accurate and successful, including the web page we are communicating through.
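For reference (standard textbook material, not part of the original answer), the two curl equations of Maxwell's theory in SI units that encode this mutual generation are $$\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0\varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$$ A changing $\mathbf{B}$ sources a curling $\mathbf{E}$ (Faraday's law), and a changing $\mathbf{E}$, together with any current $\mathbf{J}$, sources a curling $\mathbf{B}$ (the Ampere-Maxwell law); no net charges are needed for either.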
{ "source": [ "https://physics.stackexchange.com/questions/53916", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20956/" ] }
54,052
I want to reproduce this experiment myself. What do I need for this? What parameters of the slits and of the laser (or another light source) are required? Is it possible to make a DIY detector?
It's actually quite easy to perform the experiment in the comfort of your own home. The simplest setup I have seen (as depicted in this , and other youtube videos) is to use a laser pointer and pencil lead, but you can certainly be more systematic and cut slits in some opaque material as well. I would encourage you to experiment to answer the question of how far apart the slits need to be etc., but some basic math behind this is as follows: If the slits are a distance $d$ apart, if the light has wavelength $\lambda$, and if the distance between the slits and the screen is $L$, then the spacing $\Delta y$ between successive fringes on the wall will approximately be $$ \Delta y \approx \frac{\lambda L}{d} $$ So let's say the laser is red so that $\lambda\approx 700 \mathrm{nm}$, the slits are $1\,\mathrm{mm}$ apart, and the screen is $1.5\,\mathrm m$ away from the slits, then we have $$ \Delta y \approx \frac{(700\,\mathrm{nm})(1.5\,\mathrm{m})}{1\,\mathrm{mm}} = 1.05\,\mathrm{mm} $$ So you can actually try this and see if your results agree! (I might actually try this myself come to think of it; thanks for the question!) Cheers!
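A tiny helper (my own sketch, not from the answer above) for playing with these numbers before cutting any slits:

```python
# Double-slit fringe spacing: dy ~ lambda * L / d
lam = 700e-9   # red laser pointer wavelength, m
L = 1.5        # slit-to-screen distance, m
d = 1e-3       # slit separation, m (pencil-lead scale)

dy = lam * L / d
print(f"fringe spacing: {dy * 1e3:.2f} mm")   # ~1.05 mm
```

Doubling the screen distance or halving the slit separation doubles the fringe spacing, which is handy if the pattern is too fine to see.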
{ "source": [ "https://physics.stackexchange.com/questions/54052", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/16027/" ] }
54,102
Reports of the Russian meteor event (2013) say that it released more energy than 20 atomic bombs of the size dropped on Hiroshima, Japan: Scientists estimated the meteor unleashed a force 20 times more powerful than the Hiroshima bomb, although the space rock exploded at a much higher altitude. Amy Mainzer, a scientist at NASA's Jet Propulsion Laboratory, said the atmosphere acted as a shield. The shock wave may have shattered windows, but "the atmosphere absorbed the vast majority of that energy," she said. http://abclocal.go.com/kfsn/story?section=news/national_world&id=8994311 Really? Wouldn't that have done more damage than was seen? How does the damage depend on how quickly the energy is released?
The Hiroshima bomb released 67 terajoules of energy, i.e. $6.7\times 10^{13}$ joules. We may calculate the mass a meteor would need in order to carry one Hiroshima of kinetic energy, assuming that the speed is $v=20,000$ m/s: $$ \frac{1}{2} mv^2 = 6.7 \times 10^{13} $$ We obtain 335 tons. The numbers aren't precise but they're in the ballpark and reasonable. The Russian Academy of Sciences actually estimates 10,000 tons, which would be something like 30 Hiroshimas. The Hiroshima bomb was harmful because its energy was focused on a small place: several kilometers around the explosion were destroyed. The energy of the Russian meteor was distributed over a much larger area, of radius closer to dozens if not 100 km, and much of the energy was deposited in the atmosphere, so the local impact was significantly smaller than it was in Hiroshima. If we exaggerate a bit, the energy was spread over 20 times longer distances than in Hiroshima, and the dilution scales as something in between the second and third power of the distance, so one gets about a 500 times smaller "local impact" at the relevant places than in Hiroshima even if we add the factor of 30 (30 Hiroshimas). Some individual collisions detected on the ground were estimated at just 1 TJ or so, 67 times weaker than the Hiroshima bomb. The bulk of the energy was deposited in the atmosphere. But I guess that the main reason why you find the numbers counterintuitive is the widespread antiwar propaganda that prefers to present a nuclear blast as a nearly supernatural event of nearly infinite proportions. This ain't the case. The bomb in Hiroshima was just another bomb, albeit a stronger one (plus some annoying radioactive stuff that had other consequences, something that wasn't caused by the meteorite). Five kilometers from ground zero, people experienced similar effects to those in Chelyabinsk – broken windows and a loud but not overwhelming sound of the explosion. Hundreds of meters from the explosion, things vaporized; the Russian meteor had no place with this much concentrated energy.
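Reproducing the back-of-envelope numbers above (my own sketch):

```python
E_hiroshima = 6.7e13        # J
v = 20_000.0                # assumed meteor speed, m/s

m = 2 * E_hiroshima / v**2  # mass carrying exactly one Hiroshima of kinetic energy
print(m / 1000)             # ~335 tons

m_ras = 1e7                 # 10,000 tons (Russian Academy of Sciences estimate), kg
E_ras = 0.5 * m_ras * v**2
print(E_ras / E_hiroshima)  # ~30 Hiroshimas
```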
{ "source": [ "https://physics.stackexchange.com/questions/54102", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/21024/" ] }
54,392
There are two very famous quotes from American Nobel laureate Albert Abraham Michelson that are remembered mainly for being extremely wrong (especially since he said them just before two major revolutions in physics, quantum mechanics and relativity): The more important fundamental laws and facts of physical science have all been discovered, and these are so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote. Many other instances might be cited, but these will suffice to justify the statement that “our future discoveries must be looked for in the sixth place of decimals.” It is however somewhat understandable that Michelson thought physics was almost 'completed'. Almost every physics-related phenomenon that a human could encounter in day-to-day life had been explained, including gravity, motion, and electromagnetism. With the advent of quantum mechanics even more physical phenomena have been explained. It has gotten to the point nowadays that to a layman, it might seem that physics is indeed 'complete'. As far as I know, the exceptions to this rule lie very deep within the realm of theoretical physics, e.g. in topics such as quantum gravity, dark matter, or dark energy. These are things that the average person doesn't know a thing about. Furthermore, and in contrast to classical mechanics, he really doesn't need to know about them, since they mostly involve worlds that are very small, very big, very far away, or very hypothetical. This leads me to my question: Are there any 'everyday' phenomena that remain unexplained by physics? To clarify, by everyday I mean regarding 'stuff' that the average person knows something about, and might perhaps encounter in everyday life. For example, an unresolved issue in mechanics might qualify even if it is not a common effect.
A Sizable Mystery Here's a mystery that remains poorly understood, though there have been many attempts to explain it: Why does volume -- the ability of matter to fill up space exclusively -- depend on how particles rotate? By volume I mean for example the fact that you can pound on a desk with your fist, and your fist stops at the desk. The matter in your desk and your fist exclude each other from occupying the same space. Without volume, the universe would be a very boring place. That's because instead of planets, suns, and nebulae we would have black holes, black holes, and black holes. Furthermore, the same features that enable volume also enable all the incredible richness and variety of combination called chemistry. So, without the physics of volume, we would not be here to talk about the topic in the first place. For Every Volume, Turn, Turn, Turn Yet the existence of volume depends rather remarkably on the way some particles rotate. It is that simple connection to rotation that remains mysterious and still smells of something important being overlooked, of some insight that if finally found would make everyone go "Ah! So that's what's really going on there!" But that simple insight remains missing, even though folks such as Nobel Laureate Richard Feynman worked on the problem off and on for decades, without any notable success. I should emphasize first that how volume works is very well understood. An Exclusive Club It's created by something called the Pauli exclusion principle, which behaves like an extremely powerful repulsive force that only comes into effect when identical particles of a certain type, called fermions, are pressed closely together. Fermions are what we usually think of as matter, and they have an "address space" with three parts: location, momentum, and spin orientation (think axis of a spinning globe). As long as all particles remain unique in at least one part of this address space, the fermions are happy, which is to say they stay fairly low in energy. All of the geometry and bonding mechanisms of chemistry arise directly from the rather complicated interplay of a nucleus that attracts a set of electrons, and of all of those electrons insisting on having their own unique three-component addresses. But that is the known part. The hard-to-explain-well part is why Pauli exclusion is experimentally tied to a very specific type of particle rotation. As with many quantities in quantum mechanics, the rotation of a very small object locks into discrete values based on its angular momentum. Realizing that this quantization would have to occur for angular momentum, physicists defined the smallest unit of angular momentum as spin 1. No one really thought much about it at first, since spin just seemed like yet another "feature" that needed to be tracked when talking about atoms and particles.
They not only don't care at all if they share the same address, there are cases where they prefer to have the same address. That is what a laser is: a lot of spin 1 particles of light that have decided to join together and all occupy the same location, momentum, and spin address at one time. Fundamental bosons are what we usually think of as some form of energy. But if bosons have rotations that are simple multiples of the smallest possible unit of rotation, spin 1, what kind of rotation can fermions have that is different? Where do they fit in? The Sound of Half a Rope Spinning That is the first really weird thing about volume: It is based on particles whose rotations are offset by exactly $\frac{1}{2}$ unit from the boson rotation values, and thus fit "in between" the integer spin values of the bosons. So for example, fundamental electrons and the more complicated protons and neutrons of matter all have spin $\frac{1}{2}$, and so all occupy space. If all that sounds odd, it is. The fermion offset of "spin $\frac{1}{2}$" was completely unanticipated by theorists. It was first a source of amusement and then bafflement when experimentation forced theorists to consider its existence. For a theorist of that time (or now!), trying to interpret the visual meaning of "one half" of the already smallest-possible spin 1 was like trying to visualize one-half of a skip rope loop. After all, in a skip rope you can have one loop, or two, or even more with expert skip rope twirlers -- but less than one loop? What does that even mean? So just how mysterious is this half-unit of spin? Reluctantly at First, He Took it For a Spin Well, Wolfgang Pauli was easily one of the most brilliant (and abrasive) members of the very elite club of physicists who in the mid 1920s developed the foundations of modern quantum mechanics. Pauli at first rejected even the idea that point-like electrons could spin, and likely cost Ralph Kronig a Nobel prize because of it. Pauli chastised Kronig so severely just for bringing up the idea that Kronig thereafter argued adamantly against his own idea! Pauli, on the other hand, subsequently not only repented of his initial view, but ended up developing the mathematical model for spin $\frac{1}{2}$ that is used to this day. The model is called the Pauli spin matrices. But even someone as intimate with the issue of spin as Pauli pretty much gave up on any kind of conventional explanation of it. Instead, he declared particle spin to be an "abstract property" (p. 3, line 9 from bottom) that has no particular connection to ordinary rotation. However, since quantum spin is just a quantized version of everyday rotation, it is unavoidably deeply linked to it. Thus a more accurate translation of the word "abstract" in this particular context might be: "The math works beautifully, so please just use it and stop asking me what it means!" So in summary, matter (which mostly likes to stay put, has volume, and resists compression) is built up from fermions whose rotations all have odd spin $\frac{1}{2}$ offsets, while energy (which most often is literally as fluid and ephemeral as light and sound, and which can be compressed or focused almost without limit into a small volume) is built up from bosons whose rotations are all multiples of spin 1. 
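For reference (standard material, not part of the original answer), the Pauli spin matrices mentioned above are $$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$ with the spin operators given by $S_i = \frac{\hbar}{2}\sigma_i$, each having eigenvalues $\pm\frac{\hbar}{2}$: the half-unit offset, in matrix form.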
Lies, Darned Lies, and Spin Statistics The spin statistics theorem is the formal name for all of that, stating that particles with spin $\frac{1}{2}$ are subject to Pauli exclusion ("volume"), while particles with simple integer spin (or zero spin) are not subject to it. This theorem is primarily a summary of experimental findings; it is not some kind of mathematical result from which fermions and bosons were predicted based on first principles. And that is why the connection between volume -- the resistance of matter particles to being compressed -- and the spin $\frac{1}{2}$ offset of fermions remains more a mystery than a well-understood principle of physics. The proofs devised for it remain unconvincing even to the experts. For example, a 1998 assessment of spin statistics theory by Ian Duck and E.C.G. Sudarshan provides a detailed summary of the strategies theorists have used in trying to prove the spin statistics theorem, yet it concludes with this final line: "Finally we are forced to conclude that although the Spin Statistics Theorem is simply stated, it is by no means simply understood or simply proved." Two examples of such proofs are a very early (and still persuasive) proof by Julian Schwinger, and a much more recent 2003 theory by Paul O'Hara. Invisible Hands, But Not the Adam Smith Type One reason why I do not find any of these proofs particularly persuasive is this: If the theorists who created them did not know in advance exactly where they needed to go, it appears unlikely they ever would have managed to arrive at their destination. That situation is in sharp contrast to Paul Dirac's Dirac equation, which remains the gold standard for experimentally predictive theoretical mathematics. Once he came up with it, the Dirac equation pretty much had to drag Dirac kicking and screaming into acknowledging that there must be an entire second universe of antiparticles that are mirror images of regular particles. The Conclusion So, while the various methods used to prove the spin statistics theorem may well be correct, they feel more like someone forging a circuitous path through deep woods to make their way finally to a bright light they could see off in the distance at all times. It seems quite likely that the main road, the easy path that shows you exactly where that destination lies, has yet to be uncovered. A truly simple explanation of why spin $\frac{1}{2}$ offsets lead to Pauli exclusion, and to its simple, everyday consequence that two objects cannot occupy the same space at the same time, has yet to be found.
{ "source": [ "https://physics.stackexchange.com/questions/54392", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20533/" ] }
54,684
Have you ever wondered about the elastic properties of neutron stars? Such stars are immensely dense, with neutrons bound together by the strong nuclear force on top of the strong gravity that “presses” them together, so one would think they must have an extremely large Young's modulus, and the speed of sound could be on a par with the speed of light in vacuum. If we let $c_s$ be the speed of sound, and also assume that the neutron star is isotropic, then using the well-known equation for the speed of acoustic waves in solids, we can write the following equation for the crust of the neutron star $c_s=\sqrt{\frac{E}{\rho}}$ For a neutron star of density $\rho =5.9\times 10^{17}$ kg m$^{-3}$ and Young's modulus of about $E=5.3\times 10^{30}$ Pa we get a value of $c_s=3.0\times 10^6$ m s$^{-1}$! The questions are: 1) How can sound travel at such immense speeds inside a neutron star? 2) Should nuclear interactions, n-n and q-q, dictate the elastic properties of a neutron star, or is it just the gravitational force?
The neutron star crust is a solid and there are indeed elastic waves for which the speed of sound is controlled by the shear modulus. I'm not sure where you got your estimate of the shear modulus from (there is some literature on the subject; see for example http://arxiv.org/abs/1104.0173). Most of the neutron star is a liquid, and the speed of sound is given by the usual hydrodynamic result $$c_s^2=\left(\frac{\partial P}{\partial\rho}\right)_{s}$$ In dilute, weakly interacting neutron matter the speed of sound (in units of the speed of light $c$) is $$ c_s^2 = \frac{1}{3}\frac{k_F}{\sqrt{k_F^2+m^2}}$$ where the Fermi momentum $k_F$ is determined by the density, $$ n = \frac{k_F^3}{3\pi^2}$$ In the high density (relativistic) limit the speed of sound approaches $c/\sqrt{3}$. In the center of a neutron star you get quite close to this. Interactions change this result by factors of order one (indeed, recent observations of neutron star masses and radii suggest that the speed of sound near the center is close to $c$), but as an order of magnitude estimate these simple results are quite good.
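A numerical illustration of these formulas (my own sketch, using an assumed density near nuclear saturation), in natural units with $\hbar=c=1$:

```python
import numpy as np

hbarc = 197.327   # MeV fm, conversion factor
m_n = 939.6       # neutron mass, MeV

n = 0.16          # assumed density ~ nuclear saturation, fm^-3
kF = (3 * np.pi**2 * n) ** (1 / 3) * hbarc   # Fermi momentum, MeV

cs2 = (1 / 3) * kF / np.sqrt(kF**2 + m_n**2)
print(f"kF = {kF:.0f} MeV, c_s = {np.sqrt(cs2):.2f} c")   # ~0.33 c
```

Even this free-gas estimate already gives a speed of sound of order a third of the speed of light, before interactions push it higher.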
{ "source": [ "https://physics.stackexchange.com/questions/54684", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/18685/" ] }
54,688
Ernst Mach, a man who influenced Albert Einstein significantly in his approach to relativity, did not quite seem to believe in space as a self-existing entity. I'm pretty sure it would be correct to say he thought it was created by matter. So let's take that idea for a moment and run with it. Wouldn't the idea that matter creates space unavoidably create testable predictions about the nature of how forces work, especially over astronomical distances? I say this because from my admittedly information-centric perspective, the only way matter can create something approximating what we call space is by creating relationships between particles. Space would then become nothing more than a particular set of rules for how those relationships interact and change over time, capturing for example the idea that they have locality and interact based on rules such as the $\frac{1}{r^2}$ fall-off of some forces with distance. With large enough sets of particles, the resulting interactions would become sufficiently smooth and fine-grained to create the abstraction we call space. But Euclidean space as we know it would necessarily be an illusion. (Incidentally, I have no idea if this line of thinking might be related to holographic universe ideas.) Now, the interesting thing about that argument is this: If one assumes as an experimental hypothesis that matter creates space, I see no easy way around the implication that the idea should be testable, at least in principle. That's because a simulation of smooth space created by the particle interactions will necessarily be incomplete and dependent on the distribution of those particles. So for example, if you only had a universe of two particles, only one space-like relationship would exist. The resulting simulation of space could not possibly be as smooth or rich as the space we know while sitting at the Euclidean limit of a nearly infinite number of particles and particle relationships. It would for example likely have some sort of predominantly one-dimensional field equations, e.g. a universe with only one electron and one positron might maintain constant attraction between them regardless of their separation distance. (It's also interesting to note that constant 1D-like attraction is approximately how quarks behave when pushed far away from short-range asymptotic freedom envelopes.) Detecting deviations from the Euclidean limit would be hugely more difficult in our particle-rich universe, but I cannot easily see how it would be flatly impossible. Asymmetries of matter at the scale of the entire universe would for example have to affect the nature of space by creating asymmetries in the number and richness of the underlying particle relationships. If a precise model could be made for how such relationship asymmetries affected our local space abstraction, testable predictions would be possible. The first approximation in any such model would simply be to map out the density and orientation of the relationships based on our best guesses at particle distributions, then see if those relationship sets correlate strongly to any known effects. The unexpected and quirky distribution of dark matter could certainly be a candidate for such an effect. Again, as an information type I take high correlations pretty seriously, and some of the curve fits between certain galactic phenomena and predictions made by the oddly simple MOND rule remain poorly explained at best. 
If such correlations somehow stemmed from asymmetries in space itself due to large-scale asymmetries in the galaxy-scale distribution of particles, a very different approach might become possible for explaining why the MOND rule sometimes produces such unexpectedly strong curve correlations. So again, the question is just this: Has the possibility of testing the Machian hypothesis ever been explored theoretically? And if not, why not? What am I missing? (This question is a direct outgrowth of my earlier amusingly unproductive question about whether space exists independently of matter. For that one I received exactly one "yes" answer, exactly one "no" answer, and no up or down votes on either one. Flip a coin indeed!)
{ "source": [ "https://physics.stackexchange.com/questions/54688", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/7670/" ] }
54,738
I always used to wonder why this happens: when one stretches a rubber band to nearly its snapping point while holding it close to your skin (preferably the cheek, which helps you feel the heat), it emits heat. Releasing the stretched rubber band while holding it close to the skin produces a cooling effect on the skin. Can someone explain the physics behind this, please?
This is a very interesting question with a very interesting answer. The key lies in the reason for the stretchiness of the rubber band. Rubber is made of polymers (long chain molecules). When the elastic band is not stretched, these molecules are all tangled up with each other and have no particular direction to them, but when you stretch the elastic they all become lined up with one another, at least to some extent. The polymer molecules themselves are not stretched, they're just aligned differently. To a first approximation there's no difference in the energy of these two different ways of arranging the polymers, but there's a big difference in the entropy . This just means that there's a lot more different ways that the polymers can be arranged in a tangled up way than an aligned way. So when you release the elastic band, all the polymers are jiggling around at random due to thermal motion, and they tend to lose their alignment, so they go back towards the tangled state, and that's what makes the elastic contract. This is called an entropic force . Now, I said earlier that there isn't any difference in energy between the stretched (aligned) and un-stretched (tangled) states. But it takes energy to stretch the elastic -- you're doing work to pull the ends apart, against the entropic force that's trying to pull them back together. That energy doesn't go into stretching the individual polymer molecules, but it has to go somewhere , so it ends up as heat. Some of this heat will stay in the elastic (making the polymer molecules jiggle around a bit faster) but some will be transferred to the surrounding air, or to your skin. The reverse happens when you let the elastic contract. The molecules are jiggling around at random and becoming more and more tangled, which makes them contract. But to contract they have to do work on whatever's holding the ends of the elastic apart. That energy has to come from somewhere, so it comes from heat. At first this might seem to run against thermodynamics - normally you can't just cool something down without heating something else up. But remember that the state with the tangled molecules has a higher entropy than when they're aligned. So you're taking heat out of the air, which reduces its entropy, but this reduction in entropy is countered by the increase in entropy of the elastic itself, so the second law is safe. For further reading you can look into the ideal chain , which is an idealised mathematical model of this situation.
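A minimal numerical sketch of the entropic force (my own illustration with made-up parameters): in the Gaussian ideal-chain model, a chain of $N$ segments of length $b$ held at end-to-end extension $x$ pulls back with tension $f = 3k_BTx/(Nb^2)$. The tension is proportional to temperature, which is the signature of an entropic spring.

```python
kB = 1.380649e-23   # J/K
T = 300.0           # K
N = 1000            # number of segments (assumed)
b = 0.5e-9          # segment length, m (assumed)

x = 50e-9           # end-to-end extension, m
f = 3 * kB * T * x / (N * b**2)
print(f"tension ~ {f * 1e12:.1f} pN")   # ~2.5 pN, and it grows linearly with T
```

This also explains a fun corollary: a stretched rubber band held at constant length pulls harder when you heat it, the opposite of what an ordinary metal spring does.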
{ "source": [ "https://physics.stackexchange.com/questions/54738", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/21258/" ] }
54,896
My text introduces multi-qubit quantum states with the example of a state that can be "factored" into two (non-entangled) substates. It then goes on to suggest that it should be obvious 1 that the joint state of two (non-entangled) substates should be the tensor product of the substates: that is, for example, that given a first qubit $$\left|a\right\rangle = \alpha_1\left|0\right\rangle+\alpha_2\left|1\right\rangle$$ and a second qubit $$\left|b\right\rangle = \beta_1\left|0\right\rangle+\beta_2\left|1\right\rangle$$ any non-entangled joint two-qubit state of $\left|a\right\rangle$ and $\left|b\right\rangle$ will be $$\left|a\right\rangle\otimes\left|b\right\rangle = \alpha_1 \beta_1\left|00\right\rangle+\alpha_1\beta_2\left|01\right\rangle+\alpha_2\beta_1\left|10\right\rangle+\alpha_2\beta_2\left|11\right\rangle$$ but it isn't clear to me why this should be the case. It seems to me there is some implicit understanding or interpretation of the coefficients $\alpha_i$ and $\beta_i$ that is used to arrive at this conclusion. It's clear enough why this should be true in a classical case, where the coefficients represent (when normalized, relative) abundance, so that the result follows from simple combinatorics. But what accounts for the assertion that this is true for a quantum system, in which (at least in my text, up to this point) coefficients only have this correspondence by analogy (and a perplexing analogy at that, since they can be complex and negative)? Should it be obvious that independent quantum states are composed by taking the tensor product, or is some additional observation or definition (e.g. of the nature of the coefficients of quantum states) required? 1: See (bottom of p. 18) "so the state of the two qubits must be the product" (emphasis added).
Great question! I don't think there is anything obvious at play here. In quantum mechanics, we assume that the state of any system is a normalized element of a Hilbert space $\mathcal H$. I'm going to limit the discussion to systems characterized by finite-dimensional Hilbert spaces for conceptual and mathematical simplicity. Each observable quantity of the system is represented by a self-adjoint operator $\Omega$ whose eigenvalues $\omega_i$ are the values that one can obtain after performing a measurement of that observable. If a system is in the state $|\psi\rangle$, then when one performs a measurement on the system, the state of the system collapses to one of the eigenvectors $|\omega_i\rangle$ with probability $|\langle \omega_i|\psi\rangle|^2$. The spectral theorem guarantees that the eigenvectors of each observable form an orthonormal basis for the Hilbert space, so each state $|\psi\rangle$ can be written as $$ |\psi\rangle = \sum_{i}\alpha_i|\omega_i\rangle $$ for some complex numbers $\alpha_i$ such that $\sum_i|\alpha_i|^2 = 1$. From the measurement rule above, it follows that $|\alpha_i|^2$ represents the probability that upon measurement of the observable $\Omega$, the system will collapse to the state $|\omega_i\rangle$ after the measurement. Therefore, the numbers $\alpha_i$, although complex, do in this sense represent "relative abundance" as you put it. To make this interpretation sharp, you could think of a state $|\psi\rangle$ as an ensemble of $N$ identically prepared systems, with the number $N_i$ of elements in the ensemble corresponding to the state $|\omega_i\rangle$ equaling $|\alpha_i|^2 N$. Now suppose that we have two quantum systems on Hilbert spaces $\mathcal H_1$ and $\mathcal H_2$ with observables $\Omega_1$ and $\Omega_2$ respectively. Then if we measure both observables on the combined system, system 1 will collapse to some $|\omega_{1i}\rangle$ and system 2 will collapse to some state $|\omega_{2 j}\rangle$. It seems reasonable then to expect that the state of the combined system after the measurement could be any such pair. Moreover, the quantum superposition principle tells us that any complex linear combination of such pair states should also be a physically allowed state of the system. These considerations naturally lead us to use the tensor product $\mathcal H_1\otimes\mathcal H_2$ to describe composite systems because it is the formalization of the idea that the combined Hilbert space should consist of all linear combinations of pairs of states in the constituent subsystems. Is that the sort of motivation for using tensor products that you were looking for?
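A concrete check of the tensor-product rule (my own sketch, with made-up amplitudes):

```python
import numpy as np

a = np.array([0.6, 0.8])               # alpha_1|0> + alpha_2|1>
b = np.array([1, 1j]) / np.sqrt(2)     # beta_1|0> + beta_2|1>

joint = np.kron(a, b)                  # coefficients of |00>, |01>, |10>, |11>
print(joint)                           # [a1*b1, a1*b2, a2*b1, a2*b2]
print(np.linalg.norm(joint))           # 1.0: the product state stays normalized
```

Note that the normalization is automatic: if $\sum_i|\alpha_i|^2 = \sum_j|\beta_j|^2 = 1$, then $\sum_{ij}|\alpha_i\beta_j|^2 = 1$ as well.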
{ "source": [ "https://physics.stackexchange.com/questions/54896", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2540/" ] }
54,907
It puzzles me that Zee uses throughout the book this definition of the covariant derivative: $$D_{\mu} \phi=\partial_{\mu}\phi-ieA_{\mu}\phi$$ with a minus sign, despite the use of the $(+---)$ convention. But then I see that Srednicki, at least in the free preprint, also uses the same definition, with the same minus sign. The weird thing is that Srednicki uses $(-+++)$. I looked into Peskin & Schröder too, who stick to $(+---)$ (the same as Zee), and the covariant derivative there is: $$D_{\mu} \phi=\partial_{\mu}\phi+ieA_{\mu}\phi$$ Now, can any of you tell Pocoyo what is happening here? Why can they consistently use different signs in that definition?
We will work in units with $c=1=\hbar$. The $4$-potential $A^{\mu}$ with upper index is always defined as $$A^{\mu}~=~(\Phi,{\bf A}). $$ 1) Lowering the index of the $4$-potential depends on the sign convention $$ (+,-,-,-)\qquad \text{resp.} \qquad(-,+,+,+) $$ for the Minkowski metric $\eta_{\mu\nu}$. This Minkowski sign convention is used in $$\text{Ref. 1 (p. xix) and Ref. 2 (p. xv)} \qquad \text{resp.} \qquad \text{Ref. 3 (eq. (1.9))}.$$ The $4$-potential $A_{\mu}$ with lower index is $$A_{\mu}~=~(\Phi,-{\bf A}) \qquad \text{resp.} \qquad A_{\mu}~=~(-\Phi,{\bf A}).$$ Maxwell's equations with sources are $$ d_{\mu}F^{\mu\nu}~=~j^{\nu} \qquad \text{resp.} \qquad d_{\mu}F^{\mu\nu}~=~-j^{\nu}. $$ The covariant derivative is $$D_{\mu} ~=~d_{\mu}+iqA_{\mu}\qquad \text{resp.} \qquad D_{\mu} ~=~d_{\mu}-iqA_{\mu}, $$ where $q=-|e|$ is the charge of the electron. 2) The sign convention for the elementary charge $e$ is $$e~=~-|e| ~<~0 \qquad \text{resp.} \qquad e~=~|e|~>~0.$$ This charge sign convention is used in $$\text{Ref. 1 (p. xxi) and Ref. 3 (below eq. (58.1))} \qquad \text{resp.} \qquad \text{Ref. 2.}$$ References: M.E. Peskin and D.V Schroeder, An Introduction to QFT. A. Zee, QFT in a nutshell. M. Srednicki, QFT.
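A small symbolic check (my own sketch, not from the references) that either sign choice gives a genuinely covariant derivative, as long as the gauge transformation sign is matched to it:

```python
import sympy as sp

x, q = sp.symbols('x q', real=True)
theta = sp.Function('theta')(x)   # gauge parameter
psi = sp.Function('psi')(x)       # charged field
A = sp.Function('A')(x)           # gauge potential (one component, for illustration)

# Convention D = d + i q A; for the D = d - i q A convention,
# flip the sign of q (or of the shift in A) and the check still passes.
D = lambda f, Af: sp.diff(f, x) + sp.I * q * Af * f

psi_t = sp.exp(sp.I * q * theta) * psi   # psi -> e^{i q theta} psi
A_t = A - sp.diff(theta, x)              # A   -> A - d(theta)

print(sp.simplify(D(psi_t, A_t) - sp.exp(sp.I * q * theta) * D(psi, A)))   # 0
```

So neither sign is wrong on its own; only the combination of metric signature, charge sign, and gauge-shift sign carries physical meaning.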
{ "source": [ "https://physics.stackexchange.com/questions/54907", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/14182/" ] }
54,975
Bearing in mind I am a layman - with no background in physics - please could someone explain what the "big deal" is with quantum entanglement? I used to think I understood it - that 2 particles, say a light-year apart spatially, could affect each other physically, instantly. Here I would understand the "big deal". On further reading I've come to understand (maybe incorrectly) that the spatially separated particles may not affect each other, but in knowing one's properties you can infer the other's. If that is the case, I don't see what the big deal is... 2 things have some properties set in correlation to each other at the point of entanglement, they are separated, measured, and found to have these properties...? What am I missing? Is it that the particles' properties are in an "un-set" state, and only when measured do they get set? (i.e. the wave-function collapses). If this is true - why do we think this instead of the more intuitive thought that the properties were set at an earlier time?
I understand your confusion, but here's why people often feel that quantum entanglement is rather strange. Let's first consider the following statement you make: 2 things have some properties set in correlation to each other at the point of entanglement, they are separated, measured, and found to have these properties A classical (non-quantum) version of this statement would go something like this. Imagine that you take two marbles and paint one of them black, and one of them white. Then, you put each in its own opaque box and send the white marble to Los Angeles, and the black marble to New York. Next, you arrange for person L in Los Angeles and person N in New York to open each box at precisely 5:00 PM and record the color of the ball in his box. If you tell each of person L and person N how you have prepared the marbles, then they will know that when they open their respective boxes, there will be a 50% chance of having a white marble, and a 50% chance of having a black marble, but they don't know which is in the box until they make the measurement. Moreover, once they see what color they have, they know instantaneously what the other person must have measured because of the way the system of marbles was initially prepared. However, because you painted the marbles, you know with certainty that person L will have the white marble, and person N will have the black marble. In the case of quantum entanglement, the state preparation procedure is analogous. Instead of marbles, we imagine having electrons which have two possible spin states which we will call "up" denoted $|1\rangle$ and "down" denoted $|0\rangle$. We imagine preparing a two-electron system in such a way that the state $|\psi\rangle$ of the composite system is in what's called a superposition of the states "up-down" and "down-up", by which I mean $$ |\psi\rangle = \frac{1}{\sqrt 2}|1\rangle|0\rangle + \frac{1}{\sqrt{2}}|0\rangle|1\rangle $$ All this mathematical expression means is that if we were to make a measurement of the spin state of the composite system, then there is a 50% probability of finding electron A in the spin up state and electron B in the spin down state, and a 50% probability of finding the reverse. Now we imagine sending electron $A$ to Los Angeles and electron B to New York, and we tell people in Los Angeles and New York to measure the spin state of his electron at the same time and to record his measurement, just as in the case of the marbles. Then, just as in the case of the marbles, these observers will only know the probability (50%) of finding either a spin up or a spin down electron after the measurement. In addition, because of the state preparation procedure, the observers can be sure of what the other observer will record once he makes his own observation, but there is a crucial difference between this case and the marbles. In the electron case, even the person who prepared the state will not know what the outcome of the measurement will be. In fact, no one can know with certainty what the outcome will be; there is an inherent probabilistic nature to the outcome of the measurement that is built into the state of the system. It's not as though there is someone who can have some hidden knowledge, like in the case of the marbles, about what the spin states of the electrons "actually" are. Given this fact, I think most people find it strange that once one observer makes his measurement, he knows with certainty what the other observer will measure. 
In the case of the marbles, there's no analogous strangeness because each marble was either white or black, and certainly no communication was necessary for each observer to know what the other would see upon measurement. But in the case of the electrons, there is a sort of intrinsic probability to the nature of the state of the electron. The electron truly has not "decided" on a state until right when the measurement happens, so how is it possible that the electrons always "choose" to be in opposite states given that they didn't make this "decision" until right at the moment of measurement? How will they "know" what the other electron picked? Strangely enough, they do, in fact, somehow "know." Addendum. Certainly, as Lubos points out in his comment, there is nothing actually physically paradoxical or contradictory in entanglement, and it is just a form of correlation, but I personally think it's fair to call it a "strange" or "unintuitive" form of correlation. IMPORTANT DISCLAIMER I put a lot of things in quotes because I wanted to convey the intuition behind the strangeness of entanglement by using analogies; these descriptions are not meant to be scientifically precise. In particular, any anthropomorphisations of electrons should be taken with a large grain of conceptual salt.
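A toy numerical check (my own sketch) that makes the two features concrete: each observer's local outcome is 50/50, yet the pair is perfectly anti-correlated.

```python
import numpy as np

zero, one = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# |psi> = (|1>|0> + |0>|1>) / sqrt(2)
psi = (np.kron(one, zero) + np.kron(zero, one)) / np.sqrt(2)

probs = np.abs(psi) ** 2        # over the basis |00>, |01>, |10>, |11>
print(probs)                    # [0, 0.5, 0.5, 0]: "same state" outcomes never occur
print(probs[2] + probs[3])      # P(electron A measured "up") = 0.5
```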
{ "source": [ "https://physics.stackexchange.com/questions/54975", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/14233/" ] }
55,204
I have seen the term topological charge defined in an abstract mathematical way as essentially a labeling scheme for particles which follows certain rules. However, I am left guessing when trying to explain what physical properties of a system lead to the need to introduce this new type of "charge." If it makes any difference, I'm interested in studying the statistical properties of quantum Hall systems (and in particular anyonic interferometry), where the different topological charges contribute to the total number of quantum states.
Local quasiparticle excitations and topological quasiparticle excitations To understand and classify anyonic quasiparticles in topologically ordered states, such as FQH states, it is important to understand the notions of local quasiparticle excitations and topological quasiparticle excitations. First let us define the notion of "particle-like" excitations. Let us consider a system with translation symmetry. The ground state has a uniform energy density. If we have a state with an excitation, we can observe the energy distribution of the state over space. If for some local area the energy density is higher than in the ground state, while for the rest of the area the energy density is the same as in the ground state, one may say there is a "particle-like" excitation, or a quasiparticle, in this area. Quasiparticles defined like this can be further divided into two types. The first type can be created or annihilated by local operators, such as a spin flip. Hence they are not robust under perturbations. The second type are robust states. The higher local energy density cannot be created or removed by any local operators in that area. We will refer to the first type of quasiparticles as local quasiparticles, and to the second type as topological quasiparticles. As a simple example, consider the 1D Ising model with open boundary condition. There are two ground states, spins all up or all down. Simply flipping one spin of the ground state leads to the second excited state, and creates a local quasiparticle. On the other hand, the first excited state looks like a domain wall. For example, the spins on the left are all up while those on the right are all down, and the domain wall between the up domain and the down domain is a topological quasiparticle. Flipping the spins next to the domain wall moves the quasiparticle but cannot remove it. Such a quasiparticle is protected by the boundary condition. As long as the two edge spins are opposite, there will be at least one domain wall, or one topological quasiparticle, in the bulk. Moreover, a spin flip can be viewed as two domain walls. From the notions of local quasiparticles and topological quasiparticles, we can also introduce a notion of topological quasiparticle types (i.e. topological charges), or simply, quasiparticle types. We say that local quasiparticles are of the trivial type, while topological quasiparticles are of non-trivial types. Also, two topological quasiparticles are of the same type if and only if they differ by local quasiparticles. In other words, we can turn one topological quasiparticle into the other by applying some local operators. The total number of topological quasiparticle types (including the trivial type) is also a topological property. It turns out that this topological property is directly related to another topological property for 2+1D topological states: the number of topological quasiparticle types equals the ground state degeneracy on the torus. This is one of many amazing and deep relations in topological order. See also Why is fractional statistics and non-Abelian common for fractional charges?, A physical understanding of fractionalization, and What is the difference between charge fractionalization in 1D and 2D?
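A tiny illustration of the 1D Ising picture (my own sketch): with the two edge spins held opposite, any single bulk spin flip changes the number of domain walls by 0 or ±2, so at least one wall, i.e. one topological quasiparticle, always survives.

```python
def walls(s):
    # number of domain walls = adjacent pairs of opposite spins
    return sum(a != b for a, b in zip(s, s[1:]))

s = [1, 1, 1, -1, -1, -1]   # up domain / down domain, edge spins fixed opposite
print(walls(s))              # 1

for i in range(1, len(s) - 1):   # flip each bulk spin in turn
    t = list(s)
    t[i] *= -1
    print(i, walls(t))           # always odd (1 or 3), never 0
```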
{ "source": [ "https://physics.stackexchange.com/questions/55204", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/10100/" ] }
55,768
We can store cold (ice), heat (e.g. a hot water bag) and electrical charge (batteries). We can even "store" a magnetic field in a magnet. We can convert light into energy and then, if we want, back to light. But we can't store light in the form of light in significant amounts. What is the explanation for that in physics terms?
For the photons that make up light to exist, they have to be travelling at the speed of light. This means that to store them you have to put them in a container where they can move around at the speed of light until you want to let them out. You could build the container out of mirrors, but no mirror we can currently build is 100% reflective, or indeed can ever be 100% reflective. Usually when a photon "hits" the mirror it is absorbed by one of the atoms in the mirror and then re-emitted back out into the container. However, occasionally the photon either won't get re-emitted (leaving the atom in an excited state) or it doesn't hit one of the atoms and makes its way through the mirror and out of the container. While the chances of this happening for an individual photon are low, there are lots of photons travelling very fast, so it happens many times, thus causing the light to "leak" or decay. Building a near perfect mirror is hard, so it's easier to convert the light into something that can be stored and then convert that back into light when you need it.
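A rough estimate (my own sketch, with assumed numbers) of how quickly light "leaks" out of even a very good mirror box: each bounce keeps a fraction $R$ of the light, and bounces happen every $L/c$ seconds.

```python
import math

c = 3e8          # m/s
L = 1.0          # box size, m (assumed)
R = 0.99999      # mirror reflectivity per bounce (assumed, very good mirror)

bounce_rate = c / L                          # bounces per second
lifetime = -1 / (bounce_rate * math.log(R))  # 1/e storage time, s
print(lifetime)                              # ~3e-4 s: gone in well under a second
```

This is essentially the physics of optical cavities; even the best mirrors only stretch such storage times by a few orders of magnitude, nowhere near long-term storage.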
{ "source": [ "https://physics.stackexchange.com/questions/55768", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20409/" ] }
55,829
Is it true that a glass window that has been in place in a wall for about 10 years or more is thicker at the bottom than at the top? I can vaguely remember my physics teacher saying that this was true. So if this is true, how is it possible?
The observation that old windows are sometimes found to be thicker at the bottom than at the top is often offered as supporting evidence for the view that glass flows over a timescale of centuries, the assumption being that the glass was once uniform but has flowed to its new shape, which is a property of liquids. However, this assumption is incorrect; once solidified, glass does not flow any more. The reason for the observation is that in the past, when panes of glass were commonly made by glassblowers, the technique used was to spin molten glass so as to create a round, mostly flat and even plate (the crown glass process). This plate was then cut to fit a window. The pieces were not, however, absolutely flat; the edges of the disk became a different thickness as the glass spun. When installed in a window frame, the glass would be placed with the thicker side down, both for the sake of stability and to prevent water accumulating in the lead cames at the bottom of the window. Occasionally such glass has been found thinner side down or thicker on either side of the window's edge, the result of carelessness during installation. Wikipedia
{ "source": [ "https://physics.stackexchange.com/questions/55829", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }