source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
17,079 | In another question, How does Newtonian mechanics explain why orbiting objects do not fall to the object they are orbiting?, one can read an affirmative answer. Then how do you explain satellites falling to Earth? (Here, falling means colliding with the Earth's surface or burning up before reaching the surface. For example, fragments of Skylab fell on Australia in 1979.) | Well, the sentence It seems like if it's an inherent property of the spring it shouldn't change, so if it does, why? clearly isn't a valid argument to calculate the $k$ of the smaller springs. They're different springs than their large parent, so they may have different values of an "inherent property": if a pizza is divided into 4 smaller pieces, the inherent property "mass" of the smaller pizzas is also different from the mass of the large one. ;-) You may have meant that it is an "intensive" property (like a density or temperature) which wouldn't change after the cutting of a big spring, but you have offered no evidence that it's "intensive" in this sense. No surprise, this statement is incorrect, as I'm going to show. One may calculate the right answer in many ways. For example, we may consider the energy of the spring. It is equal to $k_{\rm big}x_{\rm big}^2/2$ where $x_{\rm big}$ is the deviation (distance) from the equilibrium position. We may also imagine that the big spring is a collection of 4 equal smaller springs attached to each other. In this picture, each of the 4 springs has the deviation $x_{\rm small} = x_{\rm big}/4$ and the energy of each spring is
$$ E_{\rm small} = \frac{1}{2} k_{\rm small} x_{\rm small}^2 = \frac{1}{2} k_{\rm small} \frac{x_{\rm big}^2}{16} $$
Because we have 4 such small springs, the total energy is
$$ E_{\rm 4 \,small} = \frac{1}{2} k_{\rm small} \frac{x_{\rm big}^2}{4} $$
That must be equal to the potential energy of the single big spring because it's the same object
$$ = E_{\rm big} = \frac{1}{2} k_{\rm big} x_{\rm big}^2 $$
which implies, after you cancel the common factors on both sides,
$$ k_{\rm big} = \frac{k_{\rm small}}{4} $$
So the spring constant of the smaller springs is actually 4 times larger than the spring constant of the big spring. You could get the same result via forces, too. The large spring has some forces $F=k_{\rm big}x_{\rm big}$ on both ends. When you divide it into four small springs, there are still the same forces $\pm F$ on each boundary of the smaller springs. They must be equal to $F=k_{\rm small} x_{\rm small}$ because the same formula holds for the smaller springs as well. Because $x_{\rm small} = x_{\rm big}/4$, you see that $k_{\rm small} = 4k_{\rm big}$. It's harder to change the length of the shorter spring because it's short to start with, so you need a 4 times larger force, which is why the spring constant of the small spring is 4 times higher. | {
"source": [
"https://physics.stackexchange.com/questions/17079",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5957/"
]
} |
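A quick numerical cross-check of the energy and force arguments in this entry (a sketch; the spring constant, the number of pieces, and the displacement are arbitrary illustrative values):

```python
import numpy as np

k_big = 50.0      # N/m, spring constant of the big spring (illustrative)
n = 4             # number of equal pieces it is cut into
x_big = 0.02      # m, total stretch of the big spring (illustrative)

k_small = n * k_big          # claim: each piece is n times stiffer
x_small = x_big / n          # each piece carries 1/n of the total stretch

# energy check: n small springs store the same energy as the single big one
E_big = 0.5 * k_big * x_big**2
E_small_total = n * 0.5 * k_small * x_small**2
print(np.isclose(E_big, E_small_total))     # True

# force check: the tension is the same in the big spring and in each piece
print(np.isclose(k_big * x_big, k_small * x_small))   # True
```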
17,082 | I usually think of gravitational potential energy as representing just what it sounds like: the energy that we could potentially gain, using gravity. However, the equation for it (derived by integrating Newton's law of gravitational force)... $$PE_1 = -\frac{GMm}{r}$$ ...has me thrown for a loop, especially after this answer. If potential energy really meant what I thought it did, then it would always have to be non-negative... but this equation is always negative. So what does "negative potential energy" mean!? If $KE + PE$ is always a constant, but PE is not only negative but becomes more negative as the particles attract, doesn't that mean the kinetic energy will become arbitrarily large? Shouldn't this mean all particles increase to infinite KE before a collision? If we are near the surface of the earth, we can estimate PE as $$PE_2 = mgh$$ by treating Earth as a flat gravitational plane. However, $h$ in this equation plays exactly the same role as $r$ in the first equation, doesn't it? So why is $PE_1$ negative while $PE_2$ is positive? Why does one increase with $h$ while the other increases inversely with $r$? Do they both represent the same "form" of energy? Since $PE_2$ is just an approximation of $PE_1$, we should get nearly the same answer using either equation, if we were near Earth's surface and knew our distance to its center-of-mass. However, the two equations give completely different answers! What gives!? Can anyone help clear up my confusion? | About negative energies: they pose no problem. In this context, only energy differences have significance. Negative energy appears because, when you did the integration, you chose one reference point where you set your energy to 0. In this case, you have chosen $PE_1 = 0$ for $r = \infty$. If you had set $PE_1 = 1000$ at $r = \infty$, the energy would be positive for some $r$. However, the minus sign is important, as it is telling you that the test particle is losing potential energy when moving toward $r = 0$; this is true because it is accelerating, causing an increase in $KE$. Let's calculate $\Delta PE_1$ for a particle moving toward $r = 0$, with $r_i = 10$ and $r_f = 1$: $\Delta PE_1 = PE_f - PE_i = GMm(-1 - (-0.1)) = -0.9\,GMm < 0$, as expected: we lose $PE$ and gain $KE$. Second bullet: yes, you are right. However, it is only true IF they are point particles: as they normally have a definite radius, they collide when $r = r_1 + r_2$, causing an elastic or inelastic collision. Third bullet: you are right with $PE_2 = mgh$; however, again, you are choosing a given reference: you are assuming $PE_2 = 0$ for $h = 0$, which, in the previous notation, means that you were setting $PE_1 = 0$ for $r = r_{earth}$. The most important difference now is that an increase in $h$ means moving farther out in $r$ (if you are higher, you are farther from the Earth's center). By analogy with the previous problem, imagine you want to obtain $\Delta PE_2$. In this case, you begin at $h_i = 10$ and you want to move to $h_f = 1$ (moving toward the Earth's center, as in $\Delta PE_1$): $\Delta PE_2 = PE_{f} - PE_{i} = 1mg - 10mg = -9mg < 0$. As expected, because we are falling, we are losing $PE$ and gaining $KE$, the same result as for $PE_1$. Fourth bullet: they both represent the same thing. The difference is that $mgh$ is the linear term in the Taylor expansion of $PE_1$ near $r = r_{Earth}$.
As an exercise, try to expand $PE_1(r)$ in a Taylor series and show that the linear term is $PE_1 \approx a + \frac{GMm(r-r_{earth})}{r_{earth}^2}$. Then numerically calculate $GM/r_{earth}^2$ (remember that $M=m_{earth}$). If you haven't done this already, I guess you will be surprised. So, from what I understood, your logic is totally correct, apart from two key points: energy is defined only up to a constant value; and in $PE_1$, increasing $r$ means decreasing $1/r$, which means increasing $PE_1 = -GMm/r$, while in $PE_2$, increasing $h$ means increasing $PE_2=mgh$. | {
"source": [
"https://physics.stackexchange.com/questions/17082",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/175/"
]
} |
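A quick numerical illustration of that exercise (a sketch; standard values for $G$, the Earth's mass, and the Earth's radius are plugged in, and the 100 m height is an arbitrary choice):

```python
G = 6.674e-11       # m^3 kg^-1 s^-2
M = 5.972e24        # kg, Earth's mass
R = 6.371e6         # m, Earth's mean radius
m = 1.0             # kg, test mass
h = 100.0           # m, height above the surface (arbitrary)

# coefficient of the linear Taylor term of PE_1 = -GMm/r around r = R
g = G * M / R**2
print(g)                                 # ~9.82 m/s^2

# compare the exact PE difference with the flat-Earth approximation mgh
dPE_exact = -G*M*m/(R + h) - (-G*M*m/R)
dPE_flat = m * g * h
print(dPE_exact, dPE_flat)               # agree to about 1 part in 10^5 for h = 100 m
```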
17,109 | I recently came to know about the Conventional Current vs. Electron Flow issue. Doing some search I found that the reason for this is that Benjamin Franklin made a mistake when naming positive and negative charges. There is even this little comic about that http://xkcd.com/567/ My question is, how can a naming convention be wrong? Perhaps I don't understand what is the actual mistake here, I don't know. But I don't see how a naming convention could be wrong or right. There is no right and wrong about that. It could've been any other name, like charge A and charge B. I'll appreciate anyone can help me understand what is wrong in all this. | It's not a mistake, and conventional current is not wrong or backwards. Electric current is often thought to be a flow of electrons, but this is wrong. Electric current is a flow of electric charge . Charge can be positive (protons) or negative (electrons), and both types of charged particles can and do flow in electric circuits: In metal wires, carbon resistors, and vacuum tubes, electric current consists of a flow of electrons. In batteries , electrolytic capacitors , and neon lamps, current consists of a flow of ions, either positive or negative or both (flowing in opposite directions) In hydrogen fuel cells and water ice , current consists of a flow of protons . In semiconductors, the current can consist of holes , which are not quite the same as an absence of electrons . (The Hall Effect can be used to show whether a charge carrier is positively charged and flowing in one direction, or negatively charged and flowing in the other.) When a Lithium-ion battery discharges into a load, for instance, there is no electron flow in the battery, but there is still a current flow: (Source: Center for Sustainable Nanotechnology ) If you considered only the electron flow, your calculations would be wrong. You need to consider the net flow of charge, no matter what the charge carriers. Conventional current abstracts away the different charge carriers and represents all of these different flows as a net flow of (positive) charge, simplifying circuit analysis. Conventional current is not the opposite of electron current , so if they were defined to flow in the same direction, it would be even easier to confuse them and go through life misunderstanding what current is. Electron current is a subset of conventional current. Conventional current combines the effects of electron, ion, proton, and hole flows all into one number. Wikipedia agrees : In other media, any stream of charged objects may constitute an electric current. To provide a definition of current that is independent of the type of charge carriers flowing, conventional current is defined to flow in the same direction as positive charges. The labeling of one polarity of charge as "positive" and the other as "negative" is totally arbitrary. It could be done either way and everything would still work out the same. Franklin didn't choose wrong; he just chose. Labeling protons as negative and electrons as positive wouldn't change anything. It might actually make things more confusing, as described in Ben Franklin should have said electrons are positive? Wrong. If Franklin had instead chosen the electrons to be positive, then we might never confront the real problem. If electrons weren't negative, we'd easily ignore our misconceptions, and we'd end up with only an illusion of understanding. Yet also we'd still have all sorts of niggling unanswered questions caused by the misconceptions. 
Fortunately the negative electrons rub our noses in the problem, making our questions grow into something far more than just "niggling!" | {
"source": [
"https://physics.stackexchange.com/questions/17109",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1768/"
]
} |
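A small sketch of the "current is net charge flow" point, adding up contributions from two different carrier types; the carrier densities, charges, and drift velocities below are arbitrary illustrative numbers, not data for any particular electrolyte:

```python
# current density J = sum_i n_i * q_i * v_i  (the sign of q and the direction of v both matter)
e = 1.602e-19          # C

carriers = [
    # (number density 1/m^3, charge in C, drift velocity in m/s)
    (1.0e26, +e, +2.0e-5),   # positive ions drifting in +x
    (1.0e26, -e, -2.0e-5),   # negative ions drifting in -x
]

J = sum(n * q * v for n, q, v in carriers)
print(J)   # ~640 A/m^2: both species contribute current in the SAME (+x) direction,
           # which is exactly what the single conventional-current number captures
```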
17,169 | How does a mobile phone vibrate without any external force? By Newton's law, a body can't move without an external force | That's not true, Newton's laws do not say that. What's important here is conservation of momentum. Inside the phone, there is an oscillating mass. While the mass inside has a momentum and thus a velocity in one direction, the (friction-free) phone has to have an equal and opposite momentum. It "vibrates". Homework: Get on a skateboard (best kneeling, not standing), take a decent mass with you (e.g. a cobblestone) and move it back and forth in front of your chest. Now, put a large cardboard box over your head (e.g. from a refrigerator) and you have a box that moves back and forth without any external force. If you want translation instead of oscillation, you have to divide the object, making one part go in one direction and the other in the opposite direction (again, with the same momentum). That's how rockets work, by expelling the reaction products of their fuel at high speed in the opposite direction. Again, without "external" force. Alternatively, you can just sit in a chair, and punch the air really fast. When your arm moves out, your body moves back, when your arm moves back in, your body moves toward the arm. | {
"source": [
"https://physics.stackexchange.com/questions/17169",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4023/"
]
} |
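A small numerical sketch of the momentum-conservation argument (the masses, amplitude, and frequency below are made-up illustrative values, not data for any real phone):

```python
import numpy as np

m_phone = 0.150       # kg, mass of the phone body (illustrative)
m_rotor = 0.001       # kg, mass of the small eccentric weight (illustrative)
A_rotor = 1e-3        # m, amplitude of the weight's back-and-forth motion
omega = 2*np.pi*150   # rad/s, roughly 150 Hz vibration

t = np.linspace(0, 0.02, 1000)
x_rotor = A_rotor * np.sin(omega * t)

# zero total momentum (friction-free phone): m_phone*v_phone = -m_rotor*v_rotor,
# which integrates to an amplitude ratio set by the mass ratio
x_phone = -(m_rotor / m_phone) * x_rotor
print(x_phone.max())   # ~7e-6 m, but at 150 Hz that is a very noticeable buzz
```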
17,227 | What evidence is there that dark matter isn't one of the known types of neutrinos? If it were, how would this be measurable? | Dark matter can be hot, warm or cold. Hot means the dark matter particles are relativistic (kinetic energy on the order of the rest mass or much higher), cold means they are not relativistic (kinetic energy much less than rest mass) and warm is in between. It is known that the total amount of dark matter in the universe must be about 5 times the ordinary (baryonic) matter to explain the CMB as measured by WMAP. However, cold dark matter must be a very significant component of the universe to explain the growth of structures from the small fluctuations in the early universe that grew to become galaxies and stars (see this reference ). Thus cold dark matter is also required to explain the currently measured galactic rotation curves. Now, the neutrino oscillation experiments prove that neutrinos have a non-zero rest mass. However, the rest masses must still be very small so they could only contribute to the hot dark matter. The reason they can only be hot dark matter is because it is assumed that in the early hot, dense universe, the neutrinos would have been in thermal equilibrium with the hot ordinary matter at that time. Since the neutrino's rest mass is so small, they would be extremely relativistic, and although the neutrinos would cool as the universe expands, they would have still been very relativistic at the time of structure formation in the early universe. Thus, they can only contribute to hot dark matter in terms of the early growth of structure formation. [Because of the expansion of the universe since then, the neutrinos should have cooled so much that they are non-relativistic today.] According to this source : Current estimates for the neutrino fraction of the Universe’s
mass–energy density lie in the range 0.1% ≲ ν ≲ a few %, under
standard assumptions. The uncertainty reflects our incomplete
knowledge of neutrino properties. So cosmic neutrinos probably make up less than 10% of the total dark matter in the universe. In addition, most of the remaining (non-neutrino) ~90% of the dark matter must also be cold dark matter, both in the early universe and even now. | {
"source": [
"https://physics.stackexchange.com/questions/17227",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/614/"
]
} |
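A rough numeric version of that estimate (a sketch; it assumes the standard relic-neutrino relation $\Omega_\nu h^2 \simeq \sum m_\nu / 93.14\,\mathrm{eV}$ and round present-day values for the mass sum and the dark-matter density, so treat all inputs as approximate):

```python
# what fraction of the dark matter can the known neutrinos supply?
sum_m_nu = 0.06                    # eV, roughly the minimal mass sum from oscillation data
omega_nu_h2 = sum_m_nu / 93.14     # relic-neutrino density parameter (approximate relation)
omega_dm_h2 = 0.12                 # total dark-matter density, rough observed value

print(omega_nu_h2 / omega_dm_h2)   # ~0.005, i.e. about 0.5%, inside the quoted 0.1%-few% range
```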
17,477 | In the time-dependent Schrodinger equation, $ H\Psi = i\hbar\frac{\partial}{\partial t}\Psi,$ the Hamiltonian operator is given by $$\displaystyle H = -\frac{\hbar^2}{2m}\nabla^2+V.$$ Why can't we consider $\displaystyle i\hbar\frac{\partial}{\partial t}$ as an operator for the Hamiltonian as well? My answer (which I am not sure about) is the following: $\displaystyle H\Psi = i\hbar\frac{\partial}{\partial t}\Psi$ is not an equation for defining $H$. This situation is similar to $\displaystyle F=ma$. Newton's second law is not an equation for defining $F$; $F$ must be provided independently. Is my reasoning (and the analogy) correct, or is the answer deeper than that? | If one a priori wrongly declares that the Hamiltonian operator $\hat{H}$ is the time derivative $i\hbar \frac{\partial}{\partial t}$ , then the Schrödinger equation $$\hat{H}\Psi~=~i\hbar \frac{\partial\Psi}{\partial t}\tag{1}$$ would become a tautology. Such trivial Schrödinger equation could not be used to determine the future (nor past) time evolution of the wavefunction $\Psi({\bf r},t)$ . On the contrary, the Hamiltonian operator $\hat{H}$ is typically a function of the operators $\hat{\bf r}$ and $\hat{\bf p}$ , and the Schrödinger equation $$\hat{H}\Psi~=~i\hbar \frac{\partial\Psi}{\partial t}\tag{2}$$ is a non-trivial requirement for the wavefunction $\Psi({\bf r},t)$ . One may then ask why is it then okay to assign the momentum operator as a gradient $$\hat{p}_k~=~ \frac{\hbar}{i}\frac{\partial}{\partial r^k}~?\tag{3}$$ (This is known as the Schrödinger representation.) The answer is because of the canonical commutation relations $$[\hat{r}^j, \hat{p}_k]~=~i\hbar~\delta^j_k~\hat{\bf 1}.\tag{4}$$ On the other hand, the corresponding commutation relation for time $t$ is $$[\hat{H}, t]~=~0, \tag{5}$$ because time $t$ is a parameter not an operator in quantum mechanics, see also this & this Phys.SE posts. Note that in contrast $$\left[i\hbar \frac{\partial}{\partial t}, ~t\right]~=~i\hbar,\tag{6}$$ which also shows that one should not identify $\hat{H}$ and $i\hbar \dfrac{\partial}{\partial t}$ . | {
"source": [
"https://physics.stackexchange.com/questions/17477",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4521/"
]
} |
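A small symbolic check of the two commutators that drive this argument, eqs. (4) and (6) (a sketch using sympy; the wavefunction and operator definitions are generic placeholders):

```python
import sympy as sp

x, t, hbar = sp.symbols('x t hbar', real=True)
psi = sp.Function('psi')(x, t)

p = lambda f: (hbar / sp.I) * sp.diff(f, x)   # momentum operator in the Schrodinger representation
E = lambda f: sp.I * hbar * sp.diff(f, t)     # the candidate "i*hbar*d/dt" operator

# [x, p] psi = i*hbar*psi  -- the canonical commutation relation, eq. (4)
print(sp.simplify(x*p(psi) - p(x*psi)))

# [i*hbar*d/dt, t] psi = i*hbar*psi  -- eq. (6): i*hbar*d/dt does NOT commute with t,
# whereas the Hamiltonian does, since t is a parameter rather than an operator
print(sp.simplify(E(t*psi) - t*E(psi)))
```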
17,509 | For a time-like geodesic, the affine parameter is the proper time $\tau$ or a linear transform of it, and the geodesic equation is $$\frac{\mathrm d^{2}x^{\mu}}{\mathrm d\tau^{2}}+\Gamma_{\rho\sigma}^{\mu}\frac{\mathrm dx^{\rho}}{\mathrm d\tau}\frac{\mathrm dx^{\sigma}}{\mathrm d\tau}=0. $$ But the proper time $\Delta\tau=0$ for null paths, so what is the physical meaning of the affine parameter for a null geodesic? | If you forget about the affine-ness for a moment: you can parametrize a null geodesic in any way you want. Actually, you can parametrize any geodesic (heck, even any curve) in any way you want; all you need is a monotonic function that maps points on the geodesic to unique values of the parameter. But for timelike geodesics, you almost always use the proper time because it's a nice, sensible physical quantity that also happens to work as a parameter. With null geodesics, you don't have the proper time as an option because the proper time mapping assigns the same value to all points on the geodesic. So you have to pick some other parametrization. In principle, again, it can be any monotonic function that maps points on the geodesic to unique values of the parameter. However, it's possible to parametrize the null geodesic in a way that is "sensible" in the same way that proper time is "sensible" for a timelike geodesic. This is called an affine parameter. In particular, one way to define an affine parameter is that it satisfies the geodesic equation. (Note: the geodesic equation does not work for just any arbitrary parametrization of a geodesic. You have to use an affine parameter.) Another way is to say that iff the parametrization is affine, parallel transport preserves the tangent vector, as Wikipedia does. Another way is to say that the acceleration is perpendicular to the velocity given an affine parameter, as Ron did. All these definitions are equivalent. It turns out, although I don't know the details of a proof, that there is a unique affine parameter for any geodesic, up to transformations of the form $t \to at+b$. | {
"source": [
"https://physics.stackexchange.com/questions/17509",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3887/"
]
} |
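A short calculation making the "affine parameters are special" statement concrete (a sketch, using the question's notation but with a dummy index $\nu$ to avoid clashing with the parameter name). Suppose $\lambda$ is affine, so the geodesic equation holds in $\lambda$, and let $\lambda = f(\sigma)$ be any reparametrization. Then
$$\frac{\mathrm d^{2}x^{\mu}}{\mathrm d\sigma^{2}}+\Gamma^{\mu}_{\rho\nu}\frac{\mathrm dx^{\rho}}{\mathrm d\sigma}\frac{\mathrm dx^{\nu}}{\mathrm d\sigma} = f'^{\,2}\left(\frac{\mathrm d^{2}x^{\mu}}{\mathrm d\lambda^{2}}+\Gamma^{\mu}_{\rho\nu}\frac{\mathrm dx^{\rho}}{\mathrm d\lambda}\frac{\mathrm dx^{\nu}}{\mathrm d\lambda}\right) + f''\,\frac{\mathrm dx^{\mu}}{\mathrm d\lambda} = \frac{f''}{f'}\frac{\mathrm dx^{\mu}}{\mathrm d\sigma},$$
so $\sigma$ satisfies the geodesic equation in the same form exactly when $f''=0$, i.e. $\lambda = a\sigma + b$. That is the sense in which the affine parameter is unique up to transformations $t \to at+b$.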
17,741 | On a systems level, I understand that as electrons are pushed into a wire, there is a net field and a net electron velocity. And I've read that the net electron drift is slow. But electricity travels through the wire, essentially at c, and I want to understand that mechanism. My apologies if my question is poorly stated, I know bare bones undergraduate quantum theory circa 1990s but it doesn't explain the motion of electricity in detail. Here's my conception, I'm hoping someone will fill in the holes, ha ha. An electron moves into the wire. It's got a kinetic energy. After travelling a short distance, it spontaneously emits a photon, which hits another electron in a valence shell. That electron then presumably does the same. If this conception is simply wrong, please enlighten me. The questions that arise: Presumably at this level, electrons are acting more like waves and less like particles, but is there any classical component in the picture, i.e. are electrons coming in imparting other electrons with kinetic energy through repulsion, or does it not work that way? If electrons momentarily have energy, then pass it on by a photon, what determines when that photon is emitted, and what frequency it will be? I assume that electrons in this cloud are not limited by any kind of exclusion principle, and that any frequencies are possible? Why should a photon emitted by an electron be in the direction of travel? Conservation of momentum tells me that if an electron is moving, the photon should be emitted in that direction, slowing the electron, but could an electron emit a photon in the opposite direction? If it did, I assume it would somehow have had to absorb energy from elsewhere? That sounds possible by analogy with quantum tunneling. What is the mechanism by which electrons propagating increase the temperature of the material? Are they transmitting energy to the electrons in the valence shell, which tug at the nucleus, do some photons hit the nuclei directly, or is there some other way? Presumably, electricity travels slower than light, because there is some time in each exchange, and some time when electrons are moving at sublight speeds before emitting a photon. By how much is this slower than light, and what is the speed of each interaction? | I will try to address the misunderstandings first, then answer the question. Particle exchange force model is not causal There is a flaw in your thinking, in that you are formulating the electromagnetic interaction in terms of photon emission and absorption and at the same time telling a story forward in time. These two ideas are both ok separately, but not together. The particle emission/absorption picture is not a causal picture--- it requires that the particles go back and forth in time--- so you can't use causal language, like an electron emits a photon which kicks an electron etc. That's part of the story, but another part of the story is: an electron emits a photon which had already kicked an electron earlier, which emitted a different photon earlier than the first, etc, etc.
If you go to a unitary Hamiltonian causal picture, you renounce the idea that the field is due to particle emission and absorption (I only said unitary for a technical reason: it is conceivable somebody could make a nonunitary Hamiltonian formulation with unphysical photon polarizations which contain the Coulomb force, but then these unphysical photons would only be intermediate states, since the physical photons are not responsible for the Coulomb interaction anyway). The acausality in the Feynman description is not a problem with consistency, because there are causal formulations of QED, one of which is Dirac's. Here the electrostatic repulsion is not due to photon exchange, but is instantaneous action-at-a-distance, while photons travel with only the physical transverse polarization. In Feynman's particle push picture, the electrostatic interaction is due to unphysically polarized photons travelling much faster than the speed of light, and these photons just are not present in Dirac's equivalent formulation. Anyway, the best way to understand electron motion is using the classical electric and magnetic fields produced by the electrons. It's not the electrons pushing The electrons in a wire are not pushed by other electrons. They are pushed by the external voltage applied to the wire. The voltage is a real thing, it is a material field, it has a source somewhere at the power plant, and the power plant transmits the power through electric and magnetic fields, not by electron pushes. The electron repulsion in a metal is strongly screened, meaning that an electron travelling along at a certain speed will not repel an electron 100 atomic radii away. In many cases, it will even attract that electron due to weak phonon exchange (this weak attraction gives superconductivity, and essentially all ordinary metals become superconducting at some low enough temperature). You can completely neglect the interelectron repulsion for the problem of conduction, and just ask about external fields rearranging charges in the wire. Fermi surface, not wire surface The only electrons that carry current are those near the Fermi surface. The Fermi surface is in momentum space, it is not a surface in physical space. The electrons which carry the current are distributed everywhere throughout the wire. But they all have nearly the same momentum magnitude (if the Fermi surface is spherical, which I will assume without comment in the remainder). The behavior of a Fermi gas is neither like a particle nor a wave. It is not a wave, because the occupation number is 0 or 1, so that there is no coherent superposition of a large number of particles in the same state, but it is also not like a particle, because the particle is not allowed to have momentum states lower than the Fermi momentum, by Pauli exclusion. The particle is traveling through a fluid of identical particles that jam up all the states with momentum smaller than the Fermi momentum. This strange new thing (new in the 1930s, at least), is the Fermi quasiparticle. It is the excitation of a cold quantum gas, and to picture it in some reasonable terms, you have to think of a single particle which is always required to move faster than a certain speed; it cannot slow down below this speed, because all these states are already occupied, but it can vary its direction. It has an energy which is proportional to the difference in speed from the lower bound.
This lower bound on the speed is the Fermi velocity, which in metals is the velocity of an electron with a wavelength of a few angstroms, which is about the orbital velocity in the Bohr model, or a few thousand kilometers per second. The Fermi liquid model of dense metals is the correct model, and it supersedes all previous models. The speed of the current carrying electrons is this few thousand kilometers per second, but at longer distances, there are impurities and phonons which scatter the electrons, and this can reduce the propagation to a diffusion process. The electronic diffusion doesn't have a speed, because distance in diffusion is not proportional to time. So the only reasonable answer to the question "what is the speed of an electron in a metal?" is the Fermi velocity, although one must emphasize that an injected electron will not travel a macroscopic distance at this speed in a metal with impurities. 1. Presumably at this level, electrons are acting more like waves and less like particles, but is there any classical component in the picture, i.e. are electrons coming in imparting other electrons with kinetic energy through repulsion, or does it not work that way? In order to use a time-ordered causal language (this does that, then this does that), you need electric and magnetic fields, not photons. The electrons are not what is coming into the wire to make it conduct, the thing that is coming in is an electric field. When you switch on a light, you touch a high voltage metal to a neutral metal, instantly raising the voltage, and making an electric field along the metal. This field accelerates the electrons near the Fermi surface (not on the wire surface, those near the Fermi momentum) to travel faster in the direction of (minus) the electric field E. It can only accelerate those electrons which can be sped up into new states, so it only speeds up electrons which are already running around at the Fermi velocity. These electrons keep moving until they build up enough charge on the surface of the metal to cancel out the electric field, and to bend the electric field direction to follow the wire wherever the wire curves. This causal propagation is Field-Electrons-Field, and the only electrons which serve to shunt the field are those which are building up charges on the surface of the wire (and the protons on the surface which also redirect the field where there needs to be positive charge). When you apply a constant voltage, the electrons come to a steady state where they are carrying the current from the negative voltage to the positive voltage, making the voltage drops line up in space along the direction of the wire, no matter what the shape, and bouncing off impurities and phonons to dissipate the energy they get from the field into phonons (heat). The local electric field drives their motion, not their mutual repulsion. In that sense, it is not like water in a pipe. It is more like a collection of independent ball bearings pushed by a magnet, except that the ball bearings shunt the magnetic field to go along the direction of their motion. 2. If electrons momentarily have energy, then pass it on by a photon, what determines when that photon is emitted, and what frequency it will be? I assume that electrons in this cloud are not limited by any kind of exclusion principle, and that any frequencies are possible? The electrons in the cloud are not only limited by exclusion, they are dominated by exclusion, this is the Fermi gas.
It is not the electrons pushing other electrons, it is the field pushing the electrons. The photon particle-exchange picture is irrelevant to this, but if you insist on using it, then the photons are coming out of the wall socket, having followed the high-voltage wires from the power-plant in a back-and-forth zig-zag in time, and a negligible fraction of the photons are emitted by the conduction electrons, since all those photons are absorbed into phonons by the metal within a screening length. The photons coming from the wall are bounced around by surface charges on the wire (static electrons and protons) so that they bounce around to follow the path of the wire in steady state. 3. Why should a photon emitted by an electron be in the direction of travel? Conservation of momentum tells me that if an electron is moving, the photon should be emitted in that direction, slowing the electron, but could an electron emit a photon in the opposite direction? If it did, I assume it would somehow have had to absorb energy from elsewhere? That sounds possible by analogy with quantum tunneling. Photons are emitted in all directions, and back in time. It just is not useful to think of the Feynman picture when you want to think causally. 4. What is the mechanism by which electrons propagating increase the temperature of the material? Are they transmitting energy to the electrons in the valence shell, which tug at the nucleus, do some photons hit the nuclei directly, or is there some other way? So far, I have been treating the electrons as a gas of free particles. But you might be upset--- there are lots of nuclei around! How can you treat them as a gas? Don't they bounce off the nuclei? The reason you can do this is that a quantum mechanical particle which is confined to a lattice and has amplitudes to hop to neighboring points behaves exactly the same as a free particle obeying the Schrodinger equation (at least at long distances). It does not dissipate at all, it just travels along obeying a discrete version of the Schrodinger equation with a different mass, determined by the hopping amplitudes. In Solid State physics, this type of picture is called the "tight binding model", but it is really more universal than this. In any potential, the electrons make bands, and the bands fill up to the Fermi surface. But the picture is not different from a free gas of particles, except for losing rotational symmetry. If the lattice were perfect, this picture would be exact, and the metal would not have any dissipation losses at all. But at finite temperature there are phonons, defects, and a thermal skin of electrons already excited at a little more energy than the Fermi surface. The phonons, defects, and thermal electrons can scatter the conducting electrons inelastically, and this is the mechanism of energy loss. The electrons can also emit phonons spontaneously, if their energy is far enough above the Fermi surface so that they are no longer stable. All of these effects tend to vanish at zero temperature (with the exception of defects, which can be frozen in, but then the defects become elastic). But at cold enough temperatures, you don't go to zero conductivity smoothly. Instead, you tend to have a phase transition to a superconducting state. 5. Presumably, electricity travels slower than light, because there is some time in each exchange, and some time when electrons are moving at sublight speeds before emitting a photon. By how much is this slower than light, and what is the speed of each interaction?
This is again confusing the Feynman description with a causal description. But I did this experiment as an undergraduate, and along a good coaxial cable, the speed was 2/3 the speed of light. I assume that if you use an ordinary wire in a coil on the floor, it's going to be significantly slower, perhaps only 1% of the speed of light, because it requires more finagling of surface charges for the wire to set up the field to follow its curves. | {
"source": [
"https://physics.stackexchange.com/questions/17741",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2653/"
]
} |
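A back-of-the-envelope comparison of the two very different speeds discussed in this answer, using textbook values for copper (a sketch; the 1 A current and 1 mm² cross-section are arbitrary illustrative choices):

```python
import numpy as np

m_e = 9.109e-31       # kg, electron mass
e = 1.602e-19         # C
E_F = 7.0 * e         # J, Fermi energy of copper (~7 eV, textbook value)
n = 8.5e28            # 1/m^3, conduction-electron density of copper

# Fermi velocity: the speed the current-carrying electrons actually move at
v_F = np.sqrt(2 * E_F / m_e)
print(v_F)            # ~1.6e6 m/s, i.e. on the order of a thousand km/s

# drift velocity: the tiny net velocity for I = 1 A through a 1 mm^2 wire
I, A = 1.0, 1e-6
v_drift = I / (n * e * A)
print(v_drift)        # ~7e-5 m/s, some ten orders of magnitude below the Fermi velocity
```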
17,944 | I'm not a particle physicist, but I did manage to get through the Feynman lectures without getting too lost.
Is there a way to explain how the Higgs field works, in a way that people like me might have a hope of understanding? | The Higgs mechanism is no different from superconductivity, except that here the condensate responsible for the effect is a relativistically invariant scalar field. If you have a bosonic field, its particles can be in a Bose-Einstein condensate. When this condensate is charged, you call it a superconductor. A photon in a superconductor gets a mass, and this is the Higgs mechanism. For a relativistic boson described by a scalar field, you give the field a constant nonzero value to make a condensate. When the field has charge, this makes a superconducting condensate which gives the gauge boson a mass. The whole effect is described in detail on the Wikipedia page on the Higgs mechanism, starting from a nonrelativistic superconductivity model of bosonic particles, and continuing analogously to relativistic condensates. | {
"source": [
"https://physics.stackexchange.com/questions/17944",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5223/"
]
} |
17,966 | I've heard occasional mentions of the term "bootstraps" in connection with the S Matrix. I believe it applies to an old approach that was tried in the 1960s, whereby - well I'm not sure - but it sounds like they tried to compute the S Matrix without the interaction picture/perturbation theory approach that we currently use. I'm aware that the approach was abandoned, but my question is: how was it envisaged to work? What was the input to the calculation supposed to be and how did the calculation proceed? I know it's got something to do with analyticity properties in terms of the momenta, but that's all I know.... For example, from the Wikipedia article: Chew and followers believed that it would be possible to use crossing symmetry and Regge behavior to formulate a consistent S-matrix for infinitely many particle types. The Regge hypothesis would determine the spectrum, crossing and analyticity would determine the scattering amplitude--- the forces, while unitarity would determine the self-consistent quantum corrections in a way analogous to including loops. For example - I can't really understand how you would hope to compute the scattering amplitude just given crossing symmetry and assuming analyticity | This approach most definitely does work, it just doesn't give the fundamental theory of the strong interactions, it gives string theory. String theory was originally defined by Veneziano's bootstrap formula for the leading term in an S-matrix expansion, and the rest was worked out order by order to unitarize the S-matrix, not through a field theory expansion. This is good, because string theory isn't field theory, and it can't be derived from a field theory Lagrangian, at least not in the usual way. The result can nowadays be interpreted in terms of string field theory, or nonperturbative AdS/CFT constructions, but it is a new theory, which involves infinite towers of particles interacting consistently without a field theory underneath it all. The idea that S-matrix theory died is a political thing. The original 1960s S-matrix people found themselves out of a job when QCD took over, and the new generation that co-opted string theory in 1984 mostly wanted to pretend that they had come up with the theory, because they were all on the winning side of the S-matrix/field-theory battle. This is unfortunate, because, in my opinion, the most interesting physics of the 1960s and early 1970s was S-matrix physics (and this is a period that saw the greatest field theory work in history, including quarks and the standard model!) Chew Bootstrap A bootstrap is a requirement that you compute the S-matrix directly without a quantum field theory. In order for the theory to be interesting, the S-matrix should obey certain properties abstracted away from field theory: it should be unitary; it should be Lorentz invariant; it should be crossing invariant, meaning that the antiparticle scattering should be described by the analytic continuation of the particle scattering; it should obey the Landau property--- that all singularities of scattering are poles and cuts corresponding to exchange of collections of real particles on shell; and it should obey (Mandelstam) analyticity: the amplitude should be writable as an integral over the imaginary part of the cut discontinuity from production of physical particles. Further, this cut discontinuity itself can be expanded in terms of another cut discontinuity (these are the mysterious then and still mysterious now double dispersion relations of Mandelstam).
This is a sketchy summary, because each of these conditions is involved. The unitarity condition in particular, is very difficult, because it is so nonlinear. The only practical way to solve it is in a perturbation series which starts with weakly interacting nearly stable particles (described by poles of the S-matrix) which exchange each other (the exchange picture is required by crossing, and the form of the scattering is fixed by the Landau and Mandelstam analyticity, once you know the spectrum). The "Bootstrap property" is then the following heuristic idea, which is included in the above formal relations: The particles and interactions which emerge as the spectrum of the S-matrix from the scattering of states, including their binding together into bound states, should be the same spectrum of particles that come in as in-states. This is a heuristic idea, because it is only saying that the S-matrix is consistent, and the formal consistency relations are those above. But the bootstrap was a slogan that implied that all the consistency conditions were not yet discovered, and there might be more. This idea was very inspirational to many great people in the 1960s, because it was an approach to strong interactions that could accommodate non-field theories of infinitely many particle types of high spin, without postulating constituent particles (like quarks and gluons). Regge theory The theory above doesn't get you anywhere without the following additional stuff. If you don't do this, you end up starting with a finite number of particles and interactions, and then you end up in effective field theory land. The finite-number-of-particles version of S-matrix theory is a dead end, or at least, it is equivalent to effective field theory, and this was understood in the late 1960s by Weinberg, and others, and this led S-matrix theory to die. This was the road the Chew travelled on, and the end of this road must be very personally painful to him. But there is another road for S-matrix theory which is much more interesting, so that Chew should not be disheartened. You need to know that the scattering amplitude is analytic in the angular momentum of the exchanged particles, so that the particles lie on Regge trajectories, which give their angular momentum as a function of their mass squared, s. Where the Regge trajectories hit an integer angular momentum, you see a particle. The trajectory interpolates the particle mass-squared vs. angular momentum graph, and it gives the asymptotic scattering caused by exchanging all these particles together . This scattering can be softer than the exchange of any one of these particles, because exchanging a particle of high spin necessarily has very singular scattering amplitudes at high energy. The Regge trajectory cancels out this growth with an infinite series of higher particles which soften the blowup, and lead to a power-law near-beam scattering at an angle which shrinks to zero as the energy goes to infinity in a way determined by the shape of the trajectory. So the Regge bootstrap adds the following conditions All the particles in the theory lie on Regge trajectories, and the scattering of these particles is by Regge theory. This condition is the most stringent, because you can't deform a pure Regge trajectory by adding a single particle--- you have to add new trajectories. The following restriction was suggested by experiment The Regge trajectories are linear in s This was suggested by Chew and Frautschi from the resonances known in 1960! 
The straight lines mostly had two points. The next condition is also ad-hoc and experimental: The Regge slope is universal (for mesons), it's the same for all the trajectories. There are also "pomerons" in this approach which are not mesons, which have a different Regge slope, but ignore this for now. Finally, there is the following condition, which was experimentally motivated, but has derivations by Mandelstam and others from more theoretical foundations (although this is S-matrix theory, it doesn't have axioms, so derivation is a loose word). The exchange of trajectories is via the s-channel or the t-channel, but not both. It is double counting to exchange the same trajectories in both channels. These conditions essentially uniquely determine Veneziano's amplitude and bosonic string theory. Adding fermion trajectories requires Ramond-style supersymmetry, and then the road to string theory is to reinterpret all these conditions in the string picture which emerges. String theory incorporates and gives concrete form to all the bootstrap ideas, so much so that anyone doing bootstrap today is doing string theory, especially since AdS/CFT showed why the bootstrap is relevant to gauge theories like QCD in the first place. The highlight of Regge theory is the Reggeon calculus, a full diagrammatic formalism, due to Gribov, for calculating the exchange of pomerons in a perturbation framework. This approach inspired a 2d parton picture of QCD which is studied heavily by several people, notably Gribov, Lipatov, Feynman (as part of his parton program), and more recently Rajeev. Nearly every problem here is open and interesting. For an example of a research field which (partly) emerged from this, one of the major motivations for taking PT quantum mechanics seriously was the strange non-Hermitian form of the Reggeon field theory Hamiltonian. Pomerons and Reggeon Field theory The main success of this picture is describing near-beam scattering, or diffractive scattering, at high energies. The idea here is that there is a Regge trajectory which is called the pomeron, which dominates high energy scattering, and which has no quantum numbers. This means that any particle will exchange the pomeron at high energies, so that p-pbar and p-p total cross sections will become equal. This idea is spectacularly confirmed by mid 90's measurements of total p-p and p-pbar cross sections, and in a better political climate, this would have won some bootstrap theorists a Nobel prize. Instead, it is never mentioned. The pomeron in string theory becomes the closed string, which includes the graviton, which couples universally to stress energy. The relation between the closed string and the QCD pomeron is the subject of active research, associated with the names of Lipatov, Polchinski, Tan, and collaborators. Regge scattering also predicts near beam scattering amplitudes from the sum of the appropriate trajectory function you can exchange. These predictions have been known to roughly work since the late 1960s. Modern work The S-matrix bootstrap has had something of a revival in the last few years, due to the fact that Feynman diagrams are more cumbersome for SUGRA than the S-matrix amplitudes, which obey remarkable relations. These partly come from the Kawai-Lewellen-Tye open-string closed-string relations, which relate the gauge-sector of a string theory to the gravity sector.
These relations are pure S-matrix theory, they are derived by a weird analytic continuation of the string scattering integral, and they are a highlight of the 1980s. People today are busy using unitarity and tree-level S-matrices to compute SUGRA amplitudes, with the goal of proving the all-but-certain finiteness of N=8 SUGRA. This work is reviving the interest in the bootstrap. There is also the top-down and bottom-up AdS/QCD approach, which attempts to fit the strong interactions by a string theory model, or a more heuristic semi-string approximation. But the hard bootstrap work, of deriving Regge theory from QCD, has not even begun. The closest is the interpretation of a field-theoretic BFKL pomeron in string theory by Brower, Polchinski, Strassler, Tan and the Lipatov group, which links the perturbative pomeron to the nonperturbative 1960s pomeron for the first time. I apologize for the sketch, but this is a huge field of which I have read only a fraction of the literature, and only done a handful of the more trivial calculations, and I believe it is a scandalously neglected world. | {
"source": [
"https://physics.stackexchange.com/questions/17966",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3099/"
]
} |
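A tiny numeric illustration of how a linear Regge trajectory fixes the spectrum, using the Veneziano amplitude $A(s,t)=\Gamma(-\alpha(s))\,\Gamma(-\alpha(t))/\Gamma(-\alpha(s)-\alpha(t))$ with $\alpha(x)=\alpha(0)+\alpha' x$ (a sketch; the intercept and slope below are arbitrary illustrative choices, not fitted meson values):

```python
from math import gamma

def alpha(x, intercept=0.5, slope=1.0):
    # linear Regge trajectory (illustrative parameters)
    return intercept + slope * x

def veneziano(s, t):
    # crossing-symmetric in s <-> t; poles wherever alpha(s) or alpha(t) hits 0, 1, 2, ...
    return gamma(-alpha(s)) * gamma(-alpha(t)) / gamma(-alpha(s) - alpha(t))

# approach the first s-channel pole, alpha(s) = 0 at s = -0.5: the amplitude blows up,
# signalling an exchanged state on the trajectory -- the "spectrum" of the bootstrap
for s in (-0.45, -0.49, -0.499):
    print(s, veneziano(s, t=-1.2))
```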
18,088 | Noether's theorem relates symmetries to conserved quantities. For a central potential $V \propto \frac{1}{r}$, the Laplace-Runge-Lenz vector is conserved. What is the symmetry associated with the conservation of this vector? | Hamiltonian Problem. The Kepler problem has Hamiltonian $$\begin{align} H~=~&T+V, \cr
T~:=~& \frac{p^2}{2m}, \cr
V~:=~& -\frac{k}{q}, \end{align}\tag{1} $$ where $m$ is the 2-body reduced mass. The Laplace–Runge–Lenz vector is (up to an irrelevant normalization) $$\begin{align} A^j ~:=~&a^j + km\frac{q^j}{q}, \cr
a^j~:=~&({\bf L} \times {\bf p})^j\cr
~=~&{\bf q}\cdot{\bf p}~p^j- p^2~q^j, \cr
\cr {\bf L}~:=~& {\bf q} \times {\bf p}.\end{align} \tag{2}$$ Action. The Hamiltonian Lagrangian is $$ L_H~:=~ \dot{\bf q}\cdot{\bf p} - H,\tag{3} $$ and the action is $$ S[{\bf q},{\bf p}]~=~ \int {\rm d}t~L_H .\tag{4}$$ The non-zero fundamental canonical Poisson brackets are $$ \{ q^i , p^j\}~=~ \delta^{ij}. \tag{5}$$ Inverse Noether's Theorem. Quite generally in the Hamiltonian formulation, given a constant of motion $Q$ , then the infinitesimal variation $$\delta
~=~ -\varepsilon \{Q,\cdot\}\tag{6}$$ is a global off-shell symmetry of the action $S$ (modulo boundary terms). Here $\varepsilon$ is an infinitesimal global parameter, and $X_Q=\{Q,\cdot\}$ is a Hamiltonian vector field with Hamiltonian generator $Q$ . The full Noether charge is $Q$ , see e.g. my answer to this question . (The words on-shell and off-shell refer to whether the equations of motion are satisfied or not. The minus is conventional.) Variation. Let us check that the three Laplace–Runge–Lenz components $A^j$ are Hamiltonian generators of three continuous global off-shell symmetries of the action $S$ . In detail, the infinitesimal variations $\delta= \varepsilon_j \{A^j,\cdot\}$ read $$\begin{align} \delta q^i
~=~& \varepsilon_j \{A^j,q^i\} , \cr
\{A^j,q^i\} ~=~& 2 p^i q^j - q^i p^j - {\bf q}\cdot{\bf p}~\delta^{ij}, \cr
\delta p^i ~=~& \varepsilon_j \{A^j,p^i\} , \cr
\{A^j,p^i\}~=~& p^i p^j - p^2~\delta^{ij} +km\left(\frac{\delta^{ij}}{q}- \frac{q^i q^j}{q^3}\right), \cr
\delta t ~=~&0,\end{align} \tag{7}$$ where $\varepsilon_j$ are three infinitesimal parameters. Notice for later that $$ {\bf q}\cdot\delta {\bf q}~=~\varepsilon_j({\bf q}\cdot{\bf p}~q^j - q^2~p^j), \tag{8} $$ $$\begin{align} {\bf p}\cdot\delta {\bf p}
~=~&\varepsilon_j km(\frac{p^j}{q}-\frac{{\bf q}\cdot{\bf p}~q^j}{q^3})\cr
~=~& -\frac{km}{q^3}{\bf q}\cdot\delta {\bf q},\end{align} \tag{9} $$ $$\begin{align} {\bf q}\cdot\delta {\bf p}~=~&\varepsilon_j({\bf q}\cdot{\bf p}~p^j - p^2~q^j )\cr
~=~&\varepsilon_j a^j, \end{align} \tag{10} $$ $$\begin{align} {\bf p}\cdot\delta {\bf q}~=~&2\varepsilon_j( p^2~q^j - {\bf q}\cdot{\bf p}~p^j)\cr
~=~&-2\varepsilon_j a^j~.\end{align} \tag{11} $$ The Hamiltonian is invariant $$ \delta H ~=~ \frac{1}{m}{\bf p}\cdot\delta {\bf p} + \frac{k}{q^3}{\bf q}\cdot\delta {\bf q}~=~0, \tag{12}$$ showing that the Laplace–Runge–Lenz vector $A^j$ is classically a constant of motion $$\frac{dA^j}{dt} ~\approx~ \{ A^j, H\}+\frac{\partial A^j}{\partial t} ~=~ 0.\tag{13}$$ (We will use the $\approx$ sign to stress that an equation is an on-shell equation.) The variation of the Hamiltonian Lagrangian $L_H$ is a total time derivative $$\begin{align} \delta L_H
~=~& \delta (\dot{\bf q}\cdot{\bf p}) \cr
~=~& \dot{\bf q}\cdot\delta {\bf p} - \dot{\bf p}\cdot\delta {\bf q} + \frac{d({\bf p}\cdot\delta {\bf q})}{dt} \cr
~=~& \varepsilon_j\left( \dot{\bf q}\cdot{\bf p}~p^j - p^2~\dot{q}^j + km\left( \frac{\dot{q}^j}{q} - \frac{{\bf q} \cdot \dot{\bf q}~q^j}{q^3}\right)\right) \cr
~-~&\varepsilon_j\left(2 \dot{\bf p}\cdot{\bf p}~q^j - \dot{\bf p}\cdot{\bf q}~p^j- {\bf p}\cdot{\bf q}~\dot{p}^j \right) - 2\varepsilon_j\frac{da^j}{dt}\cr
~=~&\varepsilon_j\frac{df^j}{dt}, \cr
f^j ~:=~& A^j-2a^j, \end{align} \tag{14}$$ and hence the action $S$ is invariant off-shell up to boundary terms. Noether charge. The bare Noether charge $Q_{(0)}^j$ is $$\begin{align} Q_{(0)}^j~:=~& \frac{\partial L_H}{\partial \dot{q}^i} \{A^j,q^i\}+\frac{\partial L_H}{\partial \dot{p}^i} \{A^j,p^i\} \cr
~=~& p^i\{A^j,q^i\}\cr
~=~& -2a^j.\end{align} \tag{15}$$ The full Noether charge $Q^j$ (which takes the total time-derivative into account) becomes (minus) the Laplace–Runge–Lenz vector $$\begin{align} Q^j~:=~&Q_{(0)}^j-f^j\cr
~=~& -2a^j-(A^j-2a^j)\cr
~=~& -A^j.\end{align}\tag{16}$$ $Q^j$ is conserved on-shell $$\frac{dQ^j}{dt} ~\approx~ 0,\tag{17}$$ due to Noether's first Theorem . Here $j$ is an index that labels the three symmetries. Lagrangian Problem. The Kepler problem has Lagrangian $$\begin{align} L~=~&T-V, \cr
T~:=~& \frac{m}{2}\dot{q}^2, \cr
V~:=~& -\frac{k}{q}. \end{align} \tag{18} $$ The Lagrangian momentum is $$ {\bf p}~:=~\frac{\partial L}{\partial \dot{\bf q}}~=~m\dot{\bf q} \tag{19} . $$ Let us project the infinitesimal symmetry transformation (7) to the Lagrangian configuration space $$\begin{align} \delta q^i ~=~& \varepsilon_j m \left( 2 \dot{q}^i q^j - q^i \dot{q}^j - {\bf q}\cdot\dot{\bf q}~\delta^{ij}\right), \cr
\delta t ~=~&0.\end{align}\tag{20}$$ It would have been difficult to guess the infinitesimal symmetry transformation (20) without using the corresponding Hamiltonian formulation (7). But once we know it we can proceed within the Lagrangian formalism. The variation of the Lagrangian is a total time derivative $$\begin{align} \delta L~=~&\varepsilon_j\frac{df^j}{dt}, \cr
f_j~:=~& m\left(m\dot{q}^2q^j- m{\bf q}\cdot\dot{\bf q}~\dot{q}^j +k
\frac{q^j}{q}\right)\cr
~=~&A^j-2 a^j
. \end{align}\tag{21}$$ The bare Noether charge $Q_{(0)}^j$ is again $$Q_{(0)}^j~:=~2m^2\left(\dot{q}^2q^j- {\bf q}\cdot\dot{\bf q}~\dot{q}^j\right) ~=~-2a^j . \tag{22}$$ The full Noether charge $Q^j$ becomes (minus) the Laplace–Runge–Lenz vector $$\begin{align} Q^j~:=~&Q_{(0)}^j-f^j \cr
~=~& -2a^j-(A^j-2a^j)\cr
~=~& -A^j,\end{align}\tag{23}$$ similar to the Hamiltonian formulation (16). | {
"source": [
"https://physics.stackexchange.com/questions/18088",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3936/"
]
} |
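A quick numerical sanity check that the vector $A^j = ({\bf L}\times{\bf p})^j + km\,q^j/q$ of eq. (2) is indeed conserved along a Kepler orbit (a sketch in units $m=k=1$, with an arbitrary elliptical initial condition):

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k = 1.0, 1.0

def rhs(t, y):
    q, p = y[:2], y[2:]
    r = np.linalg.norm(q)
    return np.concatenate([p / m, -k * q / r**3])   # Hamilton's equations for H = p^2/2m - k/q

def lrl(q, p):
    Lz = q[0]*p[1] - q[1]*p[0]                      # L = q x p points out of the orbital plane
    a = np.array([-Lz*p[1], Lz*p[0]])               # (L x p) for L along z
    return a + k*m*q/np.linalg.norm(q)

y0 = [1.0, 0.0, 0.0, 1.2]                           # gives an elliptical orbit
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-10, atol=1e-12, dense_output=True)

for t in (0.0, 10.0, 25.0, 50.0):
    q, p = sol.sol(t)[:2], sol.sol(t)[2:]
    print(t, lrl(q, p))                             # stays at ~(-0.44, 0.0) to integrator accuracy
```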
18,228 | This might be a little naive question, but I am having difficulty grasping the concept of irreducible tensors. Particularly, why do we decompose tensors into symmetric and anti-symmetric parts? I have not found a justification for this in my readings and would be happy to gain some intuition here. | You can decompose a rank two tensor $X_{ab}$ into three parts: $$X_{ab} = X_{[ab]} + (1/n)\delta_{ab}\delta^{cd}X_{cd} + (X_{(ab)}-1/n \delta_{ab}\delta^{cd}X_{cd})$$ The first term is the antisymmetric part (the square brackets denote antisymmetrization). The second term is the trace, and the last term is the trace free symmetric part (the round brackets denote symmetrization). n is the dimension of the vector space. Now under, say, a rotation $X_{ab}$ is mapped to $\hat{X}_{ab}=R_{a}^{c}R_{b}^{d}X_{cd}$ where $R$ is the rotation matrix. The important thing is that, acting on a generic $X_{ab}$, this rotation will, for example, take symmetric trace free tensors to symmetric trace free tensors etc. So the rotations aren't "mixing" up the whole space of rank 2 tensors, they're keeping certain subspaces intact. It is in this sense that rotations acting on rank 2 tensors are reducible. It's almost like separate group actions are taking place, the antisymmetric tensors are moving around between themselves, the traceless symmetrics are doing the same. But none of these guys are getting rotated into members "of the other team". If, however, you look at what the rotations are doing to just , say the symmetric trace free tensors, they're churning them around amongst themselves, but they're not leaving any subspace of them intact. So in this sense, the action of the rotations on the symmetric traceless rank 2 tensors is "irreducible". Ditto for the other subspaces. | {
"source": [
"https://physics.stackexchange.com/questions/18228",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2599/"
]
} |
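A short numerical illustration of the invariant-subspace statement in this answer (a sketch with numpy; the random tensor and the rotation angle are arbitrary):

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
X = rng.normal(size=(n, n))                   # a generic rank-2 tensor

def decompose(T):
    antisym = 0.5 * (T - T.T)
    trace_part = np.trace(T) / n * np.eye(n)
    sym_traceless = 0.5 * (T + T.T) - trace_part
    return antisym, trace_part, sym_traceless

th = 0.7                                       # rotation about the z-axis
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])

rotate = lambda T: R @ T @ R.T                 # X'_{ab} = R_a^c R_b^d X_{cd}

# rotating each part separately gives the same result as decomposing the rotated tensor:
# the three subspaces are preserved, which is the reducibility statement
for part, part_of_rotated in zip(map(rotate, decompose(X)), decompose(rotate(X))):
    print(np.allclose(part, part_of_rotated))  # True, True, True
```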
18,446 | Would the effect of gravity on me change if I were to dig a very deep hole and stand in it? If so, how would it change? Am I more likely to be pulled downwards, or pulled towards the edges of the hole? If there would be no change, why not? | The other answers provide a first-order approximation, assuming uniform density (though Adam Zalcman's does allude to deviations from linearity). (Summary: All the mass farther away from the center cancels out, and gravity decreases linearly with depth from 1 g at the surface to zero at the center.) But in fact, the Earth's core is substantially more dense than the outer layers (mantle and crust), and gravity actually increases a bit as you descend, reaching a maximum at the boundary between the outer core and the lower mantle. Within the core, it rapidly drops to zero as you approach the center, where the planet's entire mass is exerting a gravitational pull from all directions. The Wikipedia article on "gravity of Earth" goes into the details, including a graph of gravity versus depth; "PREM" in the figure refers to the Preliminary Reference Earth Model. Larger versions of the graph can be seen here. And there are other, smaller, effects as well. The Earth's rotation results in a smaller effective gravity near the equator, the equatorial bulge that results from that rotation also has a small effect, and mass concentrations have local effects. | {
"source": [
"https://physics.stackexchange.com/questions/18446",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6736/"
]
} |
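A crude two-layer (core plus mantle) numerical model that reproduces the qualitative PREM behaviour described in this answer (a sketch; the layer densities and the core radius are rough round numbers, not PREM values):

```python
import numpy as np

G = 6.674e-11
R = 6.371e6          # m, Earth's radius
r_core = 3.48e6      # m, core radius (rough)
rho_core = 1.10e4    # kg/m^3, mean core density (rough)
rho_mantle = 4.45e3  # kg/m^3, mean mantle+crust density (rough)

def enclosed_mass(r):
    if r <= r_core:
        return 4/3 * np.pi * r**3 * rho_core
    return 4/3 * np.pi * (r_core**3 * rho_core + (r**3 - r_core**3) * rho_mantle)

def g(r):
    return G * enclosed_mass(r) / r**2      # only the mass inside radius r pulls

print(g(R))            # ~9.8 m/s^2 at the surface
print(g(r_core))       # ~10.7 m/s^2 at the core-mantle boundary: gravity increases with depth
print(g(0.1 * r_core)) # ~1 m/s^2: deep inside the core it falls toward zero at the center
```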
18,576 | My textbook says that the microcanonical ensemble, canonical ensemble and grand canonical ensemble are essentially equivalent in the thermodynamic limit. It also derives the Fermi-Dirac and Bose-Einstein distributions from the grand canonical ensemble. My question is then: how to derive the Fermi-Dirac and Bose-Einstein distributions using the canonical ensemble. The expression for the canonical ensemble
$$\rho\propto e^{-\beta E}$$ seems to imply Boltzmann distribution only. | I don't really see the answer in the other answer so let me do the calculation here. Your general Boltzmann Ansatz says that the probability of a state $n$ depends on its energy as
$$ p_n = C \exp(-\beta E_n) $$
where $\beta = 1/kT$. Fermions are identical particles that, for each "box" or one-particle state they can occupy (given e.g. by $nlms$ in the case of the Hydrogen atom-like states), admit either $N=0$ or $N=1$ particles in it. Higher numbers are forbidden by the Pauli exclusion principle. The energies of the multi-particle state with $N=1$ and $N=0$ in a particular one-particle state $nlms$ differ by $\epsilon$. Consequently,
$$ \frac{p_1}{p_0} = \frac{C\exp(-\beta (E+\epsilon))}{C\exp(-\beta E)} = \exp(-\beta \epsilon) $$
where I used the Boltzmann distribution. However, the probabilities that the number of particles in the given one-particle state is equal to $N=0$ or $N=1$ must add to one,
$$ p_0 + p_1 = 1.$$
These conditions are obviously solved by
$$ p_0 = \frac{1}{1+\exp(-\beta\epsilon)}, \qquad p_1 = \frac{\exp(-\beta\epsilon)}{1+\exp(-\beta\epsilon)}, $$
which implies that the expectation value of $n$ is equal to the right formula for the Fermi-Dirac distribution:
$$\langle N \rangle = p_0\times 0 + p_1 \times 1 = p_1= \frac{1}{\exp(\beta\epsilon)+1} $$
The calculation for bosons is analogous except that the Pauli exclusion principle doesn't restrict $N$. So the number of particles (indistinguishable bosons) in the given one-particle state may be $N=0,1,2,\dots $. For each such number $N$, we have exactly one distinct state (because we can't distinguish the particles). The probability of each such state is called $p_n$ where $n=0,1,2,\dots$. We still have
$$\frac{p_{n+1}}{p_n} = \exp(-\beta\epsilon) $$
and
$$ p_0 + p_1 + p_2 + \dots = 1 $$
These conditions are solved by
$$ p_n = \frac{\exp(-n\beta\epsilon)}{1+\exp(-\beta\epsilon)+\exp(-2\beta\epsilon)+\dots } $$
Note that the ratio of the adjacent $p_n$ is what it should be and the denominator was chosen so that all the $p_n$ from $n=0,1,2\dots$ sum up to one. The expectation value of the number of particles is
$$ \langle N \rangle = p_0 \times 0 + p_1 \times 1 + p_2\times 2 + \dots $$
because the number of particles, an integer, must be weighted by the probability of each such possibility. The denominator is still inherited from the denominator of $p_n$ above; it is equal to a geometric series that sums up to
$$ \frac{1}{1-q} = \frac{1}{1-\exp(-\beta\epsilon)} $$
Don't forget that $1-\exp(-\beta\epsilon)$ is in the denominator of the denominator, so it is effectively in the numerator. However, the numerator of $\langle N \rangle$ is tougher and contains the extra factor of $n$ in each term. Nevertheless, the sum is analytically calculable:
$$ \sum_{n=0}^\infty n \exp(-n \beta\epsilon) = - \frac{\partial}{\partial (\beta\epsilon)} \sum_{n=0}^\infty \exp(-n \beta\epsilon) =\dots$$
$$\dots = - \frac{\partial}{\partial (\beta\epsilon)} \frac{1}{1-\exp(-\beta\epsilon)} = \frac{\exp(-\beta\epsilon)}{(1-\exp(-\beta\epsilon))^2} $$
This result's denominator has a second power. One of the copies gets cancelled with the denominator before and the result is therefore
$$ \langle N \rangle = \frac{\exp(-\beta\epsilon)}{1-\exp(-\beta\epsilon)} = \frac{1}{\exp(\beta\epsilon)-1} $$
which is the Bose-Einstein distribution. You could also obtain another version of the Boltzmann distribution for distinguishable particles by a similar calculation. For such particles, $N$ could take the same values as it did for bosons. However, the multiparticle state with $N$ particles in the one-particle state would be degenerate because the particles are distinguishable. There would actually be $N!$ multiparticle states with $N$ particles in them. The sum would yield a Taylor expansion for the same exponential. Note added later : the derivation above was for $\mu=0$. When the chemical potential is nonzero, all appearances of $\epsilon$ have to be replaced by $(\epsilon-\mu)$. Of course that one may only talk about a well-defined value of $\mu$ when we deal with a grand canonical potential; it is impossible to derive a formula depending on $\mu$ from one that contains no $\mu$ and assumes it's ill-defined. The derivation above was meant to show that the difficult $1/(\exp\pm 1)$ structures do appear from a simpler $\exp(-\beta E)$ Ansatz because I think it's the only nontrivial thing to be shown while discussing the relations between the Boltzmann and BE/FD distributions. If that derivation proves the same link as the textbook does, then I apologize but I think there is "nothing else" of a similar kind to be proven. | {
"source": [
"https://physics.stackexchange.com/questions/18576",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3887/"
]
} |
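A brief numerical cross-check of the two averages derived above, with $\mu=0$, the Boltzmann weights truncated at $N=1$ for fermions, and the geometric series truncated at large $N$ for bosons:

```python
import numpy as np

def fermi_avg(x):
    """Mean occupation from Boltzmann weights with N restricted to 0, 1 (x = beta*epsilon)."""
    n = np.arange(2)
    w = np.exp(-x * n)
    return (n * w).sum() / w.sum()

def bose_avg(x, nmax=500):
    """Mean occupation with one state per N = 0, 1, 2, ... (series truncated at nmax)."""
    n = np.arange(nmax)
    w = np.exp(-x * n)
    return (n * w).sum() / w.sum()

for x in (0.5, 1.0, 2.0):
    print(f"x = {x}:  FD {fermi_avg(x):.6f} vs {1 / (np.exp(x) + 1):.6f}   "
          f"BE {bose_avg(x):.6f} vs {1 / (np.exp(x) - 1):.6f}")
```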
18,981 | Why does there have to be a singularity in a black hole, and not just a very dense lump of matter of finite size? If there's any such thing as granularity of space, couldn't the "singularity" be just the smallest possible size? | It's important to understand the context in which statements like "there must be a singularity in a black hole" are made. This context is provided by the model used to derive the results. In this case, it was classical (meaning "non quantum") general relativity theory that was used to predict the existence of singularities in spacetime. Hawking and Penrose proved that, under certain reasonable assumptions, there would be curves in spacetime that represented the paths of bodies freely falling under gravity that just "came to an end". For these curves, spacetime behaved like it had a boundary or an "edge". This was the singularity the theory predicted. The results were proved rigorously mathematically, using certain properties of differential equations and topology. Now in this framework, spacetime is assumed to be smooth - it's a manifold - it doesn't have any granularity or minimum length. As soon as you start to include the possibilities of granular spacetime, you've moved outside the framework for which the original Hawking Penrose theorems apply, and you have to come up with new proofs for or against the existence of singularities. | {
"source": [
"https://physics.stackexchange.com/questions/18981",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6934/"
]
} |
18,998 | I have almost no background in physics and I had a question related to Schrodinger's Equation. I think, it is not really research level so feel free to close it, but I would request you to kindly suggest some existing literature which can help me develop a better understanding for the same. While reading up about it from an introductory text on Quantum Physics, I wondered for a little while how does one derive this equation. Very soon, the author ( HC Verma ) added the detail that he just took it on faith when he was himself a student. He goes on to say Schrodinger equation rightly predicts the behavior of atomic transitions etc and people believe that it is a fundamental law of nature for quantum systems. Then he raises the question himself which I wanted to know the answer to. Namely What made Schrodinger write such an equation which became a fundamental equation, not to be derived from more fundamental equations? He then adds that it will be an interesting topic for students of history of science which does not answer my question. Could you please try answering this question (or is this question really useless) ? Thanks for your time | Schrodinger was following Hamilton, deBroglie and Einstein. DeBroglie had noted that matter waves obeyed a relation between momentum and wavenumber, and energy and frequency, $$ E = \hbar \omega $$
$$ p = \hbar k $$ For plane waves of the form $\psi(x) = e^{ikx - i \omega t}$, you learn that the $\omega$ and the $k$ of the wave are the energy and the momentum, up to a unit-conversion factor of $\hbar$. Einstein then noted that the DeBroglie waves will obey the Hamilton Jacobi equation in a semi-classical approximation, and Schrodinger just went about looking for a real wave equation which would reproduce the Hamilton Jacobi equation when you use phases. But the end result is easier than the Hamilton Jacobi equation. For pure sinusoidal waves, the energy and wavenumber are related by $$ E = {p^2\over 2m}$$ Which means that the plane wave satisfies the free Schrodinger equation $$ i\hbar {\partial\over \partial t} \psi = -{\hbar^2 \over 2m} \nabla^2\psi $$ You can check that for a sinusoid, this reproduces the energy/momentum relation. If there is an additional potential, when the wavelength is short, the wavefronts should follow the changing potential to reproduce Newton's laws. The way this is done is to add the potential in the most obvious way $$ i\hbar {\partial\over \partial t} \psi = -{\hbar^2\over 2m} \nabla^2 \psi + V(x) \psi $$ When $V(x) = A - F\cdot x$, where A is a constant offset and F is a constant force vector, the local frequency is slowed down in the direction of bigger potential, curving the wavefronts downward according to Newton's laws. One way of seeing that the equation reproduces Newton's laws comes from Fourier transforms. There is a group-velocity formula for the motion of wavepackets centered at a certain frequency and wavenumber: $$ {dx\over dt} = {\partial \omega \over \partial k} = {p\over m} $$ This equation comes from the idea of beating: waves with a common frequency move together, but the location of constructive interference changes according to the derivative of the frequency with respect to the wavenumber. Identifying the frequency with the energy and the wavenumber with the momentum, this relation reproduces one of Hamilton's equations of motion as a law of motion for the wavepacket solutions of Schrodinger's equation (in the limit of short wavelengths). The other Hamilton equation can be found by Fourier transform, which makes the wavepacket in k become a wavepacket in x, and the group velocity relation becomes the equation for the changing k as a function of time. $$ {dp\over dt} = - {\partial \omega \over \partial x} = -{\partial V\over \partial x}$$ Schrodinger's equation really is the first thing you would guess, and there is no need to make Schrodinger's straightforward ideas look intimidating or axiomatic. It is much more transparent than Heisenberg's reasoning of the time, or for that matter, Einstein's. | {
"source": [
"https://physics.stackexchange.com/questions/18998",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6939/"
]
} |
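The central step above (a plane wave with $\omega = \hbar k^2/2m$ solving the free Schrodinger equation, and the group velocity coming out as $p/m$) is easy to check symbolically; a minimal sympy sketch:

```python
import sympy as sp

# Verify that a plane wave with the free-particle dispersion omega = hbar*k^2/(2m)
# solves the free Schrodinger equation, and that d(omega)/dk = p/m.
x, t, k, m, hbar = sp.symbols('x t k m hbar', positive=True)
omega = hbar * k**2 / (2 * m)
psi = sp.exp(sp.I * (k * x - omega * t))

lhs = sp.I * hbar * sp.diff(psi, t)
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)
print(sp.simplify(lhs - rhs))                  # -> 0, the wave satisfies the equation

v_group = sp.diff(omega, k)                    # group velocity d(omega)/dk
print(sp.simplify(v_group - hbar * k / m))     # -> 0, i.e. v_group = p/m
```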
19,216 | My physics instructor told the class, when lecturing about energy, that it can't be created or destroyed. Why is that? Is there a theory or scientific evidence that proves his statement true or false? I apologize for the elementary question, but I sometimes tend to over-think things, and this is one of those times. :) | At the physics 101 level, you pretty much just have to accept this as an experimental fact. At the upper division or early grad school level, you'll be introduced to Noether's Theorem , and we can talk about the invariance of physical law under displacements in time. Really this just replaces one experimental fact (energy is conserved) with another (the character of physical law is independent of time), but at least it seems like a deeper understanding. When you study general relativity and/or cosmology in depth, you may encounter claims that under the right circumstances it is hard to define a unique time to use for "invariance under translation in time" , leaving energy conservation in question. Even on Physics.SE you'll find rather a lot of disagreement on the matter. It is far enough beyond my understanding that I won't venture an opinion. This may (or may not) overturn what you've been told, but not in a way that you care about. An education in physics is often like that. People tell you about firm, unbreakable rules and then later they say "well, that was just an approximation valid when such and such conditions are met and the real rule is this other thing" . Then eventually you more or less catch up with some part of the leading edge of science and you get to participate in learning the new rules. | {
"source": [
"https://physics.stackexchange.com/questions/19216",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6994/"
]
} |
19,636 | I'm quite familiar with SR, but I have very limited understanding in GR, singularities, and black holes. My friend, which is well-read and is interested in general physics, said that we can "jump" into another universe by entering a black hole. Suppose that we and our equipments can withstand the tidal forces near black holes. We jumped from our spaceship into a black hole. As we had passed the event horizon, we couldn't send any information to the outside anymore. Can this situation be interpreted as that we were in another universe
separate from our previous universe? Are there corrections or anything else to
be added to the above statement? | Let me attempt a more "popular science" answer (Ron please be gentle with me!). In GR a geodesic is the path followed by a freely moving object. There's nothing especially complex about this; if you throw a stone (in a vacuum to avoid air resistance) it follows a geodesic. If the universe is simply connected you'd expect to be able to get anywhere and back by following geodesics. However in a static black hole, described by the Schwarzschild geometry, something a bit odd happens to the geodesics. Firstly anything following a geodesic through the event horizon can't go back the way it came, and secondly all geodesics passing through the event horizon end at a single point i.e. the singularity at the centre of the black hole. If you now rotate the black hole, or you add electric charge to it, or both, you can find geodesics that pass through the event horizons (there are now two of them!), miss the singularity and travel back out of the black hole again. But, and this is where the separate universes idea comes in, you now can't find any geodesics that will take you back to your starting point. So you seem to be back in the normal universe but you're in a region that is disconnected from where you started. Does this mean it's a "separate universe"? That's really a matter of terminology, but I would say not. After all you just got there by coasting; you didn't pass through any portals of the type so beloved by SciFi films. And there's no reason to think that physics is any different where you ended than where you started. If you're interested in pursuing this further I strongly recommend The Cosmic Frontiers of General Relativity by William J. Kaufmann. It claims to be a layman's guide, but few laymen I know could understand it. However if you know SR you shouldn't have any problems with it. | {
"source": [
"https://physics.stackexchange.com/questions/19636",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7140/"
]
} |
19,770 | The Poisson bracket is defined as: $$\{f,g\} ~:=~ \sum_{i=1}^{N} \left[
\frac{\partial f}{\partial q_{i}} \frac{\partial g}{\partial p_{i}} -
\frac{\partial f}{\partial p_{i}} \frac{\partial g}{\partial q_{i}}
\right]. $$ The anticommutator is defined as: $$ \{a,b\} ~:=~ ab + ba. $$ The commutator is defined as: $$ [a,b] ~:=~ ab - ba. $$ What are the connections between all of them? Edit: Does the Poisson bracket define some uncertainty principle as well? | Poisson brackets play more or less the same role in classical mechanics that commutators do in quantum mechanics. For example, Hamilton's equation in classical mechanics is analogous to the Heisenberg equation in quantum mechanics: $$\begin{align}\frac{\mathrm{d}f}{\mathrm{d}t} &= \{f,H\} + \frac{\partial f}{\partial t} & \frac{\mathrm{d}\hat f}{\mathrm{d}t} &= -\frac{i}{\hbar}[\hat f,\hat H] + \frac{\partial \hat f}{\partial t}\end{align}$$ where $H$ is the Hamiltonian and $f$ is either a function of the state variables $q$ and $p$ (in the classical equation), or an operator acting on the quantum state $|\psi\rangle$ (in the quantum equation). The hat indicates that it's an operator. Also, when you're converting a classical theory to its quantum version, the way to do it is to reinterpret all the variables as operators, and then impose a commutation relation on the fundamental operators: $[\hat q,\hat p] = C$ where $C$ is some constant. To determine the value of that constant, you can use the Poisson bracket of the corresponding quantities in the classical theory as motivation, according to the formula $[\hat q,\hat p] = i\hbar \{q,p\}$. For example, in basic quantum mechanics, the commutator of position and momentum is $[\hat x,\hat p] = i\hbar$, because in classical mechanics, $\{x,p\} = 1$. Anticommutators are not directly related to Poisson brackets, but they are a logical extension of commutators. After all, if you can fix the value of $\hat{A}\hat{B} - \hat{B}\hat{A}$ and get a sensible theory out of that, it's natural to wonder what sort of theory you'd get if you fixed the value of $\hat{A}\hat{B} + \hat{B}\hat{A}$ instead. This plays a major role in quantum field theory, where fixing the commutator gives you a theory of bosons and fixing the anticommutator gives you a theory of fermions. | {
"source": [
"https://physics.stackexchange.com/questions/19770",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6336/"
]
} |
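A minimal symbolic sketch of the parallel drawn above, for a single degree of freedom: the Poisson bracket $\{q,p\}=1$ and Hamilton's equations $\dot q=\{q,H\}$, $\dot p=\{p,H\}$, with a harmonic oscillator used purely as a convenient example:

```python
import sympy as sp

# Poisson bracket for one degree of freedom (q, p).
q, p, m, w = sp.symbols('q p m w', positive=True)

def poisson(f, g):
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

print(poisson(q, p))                        # 1, the classical analogue of [x, p] = i*hbar

H = p**2 / (2 * m) + m * w**2 * q**2 / 2    # harmonic oscillator Hamiltonian
print(poisson(q, H))                        # p/m        = dq/dt
print(poisson(p, H))                        # -m*w**2*q  = dp/dt
```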
19,775 | Wikipedia describes many variants of quantum field theory: conformal quantum field theory topological quantum field theory axiomatic/constructive quantum field theory algebraic quantum field theory Are these approaches to the same thing or actually different? | They're variants, different kinds of quantum field theory, but they're not mutually exclusive. The different adjectives you mention separate quantum field theory to "pieces" in different ways. The different sorts of variants you mention are being used and studied by different people, the classification has different purposes, the degree of usefulness and validity is different for the different adjectives, and so on. Conformal quantum field theory is a special subset of quantum field theories that differ by dynamics (the equations that govern the evolution in time), namely by the laws' respect for the conformal symmetry (essentially scaling: only the angles and/or length ratios, and not the absolute length of things, can be directly measured). Conformal field theories have local degrees of freedom and the forces are always long-range forces, which never decrease at infinity faster than a power law. They're omnipresent in both classification of quantum field theories - almost every quantum field theory becomes scale-invariant at long distances - and in the structure of string theory - conformal field theories control the behavior of the world sheets of strings (here, the CFT is meant to contain two-dimensional gravity but the latter carries no local degrees of freedom so it doesn't locally affect the dynamics) as well as boundary physics in the holographic AdS/CFT correspondence (here, CFTs on a boundary of an anti de Sitter spacetime are physically equivalent to a gravitational QFT/string theory defined in the bulk of the anti de Sitter space). Conformal field theories are the most important class among those you mentioned for the practicing physicists who ultimately want to talk about the empirical data but these theories are still very special; generic field theories they study (e.g. the Standard Model) aren't conformal. Topological quantum field theory is one that contains no excitations that may propagate "in the bulk" of the spacetime so it is not appropriate to describe any waves we know in the real world. The characteristic quantity describing a spacetime configuration - the action - remains unchanged under any continuous changes of the fields and shapes. So only the qualitative, topological differences between the configurations matter. Topological quantum field theory (like Chern-Simons theory ) is studied by the very mathematically oriented people and it's useful to classify knots in knot theory and other "combinatorial" things. They're the main reason behind Edward Witten's Fields medal etc. Axiomatic or algebraic (and mostly also "constructive" ) quantum field theory isn't a subset of different "dynamical equations". Instead, it is another approach to define any quantum field theory via axioms etc. That's why it's a passion of mathematicians or extremely mathematically formally oriented physicists and one must add that according to almost all practicing particle physicists, they're obsolete and failed (related) approaches which really can't describe those quantum field theories that have become important. 
In particular, AQFTs of both types start with naive assumptions about the short-distance behavior of theories and aren't really compatible with renormalization and all the lessons physics has taught us about these things. Constructive QFTs are mainly tools to understand the relativistic invariance of a quantum field theory by a specific method. Then there are many special quantum field theories, like the extremely important class of gauge theories etc. They have some dynamics including gauge fields: that's a classification according to the content. QFTs are often classified according to various symmetries (or their absence) which also constrain their dynamical laws: supersymmetric QFTs , gravitational QFTs based on general relativity, theories of supergravity which are QFTs that combine general relativity and supersymmetry, chiral QFTs which are left-right-asymmetric, relativistic QFTs (almost all QFTs that are being talked about in particle physics), lattice gauge theory (gauge theory where the spacetime is replaced by a discrete grid), and many others. Gauge theories may also be divided according to the fate of the gauge field to confining gauge theories , spontaneously broken QFTs , unbroken phases , and others. String field theory is a QFT with infinitely many fields which is designed to be physically equivalent to perturbative string theory in the same spacetime but it only works smoothly for open strings and only in the research of tachyon condensation, it has led to results that were not quite obtained by other general methods of string theory. We also talk about effective quantum field theories which is an approach to interpret many (almost all) quantum field theories as an approximate theory to describe all phenomena at some distance scale (and all longer ones); one remains agnostic about the laws governing the short-distance physics. That's a different classification, one according to the interpretation. Effective field theories don't have to be predictive or consistent up to arbitrarily high energies; they may have a "cutoff energy" above which they break down. It doesn't make much sense to spend too much time by learning dictionary definitions; one must actually learn some quantum field theory and then the relevance or irrelevance and meaning and mutual relationships between the "variants" become more clear. At any rate, it's not true that the classification into adjectives is as trivial as the list of colors, red, green, blue. The different adjectives look at the framework of quantum field theory from very different directions - symmetries that particular quantum field theories (defined with particular equations) respect; number of local excitations; ability to extend the theory to arbitrary length scales; ways to define (all of) them using a rigorous mathematical framework, and others. | {
"source": [
"https://physics.stackexchange.com/questions/19775",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7240/"
]
} |
20,003 | I'm having trouble understanding the simple "planetary" model of the atom that I'm being taught in my basic chemistry course. In particular, I can't see how a negatively charged electron can stay in "orbit" around a positively charged nucleus. Even if the electron actually orbits the nucleus, wouldn't that orbit eventually decay? I can't reconcile the rapidly moving electrons required by the planetary model with the way atoms are described as forming bonds. If electrons are zooming around in orbits, how do they suddenly "stop" to form bonds. I understand that certain aspects of quantum mechanics were created to address these problems, and that there are other models of atoms. My question here is whether the planetary model itself addresses these concerns in some way (that I'm missing) and whether I'm right to be uncomfortable with it. | You are right, the planetary model of the atom does not make sense when one considers the electromagnetic forces involved. The electron in an orbit is accelerating continuously and would thus radiate away its energy and fall into the nucleus. One of the reasons for "inventing" quantum mechanics was exactly this conundrum. The Bohr model was proposed to solve this, by stipulating that the orbits were closed and quantized and no energy could be lost while the electron was in orbit, thus creating the stability of the atom necessary to form solids and liquids. It also explained the lines observed in the spectra from excited atoms as transitions between orbits. If you study further into physics you will learn about quantum mechanics and the axioms and postulates that form the equations whose solutions give exact numbers for what was the first guess at a model of the atom. Quantum mechanics is accepted as the underlying level of all physical forces at the microscopic level, and sometimes quantum mechanics can be seen macroscopically, as with superconductivity , for example. Macroscopic forces, like those due to classical electric and magnetic fields, are limiting cases of the real forces which reign microscopically. | {
"source": [
"https://physics.stackexchange.com/questions/20003",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2540/"
]
} |
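To put a number on "would radiate away its energy and fall into the nucleus": the standard classical, non-relativistic estimate for an electron spiralling in from the Bohr radius under Larmor radiation is $t \approx a_0^3/(4 r_e^2 c)$, i.e. tens of picoseconds. A short sketch of that arithmetic (the formula is the textbook order-of-magnitude estimate, quoted here without its derivation):

```python
import scipy.constants as const

# Classical spiral-in time of a radiating electron starting at the Bohr radius.
a0 = const.physical_constants['Bohr radius'][0]                 # ~5.29e-11 m
r_e = const.physical_constants['classical electron radius'][0]  # ~2.82e-15 m
c = const.c

t_fall = a0**3 / (4 * r_e**2 * c)
print(f"classical infall time ~ {t_fall:.2e} s")                # ~1.6e-11 s
```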
20,034 | From what I've read on thorium reactors, there's enormous benefit to them. Their fuel is abundant enough to power human civilization for centuries, their fission products are relatively short-lived, they're far less prone to catastrophic failure, and they don't produce anything you could feasibly use as a source of material for nuclear weapons. So what technical issues need to be resolved so that thorium reactors become practical and put into wide use? Is it purely engineering issues that need to be overcome? Or are there problems of physics as well? If so, what are the technical problems and what research is occurring to overcome them? If none of the problems that face thorium reactors are insurmountable, then why aren't they the focus of research and development that nuclear fusion is? Are there real environmental issues? (If so, what are they?) | I'm not sure what all you've read on them, but I'll try to clarify at least a few things. I would certainly disagree with several of your assertions. For starters, you say "...they don't produce anything you could feasibly use as a source of material for nuclear weapons." Thorium reactors use thorium as a fertile fuel that transmutes into fissile U-233. While the spent fuel does not contain the same ratios of elements as a uranium fuel cycle, it does indeed contain bomb-worthy isotopes as well as some longer-lived fission and daughter products. In fact, the thorium cycle was used to produce some of the fuel for Operation Teapot in 1955. You say "...they're far less prone to catastrophic failure..." While it may be the case that thorium reactors have traditionally had fewer catastrophic failures than uranium reactors, it is also true that the statistics are too small to make reasonable conclusions as to the reliability of such systems. To my knowledge, no commercial reactors use a thorium fuel cycle. In other words, all of the thorium reactors are one-off, uniquely designed pieces of equipment with well trained and knowledgeable working staff. There are roughly 435 commercial nuclear plants in operation with another 63 under construction. There have been on the order of 20 major nuclear accidents over the years. There are only 15 thorium reactors. Statistically, thorium reactors might have a worse accident rate. There is certainly ongoing research into commercial applications of a thorium fuel cycle. Interestingly, as that article suggests, a thorium cycle requires another isotope to get the reaction going, so there will always be a need for some uranium cycle reactors. Like P3trus said, even outside of India (where the thorium reserves provide good economic incentive) there are people considering thorium. Ultimately, the preference for a uranium fuel cycle is a pragmatic one. The nuclear industry has a great deal of experience with uranium. It's true that there is more thorium than uranium, but uranium is hardly rare. It is sufficiently common, in fact, that there aren't even very many estimates of the size of the reserves. With respect to public opinion, thorium does not offer a tangible difference to uranium other than a change of name. As long as public opinion is against nuclear, that will include thorium. If they turn to support nuclear, the economics still point to uranium. | {
"source": [
"https://physics.stackexchange.com/questions/20034",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4502/"
]
} |
20,071 | Maxwell's equations specify two vector and two scalar (differential) equations. That implies 8 components in the equations. But between vector fields $\vec{E}=(E_x,E_y,E_z)$ and $\vec{B}=(B_x,B_y,B_z)$, there are only 6 unknowns. So we have 8 equations for 6 unknowns. Why isn't this a problem? As far as I know, the answer is basically because the equations aren't actually independent but I've never found a clear explanation. Perhaps the right direction is in this article on arXiv . Apologies if this is a repost. I found some discussions on PhysicsForums but no similar question here. | It isn't a problem because two of the eight equations are constraints and they're not quite independent from the remaining six. The constraint equations are the scalar ones,
$$ {\rm div}\,\,\vec D = \rho, \qquad {\rm div}\,\,\vec B = 0$$
Imagine $\vec D=\epsilon_0\vec E$ and $\vec B=\mu_0\vec H$ everywhere for the sake of simplicity. If these equations are satisfied in the initial state, they will immediately be satisfied at all times. That's because the time derivatives of these non-dynamical equations ("non-dynamical" means that they're not designed to determine time derivatives of fields themselves; they don't really contain any time derivatives) may be calculated from the remaining 6 equations. Just apply ${\rm div}$ on the remaining 6 component equations,
$$ {\rm curl}\,\, \vec E+ \frac{\partial\vec B}{\partial t} = 0, \qquad {\rm curl}\,\, \vec H- \frac{\partial\vec D}{\partial t} = \vec j. $$
When you apply ${\rm div}$, the curl terms disappear because ${\rm div}\,\,{\rm curl} \,\,\vec V\equiv 0$ is an identity and you get
$$\frac{\partial({\rm div}\,\,\vec B)}{\partial t} =0,\qquad
\frac{\partial({\rm div}\,\,\vec D)}{\partial t} =-{\rm div}\,\,\vec j. $$
The first equation implies that ${\rm div}\,\,\vec B$ remains zero if it were zero in the initial state. The second equation may be rewritten using the continuity equation for $\vec j$,
$$ \frac{\partial \rho}{\partial t}+{\rm div}\,\,\vec j = 0$$
(i.e. we are assuming this holds for the sources) to get
$$ \frac{\partial ({\rm div}\,\,\vec D-\rho)}{\partial t} = 0 $$
so ${\rm div}\,\,\vec D-\rho$ also remains zero at all times if it is zero in the initial state. Let me mention that among the 6+2 component Maxwell's equations, 4 of them, those involving $\vec E,\vec B$, may be solved by writing $\vec E,\vec B$ in terms of four components $\Phi,\vec A$. In this language, we are left with the remaining 4 Maxwell's equations only. However, only 3 of them are really independent at each time, as shown above. That's also OK because the four components of $\Phi,\vec A$ are not quite determined: one of these components (or one function) may be changed by the 1-parameter $U(1)$ gauge invariance. | {
"source": [
"https://physics.stackexchange.com/questions/20071",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4551/"
]
} |
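The identity ${\rm div}\,{\rm curl}\,\vec V \equiv 0$, which the argument above leans on, can be checked symbolically for a completely generic smooth field; a small sympy sketch:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

# div(curl V) = 0 for an arbitrary smooth vector field V.
N = CoordSys3D('N')
Vx = sp.Function('Vx')(N.x, N.y, N.z)
Vy = sp.Function('Vy')(N.x, N.y, N.z)
Vz = sp.Function('Vz')(N.x, N.y, N.z)
V = Vx * N.i + Vy * N.j + Vz * N.k

print(sp.simplify(divergence(curl(V))))   # -> 0: the mixed partials cancel
```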
20,187 | I am wondering how fast electrons travel inside of atomic electron orbitals. Surely there is a range of speeds? Is there a minimum speed? I am not asking about electron movement through a conductor. | The state of an electron (or electrons) in the atoms isn't an eigenstate of the velocity (or speed) operator, so the speed isn't sharply determined. However, it's very interesting to make an order-of-magnitude estimate of the speed of electrons in the Hydrogen atom (and it's similar for other atoms). The speed $v$ satisfies
$$ \frac{mv^2}2\sim \frac{e^2}{4\pi\epsilon_0 r}, \qquad mv\sim \frac{\hbar}{r} $$
The first condition is a virial theorem (the kinetic and potential energies are comparable), while the second is the uncertainty principle. The second one tells you $r\sim \hbar / mv$, which can be substituted into the first one (elimination of $r$) to get (let's ignore $1/2$)
$$ mv^2 \sim \frac{e^2 \cdot mv}{4\pi\epsilon_0\hbar},\qquad v \sim \frac{e^2}{4\pi\epsilon_0\hbar c} c = \alpha c $$
so $v/c$, the speed in the units of the speed of light, is equal to the fine-structure constant $\alpha$, approximately $1/137.036$. The smallness of this speed is why the non-relativistic approximation to the Hydrogen atom is so good (although a non-relativistic kinetic energy was assumed from the start): the relativistic corrections are suppressed by higher powers of the fine-structure constant! One could discuss how the speed of inner-shell electrons and valence electrons scales with $Z$ etc. But the speed $v\sim \alpha c$ would still be the key factor in the formula for the speed. | {
"source": [
"https://physics.stackexchange.com/questions/20187",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7402/"
]
} |
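Evaluating the estimate $v\sim\alpha c$ above numerically (CODATA constants via scipy; this is the order-of-magnitude statement made in the answer, not a sharp eigenvalue):

```python
import scipy.constants as const

alpha = const.fine_structure
v = alpha * const.c
print(f"alpha       = 1/{1 / alpha:.3f}")    # ~1/137.036
print(f"v ~ alpha*c = {v:.3e} m/s")          # ~2.19e6 m/s, roughly 0.7% of c
```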
20,259 | What makes the beam of some lasers: visible?
such as the ones used in clubs, or the laser pointers sold on Amazon which, if pointed at the sky, look like a solid visible beam of light crossing it (reminiscent of a lightsaber in Star Wars). invisible?
such as the ones used in pointers for presentations. | As previous answers have stated, the wavelength (or frequency) and intensity of the beam are important, as well as the type and amount of impurities in the air. The beam must be of a wavelength that is visible to humans, and fog or dust scatters the light very strongly so that you can see it. However, even in pure, clean air, you will be able to see a laser beam under certain conditions. This is because light can scatter from air molecules themselves via Rayleigh scattering. Rayleigh scattering has a strong inverse dependence on wavelength, specifically $\lambda^{-4}$, so it will be easier to see with a green, and especially a blue, laser [1]. It also has a scattering angle dependence that goes like $1+\cos^2 \theta$, so it may be easier to see if your viewing angle is very close to the beam [2]. With a 5 mW green laser pointer, Rayleigh scattering is pretty easy to see. I imagine it would be even easier with blue/violet, but I'm not sure, since human eyes are most sensitive at green, so that may tip the balance. A more intense beam, like those used at night clubs or laser light shows, would be very easy to see if the beam were held still, but in those situations the beams are moving around rapidly to produce the light show, so Rayleigh scattering alone wouldn't really let you see much. In situations like night clubs, the scattering from fog produced by fog machines is much more important. You are correct that, in space, because there is no atmosphere and nothing to scatter off of, you wouldn't see any sort of laser beam. [1]: This is also why the sky is blue, incidentally. [2]: DO NOT EVER TRY TO TEST THIS WITH A BEAM POINTED TOWARDS YOU. If you want to try this out, take a laser pointer and hold it near your head (e.g. against your temple) and point it away from you, in the dark. | {
"source": [
"https://physics.stackexchange.com/questions/20259",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4521/"
]
} |
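The $\lambda^{-4}$ and $1+\cos^2\theta$ factors quoted above, evaluated for typical laser-pointer wavelengths (the specific wavelengths are just common examples; perceived brightness also depends on beam power and on the eye's sensitivity curve):

```python
import numpy as np

# Rayleigh scattering strength relative to a 650 nm red beam, all else equal.
wavelengths_nm = {'red 650 nm': 650, 'green 532 nm': 532, 'blue 445 nm': 445}
ref = 650.0**-4
for name, lam in wavelengths_nm.items():
    print(f"{name:>13}: {lam**-4.0 / ref:4.1f} x the Rayleigh scattering of red")

# Angular factor 1 + cos(theta)^2: strongest looking nearly along the beam.
for theta_deg in (0, 45, 90):
    th = np.radians(theta_deg)
    print(f"theta = {theta_deg:3d} deg: 1 + cos^2(theta) = {1 + np.cos(th)**2:.2f}")
```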
20,371 | Recent research at IBM has found a way to store 1 bit of data in 12 atoms. While that is a big accomplishment compared to what we have today, it does seem like a waste to a non-physics eye like me. From this figure on the same page: it looks like we can determine 1 and 0 based on the alignment of magnetic properties of 12 atoms. But why is a smaller unit, like just one atom not good enough? | I don't think that this is a physics restriction, but one of current engineering capability.
As your link points out, using 12 atoms allowed the information to be retained without affecting the information stored next to it.
You will also need enough atoms to allow for the reading and writing of the information without affecting the data next to the bit of interest.
In theory the binary data could be stored in some other characteristic of an individual atom, but we (IBM) currently don't have a way to do this. | {
"source": [
"https://physics.stackexchange.com/questions/20371",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/230/"
]
} |
20,394 | Big bang cosmology, as far as I understand it, says that the universe was super hot and super dense and super small. It looks like that all the current matter, seen and unseen, were compressed to infinitesimal distance, which means it was a black hole. Is the big bang, and our universe expansion in this case, hence is nothing but a black evaporation via Hawking radiation? Are we living inside that primordial black hole explosion? | Here is a copy of an answer I wrote some time ago for the Physics FAQ http://math.ucr.edu/home/baez/physics/Relativity/BlackHoles/universe.html Is the Big Bang a black hole? This question can be made into several more specific questions with different answers. Why did the universe not collapse and form a black hole at the beginning? Sometimes people find it hard to understand why the Big Bang is not a black hole. After all, the density of matter in the first fraction of a second was much higher than that found in any star, and dense matter is supposed to curve spacetime strongly. At sufficient density there must be matter contained within a region smaller than the Schwarzschild radius for its mass. Nevertheless, the Big Bang manages to avoid being trapped inside a black hole of its own making and paradoxically the space near the singularity is actually flat rather than curving tightly. How can this be? The short answer is that the Big Bang gets away with it because it is expanding rapidly near the beginning and the rate of expansion is slowing down. Space can be flat even when spacetime is not. Spacetime's curvature can come from the temporal parts of the spacetime metric which measures the deceleration of the expansion of the universe. So the total curvature of spacetime is related to the density of matter, but there is a contribution to curvature from the expansion as well as from any curvature of space. The Schwarzschild solution of the gravitational equations is static and demonstrates the limits placed on a static spherical body before it must collapse to a black hole. The Schwarzschild limit does not apply to rapidly expanding matter. What is the distinction between the Big Bang model and a black hole? The standard Big Bang models are the Friedmann-Robertson-Walker (FRW) solutions of the gravitational field equations of general relativity. These can describe open or closed universes. All of these FRW universes have a singularity at their beginning, which represents the Big Bang. Black holes also have singularities. Furthermore, in the case of a closed universe no light can escape, which is just the common definition of a black hole. So what is the difference? The first clear difference is that the Big Bang singularity of the FRW models lies in the past of all events in the universe, whereas the singularity of a black hole lies in the future. The Big Bang is therefore more like a "white hole": the time-reversed version of a black hole. According to classical general relativity white holes should not exist, since they cannot be created for the same (time-reversed) reasons that black holes cannot be destroyed. But this might not apply if they have always existed. But the standard FRW Big Bang models are also different from a white hole. A white hole has an event horizon that is the reverse of a black hole event horizon. Nothing can pass into this horizon, just as nothing can escape from a black hole horizon. Roughly speaking, this is the definition of a white hole. 
Notice that it would have been easy to show that the FRW model is different from a standard black- or white hole solution such as the static Schwarzschild solutions or rotating Kerr solutions, but it is more difficult to demonstrate the difference from a more general black- or white hole. The real difference is that the FRW models do not have the same type of event horizon as a black- or white hole. Outside a white hole event horizon there are world lines that can be traced back into the past indefinitely without ever meeting the white hole singularity, whereas in an FRW cosmology all worldlines originate at the singularity. Even so, could the Big Bang be a black- or white hole? In the previous answer I was careful only to argue that the standard FRW Big Bang model is distinct from a black- or white hole. The real universe may be different from the FRW universe, so can we rule out the possibility that it is a black- or white hole? I am not going to enter into such issues as to whether there was actually a singularity, and I will assume here that general relativity is correct. The previous argument against the Big Bang's being a black hole still applies. The black hole singularity always lies on the future light cone, whereas astronomical observations clearly indicate a hot Big Bang in the past. The possibility that the Big Bang is actually a white hole remains. The major assumption of the FRW cosmologies is that the universe is homogeneous and isotropic on large scales. That is, it looks the same everywhere and in every direction at any given time. There is good astronomical evidence that the distribution of galaxies is fairly homogeneous and isotropic on scales larger than a few hundred million light years. The high level of isotropy of the cosmic background radiation is strong supporting evidence for homogeneity. However, the size of the observable universe is limited by the speed of light and the age of the universe. We see only as far as about ten to twenty thousand million light years, which is about 100 times larger than the scales on which structure is seen in galaxy distributions. Homogeniety has always been a debated topic. The universe itself may well be many orders of magnitude larger than what we can observe, or it may even be infinite. Astronomer Martin Rees compares our view with looking out to sea from a ship in the middle of the ocean. As we look out beyond the local disturbances of the waves, we see an apparently endless and featureless seascape. From a ship the horizon will be only a few miles away, and the ocean may stretch for hundreds of miles before there is land. When we look out into space with our largest telescopes, our view is also limited to a finite distance. No matter how smooth it seems, we cannot assume that it continues like that beyond what we can see. So homogeneity is not certain on scales much larger than the observable universe. We might argue in favour of it on philosophical grounds, but we cannot prove it. In that case, we must ask if there is a white hole model for the universe that would be as consistent with observations as the FRW models. Some people initially think that the answer must be no, because white holes (like black holes) produce tidal forces that stretch and compress in different directions. Hence they are quite different from what we observe. This is not conclusive, because it applies only to the spacetime of a black hole in the absence of matter. Inside a star the tidal forces can be absent. 
A white hole model that fits cosmological observations would have to be the time reverse of a star collapsing to form a black hole. To a good approximation, we could ignore pressure and treat it like a spherical cloud of dust with no internal forces other than gravity. Stellar collapse has been intensively studied since the seminal work of Snyder and Oppenheimer in 1939 and this simple case is well understood. It is possible to construct an exact model of stellar collapse in the absence of pressure by gluing together any FRW solution inside the spherical star and a Schwarzschild solution outside. Spacetime within the star remains homogeneous and isotropic during the collapse. It follows that the time reversal of this model for a collapsing sphere of dust is indistinguishable from the FRW models if the dust sphere is larger than the observable universe. In other words, we cannot rule out the possibility that the universe is a very large white hole. Only by waiting many billions of years until the edge of the sphere comes into view could we know. It has to be admitted that if we drop the assumptions of homogeneity and isotropy then there are many other possible cosmological models, including many with non-trivial topologies. This makes it difficult to derive anything concrete from such theories. But this has not stopped some brave and imaginative cosmologists thinking about them. One of the most exciting possibilities was considered by C. Hellaby in 1987, who envisaged the universe being created as a string of beads of isolated while holes that explode independently and coalesce into one universe at a certain moment. This is all described by a single exact solution of general relativity. There is one final twist in the answer to this question. It has been suggested by Stephen Hawking that once quantum effects are accounted for, the distinction between black holes and white holes might not be as clear as it first seems. This is due to "Hawking radiation", a mechanism by which black holes can lose matter. (See the relativity FAQ article on Hawking radiation.) A black hole in thermal equilibrium with surrounding radiation might have to be time symmetric, in which case it would be the same as a white hole. This idea is controversial, but if true it would mean that the universe could be both a white hole and a black hole at the same time. Perhaps the truth is even stranger. In other words, who knows? | {
"source": [
"https://physics.stackexchange.com/questions/20394",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4521/"
]
} |
20,437 | Tensors are mathematical objects that are needed in physics to define certain quantities. I have a couple of questions regarding them that need to be clarified: Are matrices and second rank tensors the same thing? If the answer to 1 is yes, then can we think of a 3rd rank tensor as an ordered set of numbers in 3D lattice (just in the same way as we can think of a matrix as an ordered set of numbers in 2D lattice)? | Matrices are often first introduced to students to represent linear transformations taking vectors from $\mathbb{R}^n$ and mapping them to vectors in $\mathbb{R}^m$. A given linear transformation may be represented by infinitely many different matrices depending on the basis vectors chosen for $\mathbb{R}^n$ and $\mathbb{R}^m$, and a well-defined transformation law allows one to rewrite the linear operation for each choice of basis vectors. Second rank tensors are quite similar, but there is one important difference that comes up for applications in which non-Euclidean (non-flat) distance metrics are considered, such as general relativity. 2nd rank tensors may map not just $\mathbb{R}^n$ to $\mathbb{R}^m$, but may also map between the dual spaces of either $\mathbb{R}^n$ or $\mathbb{R}^m$. The transformation law for tensors is similar to the one first learned for linear operators, but allows for the added flexibility of allowing the tensor to switch between acting on dual spaces or not. Note that for Euclidean distance metrics, the dual space and the original vector space are the same, so this distinction doesn't matter in that case. Moreover, 2nd rank tensors can act not just as maps from one vector space to another. The operation of tensor "contraction" (a generalization of the dot product for vectors) allows 2nd rank tensors to act on other second rank tensors to produce a scalar. This contraction process is generalizable for higher dimensional tensors, allowing for contractions between tensors of varying ranks to produce products of varying ranks. To echo another answer posted here, a 2nd rank tensor at any time can indeed be represented by a matrix, which simply means rows and columns of numbers on a page. What I'm trying to do is offer a distinction between matrices as they are first introduced to represent linear operators from vector spaces, and matrices that represent the slightly more flexible objects I've described | {
"source": [
"https://physics.stackexchange.com/questions/20437",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4521/"
]
} |
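A small numpy sketch of the point about bases made above: the same linear map is represented by different matrices in different bases, related by $A' = P^{-1}AP$, and both representations act identically on correspondingly transformed coordinates (random map and basis, arbitrary seed):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))          # the map in the standard basis
P = rng.normal(size=(3, 3))          # columns = new basis vectors (generically invertible)
A_new = np.linalg.inv(P) @ A @ P     # the same map, expressed in the new basis

v = rng.normal(size=3)               # a vector's coordinates in the standard basis
v_new = np.linalg.inv(P) @ v         # the same vector, new coordinates

# Apply the map in each basis and compare back in the standard basis.
assert np.allclose(A @ v, P @ (A_new @ v_new))
print("one linear map, two different matrix representations")
```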
20,460 | What are the key differences between gauge pressure and absolute pressure? Are there any other forms of pressure? | This is covered well on Wikipedia; here is the short version. Absolute pressure is zero-referenced against a perfect vacuum, so it is equal to gauge pressure plus atmospheric pressure. Gauge pressure is zero-referenced against ambient air pressure, so it is equal to absolute pressure minus atmospheric pressure. Negative signs are usually omitted. Differential pressure is the difference in pressure between two points. | {
"source": [
"https://physics.stackexchange.com/questions/20460",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7492/"
]
} |
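The relations above written as a couple of trivial helper functions (the 101325 Pa standard atmosphere is an assumed ambient value; a real gauge is referenced to the local ambient pressure):

```python
# Gauge vs absolute pressure, in pascals.
ATM = 101_325.0  # assumed standard atmosphere

def absolute_from_gauge(p_gauge, p_ambient=ATM):
    return p_gauge + p_ambient

def gauge_from_absolute(p_abs, p_ambient=ATM):
    return p_abs - p_ambient

print(absolute_from_gauge(220_000))   # a 220 kPa gauge reading -> ~321 kPa absolute
print(gauge_from_absolute(101_325))   # ambient air -> 0 gauge
```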
20,477 | The electromagnetic force on a charge $ e $ is $$ \vec F = e(\vec E + \vec v\times \vec B),$$ the Lorentz force. But, is this a separate assumption added to the full Maxwell's equations? (the result of some empirical evidence?) Or is it somewhere hidden in Maxwell's equations? | Maxwell's equations do not contain any information about the effect of fields on charges. One can imagine an alternate universe where electric and magnetic fields create no forces on any charges, yet Maxwell's equations still hold. ( $ \vec{E} $ and $ \vec{B} $ would be unobservable and totally pointless to calculate in this universe, but you could still calculate them!) So you can't derive the Lorentz force law from Maxwell's equations alone. It is a separate law. However... Some people count a broad version of "Faraday's law" as part of "Maxwell's equations". The broad version of Faraday's law is "EMF = derivative of flux" (as opposed to the narrow version $ \nabla\times\vec E = -\partial_t \vec B $ ). EMF is defined as the energy gain of charges traveling through a circuit, so this law gives information about forces on charges, and I think you can derive the Lorentz force starting from here. (By comparison, $ \nabla\times\vec E = -\partial_t \vec B $ talks about electric and magnetic fields, but doesn't explicitly say how or whether those fields affect charges.) Some people take the Lorentz force law to be essentially the definition of electric and magnetic fields, in which case it's part of the foundation on which Maxwell's equations are built. If you assume the electric force part of the Lorentz force law ( $ \vec F = q \vec E $ ), AND you assume special relativity, you can derive the magnetic force part ( $ \vec F = q \vec v \times \vec B $ ) from Maxwell's equations, because an electric force in one frame is magnetic in other frames. The reverse is also true: If you assume the magnetic force formula and you assume special relativity, then you can derive the electric force formula. If you assume the formulas for the energy and/or momentum of electromagnetic fields, then conservation of energy and/or momentum implies that the fields have to generate forces on charges, and presumably you can derive the exact Lorentz force law. | {
"source": [
"https://physics.stackexchange.com/questions/20477",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5885/"
]
} |
20,714 | What is your interpretation of the Laplace operator? When evaluating the Laplacian of some scalar field at a given point one can get a value. What does this value tell us about the field or its behaviour in the given spot? I can grasp the meaning of gradient and divergence. But viewing the Laplace operator as the divergence of the gradient gives me the interpretation "sources of gradient", which to be honest doesn't make sense to me. It seems a bit easier to interpret the Laplacian in certain physical situations or to interpret Laplace's equation, so that might be a good place to start. Or it might be misleading. I seek an interpretation that would be as universal as the gradient's interpretation seems to me: applicable, correct and understandable on any scalar field. | The Laplacian measures what you could call the « curvature » or stress of the field. It tells you how much the value of the field differs from its average value taken over the surrounding points. This is because it is the divergence of the gradient: it tells you how much the rate of change of the field differs from the kind of steady variation you expect in a divergence-free flow. Look at one dimension: the Laplacian is simply $\partial^2/\partial x^2$, i.e., the curvature. When this is zero, the function is linear so its value at the centre of any interval is the average of the extremes. In three dimensions, if the Laplacian is zero, the function is harmonic and satisfies the averaging principle. See http://en.wikipedia.org/wiki/Harmonic_function#The_mean_value_property . If not, the Laplacian measures its deviation from this. | {
"source": [
"https://physics.stackexchange.com/questions/20714",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12727/"
]
} |
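The "deviation from the neighbourhood average" reading above can be seen directly on a grid: the 5-point Laplacian equals $(4/h^2)\times$(mean of the four neighbours minus the centre value). A small sketch with an arbitrary test function:

```python
import numpy as np

def f(x, y):
    return x**3 + y**2          # exact Laplacian: 6*x + 2

x0, y0, h = 0.3, 0.4, 1e-3
centre = f(x0, y0)
neighbours = [f(x0 + h, y0), f(x0 - h, y0), f(x0, y0 + h), f(x0, y0 - h)]
lap_from_average = 4.0 / h**2 * (np.mean(neighbours) - centre)
print(lap_from_average, "vs exact", 6 * x0 + 2)   # ~3.8 for both

# A harmonic function (zero Laplacian) equals its neighbourhood average,
# which is the mean-value property mentioned above.
def g(x, y):
    return np.sin(x) * np.exp(y)
print(np.mean([g(x0 + h, y0), g(x0 - h, y0), g(x0, y0 + h), g(x0, y0 - h)]) - g(x0, y0))
```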
20,797 | For the following quantities respectively, could someone write down the common definitions, their meaning, the field of study in which one would typically find these under their actual name, and most foremost the associated abuse of language as well as difference and correlation (no pun intended): Propagator Two-point function Wightman function Green's function Kernel Linear Response function Correlation function Covariance function Maybe including side notes regarding the distinction between Covariance , Covariance function and Cross-Covariance , the pair correlation function for different observables, relations to the autocorrelation function , the $n$- point function , the Schwinger function , the relation to transition amplitudes , retardation and related adjectives for Greens functions and/or propagators, the Heat-Kernel and its seemingly privileged position, the spectral density , spectra and the resolvent . Edit: I'd still like to hear about the " Correlation fuction interpretation" of the quantum field theoretical framework. Can transition amplitudes be seen as a sort of auto-correlation? Like... such that the QFT dynamics at hand just determine the structure of the temporal and spatial overlaps? | The main distinction you want to make is between the Green function and the kernel. (I prefer the terminology "Green function" without the 's. Imagine a different name, say, Feynman. People would definitely say the Feynman function, not the Feynman's function. But I digress...) Start with a differential operator, call it $L$. E.g., in the case of Laplace's equation, then $L$ is the Laplacian $L = \nabla^2$. Then, the Green function of $L$ is the solution of the inhomogenous differential equation
$$
L_x G(x, x^\prime) = \delta(x - x^\prime)\,.
$$
We'll talk about its boundary conditions later on. The kernel is a solution of the homogeneous equation
$$
L_x K(x, x^\prime) = 0\,,
$$
subject to a Dirichlet boundary condition $\lim_{x \rightarrow x^\prime}K(x,x^\prime) = \delta (x-x^\prime)$, or Neumann boundary condition $\lim_{x \rightarrow x^\prime} \partial K(x,x^\prime) = \delta(x-x^\prime)$. So, how do we use them? The Green function solves linear differential equations with driving terms. $L_x u(x) = \rho(x)$ is solved by
$$
u(x) = \int G(x,x^\prime)\rho(x^\prime)dx^\prime\,.
$$
Whichever boundary conditions we want to impose on the solution $u$ specify the boundary conditions we impose on $G$. For example, a retarded Green function propagates influence strictly forward in time, so that $G(x,x^\prime) = 0$ whenever $x^0 < x^{\prime\,0}$. (The 0 here denotes the time coordinate.) One would use this if the boundary condition on $u$ was that $u(x) = 0$ far in the past, before the source term $\rho$ "turns on." The kernel solves boundary value problems. Say we're solving the equation $L_x u(x) = 0$ on a manifold $M$, and specify $u$ on the boundary $\partial M$ to be $v$. Then,
$$
u(x) = \int_{\partial M} K(x,x^\prime)v(x^\prime)dx^\prime\,.
$$
In this case, we're using the kernel with Dirichlet boundary conditions. For example, the heat kernel is the kernel of the heat equation, in which
$$
L = \frac{\partial}{\partial t} - \nabla_{R^d}^2\,.
$$
We can see that
$$
K(x,t; x^\prime, t^\prime) = \frac{1}{[4\pi (t-t^\prime)]^{d/2}}\,e^{-|x-x^\prime|^2/4(t-t^\prime)},
$$
solves $L_{x,t} K(x,t;x^\prime,t^\prime) = 0$ and moreover satisfies
$$
\lim_{t \rightarrow t^\prime} \, K(x,t;x^\prime,t^\prime) = \delta^{(d)}(x-x^\prime)\,.
$$
(We must be careful to consider only $t > t^\prime$ and hence also take a directional limit.) Say you're given some shape $v(x)$ at time $0$ and want to "melt" it according to the heat equation. Then later on, this shape has become
$$
u(x,t) = \int_{R^d} K(x,t;x^\prime,0)v(x^\prime)d^dx^\prime\,.
$$
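A quick numerical illustration of this "melting" (illustrative only; one spatial dimension, so $d = 1$ in the kernel above):
```python
import numpy as np

def K(x, xp, t):
    # 1D heat kernel: exp(-(x - x')^2 / 4t) / sqrt(4 pi t)
    return np.exp(-(x - xp) ** 2 / (4 * t)) / np.sqrt(4 * np.pi * t)

x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
v = np.where(np.abs(x) < 1, 1.0, 0.0)          # initial shape: a box of height 1

t = 0.5
u = np.array([np.sum(K(xi, x, t) * v) * dx for xi in x])  # u = int K v dx'

print(np.sum(v) * dx, np.sum(u) * dx)  # total "heat" is conserved (both about 2.0)
print(v.max(), round(u.max(), 3))      # the peak drops below 1: the box has melted
```
The total heat is conserved while the sharp edges smooth out, which is exactly the melting described above.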
So in this case, the boundary was the time-slice at $t^\prime = 0$. Now for the rest of them. Propagator is sometimes used to mean Green function, sometimes used to mean kernel. The Klein-Gordon propagator is a Green function, because it satisfies $L_x D(x,x^\prime) = \delta(x-x^\prime)$ for $L_x = \partial_x^2 + m^2$. The boundary conditions specify the difference between the retarded, advanced and Feynman propagators. (See? Not Feynman's propagator) In the case of a Klein-Gordon field, the retarded propagator is defined as
$$
D_R(x,x^\prime) = \Theta(x^0 - x^{\prime\,0})\,\langle0| \varphi(x) \varphi(x^\prime) |0\rangle\,
$$
where $\Theta(x) = 1$ for $x > 0$ and $= 0$ otherwise. The Wightman function is defined as
$$
W(x,x^\prime) = \langle0| \varphi(x) \varphi(x^\prime) |0\rangle\,,
$$
i.e. without the time ordering constraint. But guess what? It solves $L_x W(x,x^\prime) = 0$. It's a kernel. The difference is that $\Theta$ out front, which becomes a Dirac $\delta$ upon taking one time derivative. If one uses the kernel with Neumann boundary conditions on a time-slice boundary, the relationship
$$
G_R(x,x^\prime) = \Theta(x^0 - x^{\prime\,0}) K(x,x^\prime)
$$
is general. In quantum mechanics, the evolution operator
$$
U(x,t; x^\prime, t^\prime) = \langle x | e^{-i (t-t^\prime) \hat{H}} | x^\prime \rangle
$$
is a kernel. It solves the Schroedinger equation and equals $\delta(x - x^\prime)$ for $t = t^\prime$. People sometimes call it the propagator. It can also be written in path integral form. Linear response and impulse response functions are Green functions. These are all two-point correlation functions. "Two-point" because they're all functions of two points in space(time). In quantum field theory, statistical field theory, etc. one can also consider correlation functions with more field insertions/random variables. That's where the real work begins! | {
"source": [
"https://physics.stackexchange.com/questions/20797",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5374/"
]
} |
20,881 | LEP II eliminated the Higgs up to 114.5GeV. If it had been run for longer could it have detected a Higgs at 125GeV? I Googled for this without any luck, though I did find a comment that LEP II topped out at 209GeV collision energy, so it seems as though production of a 125GeV Higgs would have been possible. If so, how much longer would it have had to run? | The LEP experiment's limits on the Higgs mass were set by looking for a process where the experiment would have produced a Higgs boson together with a Z boson. The highest energy they achieved for the electron-positron pair which annihilated to make Z,Higgs was 209 GeV, and that was only achieved in the last months of the experiment. Since the Z boson mass is 91 GeV, the highest energy Higgs boson which could be produced this way would have a mass of 209-91=118 GeV. Some of the energy is always lost to getting the Z and Higgs to move apart from each other, so in practice the limit they could achieve was a little lower than this, 114 GeV. By running much longer and accumulating statistics they could have extended their reach a little bit, perhaps to 116 GeV; but not to 124 GeV. That could only have been achieved by significantly increasing the energy of the beams -- which I believe they had already pushed as far as they could. | {
"source": [
"https://physics.stackexchange.com/questions/20881",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1325/"
]
} |
20,947 | It is known that a superconductor is a material with zero electrical resistance. My question is: is it exactly zero, a theoretical zero, or for practical realistic reasons is it effectively zero? | Physics theory and experimental reality have something like a mathematical epsilon delta relationship, imo. Here is a review of the matter. From the introduction in the PDF of the paper Resistance in Superconductors: The ability of a wire to carry an electrical current with no apparent dissipation is doubtless the most dramatic property of the superconducting state. Under favorable conditions, the electrical resistance of a superconducting wire can be very low indeed. Mathematical models predict lifetimes that far exceed the age of the universe for sufficiently thick wires under appropriate conditions. In one experiment, a superconducting ring was observed to carry a persistent current for more than a year without measurable decay, with an upper bound for the decay rate of a part in 10^5 in the course of a year. However, in other circumstances, as for sufficiently
thin wires or films, or in the presence of penetrating strong magnetic fields, non-zero resistances are observed. Some experimental plots are included. | {
"source": [
"https://physics.stackexchange.com/questions/20947",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4521/"
]
} |
21,319 | The event horizon of a black hole is where gravity is such that not even light can escape. This is also the point at which, as I understand it, time dilation according to Einstein will be infinite for a far-away observer. If this is the case how can anything ever fall into a black hole? In my thought experiment I am in a spaceship with a powerful telescope that can detect light at a wide range of wavelengths. I have it focused on the black hole and watch as a large rock approaches the event horizon. Am I correct in saying that from my far-away-position the rock would freeze outside the event horizon and would never pass it? If this is the case how can a black hole ever consume any material, let alone grow to millions of solar masses? If I were able to train the telescope onto the black hole for millions of years would I still see the rock at the edge of the event horizon? I am getting ready for the response that the object would slowly fade. Why would it slowly fade and if it would how long would this fading take? If it is going to red shift at some point would the red shifting not slow down to a standstill? This question has been bugging me for years! OK - just an edit based on responses so far. Again, please keep thinking from an observer's point of view. If observers see objects slowly fade and slowly disappear as they approach the event horizon would that mean that over time the event horizon would be "lumpy" with objects invisible, but not passed through? We should be able to detect the "lumpiness", should we not? | It is true that, from an outside perspective, nothing can ever pass the event horizon. I will attempt to describe the situation as best I can, to the best of my knowledge. First, let's imagine a classical black hole. By "classical" I mean a black-hole solution to Einstein's equations, which we imagine not to emit Hawking radiation (for now). Such an object would persist for ever. Let's imagine throwing a clock into it. We will stand a long way from the black hole and watch the clock fall in. What we notice as the clock approaches the event horizon is that it slows down compared to our clock. In fact its hands will asymptotically approach a certain time, which we might as well call 12:00. The light from the clock will also slow down, becoming red-shifted quite rapidly towards the radio end of the spectrum. Because of this red shift, and because we can only ever see photons emitted by the clock before it struck twelve, it will rapidly become very hard to detect. Eventually it will get to the point where we'd have to wait billions of years in between photons. Nevertheless, as you say, it is always possible in principle to detect the clock, because it never passes the event horizon. I had the opportunity to chat to a cosmologist about this subject a few months ago, and what he said was that this red-shifting towards undetectability happens very quickly. (I believe the "no hair theorem" provides the justification for this.) He also said that the black-hole-with-an-essentially-undetectable-object-just-outside-its-event-horizon is a very good approximation to a black hole of a slightly larger mass. (At this point I want to note in passing that any "real" black hole will emit Hawking radiation until it eventually evaporates away to nothing. Since our clock will still not have passed the event horizon by the time this happens, it must eventually escape - although presumably the Hawking radiation interacts with it on the way out.
Presumably, from the clock's perspective all those billions of years of radiation will appear in the split-second before 12:00, so it won't come out looking much like a clock any more. To my mind the resolution to the black hole information paradox lies along this line of reasoning and not in any specifics of string theory. But of course that's just my opinion.) Now, this idea seems a bit weird (to me and I think to you as well) because if nothing ever passes the event horizon, how can there ever be a black hole in the first place? My friendly cosmologist's answer boiled down to this: the black hole itself is only ever an approximation . When a bunch of matter collapses in on itself it very rapidly converges towards something that looks like a black-hole solution to Einstein's equations, to the point where to all intents and purposes you can treat it as if the matter is inside the event horizon rather than outside it. But this is only ever an approximation because from our perspective none of the infalling matter can ever pass the event horizon. | {
"source": [
"https://physics.stackexchange.com/questions/21319",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7795/"
]
} |
21,336 | What determines the color of light -- is it the wavelength of the light or the frequency? (i.e. If you put light through a medium other than air, in order to keep its color the same, which one would you need to keep constant: the wavelength or the frequency?) | For almost all detectors, it is actually the energy of the photon that is the attribute that is detected and the energy is not changed by a refractive medium. So the "color" is unchanged by the medium... | {
"source": [
"https://physics.stackexchange.com/questions/21336",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/853/"
]
} |
21,678 | I read this answer a while ago, and while thinking about $\nabla$, I realized something. Since the cross product can be written as a determinant, in higher dimensions we require extra vector inputs. IIRC it's called the "wedge product" in higher dimensions. Alright, how does this work when we generalize Maxwell to higher dimensions? Curl can be written (abuse of notation, yes) as a cross product with $\nabla$. But, to generalise it to higher dimensions, we need multiple inputs. We need something like $\nabla_4(\mathbf{B_1},\mathbf{B_2})$ in four dimensions, and so on. So we have two ways to get out of this: We can either use a different way to write curl in multiple dimensions (The wikipedia page has stuff on this which I don't understand), or there are more than one $E$ and $B$ fields in higher dimensions. So which is it? How are Maxwell's laws generalised to higher dimensions? Just a note: I never understood the linked answer after the first sentence (didn't know enough), so if there's something obvious there that answers this question, I missed it. I know nothing of higher-dimensional analysis, so if complex notation is going to be unavoidable for the multiple dimensions, I'd be fine if you showed me what happens in four dimensions. | Qmechanic's answer has the full details, but I figure it might be useful to break this down in a little more detail. The proper way to generalize Maxwell's equations to higher-dimensional spaces is to use the field tensor $F^{\mu\nu}$. In our normal 3+1D space, it basically looks like this: $$F = \begin{pmatrix}0 & E_1 & E_2 & E_3 \\ -E_1 & 0 & B_{12} & B_{13} \\ -E_2 & -B_{12} & 0 & B_{23} \\ -E_3 & -B_{13} & -B_{23} & 0\end{pmatrix}$$ If you apply a rotation matrix corresponding to a 3D spatial rotation to this tensor, you will find that it mixes up the three components $E_1$, $E_2$, and $E_3$, and it separately mixes up the three components $B_{12}$, $B_{13}$, and $B_{23}$. The way in which these components mix is consistent with $E$ being a vector in 3D space and with $B$ being a separate vector in 3D space; for example, if you rotate by $\phi$ around the $z$ axis, you'll find that $E'_1 = E_1\cos\phi - E_2\sin\phi$ and $B_{23}' = B_{23}\cos\phi + B_{13}\sin\phi$. Doing this for a few rotations allows you to conclude that $(E_1,E_2,E_3)$ are respectively the $(x,y,z)$ components of $\vec{E}$, and that $(B_{23},-B_{13},B_{12})$ are the $(x,y,z)$ components of $\vec{B}$. However, if you apply a Lorentz boost (a change in velocity), which is just a rotation that incorporates the time dimension as well as the three spatial dimensions, you will find that it mixes the components of $\vec{E}$ with the components of $\vec{B}$. Nothing in the theory of normal 3D vectors allows this to happen. So if you didn't know about $F$, this would be your first clue that $\vec{E}$ and $\vec{B}$ are not really vectors, but instead are part of some more complex structure. This is in fact exactly what happened in the 1830s-ish, when Michael Faraday discovered that moving a magnet past a wire would induce an electrical current in the wire. The Lorentz boost corresponds to moving the magnet, and the rotation of components of $\vec{B}$ into components of $\vec{E}$ corresponds to the induction of an electric field by the moving magnet. 
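A small numerical check of that mixing (purely illustrative; the exact signs depend on metric and boost conventions, which are simply fixed by hand below, but the qualitative conclusion does not):
```python
import numpy as np

# Field tensor laid out as in the matrix above: F[0, i] = E_i, F[1, 2] = B_12, etc.
E1, E2, E3 = 0.0, 0.0, 0.0
B12, B13, B23 = 1.0, 0.0, 0.0                  # start with a purely "magnetic" field
F = np.array([[ 0.0,  E1,   E2,   E3 ],
              [-E1,   0.0,  B12,  B13],
              [-E2,  -B12,  0.0,  B23],
              [-E3,  -B13, -B23,  0.0]])

# Boost along x with speed v (units with c = 1); F transforms as F' = L F L^T.
v = 0.6
g = 1.0 / np.sqrt(1.0 - v**2)
L = np.array([[ g,   -g*v, 0.0, 0.0],
              [-g*v,  g,   0.0, 0.0],
              [ 0.0,  0.0, 1.0, 0.0],
              [ 0.0,  0.0, 0.0, 1.0]])

Fp = L @ F @ L.T
print(Fp[0, 2])   # nonzero: an E_2 component has appeared purely from B_12
print(Fp[1, 2])   # the B_12 component itself has changed strength
```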
Faraday and the others who capitalized on his work later in that century knew enough to recognize that $\vec{E}$ and $\vec{B}$ were related somehow, in a way that ordinary vectors weren't, but it wasn't until the introduction of special relativity that anyone figured out the actual mathematical structure that both $\vec{E}$ and $\vec{B}$ are part of (namely, the antisymmetric tensor $F$). Anyway, once you know what $F$ really looks like, it's straightforward to generalize it to higher dimensions. In 4+1D, for example, $$F = \begin{pmatrix}0 & E_1 & E_2 & E_3 & E_4 \\ -E_1 & 0 & B_{12} & B_{13} & B_{14} \\ -E_2 & -B_{12} & 0 & B_{23} & B_{24} \\ -E_3 & -B_{13} & -B_{23} & 0 & B_{34} \\ -E_4 & -B_{14} & -B_{24} & -B_{34} & 0\end{pmatrix}$$ This has 4 elements of an electric nature, but 6 of a magnetic nature. If you apply a 4D spatial rotation matrix to this, you will still find that the 4 components $E_1, E_2, E_3, E_4$ mix in the way you expect of a 4D vector, but the 6 magnetic components don't. So if we lived in a 4+1D universe, it would be obvious that the magnetic field is not a vector, if nothing else because it has more elements than a 4+1D spatial vector does. OK, so what about Maxwell's equations? Well, it turns out that you can express the cross product in three dimensions using the antisymmetric tensor $\epsilon^{\alpha\beta\gamma}$, which is defined component-wise as follows: +1 if $\alpha\beta\gamma$ is a cyclic permutation of $123$ -1 if it's a cyclic permutation of $321$ 0 if any two of the indices are equal Same goes for the curl; you can write the curl operator in tensor form as $$\bigl(\vec\nabla\times\vec{V}\bigr)^\alpha = \epsilon^{\alpha\beta\gamma}\frac{\partial V_\beta}{\partial x^\gamma}$$ using the Einstein summation convention. Basically, when you take the curl of a vector, you're constructing all possible antisymmetric combinations of the vector, the derivative operator, and the directional unit vectors: $$\begin{matrix} \hat{x}_1 \frac{\partial V_2}{\partial x^3} & -\hat{x}_1 \frac{\partial V_3}{\partial x^2} & \hat{x}_2 \frac{\partial V_3}{\partial x^1} & -\hat{x}_2 \frac{\partial V_1}{\partial x^3} & \hat{x}_3 \frac{\partial V_1}{\partial x^2} & -\hat{x}_3 \frac{\partial V_2}{\partial x^1}\end{matrix}$$ The antisymmetric tensor $\epsilon$ is easy to generalize to additional dimensions; you just add on an additional index per dimension, and keep the same rule for assigning components in terms of permutations of indices. However, Maxwell's equations actually involve two different curls, $\vec\nabla\times\vec{E}$ and $\vec\nabla\times\vec{B}$. Since the electric and magnetic fields don't generalize to higher-dimensional spaces in the same way, it stands to reason that their curls may not either. Let's look at the "magnetic curl" first. The magnetic field generalizes to higher dimensions as an antisymmetric piece of a tensor, so we should write its curl as an operation on that antisymmetric piece of a tensor. 
Start with the tensor-notation cross product rule, $$\bigl(\vec\nabla\times\vec{B}\bigr)^\alpha = \epsilon^{\alpha\beta\gamma}\frac{\partial B_\beta}{\partial x^\gamma}$$ and put in the following identity which expresses the components of $\vec{B}$ in terms of components of $F$, $$B_\beta = -\frac{1}{2}\epsilon_{\beta\mu\nu}F^{\mu\nu}$$ (here the indices range over values 1 to 3), and after some simplifications you get $$\bigl(\vec\nabla\times\vec{B}\bigr)^\alpha = \frac{\partial F^{\alpha\mu}}{\partial x^\mu}$$ So the curl of something that can be expressed an antisymmetric piece of a tensor is really not a curl at all! That makes it very easy to generalize: the equation $$\frac{\partial F^{\alpha\beta}}{\partial x^{\beta}} = \mu J^{\alpha}$$ gives you Ampère's law in any number of dimensions $N$, if you just let $\alpha$ and $\beta$ range from $1$ to $N$. (Conveniently, if you let $\alpha$ be equal to zero, you get Gauss's law.) Now what about the "electric curl"? Well, the electric field generalizes to higher dimensions as a vector (as long as you ignore Lorentz boosts), so it's not really an antisymmetric tensor - at least, the components of $\vec{E}$ don't form a square block of $F$ the way $\vec{B}$ did. But remember, we do have that equation that related $B_\beta$ to $F^{\mu\nu}$. You can actually flip that around and use it to define an antisymmetric tensor that will contain the components of $\vec{E}$. We call this new tensor $G$, the dual tensor to $F$. $$G^{\mu\nu} = g_{\alpha\beta}\epsilon^{\beta\mu\nu}E^{\alpha}$$ This is the piece of it that contains $E$, anyway. (I might be off by a numerical factor or a sign or something, but that's the gist of it.) If you write out the components of $G$ in 3+1D space, it looks just like $F$ except that the positions of $E$ and $B$ are switched. Using this new dual tensor, you can do the same thing we did with $\vec{B}$ to $\vec{E}$, namely write its curl as $$\bigl(\vec\nabla\times\vec{E}\bigr)^\alpha = \frac{\partial G^{\alpha\mu}}{\partial x^\mu}$$ and thus Maxwell's other equations are given by $$\frac{\partial G^{\alpha\beta}}{\partial x^\beta} = 0$$ This can also be easily generalized to higher dimensions, but there is a trick to how you define $G$. The thing is, since it is a dual tensor, it doesn't always have 2 indices. Remember that when you go to higher dimensions, you have to put extra indices on $\epsilon$, and so the definition of $G$ changes. For example, the definition above was for a 3D subset of G. The real $G$ in 3+1D is defined like this: $$G^{\mu\nu} = \frac{1}{2}g_{\kappa\alpha}g_{\lambda\beta}\epsilon^{\kappa\lambda\mu\nu}F^{\alpha\beta}$$ In 4+1D, it's defined like this $$G^{\mu\nu\rho} = \frac{1}{6}g_{\kappa\alpha}g_{\lambda\beta}\epsilon^{\kappa\lambda\mu\nu\rho}F^{\alpha\beta}$$ and so on. Notice that $G$ always has $N - 2$ indices, so that the total number of indices on $F$ and $G$ is $N$. This is one key property of dual tensors: in a rough sense, the basis of one is kind of orthogonal to the basis of the other. This is where the exterior calculus that Qmechanic mentioned comes into play: it shows a sort of equivalence between tensors and their duals, and it has ways of neatly dealing with dual tensors which make it possible to write Maxwell's equations very compactly. $$\begin{align}\mathbf{d}F &= 0 & \mathbf{d}G &= 0\end{align}$$ The exterior derivative $\mathbf{d}$ is an operation that applies to both a tensor field and its dual in similar ways, such that in both cases it reproduces Maxwell's equations. | {
"source": [
"https://physics.stackexchange.com/questions/21678",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7433/"
]
} |
21,753 | What is the explanation for the equality of the proton and electron charges (up to a sign)? This is connected to gauge invariance, and the renormalization of charge is connected to the renormalization of the photon field, but is this explanation enough? Do we have some experimental evidence that quarks have 2/3 and -1/3 charges? By the way, I am thinking about the bare charges of the electron and proton. And I am also wondering if this can be explained by the Standard Model. | Because a proton can decay to a positron. It is an experimental fact that the proton and positron charges are very close. To conclude that they are exactly equal requires an argument. If a proton could theoretically decay to a positron and neutral stuff, this is enough. In QED, charge quantization is equivalent to the statement that the gauge group is compact. This means that there is a gauge transformation by a full $2\pi$ rotation of the fields which is equivalent to nothing at all. Under these circumstances you have the following: Charge is quantized There are Dirac string solutions which have a magnetic flux indistinguishable from no flux (the magnetic flux is the phase around a loop). If you have any sort of ultraviolet regulator, either a GUT or gravity, the existence of Dirac strings leads to monopoles. If you don't have an ultraviolet regulator, it is consistent to make all the monopoles infinitely massive. So the question is why is the U(1) of electromagnetism compact. There are two avenues for answering this: A compact U(1) emerges from a higher gauge group, because all higher gauge groups must be compact for the kinetic terms to have the right sign. Breaking a compact group produces a subgroup, which is necessarily compact. It is also true that in any GUT theory producing electromagnetism, you get monopoles, so you automatically get charge quantization by Dirac's argument. But even if you have a U(1) which is not part of a GUT, there are constraints from gravity. If you have a particle with charge q and a particle with charge q', and they aren't rational multiples of each other, you can produce a particle with charge $nq - m q'$ by throwing n q particles into a black hole, waiting for m q' particles to come out, and letting the resulting black hole decay, while throwing back any charged particle that comes out. This means that in a consistent quantum gravity, you need either charge quantization or a spectrum of charges that accumulates near zero. Further, in order for the theory to be consistent, a black hole made from the wee charges must be able to naturally decay to wee charged things, and barring a conspiratorial spectrum of charges and masses, this strongly suggests that the mass of the wee charges must be smaller than the charge, meaning that as the charge gets small they become massless. So in quantum gravity, the only alternative to charge quantization is a theory with nearly massless particles with extremely tiny charges, and this has clear experimental signatures. I should point out that if you believe that the standard model matter is complete, then anomaly cancellation requires that the charge of the proton is equal to the charge of the positron, because there is instanton mediated proton decay as discovered by 't Hooft, and this is something we might conceivably soon observe in accelerators. So in order to make the charge of the proton slightly different from the electron, you can't modify parameters in the standard model, you need to add a heck of a lot of unobserved nearly massless fermions with tiny U(1) charge.
This is enough conspiratorial implausibility that, together with the experimental bound, you can say with certainty that the proton and electron have exactly the same charge. | {
"source": [
"https://physics.stackexchange.com/questions/21753",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4107/"
]
} |
21,851 | Could someone provide me with a mathematical proof of why a system with an absolute negative Kelvin temperature (such as that of a spin system) is hotter than any system with a positive temperature (in the sense that if a negative-temperature system and a positive-temperature system come in contact, heat will flow from the negative- to the positive-temperature system). | From a fundamental (i.e., statistical mechanics) point of view, the physically relevant parameter is coldness = inverse temperature $\beta=1/k_BT$. This changes continuously. If it passes from a positive value through zero to a negative value, the temperature changes from very large positive to infinite (with indefinite sign) to very large negative. Therefore systems with negative temperature have a smaller coldness and hence are hotter than systems with positive temperature. Some references: D. Montgomery and G. Joyce. Statistical mechanics of “negative temperature” states.
Phys. Fluids, 17:1139–1145, 1974. http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19730013937_1973013937.pdf E.M. Purcell and R.V. Pound. A nuclear spin system at negative temperature. Phys. Rev., 81:279–280, 1951. Link Section 73 of Landau and E.M. Lifshits. Statistical Physics: Part 1, Example 9.2.5 in my online book Classical and Quantum Mechanics via Lie algebras . | {
"source": [
"https://physics.stackexchange.com/questions/21851",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
21,866 | Undergraduate classical mechanics introduces both Lagrangians and Hamiltonians, while undergrad quantum mechanics seems to only use the Hamiltonian. But particle physics, and more generally quantum field theory, seem to only use the Lagrangian, e.g. you hear about the Klein-Gordon Lagrangian, Dirac Lagrangian, Standard Model Lagrangian and so on. Why is there a mismatch here? Why does it seem like only Hamiltonians are used in undergraduate quantum mechanics, but only Lagrangians are used in quantum field theory? | In order to use Lagrangians in QM, one has to use the path integral formalism. This is usually not covered in an undergrad QM course and therefore only Hamiltonians are used. In current research, Lagrangians are used a lot in non-relativistic QM. In relativistic QM, one uses both Hamiltonians and Lagrangians. The reason Lagrangians are more popular is that they set time and spatial coordinates
on the same footing, which makes it possible to write down relativistic theories in a covariant way. Using Hamiltonians, relativistic invariance is not explicit and it can complicate many things. So both formalisms are used in both relativistic and non-relativistic quantum physics. This is the very short answer. | {
"https://physics.stackexchange.com/questions/21866",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4521/"
]
} |
21,955 | Consider this penny on my desk. It is a particular piece of metal,
well described by statistical mechanics, which assigns to it a state,
namely the density matrix $\rho_0=\frac{1}{Z}e^{-\beta H}$ (in the
simplest model). This is an operator in a space of functions depending
on the coordinates of a huge number $N$ of particles. The ignorance interpretation of statistical mechanics, the orthodoxy to
which all introductions to statistical mechanics pay lip service, claims
that the density matrix is a description of ignorance, and that the
true description should be one in terms of a wave function; any pure
state consistent with the density matrix should produce the same
macroscopic result. However, it would be very surprising if Nature would change its
behavior depending on how much we are ignorant of. Thus the talk about
ignorance must have an objective formalizable basis independent of
anyone's particular ignorant behavior. On the other hand, statistical mechanics always works exclusively with the
density matrix (except in the very beginning where it is motivated).
Nowhere (except there) does one make any use of the assumption that the
density matrix expresses ignorance. Thus it seems to me that the whole
concept of ignorance is spurious, a relic of the early days of
statistical mechanics. Thus I'd like to invite the defenders of orthodoxy to answer the
following questions: (i) Can the claim be checked experimentally that the density matrix
(a canonical ensemble, say, which correctly describes a macroscopic
system in equilibrium) describes ignorance?
- If yes, how, and whose ignorance?
- If not, why is this ignorance interpretation assumed though nothing at all depends on it? (ii) In a thought experiment, suppose Alice and Bob have different
amounts of ignorance about a system. Thus Alice's knowledge amounts to
a density matrix $\rho_A$, whereas Bob's knowledge amounts to
a density matrix $\rho_B$. Given $\rho_A$ and $\rho_B$, how can one
check in principle whether Bob's description is consistent
with that of Alice? (iii) How does one decide whether a pure state $\psi$ is adequately
represented by a statistical mechanics state $\rho_0$?
In terms of (ii), assume that Alice knows the true state of the
system (according to the ignorance interpretation of statistical
mechanics a pure state $\psi$, corresponding to $\rho_A=\psi\psi^*$),
whereas Bob only knows the statistical mechanics description,
$\rho_B=\rho_0$. Presumably, there should be a kind of quantitative measure
$M(\rho_A,\rho_B)\ge 0$ that vanishes when $\rho_A=\rho_B$ and tells
how compatible the two descriptions are. Otherwise, what can it mean
that two descriptions are consistent? However, the mathematically
natural candidate, the relative entropy (= Kullback-Leibler divergence) $M(\rho_A,\rho_B)$, the trace of $\rho_A\log\frac{\rho_A}{\rho_B}$,
[edit: I corrected a sign mistake pointed out in the discussion below]
does not work. Indeed, in the situation (iii), $M(\rho_A,\rho_B)$
equals the expectation of $\beta H+\log Z$ in the
pure state; this is minimal in the ground state of the Hamiltonian.
But this would say that the ground state would be most consistent with
the density matrix of any temperature, an unacceptable condition. Edit: After reading the paper http://bayes.wustl.edu/etj/articles/gibbs.paradox.pdf by E.T. Jaynes pointed to in the discussion below, I can make more precise the query in (iii): In the terminology of p.5 there, the density matrix $\rho_0$ represents a macrostate, while each wave function $\psi$ represents a microstate. The question is then: When may (or may not) a microstate $\psi$ be regarded as a macrostate $\rho_0$ without affecting the predictability of the macroscopic observations? In the above case, how do I compute the temperature of the macrostate corresponding to a particular microstate $\psi$ so that the macroscopic behavior is the same - if it is, and which criterion allows me to decide whether (given $\psi$) this approximation is reasonable? An example where it is not reasonable to regard $\psi$ as a canonical ensemble is if $\psi$ represents a composite system made of two pieces of the penny at different temperature. Clearly no canonical ensemble can describe this situation macroscopically correct. Thus the criterion sought must be able to decide between a state representing such a composite system and the state of a penny of uniform temperature, and in the latter case, must give a recipe how to assign a temperature to $\psi$, namely the temperature that nature allows me
to measure. The temperature of my penny is determined by Nature, hence must be determined by a microstate that claims to be a complete description of the penny. I have never seen a discussion of such an identification criterion, although such criteria are essential if one wants to give the idea - underlying the ignorance interpretation - that a completely specified quantum state must be a pure state. Part of the discussion on this is now at: http://chat.stackexchange.com/rooms/2712/discussion-between-arnold-neumaier-and-nathaniel Edit (March 11, 2012): I accepted Nathaniel's answer as satisfying under the given circumstances, though he forgot to mention a fourth possibility that I prefer; namely that the complete knowledge about a quantum system is in fact described by a density matrix, so that microstates are arbitrary density matrices and a macrostate is simply a density matrix of a special form by which an arbitrary microstate (density matrix) can be well approximated when only macroscopic consequences are of interest. These special density matrices have the form $\rho=e^{-S/k_B}$ with a simple operator $S$ - in the equilibrium case a linear combination of 1, $H$ (and various number operators $N_j$ if conserved), defining the canonical or grand canonical ensemble. This is consistent with all of statistical mechanics, and has the advantage of simplicity and completeness, compared to the ignorance interpretation, which needs the additional qualitative concept of ignorance and with it all sorts of questions that are too imprecise or too difficult to answer. | I wouldn't say the ignorance interpretation is a relic of the early days of statistical mechanics. It was first proposed by Edwin Jaynes in 1957 (see http://bayes.wustl.edu/etj/node1.html , papers 9 and 10, and also number 36 for a more detailed version of the argument) and proved controversial up until fairly recently. (Jaynes argued that the ignorance interpretation was implicit in the work of Gibbs, but Gibbs himself never spelt it out.) Until recently, most authors preferred an interpretation in which (for a classical system at least) the probabilities in statistical mechanics represented the fraction of time the system spends in each state, rather than the probability of it being in a particular state at the present time. This old interpretation makes it impossible to reason about transient behaviour using statistical mechanics, and this is ultimately what makes switching to the ignorance interpretation useful. In response to your numbered points: (i) I'll answer the "whose ignorance?" part first. The answer to this is "an experimenter with access to macroscopic measuring instruments that can measure, for example, pressure and temperature, but cannot determine the precise microscopic state of the system." If you knew precisely the underlying wavefunction of the system (together with the complete wavefunction of all the particles in the heat bath if there is one, along with the Hamiltonian for the combined system) then there would be no need to use statistical mechanics at all, because you could simply integrate the Schrödinger equation instead. The ignorance interpretation of statistical mechanics does not claim that Nature changes her behaviour depending on our ignorance; rather, it claims that statistical mechanics is a tool that is only useful in those cases where we have some ignorance about the underlying state or its time evolution.
Given this, it doesn't really make sense to ask whether the ignorance interpretation can be confirmed experimentally. (ii) I guess this depends on what you mean by "consistent with." If two people have different knowledge about a system then there's no reason in principle that they should agree on their predictions about its future behaviour. However, I can see one way in which to approach this question. I don't know how to express it in terms of density matrices (quantum mechanics isn't really my thing), so let's switch to a classical system. Alice and Bob both express their knowledge about the system as a probability density function over $x$, the set of possible states of the system (i.e. the vector of positions and velocities of each particle) at some particular time. Now, if there is no value of $x$ for which both Alice and Bob assign a positive probability density then they can be said to be inconsistent, since every state that Alice accepts the system might be in Bob says it is not, and vice versa. If any such value of $x$ does exist then Alice and Bob can both be "correct" in their state of knowledge if the system turns out to be in that particular state. I will continue this idea below. (iii) Again I don't really know how to convert this into the density matrix formalism, but in the classical version of statistical mechanics, a macroscopic ensemble assigns a probability (or a probability density) to every possible microscopic state, and this is what you use to determine how heavily represented a particular microstate is in a given ensemble. In the density matrix formalism the pure states are analogous to the microscopic states in the classical one. I guess you have to do something with projection operators to get the probability of a particular pure state out of a density matrix (I did learn it once but it was too long ago), and I'm sure the principles are similar in both formalisms. I agree that the measure you are looking for is $D_\textrm{KL}(A||B) = \sum_i p_A(i) \log \frac{p_A(i)}{p_B(i)}$. (I guess this is $\mathrm{tr}(\rho_A (\log \rho_A - \log \rho_B))$ in the density matrix case, which looks like what you wrote apart from a change of sign.) In the case where A is a pure state, this just gives $-\log p_B(i)$, the negative logarithm of the probability that Bob assigns to that particular pure state. In information theory terms, this can be interpreted as the "surprisal" of state $i$, i.e. the amount of information that must be supplied to Bob in order to convince him that state $i$ is indeed the correct one. If Bob considers state $i$ to be unlikely then he will be very surprised to discover it is the correct one. If B assigns zero probability to state $i$ then the measure will diverge to infinity, meaning that Bob would take an infinite amount of convincing in order to accept something that he was absolutely certain was false. If A is a mixed state, this will happen as long as A assigns a positive probability to any state to which B assigns zero probability. If A and B are the same then this measure will be 0. Therefore the measure $D_\textrm{KL}(A||B)$ can be seen as a measure of how "incompatible" two states of knowledge are. Since the KL divergence is asymmetric I guess you also have to consider $D_\textrm{KL}(B||A)$, which is something like the degree of implausibility of B from A's perspective. I'm aware that I've skipped over some things, as there was quite a lot to write and I don't have much time to do it. I'll be happy to expand it if any of it is unclear. 
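For concreteness, here is a minimal numerical sketch of this measure for two discrete states of knowledge (illustrative only, with made-up numbers):
```python
import numpy as np

def kl(pA, pB):
    """D_KL(A||B) = sum_i pA(i) * log(pA(i) / pB(i))."""
    pA, pB = np.asarray(pA, float), np.asarray(pB, float)
    support = pA > 0
    if np.any(pB[support] == 0):
        return np.inf          # B rules out a state that A allows
    return float(np.sum(pA[support] * np.log(pA[support] / pB[support])))

alice = [0.7, 0.2, 0.1, 0.0]   # Alice's probabilities over four microstates
bob   = [0.4, 0.3, 0.2, 0.1]   # Bob's

print(kl(alice, bob))    # finite (~0.24): the two descriptions are compatible
print(kl(bob, alice))    # inf: Alice is certain a state Bob allows is impossible
print(kl(alice, alice))  # 0.0: identical states of knowledge
```
The divergence in the second case is exactly the "infinite amount of convincing" situation described above.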
Edit (in reply to the edit at the end of the question): The answer to the question "When may (or may not) a microstate $\phi$ be regarded as a macrostate $\rho_0$ without affecting the predictability of the macroscopic observations?" is "basically never." I will address this is classical mechanics terms because it's easier for me to write in that language. Macrostates are probability distributions over microstates, so the only time a macrostate can behave in the same way as a microstate is if the macrostate happens to be a fully peaked probability distribution (with entropy 0, assigning $p=1$ to one microstate and $p=0$ to the rest), and to remain that way throughout the time evolution. You write in a comment "if I have a definite penny on my desk with a definite temperature, how can it have several different pure states?" But (at least in Jaynes' version of the MaxEnt interpretation of statistical mechanics), the temperature is not a property of the microstate but of the macrostate. It is the partial differential of the entropy with respect to the internal energy. Essentially what you're doing is (1) finding the macrostate with the maximum (information) entropy compatible with the internal energy being equal to $U$, then (2) finding the macrostate with the maximum entropy compatible with the internal energy being equal to $U+dU$, then (3) taking the difference and dividing by $dU$. When you're talking about microstates instead of macrostates the entropy is always 0 (precisely because you have no ignorance) and so it makes no sense to do this. Now you might want to say something like "but if my penny does have a definite pure state that I happen to be ignorant of, then surely it would behave in exactly the same way if I did know that pure state." This is true, but if you knew precisely the pure state then you would (in principle) no longer have any need to use temperature in your calculations, because you would (in principle) be able to calculate precisely the fluxes in and out of the penny, and hence you'd be able to give exact answers to the questions that statistical mechanics can only answer statistically. Of course, you would only be able to calculate the penny's future behaviour over very short time scales, because the penny is in contact with your desk, whose precise quantum state you (presumably) do not know. You will therefore have to replace your pure-state-macrostate of the penny with a mixed one pretty rapidly. The fact that this happens is one reason why you can't in general simply replace the mixed state with a single "most representative" pure state and use the evolution of that pure state to predict the future evolution of the system. Edit 2: the classical versus quantum cases. (This edit is the result of a long conversation with Arnold Neumaier in chat, linked in the question.) In most of the above I've been talking about the classical case, in which a microstate is something like a big vector containing the positions and velocities of every particle, and a macrostate is simply a probability distribution over a set of possible microstates. Systems are conceived of as having a definite microstate, but the practicalities of macroscopic measurements mean that for all but the simplest systems we cannot know what it is, and hence we model it statistically. 
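Going back to the three-step recipe above for extracting a temperature from a macrostate, here is a toy numerical version (illustrative only; a made-up four-level system with $k_B = 1$, using the fact that the maximum-entropy macrostate at fixed mean energy is a canonical distribution):
```python
import numpy as np

levels = np.array([0.0, 1.0, 2.0, 3.0])        # energy levels of a toy system

def max_ent_macrostate(beta):
    # Maximum-entropy distribution at fixed mean energy = canonical distribution.
    p = np.exp(-beta * levels)
    p /= p.sum()
    U = p.dot(levels)                          # internal energy
    S = -p.dot(np.log(p))                      # entropy of the macrostate
    return U, S

beta = 0.8
U1, S1 = max_ent_macrostate(beta)              # step (1): macrostate at energy U
U2, S2 = max_ent_macrostate(beta + 1e-4)       # step (2): macrostate at U + dU
print((S2 - S1) / (U2 - U1))                   # step (3): dS/dU comes out as beta = 1/T
```
The slope $dS/dU$ reproduces $1/T$, and it only exists because the macrostate carries nonzero entropy; for a single microstate the entropy is identically zero and there is nothing to differentiate.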
In this classical case, Jaynes' arguments are (to my mind) pretty much unassailable: if we lived in a classical world, we would have no practical way to know precisely the position and velocity of every particle in a system like a penny on a desk, and so we would need some kind of calculus to allow us to make predictions about the system's behaviour in spite of our ignorance. When one examines what an optimal such calculus would look like, one arrives precisely at the mathematical framework of statistical mechanics (Boltzmann distributions and all the rest). By considering how one's ignorance about a system can change over time one arrives at results that (it seems to me at least) would be impossible to state, let alone derive, in the traditional frequentist interpretation. The fluctuation theorem is an example of such a result. In a classical world there would be no reason in principle why we couldn't know the precise microstate of a penny (along with that of anything it's in contact with). The only reasons for not knowing it are practical ones. If we could overcome such issues then we could predict the microstate's time-evolution precisely. Such predictions could be made without reference to concepts such as entropy and temperature. In Jaynes' view at least, these are purely macroscopic concepts and don't strictly have meaning on the microscopic level. The temperature of your penny is determined both by Nature and by what you are able to measure about Nature (which depends on the equipment you have available). If you could measure the (classical) microstate in enough detail then you would be able to see which particles had the highest velocities and thus be able to extract work via a Maxwell's demon type of apparatus. Effectively you would be partitioning the penny into two subsystems, one containing the high-energy particles and one containing the lower-energy ones; these two systems would effectively have different temperatures. My feeling is that all of this should carry over on to the quantum level without difficulty, and indeed Jaynes presented much of his work in terms of the density matrix rather than classical probability distributions. However there is a large and (I think it's fair to say) unresolved subtlety involved in the quantum case, which is the question of what really counts as a microstate for a quantum system. One possibility is to say that the microstate of a quantum system is a pure state. This has a certain amount of appeal: pure states evolve deterministically like classical microstates, and the density matrix can be derived by considering probability distributions over pure states. However the problem with this is distinguishability: some information is lost when going from a probability distribution over pure states to a density matrix. For example, there is no experimentally distinguishable difference between the mixed states $\frac{1}{2}(\mid \uparrow \rangle \langle \uparrow \mid + \mid \downarrow \rangle \langle \downarrow \mid)$ and $\frac{1}{2}(\mid \leftarrow \rangle \langle \leftarrow \mid + \mid \rightarrow \rangle \langle \rightarrow \mid)$ for a spin-$\frac{1}{2}$ system. If one considers the microstate of a quantum system to be a pure state then one is committed to saying there is a difference between these two states, it's just that it's impossible to measure. This is a philosophically difficult position to maintain, as it's open to being attacked with Occam's razor. However, this is not the only possibility. 
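The indistinguishability claim above is easy to check explicitly (a minimal sketch, taking $|\rightarrow\rangle, |\leftarrow\rangle = (|\uparrow\rangle \pm |\downarrow\rangle)/\sqrt{2}$): both equal-weight mixtures give exactly the same density matrix.
```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
right = (up + down) / np.sqrt(2)
left  = (up - down) / np.sqrt(2)

def mixture(a, b):
    # Equal-weight mixture of two pure states, written as a density matrix.
    return 0.5 * (np.outer(a, a) + np.outer(b, b))

print(mixture(up, down))      # [[0.5, 0.0], [0.0, 0.5]]
print(mixture(left, right))   # the same matrix, so no measurement can tell them apart
```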
Another possibility is to say that even pure quantum states represent our ignorance about some underlying, deeper level of physical reality. If one is willing to sacrifice locality then one can arrive at such a view by interpreting quantum states in terms of a non-local hidden variable theory. Another possibility is to say that the probabilities one obtains from the density matrix do not represent our ignorance about any underlying microstate at all, but instead they represent our ignorance about the results of future measurements we might make on the system. I'm not sure which of these possibilities I prefer. The point is just that on the philosophical level the ignorance interpretation is trickier in the quantum case than in the classical one. But in practical terms it makes very little difference - the results derived from the much clearer classical case can almost always be re-stated in terms of the density matrix with very little modification. | {
"source": [
"https://physics.stackexchange.com/questions/21955",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7924/"
]
} |
22,385 | When light waves enter a medium of higher refractive index than the previous, why is it that: Its wavelength decreases?
The frequency of it has to stay the same? | (This is an intuitive explanation on my part, it may or may not be correct) Symbols used: $\lambda$ is wavelength, $\nu$ is frequency, $c,v$ are speeds of light in vacuum and in the medium. Alright. First, we can look at just frequency and determine if frequency should change on passing through a medium. Frequency can't change Now, let's take a glass-air interface and pass light through it. (In SI units) In one second, $\nu$ "crest"s will pass through the interface. Now, a crest cannot be destroyed except via interference, so that many crests must exit. Remember, a crest is a zone of maximum amplitude. Since amplitude is related to energy, when there is max amplitude going in, there is max amplitude going out, though the two maxima need not have the same value. Also, we can directly say that, to conserve energy (which is dependent solely on frequency), the frequency must remain constant. Speed can change There doesn't seem to be any reason for the speed to change, as long as the energy associated with unit length of the wave decreases. It's like having a wide pipe with water flowing through it. The speed is slow, but there is a lot of mass being carried through the pipe. If we constrict the pipe, we get a jet of fast water. Here, there is less mass per unit length, but the speed is higher, so the net rate of transfer of mass is the same. In this case, since $\lambda\nu=v$, and $\nu$ is constant, change of speed requires change of wavelength. This is analogous to the pipe, where increase of speed required decrease of cross-section (alternatively mass per unit length). Why does it have to change? Alright. Now we have established that speed can change, let's look at why. Now, an EM wave (like light) carries alternating electric and magnetic fields with it. Here's an animation. Now, in any medium, the electric and magnetic fields are altered due to interaction with the medium. Basically, the permittivities/permeabilities change. This means that the light wave is altered in some manner. Since we can't alter frequency, the only thing left is speed/wavelength (and amplitude, but that's not it as we shall see) Using the relation between light and permittivity/permeability ($\mu_0\varepsilon_0=1/c^2$ and $\mu\varepsilon=1/v^2$), and $\mu=\mu_r\mu_0,\varepsilon=\varepsilon_r\varepsilon_0, n=c/v$ (n is refractive index), we get $n=\sqrt{\mu_r\epsilon_r}$, which explicitly states the relationship between electromagnetic properties of a material and its RI. Basically, the relation $\mu\varepsilon=1/v^2$ guarantees that the speed of light must change as it passes through a medium, and we get the change in wavelength as a consequence of this. | {
"source": [
"https://physics.stackexchange.com/questions/22385",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8082/"
]
} |
22,559 | It is said that string theory is a unification of particle physics and gravitation. Is there a reasonably simple explanation for how the standard model arises as a limit of string theory? How does string theory account for the observed particle spectrum and the three generations? Edit (March 23, 2012): In the mean time, I read the paper arXiv:1101.2457 suggested in the answer by
John Rennie. My impression from reading this paper is that string theory currently does not predict any particular particle content, and that (p.13) to get close to a derivation of the standard model one must assume that string theory reduces at low energies to a SUSY GUT. If this is correct, wouldn't this mean that part of what is to be predicted is instead assumed? Thus one would have to wait for a specific prediction of the resulting parameters in order to see whether or not string theory indeed describes particle physics. Some particular observations/quotes substantiating the above: (15) looks like input from the standard model The masses of the superparticles after (27) are apparently freely chosen to yield the subsequent prediction. This sort of arguments only shows that some SUSY GUT (and hence perhaps string theory) is compatible with the standard model, but has no predictive value. p.39: ''The authors impose an intermediate SO(10) SUSY GUT.'' p.58: ''As discussed earlier in Section 4.1, random searches in the string landscape suggest that the Standard Model is very rare. This may also suggest that string theory cannot make predictions for low energy physics.'' p.59: ''Perhaps string theory can be predictive, IF we understood the rules for choosing the correct position in the string landscape.'' So my followup question is: Is the above impression correct, or do I lack information available elsewhere? Edit (March 25, 2012): Ron Maimon's answer clarified to some extent what can be expected from string theory, but leaves details open that in my opinion are needed to justify his narrative. Upon his request, I posted the new questions separately as More questions on string theory and the standard model | String theory includes every self-consistent conceivable quantum gravity situation, including 11 dimensional M-theory vacuum, and various compactifications with SUSY (and zero cosmological constant), and so on. It can't pick out the standard model uniquely, or uniquely predict the parameters of the standard model, anymore than Newtonian mechanics can predict the ratio of the orbit of Jupiter to that of Saturn. This doesn't make string theory a bad theory. Newtonian mechanics is still incredibly predictive for the solar system. String theory is maximally predictive, it predicts as much as can be predicted, and no more . This should be enough to make severe testable predictions, even for experiments strictly at low energies--- because the theory has no adjustable parameters. Unless we are extremely unfortunate, and a bazillion standard model vacua exist, with the right dark-matter and cosmological constant, we should be able to discriminate between all the possibilities by just going through them conceptually until we find the right one, or rule them all out. What "no adjustable parameters" means is that if you want to get the standard model out, you need to make a consistent geometrical or string-geometrical ansatz for how the universe looks at small distances, and then you get the standard model for certain geometries. If we could do extremely high energy experiments, like make Planckian black holes, we could explore this geometry directly, and then string theory would predict relations between the geometry and low-energy particle physics. We can't explore the geometry directly, but we are lucky in that these geometries at short distances are not infinitely rich. They are tightly constrained, so you don't have infinite freedom. 
You can't stuff too much structure without making the size of the small dimensions wrong, you can't put arbitrary stuff, you are limited by constraints of forcing the low-energy stuff to be connected to high energy stuff. Most phenomenological string work since the 1990s does not take any of these constraints into account, because they aren't present if you go to large extra dimensions. You don't have infinitely many different vacua which are qualitatively like our universe, you only have a finite (very large) number, on the order of the number of sentences that fit on a napkin. You can go through all the vacua, and find the one that fits our universe, or fail to find it. The vacua which are like our universe are not supersymmetric, and will not have any continuously adjustible parameters. You might say "it is hopeless to search through these possibilities", but consider that the number of possible solar systems is greater, and we only have data that is available from Earth. There is no more way of predicting which compactification will come out of the big-bang than of predicting how a plate will smash (although you possibly can make statistics). But there are some constraints on how a plate smashes--- you can't get more pieces than the plate had originally: if you have a big piece, you have to have fewer small piece elsewhere. This procedure is most tightly constrained by the assumption of low-energy supersymmetry, which requires analytic manifolds of a type studied by mathematicians, the Calabi-Yaus, and so observation of low-energy SUSY would be a tremendous clue for the geometry. Of course, the real world might not be supersymmetric until the quntum gravity scale, it might have a SUSY breaking which makes a non-SUSY low-energy spectrum. We know such vacua exist, but they generally have a big cosmological constant. But the example of SO(16) SO(16) heterotic strings shows that there are simple examples where you get a non-SUSY low energy vacuum without work. If your intuition is from field theory, you think that you can just make up whatever you want. This is just not so in string theory. You can't make up anything without geoemtry, and you only have so much geometry to go around. The theory should be able to, from the qualitative structure of the standard model, plus the SUSY, plus say 2-decimal place data on 20 parameters (that's enough to discrimnate between 10^40 possibilities which are qualitatively identical to the SM), it should predict the rest of the decimal places with absolutely no adjustible anything. Further, finding the right vacuum will predict as much as can be predicted about every experiment you can perform. This is the best we can do. The idea that we can predict the standard model uniquely was only suggested in string propaganda from the 1980s, which nobody in the field really took seriously, which claimed that the string vacuum will be unique and identical to ours. This was the 1980s fib that string theorists pushed, because they could tell people "We will predict the SM parameters". This is mostly true, but not by predicting them from scratch, but from the clues they give us to the microscopic geometry (which is certainly enough when the extra dimensions are small). | {
"source": [
"https://physics.stackexchange.com/questions/22559",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7924/"
]
} |
22,876 | I know a photon has zero rest mass, but it does have plenty of energy. Since energy and mass are equivalent does this mean that a photon (or more practically, a light beam) exerts a gravitational pull on other objects? If so, does it depend on the frequency of the photon? | Yes, in fact one of the comments made to a question mentions this. If you stick to Newtonian gravity it's not obvious how a photon acts as a source of gravity, but then photons are inherently relativistic so it's not surprising a non-relativistic approximation doesn't describe them well. If you use General Relativity instead you'll find that photons make a contribution to the stress energy tensor, and therefore to the curvature of space. See the Wikipedia article on EM Stress Energy Tensor for info on the photon contribution to the stress energy tensor, though I don't think that's a terribly well written article. | {
"source": [
"https://physics.stackexchange.com/questions/22876",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6627/"
]
} |
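For reference alongside the answer above (a sketch, not part of the original answer): in one common sign convention, and written on a flat background for simplicity, the electromagnetic field's stress-energy tensor and the Einstein equation it feeds into are $$T^{\mu\nu}=\frac{1}{\mu_0}\left(F^{\mu\alpha}F^{\nu}{}_{\alpha}-\tfrac{1}{4}\eta^{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}\right),\qquad G_{\mu\nu}=\frac{8\pi G}{c^{4}}\,T_{\mu\nu},$$ so light gravitates through its energy and momentum density. Since a photon's energy is $E=h\nu$, a higher-frequency photon carries more energy and hence makes a proportionally larger contribution, which addresses the frequency part of the question.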
22,916 | In reading through old course material, I found the assignment (my translation): Show that a single photon cannot produce an electron-positron pair, but needs additional matter or light quanta. My idea was to calculate the wavelength required to contain the required energy ($1.02$ MeV), which turned out to be $1.2\times 10^{-3}$ nm, but I don't know about any minimum wavelength of electromagnetic waves. I can't motivate it with the conservation laws for momentum or energy either. How to solve this task? | Another way of solving such problems is to go to another reference frame, where you obviously don't have enough energy. For example you've got a $5 MeV$ photon, so you think that there is plenty of energy to make $e^-e^+$ pair. Now you make a boost (a change by a constant velocity to another inertial reference frame) along the direction of the photon momentum with $v=0.99\,c$ and you get a $0.35 MeV$ photon. That is not enough even for one electron. | {
"source": [
"https://physics.stackexchange.com/questions/22916",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8350/"
]
} |
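A quick check of the boost arithmetic quoted in the answer above (the 5 MeV and 0.35 MeV figures): for a boost with speed $\beta c$ along the photon's direction of motion, the relativistic Doppler formula gives $$E'=E\sqrt{\frac{1-\beta}{1+\beta}},\qquad E'=5\ \mathrm{MeV}\times\sqrt{\frac{0.01}{1.99}}\approx 5\ \mathrm{MeV}\times 0.071\approx 0.35\ \mathrm{MeV},$$ which is below the pair-production threshold $2m_ec^{2}\approx 1.022\ \mathrm{MeV}$. Since whether a process occurs cannot depend on the inertial frame, and there is always a frame in which the photon's energy is below threshold, a single photon alone can never produce the pair.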
23,028 | The standard treatment of the one-dimensional quantum simple harmonic oscillator (SHO) using the raising and lowering operators arrives at the countable basis of eigenstates $\{\vert n \rangle\}_{n = 0}^{\infty}$ each with corresponding eigenvalue $E_n = \omega \left(n + \frac{1}{2}\right)$. Refer to this construction as the abstract solution . How does the abstract solution also prove uniqueness? Why is there only one unique sequence of countable eigenstates? In particular, can one prove the state $\vert 0\rangle$ is the unique ground state without resorting to coordinate representation? (It would then follow that the set $\{\vert n \rangle\}_{n = 0}^{\infty}$ is also unique.) The uniqueness condition is obvious if one solves the problem in coordinate representation since then one works in the realm of differential equations where uniqueness theorems abound. Most textbooks ignore this detail (especially since they often solve the problem both in coordinate representation and abstractly), however I have found two exceptions: Shankar appeals to a theorem which proves one-dimensional systems are
non-degenerate, however this is unsatisfactory for two reasons: Not every one-dimensional system is non-degenerate, however a general result can be proven for a large class of potentials (the SHO potential is in such a class). The proof requires a departure from the abstract solution since it classifies the potentials according to their functional properties. Griffiths addresses this concern in a footnote stating that the equation $a \vert 0\rangle = 0$ uniquely determines the state $\vert 0\rangle$. Perhaps this follows from the abstract solution, however I do not see how. | I) It depends on how abstract OP wants it to be. Say that we discard any reference to 1D geometry, and position and momentum operators $\hat{q}$ and $\hat{p}$ . Say that we only know that $$\frac{\hat{H}}{\hbar\omega} ~:=~ \hat{N}+\nu{\bf 1},
\qquad\qquad \nu\in\mathbb{R},\tag{1}$$ $$ \hat{N}~:=~\hat{a}^{\dagger}\hat{a}, \tag{2}$$ $$ [\hat{a},\hat{a}^{\dagger}]~=~{\bf 1},
\qquad\qquad[{\bf 1}, \cdot]~=~0.\tag{3}$$ (Since we have cut any reference to geometry, there is no longer any reason why $\nu$ should be a half, so we have generalized it to an arbitrary real number $\nu\in\mathbb{R}$ .) II) Next assume that the physical states live in an inner product space $(V,\langle \cdot,\cdot \rangle )$ , and that $V$ form a non-trivial irreducible unitary representation of the Heisenberg algebra, $$ {\cal A}~:=~ \text{associative algebra generated by $\hat{a}$, $\hat{a}^{\dagger}$, and ${\bf 1}$}.\tag{4}$$ The spectrum of a semi- positive operator $\hat{N}=\hat{a}^{\dagger}\hat{a}$ is always non-negative, $$ {\rm Spec}(\hat{N})~\subseteq~ [0,\infty[.\tag{5}$$ In particular, the spectrum ${\rm Spec}(\hat{N})$ is bounded from below. Since the operator $\hat{N}$ commutes with the Hamiltonian $\hat{H}$ , we can use $\hat{N}$ to classify the physical states. Let us sketch how the standard argument goes. Say that $|n_0\rangle\neq 0$ is a normalized eigenstate for $\hat{N}$ with eigenvalue $n_0\in[0,\infty[$ . We can use the lowering ladder (annihilation) operator $\hat{a}$ repeatedly to define new eigenstates $$ |n_0- 1\rangle,\quad |n_0- 2\rangle, \quad\ldots \tag{6}$$ which however could have zero norm. Since the spectrum ${\rm Spec}(\hat{N})$ is bounded from below, this lowering procedure (6) must stop in finite many steps. There must exists an integer $m\in\mathbb{N}_0$ such that zero-norm occurs $$ \hat{a}|n_0 - m\rangle~=~0.\tag{7}$$ Assume that $m$ is the smallest of such integers. The norm is $$\begin{align} 0 ~=~& || ~\hat{a}|n_0 - m\rangle ~||^2 \cr
~=~& \langle n_0 - m|\hat{N}|n_0 - m\rangle \cr
~=~& ( n_0 - m) \underbrace{||~|n_0 - m\rangle~||^2}_{>0},
\end{align}\tag{8}$$ so the original eigenvalue is an integer $$ n_0 ~=~ m\in\mathbb{N}_0,\tag{9}$$ and eq. (7) becomes $$ \hat{a}|0\rangle ~=~0,\qquad\qquad \langle 0 |0\rangle ~\neq~0.\tag{10}$$ We can next use the raising ladder (creation) operator $\hat{a}^{\dagger}$ repeatedly to define new eigenstates $$ |1\rangle,\quad |2\rangle,\quad \ldots.\tag{11}$$ By a similar norm argument, one may see that this raising procedure (11) cannot eventually create a zero-norm state, and hence it goes on forever/doesn't stop. Inductively, at stage $n\in\mathbb{N}_0$ , the norm remains non-zero, $$ \begin{align} || ~\hat{a}^{\dagger}|n\rangle ~||^2
~=~& \langle n|\hat{a}\hat{a}^{\dagger}|n\rangle\cr
~=~& \langle n|(\hat{N}+1)|n\rangle\cr
~=~& (n+1) ~\langle n|n\rangle~>~0. \end{align}\tag{12}$$ So $V$ contains at least one full copy of a standard Fock space $${\rm span}_{\mathbb{C}}\left\{|n\rangle \mid n\in\mathbb{N}_0\right\}. \tag{13}$$ Notice that the Fock space (13) is invariant under the action of the Heisenberg algebra (4), i.e. it is a representation thereof. On the other hand, by the irreducibility assumption, the vector space $V$ cannot be bigger, and $V$ is hence just a standard Fock space (up to isomorphism). In this case the normalized eigenstates $|n\rangle$ are unique up to phase factors. III) Finally, if $V$ is not irreducible, then the operator $\hat{N}$ has an eigenstate outside the Fock space (13). We can then repeat the argument in section II to find a linearly independent copy of a standard Fock space inside $V$ , i.e. $V$ becomes a direct sum of several Fock spaces. In this latter case, the ground state energy-level is degenerate. | {
"source": [
"https://physics.stackexchange.com/questions/23028",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8281/"
]
} |
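A small numerical illustration of the ladder-operator argument in the answer above. The Fock space is truncated to an $N\times N$ matrix (an assumption made only so the check is finite), so the top corner of the commutator is a cutoff artifact rather than physics. Python sketch:

    import numpy as np

    N = 8                                          # truncation dimension (assumption)
    a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # truncated lowering operator a
    num = a.conj().T @ a                           # number operator N = a'dagger' a

    print(np.round(np.linalg.eigvalsh(num), 12))   # spectrum 0, 1, 2, ..., N-1
    print(np.round(a @ a.conj().T - num, 12))      # identity matrix, except the last
                                                   # diagonal entry (cutoff artifact)

Running it shows the non-negative, integer-spaced spectrum and the canonical commutation relation that the abstract proof relies on.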
23,098 | I'm a newbie in physics. Sorry, if the following questions are dumb. I began reading "Mechanics" by Landau and Lifshitz recently and hit a few roadblocks right away. Proving that a free particle moves with a constant velocity in an inertial frame of reference ( $\S$ 3. Galileo's relativity principle). The proof begins with explaining that the Lagrangian must only depend on the speed of the particle ( $v^2={\bf v}^2$ ): $$L=L(v^2).$$ Hence the Lagrance's equations will be $$\frac{d}{dt}\left(\frac{\partial L}{\partial {\bf v}}\right)=0,$$ so $$\frac{\partial L}{\partial {\bf v}}=\text{constant}.$$ And this is where the authors say Since $\partial L/\partial \bf v$ is a function of the velocity only, it follows that $${\bf v}=\text{constant}.$$ Why so? I can put $L=\|{\bf v}\|=\sqrt{v^2_x+v^2_y+v^2_z}$ . Then $$\frac{\partial L}{\partial {\bf v}}=\frac{2}{\sqrt{v^2_x+v^2_y+v^2_z}}\begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix},$$ which will remain a constant vector $\begin{pmatrix} 2 \\ 0 \\ 0 \end{pmatrix}$ as the particle moves with an arbitrary non-constant positive $v_x$ and $v_y=v_z=0$ . Where am I wrong here? If I am, how does one prove the quoted statement? Proving that $L=\frac{m v^2}2$ ( $\S$ 4. The Lagrangian for a free particle). The authors consider an inertial frame of reference $K$ moving with a velocity ${\bf\epsilon}$ relative to another frame of reference $K'$ , so ${\bf v'=v+\epsilon}$ . Here is what troubles me: Since the equations of motion must have same form in every frame, the Lagrangian $L(v^2)$ must be converted by this transformation into a function $L'$ which differs from $L(v^2)$ , if at all, only by the total time derivative of a function of coordinates and time (see the end of $\S$ 2). First of all, what does same form mean? I think the equations should be the same, but if I'm right, why wouldn't the authors write so?
Second, it was shown in $\S$ 2 that adding a total derivative will not change the equations. There was nothing about total derivatives of time and coordinates being the only functions , adding which does not change the equations (or their form , whatever it means). Where am I wrong now? If I'm not, how does one prove the quoted statement and why haven't the authors done it? | In physics, it is often implicitly assumed that the Lagrangian $L=L(\vec{q},\vec{v},t)$ depends smoothly on the (generalized) positions $q^i$ , velocities $v^i$ , and time $t$ , i.e. that the Lagrangian $L$ is a differentiable function. Let us now assume that the Lagrangian is of the form $$L~=~\ell\left(v^2\right),\qquad\qquad v~:=~|\vec{v}|,\tag{1}$$ where $\ell$ is a differentiable function. The equations of motion (eom) become $$ \vec{0}~=~\frac{\partial L}{\partial \vec{q}}
~\approx~\frac{\mathrm d}{\mathrm dt}\frac{\partial L}{\partial \vec{v}}
~=~\frac{\mathrm d }{\mathrm dt} \left(2\vec{v}~\ell^{\prime}\right)
~=~2\vec{a}~\ell^{\prime}+4\vec{v}~(\vec{a}\cdot\vec{v}) \ell^{\prime\prime}.\tag{2}$$ (Here the $\approx$ symbol means equality modulo eom.) If $\ell$ is a constant function, the eom becomes a trivial identity $\vec{0}\equiv \vec{0}$ . This is unacceptable. Hence let us assume from now on that $\ell$ is not a constant function. This means that generically $\ell^{\prime}$ is not zero. We conclude from eq. (2) that on-shell $$\vec{a} \parallel \vec{v},\tag{3}$$ i.e. the vectors $\vec{a}$ and $\vec{v}$ are linearly dependent on-shell. (The words on-shell and off-shell refer to whether eom is satisfied or not.) Therefore by taking the length on both sides of the vector eq. (2), we get $$ 0~\approx~2a(\ell^{\prime}+2v^2\ell^{\prime\prime}),\qquad\qquad a~:=~|\vec{a}|.\tag{4}$$ This has two branches. The first branch is that there is no acceleration, $$ \qquad \vec{a}~\approx~\vec{0},\tag{5}$$ or equivalently, a constant velocity. The second branch imposes a condition on the speed $v$ , $$\ell^{\prime}+2v^2\ell^{\prime\prime}~\approx~0.\tag{6}$$ To take the second branch (6) seriously, we must demand that it works for all speeds $v$ , not just for a few isolated speeds $v$ . Hence eq. (6) becomes a 2nd order ODE for the $\ell$ function. The full solution is precisely OP's counterexample $$L~=~ \ell\left(v^2\right)~=~\alpha \sqrt{v^2}+\beta~=~\alpha v+\beta,\tag{7}$$ where $\alpha$ and $\beta$ are two integration constants. This is differentiable wrt. the speed $v=|\vec{v}|$ , but it is not differentiable wrt. the velocity $\vec{v}$ at $\vec{v}=\vec{0}$ if $\alpha\neq 0$ . Therefore the second branch (6) is discarded. Thus the eom is the standard first branch (5). $\Box$ Firstly, the definition of form invariance is discussed in this Phys.SE post. Concretely, Landau and Lifshitz mean by form invariance that if the Lagrangian is $$L~=~\ell\left(v^2\right)\tag{8}$$ in the frame $K$ , it should be $$L^\prime~=~\ell\left(v^{\prime 2}\right)\tag{9}$$ in the frame $K^{\prime}$ . Here $$\vec{v}^{\prime }~=~\vec{v}+\vec{\epsilon}\tag{10}$$ is a Galilean transformation . Secondly, OP asks if adding a total time derivative to the Lagrangian $$L ~\longrightarrow~ L+\frac{\mathrm dF}{\mathrm dt}\tag{11}$$ is the the only thing that would not change the eom? No, e.g. scaling the Lagrangian $$L ~\longrightarrow~ \alpha L\tag{12}$$ with an overall factor $\alpha$ also leaves the eom unaltered. See also Wikibooks . However, we already know that all Lagrangians of the form (8) and (9) lead to the same eom (5). (Recall that acceleration is an absolute notion under Galilean transformations.) Instead, I interpret the argument of Landau and Lifshitz as that they want to manifestly implement Galilean invariance via Noether Theorem by requiring that an (infinitesimal) change $$ \Delta L~:=~L^\prime-L ~=~2(\vec{v}\cdot\vec{\epsilon})\ell^{\prime} \tag{13}$$ of the Lagrangian is always a total time derivative $$\Delta L~=~\frac{\mathrm dF}{\mathrm dt}\tag{14}$$ even off-shell. Question: In general, how do we know/correctly identify if an expression $\Delta L$ is a total time derivative (14), or not? Example: The expression $q^2 +2t\vec{q}\cdot \vec{v}$ happens to be a total time derivative, but this fact may be easy to miss at a first glance. The lesson is that one should be very careful in claiming that a total time derivative must be on such and such form. It is easy to overlook possibilities. 
Well, one surefire (albeit admittedly a bit heavy-handed) test is to apply the Euler-Lagrange operator on the expression (13), and check if it is identically zero off-shell, or not. (Amusingly, this test actually happens to be both a necessary and sufficient condition, but that's another story .) We calculate: $$\begin{align} \vec{0} &~=~ \frac{\mathrm d}{\mathrm dt}\frac{\partial \Delta L}{\partial \vec{v}} -\frac{\partial \Delta L}{\partial \vec{q}} \\ &~=~4\vec{\epsilon}~(\vec{a}\cdot\vec{v}) \ell^{\prime\prime}
+4\vec{v}~(\vec{a}\cdot\vec{\epsilon}) \ell^{\prime\prime}
+4\vec{a}~(\vec{v}\cdot\vec{\epsilon}) \ell^{\prime\prime}
+8\vec{v}~(\vec{v}\cdot\vec{\epsilon})(\vec{a}\cdot\vec{v}) \ell^{\prime\prime\prime}. \tag{15}\end{align}$$ Since eq. (15) should hold for any off-shell configuration, we can e.g. pick $$ \vec{a}~\parallel~\vec{v}~\perp~\vec{\epsilon}.\tag{16}$$ Then eq. (15) reduces to $$ \vec{0}~=~ 4\vec{\epsilon} ~(\pm a v) \ell^{\prime\prime}. \tag{17}$$ We may assume that $\vec{\epsilon}\neq\vec{0}$ . Arbitrariness of $a$ and $v$ implies that $$\ell^{\prime\prime}~=~0.\tag{18}$$ (Conversely, it is easy to check that eq. (18) implies eq. (15).)
The full solution to eq. (18) is the standard non-relativistic Lagrangian for a free particle, $$L~=~ \ell\left(v^2\right)~=~\alpha v^2+\beta, \tag{19}$$ where $\alpha$ and $\beta$ are two integration constants. Eq. (19) is the main result. Alternatively, the main result (19) follows directly from the following Lemma. Lemma: If $F(\vec{q}, \vec{v}, \vec{a}, \vec{j}, \ldots, t)$ in eq. (14) is a local function, and if $\Delta L(\vec{q}, \vec{v}, \vec{a}, \vec{j}, \ldots, t)$ does not depend on higher time derivatives $\vec{a}$, $\vec{j}$, $\ldots$, then $F$ cannot depend on time derivatives $\vec{v}, \vec{a}, \vec{j}, \ldots$. This in turn implies that $\Delta L(\vec{q}, \vec{v}, t)$ is an affine function of $\vec{v}$. We leave the proof of the Lemma as an exercise to the reader. The Lemma and eq. (13) yield that $\ell^{\prime}$ is independent of $\vec{v}$, which again leads to the main result (19). $\Box$ For more on Galilean invariance, see also this Phys.SE post. | {
"source": [
"https://physics.stackexchange.com/questions/23098",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8410/"
]
} |
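As a concrete check of the main result (19) in the answer above, take $L=\alpha v^{2}$ (the constant $\beta$ plays no role) and apply the Galilean shift $\vec v\to\vec v+\vec\epsilon$ with constant $\vec\epsilon$: $$L'~=~\alpha(\vec v+\vec\epsilon)^{2}~=~\alpha v^{2}+\frac{\mathrm d}{\mathrm dt}\left(2\alpha\,\vec q\cdot\vec\epsilon+\alpha\,\epsilon^{2}t\right),$$ so the transformed Lagrangian differs from the original only by a total time derivative of a function of coordinates and time, which is exactly the form invariance that the quoted passage from Landau and Lifshitz demands.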
23,100 | Matter-- I guess I know what it is ;) somehow, at least intuitively. So, I can feel it in terms of the weight when picking something up. It may be explained by gravity which is itself is defined by definition of the matter! What is anti-matter ? Can you explain it to me? Conceptually simplified Real world evidence | So, what is antimatter? Even from the name it is obviously the "opposite" of ordinary matter, but what does that really mean? As it happens there are several equally valid ways to describe the difference. However, the one that I think is easiest to explain is that in antimatter, all of the electrical charges on all of the particles, at every level, have been switched around. Thus ordinary electrons have negative charges, so their antimatter equivalents have positive charges. Protons are positive, so in antimatter they get the negative charges. Even neutrons, which have no overall charge, still have internal parts (quarks) that very definitely have charges, and those also get flipped around. Now to me the most remarkable characteristic of antimatter is not how it is differs from ordinary matter, but how amazingly similar it is to ordinary matter. It is like an almost perfect mirror image of matter -- and I don't use that expression lightly, since it turns out that forcing ordinary matter into becoming its own mirror image is one of those other routes I mentioned for explaining what antimatter is! The similarity is so close that large quantities antimatter would, for example, possess the same chemistry as ordinary matter. For that matter there is no reason why an entire living person could not be composed of antimatter. But if you do happen to meet such a person, such as while floating outside a space ship above earth, I strongly recommend that you be highly antisocial. Don't shake hands or invite them over, whatever you do! The reason has to do with those charges, along with some related factors. Everyone knows that opposite charges attract. Thus in ordinary matter, electrons seek out the close company of protons. They like to hang out there, forming hydrogen. However, in ordinary matter it also turns out that there are also all sorts of barriers -- I like to think of them as unpaid debts to a very strict bank -- that keep the negative charges of electrons from getting too close to the positive charges of the protons. Thus while the oppositely charged electrons and protons could in principle merge together and form some new entity without any charge, what really happens is a lot more complicated. Except for their opposite charges, electrons don't have the right "debts" to pay off everything the protons "owe," and vice-versa. It's like mixing positive apples with negative oranges. The debts, which are really called conservation laws, make it possible for the powerfully attracted protons and electrons to get very close, but never close enough to fully cancel out each other's charges. That's a really good thing, too. Without that close-but-not-quite-there mixing of apples and oranges, all the fantastic complexity and specificity of atoms and chemistry and biochemistry and DNA and proteins and us would not be here! Now let's look at antimatter again. The electrons in antimatter are positively charged -- in fact, they were renamed "positrons" a long time ago -- so like protons, they too are strongly attracted to the electrons found in ordinary matter. However, when you add electrons to positrons, you are now mixing positive apples with negative apples. 
That very similarity turns out to result in a very dangerous mix, one not at all like mixing electrons and protons. That's because for electrons and positrons the various debts they contain match up exactly , and are also exactly opposite. This means they can cancel each other's debts all the way down to their simplest and most absolute shared quantity, which is pure energy. That energy is given off in the form of a very dangerous and high-intensity version of light called gamma rays. So why do electrons and positrons behave so very badly when they get together? Here's a simple analogy: Hold a rubber band tightly at its two ends. Next, place an AAA between the strands in the middle. (This is easier for people with three arms.) Next, use the battery to wind up the rubber band until it is quite tight. Now look at the result carefully. Notice in particular that the left and right sides are twisted in opposite directions, and in fact are roughly mirror images of each other. These two oppositely twisted sides of the rubber band provides a simple analog to an electron and a positron, in the sense that both store energy and both have a sort of defining "twistiness" that is associated with that energy. You could easily take the analogy a bit farther by bracing each half somehow and snipping the rubber band in the middle. With that more elaborate analogy the two "particles" could potentially wander off on their own. For now, however, just release the battery and watch what happens. (Important: Wear eye goggles if you really do try this!) Since your two mirror-image "particles" on either side of battery have exactly opposite twists, they unravel each other very quickly, with a release of energy that may send the battery flying off somewhere. The twistiness that defined both of the "particles" is at the same time completely destroyed, leaving only a bland and twist-free rubber band. It is of course a huge simplification, but if you think of electrons and positrons as similar to the two sides of a twisted rubber band, you end up with a surprisingly good feel for why matter and antimatter are dangerous when placed close together. Like the sides of the rubber band, both electrons and positrons store energy, are mirror images of each other, and "unravel" each other if allowed to touch, releasing their stored energy. If you could mix large quantities of both, the result would be an unraveling whose accompanying release of energy would be truly amazing (and very likely fatal!) to behold. Now, given all of that, how "real" is antimatter? Very, very real. Its signatures are everywhere! This is especially true for the positron (antimatter electron), which is the easiest form of antimatter to create. For example, have you ever heard of a medical procedures called a PET scan? PET stands for Positron Emission Tomography... and yes, that really does mean that doctors use extremely tiny amounts of antimatter to annihilate bits of someone's body. The antimatter in that case is generated by certain radioactive processes, and the bursts of radiation (those gamma rays) released by axing a few electrons help see the doctors see what is going on inside someone's body. Signatures of positrons are also remarkably common in astrophysics, where for example some black holes are unusually good at producing them. No one really understands why certain regions produce so many positrons, unless someone has has some good insights recently. 
Positrons were the first form of antimatter predicted, by a very sharp fellow named Paul Dirac. Not too long after that prediction, they were also the first form of antimatter detected. Heavier antimatter particles such as antiprotons are much harder to make than positrons, but they too have been created and studied in huge numbers using particle colliders. Despite all of that, there is also a great mystery regarding antimatter. The mystery is this: Where did the rest of the antimatter go? Recall those debts I mentioned? Well, when creating universes physicists, like other notable entities, like to start the whole shebang off with pure energy -- that is to say, with light. But since matter has all those unbalanced debts, the only way you can move smoothly back and forth between light and matter is by having an equal quantity of antimatter somewhere in the universe. An amount of antimatter that large flat-out does not seem to exist, anywhere. Astrophysicists have by now mapped out the universe well enough to leave no easy hiding places for large quantities of antimatter. Recall how I said antimatter is very much like a mirror image of matter? That's an example of a symmetry. A symmetry in physics is just a way of "turning" or "reflecting" or "moving" something in a way that leaves you with something that looks just like the original. Flipping a cube between its various sides is a good example of a "cubic symmetry," for example (there are fancier words for it, but they mean the same thing). Symmetries are a very big deal in modern physics, and are absolutely critical to many of our deepest understandings of how our universe works. So matter and antimatter form an almost exact symmetry. However, that symmetry is broken rather spectacularly in astrophysics, and also much more subtly in certain physics experiments. Exactly how this symmetry can be broken so badly at the universe level while being only very subtly broken at the particle level really is quite a bit of a mystery. So, there you have it, a mini-tutorial on both what antimatter is and where it occurs. While it's a bit of overkill, your question is a good one on a fascinating topic. And if you have read through all of this, and have found any of what I just said interesting, don't just stop here! Physics is one of those topics that gets more fascinating as you dig deeper you get into it. For example, some of those cryptic-looking equations you will see in many of the answers here are also arguably some of the most beautiful objects ever uncovered in human history. Learning to read them well enough to appreciate their beauty is like learning to read great poetry in another language, or how to "hear" the deep structure of a really good piece of classical music. For physics, the reward is a deep revelation of structure, beauty, and insight that few other disciplines can offer. Don't stop here! | {
"source": [
"https://physics.stackexchange.com/questions/23100",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5607/"
]
} |
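One number behind the PET example in the answer above: when a positron annihilates with an electron that is essentially at rest, the energy released is $$E~=~2m_ec^{2}~\approx~2\times 0.511\ \mathrm{MeV}~=~1.022\ \mathrm{MeV},$$ emitted as two back-to-back 511 keV gamma rays, and it is the coincidence detection of those photon pairs that a PET scanner uses to build its image.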
23,469 | Is Fire a Plasma? If not, what is it then? If yes, why don't we teach kids this basic example? UPDATE: I probably meant a regular commonplace fire of the usual temperature. That should simplify the answer. | Broadly speaking, fire is a fast exothermic oxidation reaction. The flame is composed of hot, glowing gases, much like a metal that is heated sufficiently that it begins to glow. The atoms in the flame are a vapor, which is why it has the characteristic wispy quality we associate with fire, as opposed to the more rigid structure we associate with hot metal. Now, to be fair, it is possible for a fire to burn sufficiently hot that it can ionize atoms. However, when we talk about common examples of fire, such as a candle flame, a campfire, or something of that kind, we are not dealing with anything sufficiently energetic to ionize atoms. So, when it comes to using something as an example of a plasma for kids, I'm afraid fire wouldn't be an accurate choice. | {
"source": [
"https://physics.stackexchange.com/questions/23469",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2394/"
]
} |
23,523 | Are there any non-metal objects that are attracted by magnets? | Oxygen, for one. In its gaseous state it moves too fast to be affected, but liquid oxygen can be trapped between the poles of a magnet: Materials can be broadly classified into three sets: Diamagnetism: All materials are diamagnetic, but their diamagnetic properties are easily masked by paramagnetic/ferromagnetic nature. Diamagnetism is the property of an object to be weakly repelled by all magnetic fields. It doesn't matter if it's near a north or south pole. It will always be repelled. With stronger magnets, the "weakly" becomes less weak, and we get levitating frogs: Yup, that's a live frog, but more importantly (except to the frog I guess), he's diamagnetic. And he floats in the magnetic field--poor chap must be confounded. Paramagnetism: This is basically the opposite of diamagnetism. Paramagnetism is the property of a material to be attracted towards a magnetic field--again, it doesn't matter north or south. The strength of the attraction varies widely, but it's always greater than the diamagnetic repulsion, and generally much less than ferromagnetic attraction. Paramagnetism is only observed in materials with unpaired electrons. Oxygen is paramagnetic (so is diatomic boron), so it's attracted by the magnetic field. Note that not all metals are paramagnetic--in fact many are just plain diamagnetic materials (not sure of this). Ferromagnetism: This is the property of a material to get permanently magnetised. Only a few elements are ferromagnetic (iron, cobalt, nickel, neodymium, and a few others). These are generally strongly attracted to a magnetic field. | {
"source": [
"https://physics.stackexchange.com/questions/23523",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8582/"
]
} |
23,615 | When experiencing alpha decay, atoms shed alpha particles made of 2 protons and 2 neutrons. Why can't we have other types of particles made of more or less protons? | The reason why alpha particles heavily dominate as the proton-neutron mix most likely to be emitted from most (not all!) radioactive components is the extreme stability of this particular combination. That same stability is also why helium dominates after hydrogen as the most common element in the universe, and why other higher elements had to be forged in the hearts and shells of supernovas in order to come into existence at all. Here's one way to think of it: You could in principle pop off something like helium-3 from an unstable nucleus - that's two protons and one neutron - and very likely give a net reduction in nuclear stress. But what would happen is this: The moment the trio started to depart, a neutron would come screaming in saying look how much better it would be if I joined you!! And the neutron would be correct: The total reduction in energy obtained by forming a helium-4 nucleus instead of helium-3 would in almost any instance be so superior that any self-respecting (and energy-respecting) nucleus would just have to go along with the idea. Now all of what I just said can (and in the right circumstances should) be said far more precisely in terms of issues such as tunneling probabilities, but it would not really change the message much: Helium-4 nuclei pop off preferentially because they are so hugely stable that it just makes sense from a stability viewpoint for them to do so. The next most likely candidates are isolated neutrons and protons, incidentally. Other mixed versions are rare until you get up into the fission range, in which case the whole nucleus is so unstable that it can rip apart in very creative ways (as aptly noted by the earlier comment). | {
"source": [
"https://physics.stackexchange.com/questions/23615",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1559/"
]
} |
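Some approximate textbook binding energies (not quoted in the answer above, so treat them as an illustrative add-on) that put numbers on the stability argument: $$B(^{4}\mathrm{He})\approx 28.3\ \mathrm{MeV}\ (\approx 7.1\ \mathrm{MeV/nucleon}),\qquad B(^{3}\mathrm{He})\approx 7.7\ \mathrm{MeV}\ (\approx 2.6\ \mathrm{MeV/nucleon}),\qquad B(^{2}\mathrm{H})\approx 2.2\ \mathrm{MeV},$$ so completing the second neutron to form helium-4 buys far more binding energy per nucleon than any lighter fragment, which is the energetic reason the answer gives for the alpha particle being so strongly preferred.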
23,725 | In 2006, New Scientist magazine published an article titled Relativity drive: The end of wings and wheels 1 [ 1 ] about the EmDrive [ Wikipedia ] which stirred up a fair degree of controversy and some claims that New Scientist was engaging in pseudo-science. Since the original article the inventor claims that a "Technology Transfer contract with a major US aerospace company was successfully completed", and that papers have been published by Professor Yang Juan of The North Western Polytechnical University, Xi'an, China. 2 Furthermore, it was reported in Wired magazine that the Chinese were going to attempt to build the device. Assuming that the inventor is operating in good faith and that the device actually works, is there another explanation of the claimed resulting propulsion? Notes: 1. Direct links to the article may not work as it seems to have been archived. 2. The abstracts provided on the EmDrive website claim that they are Chinese language journals which makes them very difficult to chase down and verify. | It is impossible to generate momentum in a closed object without emitting something, so the drive is either not generating thrust, or throwing something backwards. There is no doubt about this. Assuming that the thrust measurement is accurate, that something could be radiation. This explanation is exceedingly unlikely, since to get a mN of radiation pressure you need an enormous amount of energy: in 1 s you get $1\ {\rm g\,m\,s^{-1}}$ of momentum, which in radiation can only be carried by $3 \times 10^5$ J (multiply by c), so you need about 300,000 watts of power to push with mN force, or at least a million watts for 80 mN. So, it's not radiation. But a leaky microwave cavity can heat the water-vapor in the air around the object, and the heat can lead to a current of air away from the object. With an air current, you can produce mN thrusts from a relatively small amount of energy, and with a barely noticeable breeze. To get mN force, you need to accelerate roughly $800\ {\rm cm^3}$ of air (about 1 gram) to 1 m/s every second, or to get 80 mN, accelerate $1\ {\rm m^3}$ of air (about 1.2 kg) to under 0.1 m/s (barely perceptible), and this can be done with a hot-cold thermal gradient behind the device which is hard to notice. If the thrust measurements are not in error, this is the certain cause. So at best, Shawyer has invented a very inefficient and expensive fan. EDIT: The initial tests were at atmospheric pressure. To test the fan hypothesis, an easy way is to vary the pressure, another easy way is to put dust in the air to see the air-currents. The experimenters didn't do any of this (or at least didn't publish it if they did), instead, they ran the device inside a vacuum chamber but at ambient pressure after putting it through a vacuum cycle to simulate space. This is not a vacuum test, but it can mislead one on a first read. In response to criticism of this faux-vacuum test, they did a second test in a real vacuum. This time, they used a torsion pendulum to find a teeny-tiny thrust of no relation to the first purported thrust. The second run in vacuum has completely different effects, possibly due to interactions between charge building up on the device and metallic components of the torsion pendulum, possibly due to deliberate misreporting by these folks, who didn't bother to explain what was going on in the first experiments they hyped up.
Since they didn't bother to do any systematic analysis of the effect on the first run, to vary air pressure, to look at air flows with dust, and so on (or, if they did, they didn't bother to admit their initial error), this is not particularly honest experimental work, and there's not much point in talking about it any more. These folks are simply wasting people's time. | {
"source": [
"https://physics.stackexchange.com/questions/23725",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
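A short numerical restatement of the orders of magnitude used in the answer above (assumed values, not taken from the answer: $c=3\times10^{8}\ \mathrm{m/s}$ and sea-level air density $1.2\ \mathrm{kg/m^{3}}$). Python sketch:

    c = 3.0e8                    # speed of light, m/s (assumed)
    F = 1.0e-3                   # N, the ~1 mN thrust scale under discussion

    # Photon rocket: power needed to carry away F of momentum flux as radiation
    print(F * c)                 # 3.0e5 W, i.e. hundreds of kilowatts per millinewton

    # Air current: mass and volume of air per second, accelerated to v, giving thrust F
    rho, v = 1.2, 1.0            # kg/m^3 and m/s (assumed)
    mdot = F / v                 # 1e-3 kg/s, i.e. about 1 gram of air per second
    print(mdot / rho * 1e3)      # ~0.8 litres of air per second

The first figure is why radiation pressure is ruled out, and the second is why a barely perceptible air current is enough.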
23,797 | If you've ever been annoyingly poked by a geek, you might be familiar with the semi-nerdy obnoxious response of "I'm not actually touching you! The electrons in the atoms of my
skin are just getting really close to yours!" Expanding on this a little bit, it seems the obnoxious geek is right. After all, consider Zeno's paradox. Every time you try to touch two objects together, you have to get them halfway there, then quarter-way, etc. In other words, there's always a infinitesimal distance in between the two objects. Atoms don't "touch" each other; even the protons and neutrons in the nucleus of an atom aren't "touching" each other. So what does it mean for two objects to touch each other? Are atoms that join to form a molecule "touching"? I suppose the atoms are touching, because their is some overlap, but the subatomic particles are just whizzing around avoiding each other. If this is the case, should "touching" just be defined relative to some context? I.e, if I touch your hand, our hands are touching, but unless you pick up some of my DNA, the molecules in our hands aren't touching? And since the molecules aren't changing, the atoms aren't touching either? Is there really no such thing as "touching"? | Wow, this one has been over-answered already, I know... but it is such a fun question! So, here's an answer that hasn't been, um, "touched" on yet... :) You, sir, whatever your age may be (anyone with kids will know what I mean), have asked for an answer to one of the deepest questions of quantum mechanics. In the quantum physics dialect of High Nerdese, your question boils down to this: Why do half-integer spin particles exhibit Pauli exclusion - that is, why do they refuse to the be in the same state, including the same location in space, at the same time? You are quite correct that matter as a whole is mostly space. However, the specific example of bound atoms is arguably not so much an example of touching as it is of bonding . It would be the equivalent of a 10-year-old son not just poking his 12-year-old sister, but of poking her with superglue on his hand , which is a considerably more drastic offense that I don't think anyone would be much amused by. Touching, in contrast, means that you have to push - that is, exert some real energy - into making the two objects contact each other. And characteristically, after that push, the two object remain separate (in most cases) and even bound back a bit after the contact is made. So, I think one can argue that the real question behind "what is touching?" is "why do solid objects not want to be compressed when you try to push them together?" If that were not the case, the whole concept of touching sort of falls apart. We would all become at best ghostly entities who cannot make contact with each other, a bit like Chihiro as she tries to push Haku away during their second meeting in Spirited Away . Now with that as the sharpened version of the query, why do objects such a people not just zip right through each other when they meet, especially since they are (as noted) almost entirely made of empty space? Now the reflex answer - and it's not a bad one - is likely to be electrical charge. That's because we all know that atoms are positive nuclei surrounded by negatively charged electrons, and that negative charges repel. So, stated that way, it's perhaps not too surprising that, when the outer "edges" of these rather fuzzy atoms get too close, their respective sets of electrons would get close enough to repel each other. So by this answer, "touching" would simply be a matter of atoms getting so close to each other that their negatively charged clouds of electrons start bumping into each other. 
This repulsion requires force to overcome, so the the two objects "touch" - reversibly compress each other without merging - through the electric fields that surround the electrons of their atoms. This sounds awfully right, and it even is right... to a limited degree. Here's one way to think of the issue: If charge was the only issue involved, then why do some atoms have exactly the opposite reaction when their electron clouds are pushed close to each other? For example, if you push sodium atoms close to chlorine atoms, what you get is the two atoms leaping to embrace each other more closely, with a resulting release of energy that at larger scales is often described by words such as "BOOM!" So clearly something more than just charge repulsion is going on here, since at least some combinations of electrons around atoms like to nuzzle up much closer to each other instead of farther away. What, then, guarantees that two molecules will come up to each other and instead say "Howdy, nice day... but, er, could you please back off a bit, it's getting stuffy?" That general resistance to getting too close turns out to result not so much from electrical charge (which does still play a role), but rather from the Pauli exclusion effect I mentioned earlier. Pauli exclusion is often skipped over in starting texts on chemistry, which may be why issues such as what touching means are also often left dangling a bit. Without Pauli exclusion, touching - the ability of two large objects to make contact without merging or joining - will always remain a bit mysterious. So what is Pauli exclusion? It's just this: Very small, very simple particles that spin (rotate) in a very peculiar way always, always insist on being different in some way, sort of like kids in large families where everyone wants their unique role or ability or distinction. But particles, unlike people, are very simple things, so they only have a very limited set of options to choose from. When they run out of those simple options, they have only one option left: they need their own bit of space, apart from any other particle. They will then defend that bit of space very fiercely indeed. It is that defense of their own space that leads large collections of electrons to insist on taking up more and more overall space, as each tiny electron carves out its own unique and fiercely defended bit of turf. Particles that have this peculiar type of spin are called fermions , and ordinary matter is made of three main types of fermions: Protons, neutrons, and electrons. For the electrons, there is only one identifying feature that distinguishes them from each other, and that is how they spin: counterclockwise (called "up") or clockwise (called "down"). You'd think they'd have other options, but that, too, is a deep mystery of physics: Very small objects are so limited in the information they carry that they can't even have more than two directions from which to choose when spinning around. However, that one option is very important for understanding that issue of bonding that must be dealt with before atoms can engage in touching . Two electrons with opposite spins, or with spins that can be made opposite of each other by turning atoms around the right way, do not repel each other: They attract. In fact, they attract so much that they are an important part of that "BOOM!" I mentioned earlier for sodium and chlorine, both of which have lonely electrons without spin partners, waiting. 
There are other factors on how energetic the boom is, but the point is that, until electrons have formed such nice, neat pairs, they don't have as much need to occupy space. Once the bonding has happened, however - once the atoms are in arrangements that don't leave unhappy electrons sitting around wanting to engage in close bonds - then the territorial aspect of electrons comes to the forefront: They begin defending their turf fiercely. This defense of turf first shows itself in the ways electrons orbit around atoms, since even there the electrons insist on carving out their own unique and physically separate orbits, after that first pairing of two electrons is resolved. As you can imagine, trying to orbit around an atom while at the same time trying very hard to stay away from other electron pairs can lead to some pretty complicated geometries. And that, too, is a very good thing, because those complicated geometries lead to something called chemistry, where different numbers of electrons can exhibit very different properties due to new electrons being squeezed out into all sorts of curious and often highly exposed outside orbits. In metals, it gets so bad that the outermost electrons essentially become community children that zip around the entire metal crystal instead of sticking to single atoms. That's why metals carry heat and electricity so well. In fact, when you look at a shiny metallic mirror, you are looking directly at the fastest-moving of these community-wide electrons. It's also why, in outer space, you have to be very careful about touching two pieces of clean metal to each other, because with all those electrons zipping around, the two pieces may very well decide to bond into a single new piece of metal instead of just touching. This effect is called vacuum welding, and it's an example of why you need to be careful about assuming that solids that make contact will always remain separate. But many materials, such a you and your skin, don't have many of these community electrons, and are instead full of pairs of electrons that are very happy with the situations they already have, thank you. And when these kinds of materials and these kinds of electrons approach, the Pauli exclusion effect takes hold, and the electrons become very defensive of their turf. The result at out large-scale level is what we call touching: the ability to make contact without easily pushing through or merging, a large-scale sum of all of those individual highly content electrons defending their small bits of turf. So to end, why do electrons and other fermions want so desperately to have their own bits of unique state and space all to themselves? And why, in every experiment ever done, is this resistance to merger always associated with that peculiar kind of spin I mentioned, a form of spin that is so minimal and so odd that it can't quite be described within ordinary three-dimensional space? We have fantastically effective mathematical models of this effect. It has to do with antisymmetric wave functions. These amazing models are instrumental to things such as the semiconductor industry behind all of our modern electronic devices, as well as chemistry in general, and of course research into fundamental physics. But if you ask the "why" question, that becomes a lot harder. The most honest answer is, I think, "because that is what we see: half-spin particles have antisymmetric wave functions, and that means they defend their spaces." 
But linking the two together tightly - something called the spin-statistics problem - has never really been answered in a way that Richard Feynman would have called satisfactory. In fact, he flatly declared more than once that this (and several other items in quantum physics) were still basically mysteries for which we lacked really deep insights into why the universe we know works that way. And that, sir, is why your question of "what is touching?" touches more deeply on profound mysteries of physics than you may have realized. It's a good question. 2012-07-01 Addendum Here is a related answer I did for S.E. Chemistry . It touches on many of the same issues, but with more emphasis on why "spin pairing" of electrons allows atoms to share and steal electrons from each other -- that is, it lets them form bonds. It is not a classic textbook explanation of bonding, and I use a lot of informal English words that are not mathematically accurate. But the physics concepts are accurate. My hope is that it can provide a better intuitive feel for the rather remarkable mystery of how an uncharged atom (e.g. chlorine) can overcome the tremendous electrostatic attraction of a neutral atom (e.g. sodium) to steal one or more of its electrons. | {
"source": [
"https://physics.stackexchange.com/questions/23797",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5539/"
]
} |
23,808 | During an interval of time, a tennis ball is moved so that the angle between the velocity and the acceleration of the ball is kept at a constant 120º. Which statement is true about the tennis ball during this interval of time?
Choose one answer. a. Its speed decreases and it is changing its direction of travel. b. Its speed remains constant, but it is changing its direction of travel. c. Its speed decreases and it is not changing its direction of travel. d. Its speed increases and it is changing its direction of travel. e. Its speed remains constant and it is not changing its direction of travel. In my mind the ball is travelling with a negative velocity, south, all in the y component. At the time it is being accelerated at 120 degrees, thus it is slowing down in the negative Y direction. "Its speed decreases and it is not changing direction of travel." That is, it will eventually change its direction of travel, but just because it is accelerating in the opposite direction of the current vector does not mean that it has changed direction; yet. The answer is given as "a." But couldn't "a." or "c." be true? | Only (a) is correct. The rate of change of the speed is the component of the acceleration along the velocity, $a\cos 120^{\circ}=-a/2$, so the speed is decreasing. At the same time the acceleration has a nonzero component perpendicular to the velocity, $a\sin 120^{\circ}$, and any perpendicular component turns the velocity vector, so the direction of travel is changing at every instant. The flaw in your picture is treating the acceleration as if it pointed exactly opposite to the velocity (180 degrees); only in that special case would the ball slow down without turning, which is what option (c) describes. At 120 degrees the turning begins immediately, not eventually, so (c) cannot be true. | {
"source": [
"https://physics.stackexchange.com/questions/23808",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8687/"
]
} |
23,930 | What happens to the energy when waves perfectly cancel each other (destructive interference)? It appears that the energy "disappears" but the law of conservation of energy states that it can't be destroyed. My guess is that the kinetic energy is transformed into potential energy. Or maybe it depends on the context of the waves where the energy goes? Can someone elaborate on that or correct me if I'm wrong? | Waves always travel. Even standing waves can always be interpreted as two traveling waves that are moving in opposite directions (more on that below). Keeping the idea that waves must travel in mind, here's what happens whenever you figure out a way to build a region in which the energy of such a moving wave cancels out fully: If you look closely, you will find that you have created a mirror, and that the missing energy has simply bounced off the region you created. Examples include opals, peacock feathers, and ordinary light mirrors. The first two reflect specific frequencies of light because repeating internal structures create physical regions in which that frequency of light cannot travel - that is, a region in which near-total energy cancellation occurs. An optical mirror uses electrons at the top of their Fermi seas to cancel out light over a much broader range of frequencies. In all three examples the light bounces off the region, with only a little of its energy being absorbed (converted to heat). A skip rope (or perhaps a garden hose) provides a more accessible example. First, lay out the rope or hose along its length, then give it a quick, sharp clockwise motion. You get a helical wave that travels quickly away from you like a moving corkscrew. No standing wave, that! You put a friend at the other end, but she does not want your wave hitting her. So what does she do? First she tries sending a clockwise wave at you too, but that seems to backfire. Your wave if anything seems to hit harder and faster. So she tries a counterclockwise motion instead. That seems to work much better. It halts the forward progress of the wave you launched at her, converting it instead to a loop. That loop still has lots of energy, but at least now it stays in one place. It has become a standing wave, in this case a classic skip-rope loop, or maybe two or more loops if you are good at skip rope. What happened is that she used a canceling motion to keep your wave from hitting her. But curiously, her cancelling motion also created a wave, one that is twisted in the opposite way (counterclockwise) and moving towards you, just as your clockwise wave moved towards her. As it turns out, the motion you are already doing cancels her wave too, sending it right back at her. The wave is now trapped between your two cancelling actions. The sum of the two waves, which now looks sinusoidal instead of helical, has the same energy as your two individual helical waves added together. I should note that you really only need one person driving the wave, since any sufficiently solid anchor for one end of the rope will also prevent the wave from entering it, and so end up reflecting that wave just as your friend did using a more active approach. Physical media such as peacock feathers and Fermi sea electrons also use a passive approach to reflection, with the same result: The energy is forbidden by cancellation from entering into some region of space.
So, while this is by no means a complete explanation, I hope it provides some "feel" for what complete energy cancellation really means: It's more about keeping waves out . Thinking of cancellation as the art of building wave mirrors provides a different and less paradoxical-sounding perspective on a wide variety of phenomena that alter, cancel, or redirect waves. | {
"source": [
"https://physics.stackexchange.com/questions/23930",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7823/"
]
} |
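A small numerical check of the central claim in the answer above, that the standing wave carries exactly the energy of the two traveling waves that make it up. This is a minimal Python/NumPy sketch; the string tension, mass density, amplitude and wavelength are arbitrary illustrative values, not anything taken from the answer.

```python
import numpy as np

mu, T = 0.01, 4.0            # assumed linear density (kg/m) and tension (N)
A, k = 0.002, 2 * np.pi      # assumed amplitude (m) and wavenumber (1/m): wavelength = 1 m
w = k * np.sqrt(T / mu)      # angular frequency from the ideal-string dispersion relation

x = np.linspace(0.0, 1.0, 4000, endpoint=False)   # exactly one wavelength, periodic grid
dx = x[1] - x[0]

def energy(y_t, y_x):
    # kinetic + elastic potential energy summed over one wavelength
    return np.sum(0.5 * mu * y_t**2 + 0.5 * T * y_x**2) * dx

def traveling(sign, t):
    # y = A sin(kx -/+ wt); returns (dy/dt, dy/dx)
    phase = k * x - sign * w * t
    return -sign * A * w * np.cos(phase), A * k * np.cos(phase)

for t in (0.0, 0.13, 0.31):
    yt1, yx1 = traveling(+1, t)                      # wave moving to the right
    yt2, yx2 = traveling(-1, t)                      # wave moving to the left
    E_separate = energy(yt1, yx1) + energy(yt2, yx2)
    E_standing = energy(yt1 + yt2, yx1 + yx2)        # superposition = standing wave
    print(f"t={t:4.2f}  two waves: {E_separate:.6e} J   standing wave: {E_standing:.6e} J")
# The two columns agree at every instant: destructive interference in one place
# is balanced by constructive interference elsewhere, so no energy is lost.
```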
23,936 | If the fields in QFT are representations of the Poincare group (or generally speaking the symmetry group of interest), then I think it's a straight forward consequence that the matrix elements and therefore the observables, are also invariant. What about the converse: If I want the matrix elements of my field theory to be invariant scalars, how do I show that this implies that my fields must be corresponding representations? How does this relate to S-matrix theory? | Waves always travel. Even standing waves can always be interpreted as two traveling waves that are moving in opposite directions (more on that below). Keeping the idea that waves must travel in mind, here's what happens whenever you figure out a way to build a region in which the energy of such a moving wave cancels out fully: If you look closely, you will find that you have created a mirror, and that the missing energy has simply bounced off the region you created. Examples include opals, peacock feathers, and ordinary light mirrors. The first two reflect specific frequencies of light because repeating internal structures create a physical regions in which that frequency of light cannot travel - that is, a region in which near-total energy cancellation occurs. An optical mirror uses electrons at the top of their Fermi seas to cancel out light over a much broader range of frequencies. In all three examples the light bounces off the region, with only a little of its energy being absorbed (converted to heat). A skip rope (or perhaps a garden hose) provides a more accessible example. First, lay out the rope or hose along its length, then give it quick, sharp clockwise motion. You get a helical wave that travels quickly away from you like a moving corkscrew. No standing wave, that! You put a friend at the other end, but she does not want your wave hitting her. So what does she do? First she tries sending a clockwise wave at you too, but that seems to backfire. Your wave if anything seems to hit harder and faster. So she tries a counterclockwise motion instead. That seems to work much better. It halts the forward progress of the wave you launched at her, converting it instead to a loop. That loop still has lots of energy, but at least now it stays in one place. It has become a standing wave, in this case a classic skip-rope loop, or maybe two or more loops if you are good at skip rope. What happened is that she used a canceling motion to keep your wave from hitting her. But curiously, her cancelling motion also created a wave, one that is twisted in the opposite way (counterclockwise) and moving towards you, just as your clockwise wave moved towards her. As it turns out, the motion you are already doing cancels her wave too, sending it right back at her. The wave is now trapped between your two cancelling actions. The sum of the two waves, which now looks sinusoidal instead of helical, has the same energy as your two individual helical waves added together. I should note that you really only need one person driving the wave, since any sufficiently solid anchor for one end of the rope will also prevent the wave from entering it, and so end up reflecting that wave just as your friend did using a more active approach. Physical media such as peacock features and Fermi sea electrons also use a passive approach to reflection, with the same result: The energy is forbidden by cancellation from entering into some region of space. 
So, while this is by no means a complete explanation, I hope it provides some "feel" for what complete energy cancellation really means: It's more about keeping waves out . Thinking of cancellation as the art of building wave mirrors provides a different and less paradoxical-sounding perspective on a wide variety of phenomena that alter, cancel, or redirect waves. | {
"source": [
"https://physics.stackexchange.com/questions/23936",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5374/"
]
} |
24,001 | I am wondering if the mass density profile $\rho(\vec{r})$ has been characterized for atomic particles such as quarks and electrons. I am currently taking an intro class in quantum mechanics, and I have run this question by several professors. It is my understanding from the viewpoint of quantum physics a particle's position is given by a probability density function $\Psi(\vec{r},t)$. I also understand that when books quote the "radius" of an electron they are typically referring to some approximate range into which an electron is "likely" to fall, say, one standard deviation from the expectation value of its position or maybe $10^{-15}$ meters. However it is my impression that, in this viewpoint, wherever the particle "is" or even whether or not the particle "had" any position to begin with (via the Bell Inequalities), it is assumed that if it were (somehow) found, it would be a point mass. This has been verified by my professors and GSIs. I am wondering if it's really true. If the particle was truly a point mass then wherever it is , it would presumably have an infinite mass density. Wouldn't that make electrons and quarks indistinguishable from very tiny black holes? Is there any practical difference between saying that subatomic particles are black holes and that they are point masses? I am aware of such problems as Hawking Radiation although at the scales of the Schwarzschild radius of an electron (back of the envelope calculation yields $\tilde{}10^{-57}$ meters), would it really make any more sense to use quantum mechanics as opposed to general relativity? If anyone knows of an upper bound on the volume over which an electron/quark/gluon/anything else is distributed I would be interested to know. A quick Google Search has yielded nothing but the "classical" electron radius, which is not what I am referring to. Thanks in advance; look forward to the responses. | Let me start by saying nothing is known about any possible substructure of the electron . There have been many experiments done to try to determine this, and so far all results are consistent with the electron being a point particle. The best reference I can find is this 1988 paper by Hans Dehmelt (which I unfortunately can't access right now) which sets an upper bound on the radius of $10^{-22}\text{ m}$. The canonical reference for this sort of thing is the Particle Data Group's list of searches for lepton and quark compositeness . What they actually list in that reference is not exactly a bound on the electron's size in any sense, but rather the bounds on the energy scales at which it might be possible to detect any substructure that may exist within the electron. Currently, the minimum is on the order of $10\text{ TeV}$, which means that for any process occurring up to roughly that energy scale (i.e. everything on Earth except high-energy cosmic rays), an electron is effectively a point. This corresponds to a length scale on the order of $10^{-20}\text{ m}$, so it's not as strong a bound as the Dehmelt result. Now, most physicists (who care about such things) probably suspect that the electron can't really be a point particle, precisely because of this problem with infinite mass density and the analogous problem with infinite charge density. For example, if we take our current theories at face value and assume that general relativity extends down to microscopic scales, a point-particle electron would actually be a black hole with a radius of $10^{-57}\text{ m}$.
However, as the Wikipedia article explains, the electron's charge is larger than the theoretically allowed maximum charge of a black hole of that mass. This would mean that either the electron would be a very exotic naked singularity (which would be theoretically problematic), or general relativity has to break at some point before you get down to that scale. It's commonly believed that the latter is true, which is why so many people are occupied by searching for a quantum theory of gravity. However, as I've mentioned, we do know that whatever spatial extent the electron may have cannot be larger than $10^{-22}\text{ m}$, and we're still two orders of magnitude away from probing that with the most powerful particle accelerator in the world. So for at least the foreseeable future, the electron will effectively be a point. | {
"source": [
"https://physics.stackexchange.com/questions/24001",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8774/"
]
} |
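To put numbers on the figures quoted in the answer above, here is a minimal Python sketch of the electron's nominal Schwarzschild radius and of the maximum charge an (extremal Reissner-Nordstrom) black hole of the electron's mass could carry. The hand-typed constants are rounded values and the comparison is purely illustrative.

```python
import math

G    = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8       # speed of light, m/s
eps0 = 8.854e-12     # vacuum permittivity, F/m
m_e  = 9.109e-31     # electron mass, kg
q_e  = 1.602e-19     # elementary charge, C

r_s = 2 * G * m_e / c**2                      # Schwarzschild radius for the electron's mass
print(f"r_s ~ {r_s:.2e} m")                   # ~1.4e-57 m, the figure quoted in the answer

# Extremal condition G M^2 = Q^2 / (4 pi eps0) gives the largest allowed charge
q_max = m_e * math.sqrt(4 * math.pi * eps0 * G)
print(f"q_max ~ {q_max:.2e} C, electron charge / q_max ~ {q_e / q_max:.1e}")
# The electron's charge exceeds this bound by roughly 21 orders of magnitude,
# which is why a naive "black hole electron" would have to be a naked singularity.
```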
24,018 | I saw this documentary hosted by Stephen Hawking. And if I didn't get it wrong, it says that there was no time before the big bang, time was created there. So how can anything happen when there is no time (e.g. the creation of time and the universe)? Is this what he calls a 'singularity'? Which I could think like this: when there is no time nothing can happen, except during this singularity. Of course I don't know of a law that says 'when there is no time nothing can happen..', I just came up with it, as something intuitive. | General relativity is a local theory. That means it describes spacetime near the point you're looking at but it doesn't say anything about the large scale structure of spacetime. Now this may seem unrelated to your question, but actually it's key to why we say that time started at the Big Bang. If we make a few apparently sensible assumptions about the universe we can solve the Einstein equation and get the FLRW metric. This metric allows us to start at our current position and trace back towards the Big Bang to see what happens. As we do this we are calculating a geodesic, which is simply the curve in spacetime followed by a freely moving object. The key point about this is that from some point on our geodesic, e.g. me sitting typing this, we use the metric to calculate the immediately preceding points, then from there we use the metric to calculate even earlier points and so on back in time towards the Big Bang. The problem is that as we calculate back towards the Big Bang the metric gets larger and larger, and at the moment of the Big Bang it becomes infinite. You can't do arithmetic with infinity. It might be fun to speculate what $\infty$ times 0 is, but when this sort of expression crops up in Physics it means we have to admit we can't calculate what's going on. This is why you'll often hear it said that time started at the Big Bang. It's because we can't calculate backwards in time from that point. Now that doesn't necessarily mean there was no time before the Big Bang, it just means we have no way of calculating it from General Relativity. If you believe Loop Quantum Cosmology , this predicts that there was a bounce at the Big Bang, so we can follow geodesics back through the Big Bang and into an earlier universe. However this is highly speculative. Incidentally, you get exactly the opposite effect if you fall into a black hole. If you launch yourself into a static black hole your geodesic is described by a metric called the Schwarzschild metric . The metric allows you to calculate your path towards the centre of the black hole (the singularity) but when you reach the center the metric becomes infinite and you can't calculate it any further. It's often said that time stops at the central singularity in a black hole. | {
"source": [
"https://physics.stackexchange.com/questions/24018",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3935/"
]
} |
24,034 | Question: Is the Avogadro's constant equal to one? I was tasked with creating a presentation on Avogadro's work, and this is the first time I actually got introduced to the mole and to Avogadro's constant . And, to be honest, it doesn't make any mathematical sense to me. 1 mole = 6.022 * 10^23
Avogadro's constant = 6.022 * 10^23 * mole^(-1) What? This whole field seems very redundant. There are four names for the same thing! Since when is a number considered to be a measurement unit anyway?! | Yes, Avogadro's constant is a redundant artifact from the era in the history of chemistry in which people didn't know how many atoms there were in a macroscopic amount of a material and it is indeed legitimate to set Avogadro's constant equal to one and abandon the awkward obsolete unit "mole" along the way. This $N_A=1$ is equivalent to
$$ 1\,\,{\rm mole} = 6.022\times 10^{23} \text{ molecules or atoms} $$
and the text "molecules or atoms" is usually omitted because they're formally dimensionless quantities and one doesn't earn much by considering "one molecule" to be a unit (because its number is integer and everyone may easily agree about the size of the unit). We may use the displayed formula above to replace "mole" (or its power) in any equation by the particular constant (or its power) in the same way as we may replace the word "dozen" by 12 everywhere (hat tip: Mark Eichenlaub). We can only do so today because we know how many atoms there are in macroscopic objects; people haven't had this knowledge from the beginning which made the usage of a special unit "mole" justified. But today, the particular magnitude of "one mole" is an obsolete artifact of social conventions that may be eliminated from science. Setting $N_A=1$ is spiritually the same as the choice of natural units which have $c=1$ (helpful in relativity), $\hbar=1$ (helpful in any quantum theory), $G=1$ or $8\pi G=1$ (helpful in general relativity or quantum gravity), $k=1$ (helpful in discussions of thermodynamics and statistical physics: entropy may be converted to information and temperature may be converted to energy), $\mu_0=4\pi$ (vacuum permeability, a similar choice was done by Gauss in his CGSM units and with some powers of ten, it was inherited by the SI system as well: $4\pi$ is there because people didn't use the rationalized formulae yet) and others. See this article for the treatment of all these universal constants and the possible elimination of the independent units: http://motls.blogspot.com/2012/04/lets-fix-value-of-plancks-constant.html In every single case in this list, the right comment is that people used to use different units for quantities that were the same or convertible from a deeper physical viewpoint. (Heat and energy were another example that was unified before the 20th century began. Joule discovered the heat/energy equivalence which is why we usually don't use calories for heat anymore; we use joules both for heat and energy to celebrate him and the conversion factor that used to be a complicated number is one.) In particular, they were counting the number of molecules not in "units" but in "moles" where one mole turned out to be a very particular large number of molecules. Setting the most universal constants to one requires one to use "coherent units" for previously independent physical quantities but it's worth doing so because the fundamental equations simplify: the universal constants may be dropped. It's still true that if you use a general unit such as "one mole" for the amount (which is useful e.g. because you often want the number of moles to be a reasonable number comparable to one, while the number of molecules is unreasonably large), you have to use a complicated numerical value of $N_A$. One additional terminological comment: the quantity that can be set to one and whose units are inverse moles is called the Avogadro constant , while the term "Avogadro's number" is obsolete and contains the numerical value of the Avogadro constant in the SI units. The Avogadro constant can be set to one; Avogadro's number, being dimensionless and different from one, obviously can't. Also, the inverse of Avogadro's number is the atomic mass unit in grams, with the units of grams removed. 
It's important to realize that the actual quantities, the atomic mass unit (with a unit of mass) and the Avogadro constant are not inverse to each other at all, having totally different units (when it comes both to grams and moles). Moreover, the basic unit of mass in the SI system is really 1 kilogram, not 1 gram, although multiples and fractions are constructed as if 1 gram were the basic unit. | {
"source": [
"https://physics.stackexchange.com/questions/24034",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/5686/"
]
} |
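A small sketch of the bookkeeping described in the answer above, treating the mole as nothing more than a fixed large number, in the same spirit as "a dozen = 12". The water example is my own illustration, not part of the answer.

```python
N_A = 6.022e23            # Avogadro constant in 1/mol; as a pure number it is "one mole"

n_mol = 2.0                                       # "2 moles of water" is just a count:
print(f"{n_mol} mol = {n_mol * N_A:.3e} molecules")

print(f"1e22 molecules = {1e22 / N_A:.4f} mol")   # and the conversion runs both ways

# The inverse of Avogadro's *number* is the atomic mass unit expressed in grams:
print(f"1/N_A = {1 / N_A:.4e}   (compare u = 1.6605e-24 g)")
```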
24,068 | From what I understand, the uncertainty principle states that there is a fundamental natural limit to how accurately we can measure velocity and momentum at the same time. It's not a limit on equipment but just a natural phenomenon. However, isn't this just an observational limit? There is a definite velocity and momentum, we just don't know it. As in, we can only know so much about the universe, but the universe still has definite characteristics. Considering this, how do a wide range of quantum mechanical phenomena work? For example, quantum tunneling - it's based on the fact that the position of the object is indefinite. But the position is definite, we just don't know it definitely. Or the famous light slot experiment? The creation of more light slots due to uncertainty of the photon's positions? What I am basically asking is why is a limit on the observer affecting the phenomenon he is observing? Isn't that equivalent to saying because we haven't seen Star X, it doesn't exist? It's limiting the definition of the universe to the limits of our observation! | There is a definite velocity and momentum, we just don't know it. Nope. There is no definite velocity--this was the older interpretation. The particle has all (possible) velocities at once; it is in a wavefunction, a superposition of all of these states. This can actually be verified by stuff like the double-slit experiment with one photon--we cannot explain single-photon-fringes unless we accept the fact that the photon is in "both slits at once". So, it's not a knowledge limit. The particle really has no definite position/whatever. Isn't that equivalent to saying because we haven't seen Star X, it doesn't exist? It's limiting the definition of the universe to the limits of our observation! No, it's equivalent to saying "because we haven't gotten any evidence of Star X, it may or may not exist --its existence is not definite" Technically, an undetected object does exist as a wavefunction. Though it gets slightly philosophical and boils down to "If a tree falls in a forest and no one is around to hear it, does it make a sound?" | {
"source": [
"https://physics.stackexchange.com/questions/24068",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
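For scale, a minimal numerical illustration (mine, not the answer's) of how large the irreducible quantum spread is for an electron confined to an atom-sized region, using the Heisenberg relation Δx·Δp ≥ ħ/2; the 1 angstrom confinement length is an assumed round value.

```python
hbar = 1.055e-34      # J s
m_e  = 9.109e-31      # kg

dx = 1e-10                        # confine the electron to ~1 angstrom (assumed)
dp_min = hbar / (2 * dx)          # minimum possible momentum spread
dv_min = dp_min / m_e             # corresponding velocity spread

print(f"dp >= {dp_min:.2e} kg m/s  ->  dv >= {dv_min:.2e} m/s")
# ~6e5 m/s: the spread is not ignorance about a hidden definite value, it is the
# width of the wavefunction itself, which is what makes tunneling and
# interference possible in the first place.
```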
24,478 | Is there any good research done to find out the work done in clicking a mouse button? Any link to that would be greatly appreciated. P.S. I am not too sure whether this question belongs here or not, so please let me know, if it doesn't, I will remove it. I have already googled "work done to click a mouse" "mouse click research" and other relevant queries on google and google scholar, but only in vain! | To prove that experimental Physics is alive and well, I used my kitchen scales to measure the force needed to click the button on my mouse, and it turned out to be 100g i.e. 1 N plus or minus about 10%. The distance the button moves is about a millimeter i.e. 0.001m, plus or minus 20% (OK - you try measuring it without a micrometer to hand) so the work per click is 0.001J $\pm$ 22%. The mouse does 0.001J work on me while the button is rising again, but I have not noticed any invigorating effects from this. Note that I have ignored the work required to move my finger, i.e. I have assumed that I am 100% efficient (an approximation that my colleagues would question). All suggestions for refinements to this calculation will be gratefully ignored. | {
"source": [
"https://physics.stackexchange.com/questions/24478",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6487/"
]
} |
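The same back-of-the-envelope estimate as the answer above, written out with the error propagation made explicit. The 1 N, 1 mm, 10% and 20% figures are the answer's own rough numbers; the 5000-clicks-per-day comparison is my own illustrative addition.

```python
import math

F, rel_F = 1.0, 0.10       # click force ~1 N, +/- 10%
d, rel_d = 1.0e-3, 0.20    # button travel ~1 mm, +/- 20%

W = F * d                                    # work per click
rel_W = math.sqrt(rel_F**2 + rel_d**2)       # independent relative errors add in quadrature

print(f"W = {W * 1e3:.2f} mJ  +/- {rel_W * 100:.0f}%")     # ~1 mJ +/- 22%, as in the answer

clicks = 5000                                # a busy day at the computer (assumed)
print(f"{clicks} clicks ~ {clicks * W:.1f} J ~ {clicks * W / 4184:.4f} food Calories")
```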
24,485 | If we have a one dimensional system where the potential $$V~=~\begin{cases}\infty & |x|\geq d, \\ a\delta(x) &|x|<d, \end{cases}$$ where $a,d >0$ are positive constants, what then is the corresponding classical case -- the approximate classical case when the quantum number is large/energy is high? | To prove that experimental Physics is alive and well, I used my kitchen scales to measure the force needed to click the button on my mouse, and it turned out to be 100g i.e. 1 N plus or minus about 10%. The distance the button moves is about a millimeter i.e. 0.001m, plus or minus 20% (OK - you try measuring it without a micrometer to hand) so the work per click is 0.001J $\pm$ 22%. The mouse does 0.001J work on me while the button is rising again, but I have not noticed any invigorating effects from this. Note that I have ignored the work required to move my finger, i.e. I have assumed that I am 100% efficient (an approximation that my colleagues would question). All suggestions for refinements to this calculation will be gratefully ignored. | {
"source": [
"https://physics.stackexchange.com/questions/24485",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8913/"
]
} |
24,596 | Noether's (first) theorem states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Is the converse true: Any conservation law of a physical system has a differentiable symmetry of its action? | I) For a mathematical precise treatment of an inverse Noether's Theorem, one should consult e.g. Olver's book (Ref. 1, Thm. 5.58), as user orbifold also writes in his answer (v2). Here we would like give a heuristic and less technical discussion, to convey the heart of the matter, and try to avoid the language of jets and prolongations as much as possible. In popular terms, we would like to formulate an "inverse Noether machine" $$ \text{Input: Lagrangian system with known conservation laws} $$ $$ \Downarrow $$ $$ \text{[inverse Noether machine]} $$ $$ \Downarrow $$ $$ \text{Output: (quasi)symmetries of action functional} $$ Since this "machine" is supposed to be a mathematical theorem that should succeed everytime without exceptions (else it is by definition not a theorem!), we might have to narrow down the set/class/category of inputs that we allow into the machine in order not to have halting errors/breakdowns in the machinery. II) Let us make the following non-necessary restrictions for simplicity: Let us focus on point mechanics with a local action functional $$ S[q] ~=~ \int\! dt~ L(q(t), \frac{dq(t)}{dt}, \ldots,\frac{d^Nq(t)}{dt^N} ;t), \tag{1} $$ where $N\in\mathbb{N}_0$ is some finite order. Generalization to classical local field theory is straightforward. Let us restrict to only vertical transformations $\delta q^i$ , i.e., any horizontal transformation $\delta t=0$ vanishes. (Olver essentially calls these evolutionary vector fields, and he mentions that it is effectively enough to study those (Ref. 1, Prop. 5.52).) Let us assume, as Olver also does, that the Lagrangian $L$ and the transformations are real analytic $^{\dagger}$ . The following technical restrictions/extensions are absolutely necessary: The notion of symmetry $\delta S=0$ should be relaxed to quasisymmetry (QS). By definition a QS of the action $S$ only has to hold modulo boundary terms. (NB: Olver uses a different terminology: He calls a symmetry for a strict symmetry, and a quasisymmetry for a symmetry.) The notion of QS transformations might only make sense infinitesimally/as a vector field/Lie algebra. There might not exist corresponding finite QS transformations/Lie group. In particular, the QS transformations are allowed to depend on the velocities $\dot{q}$ . (Olver refers to this as generalized vector fields (Ref. 1, Def. 5.1).) III) Noether's Theorem provides a canonical recipe of how to turn a QS of the action $S$ into a conservation law (CL), $$ \frac{dQ}{dt}~\approx~0,\tag{2}$$ where $Q$ is the full Noether charge. (Here the $\approx$ symbol means equality on-shell, i.e. modulo the equations of motion (eom).) Remark 1: Apart from time $t$ , the QS transformations are only allowed to act on the variables $q^i$ that actively participate in the action principle. If there are passive external parameters, say, coupling constants, etc, the fact that they are constant in the model are just trivial CLs, which should obviously not count as genuine CLs. In particular, $\frac{d1}{dt}=0$ is just a trivial CL. Remark 2: A CL should by definition hold for all solutions, not just for a particular solution. Remark 3: A QS of the action $S$ is always implicitly assumed to hold off-shell. 
(It should be stressed that an on-shell QS of the action $$ \delta S \approx \text{boundary terms}\tag{3} $$ is a vacuous notion, as the Euler-Lagrange equations remove any bulk term on-shell.) Remark 4: It should be emphasized that a symmetry of eoms does not always lead to a QS of the Lagrangian, cf. e.g. Ref. 2, Example 1 below, and this Phys.SE post. Hence it is important to trace the off-shell aspects of Noether's Theorem. Example 1: A symmetry of the eoms is not necessarily a QS of the Lagrangian. Let the Lagrangian be $L=\frac{1}{2}\sum_{i=1}^n \dot{q}^i g_{ij} \dot{q}^j$ , where $g_{ij}$ is a constant non-degenerate metric. The eoms $\ddot{q}^i\approx 0$ have a $gl(n,\mathbb{R})$ symmetry $\delta q^{i}=\epsilon^i{}_j~q^{i}$ , but only an $o(n,\mathbb{R})$ Lie subalgebra of the $gl(n,\mathbb{R})$ Lie algebra is a QS of the Lagrangian. IV) Without further assumptions, there is a priori no guarantee that the Noether recipe will turn a QS into a non-trivial CL. Example 2: Let the Lagrangian $L(q)=0$ be the trivial Lagrangian. The variable $q$ is pure gauge. Then the local gauge symmetry $\delta q(t)=\epsilon(t)$ is a symmetry, although the corresponding CL is trivial. Example 3: Let the Lagrangian be $L=\frac{1}{2}\sum_{i=1}^3(q^i)^2-q^1q^2q^3$ . The eom are $q_1\approx q_2q_3$ and cyclic permutations. It follows that the positions $q^i\in\{ 0,\pm 1\}$ are constant. (Only $1+1+3=5$ out of the $3^3=27$ branches are consistent.) Any function $Q=Q(q)$ is a conserved quantity. The transformation $\delta q^i=\epsilon \dot{q}^i$ is a QS of the action $S$ . If we want to formulate a bijection between QSs and CLs, we must consider equivalence classes of QSs and CLs modulo trivial QSs and CLs, respectively. A QS transformation $\delta q^i$ is called trivial if it vanishes on-shell (Ref. 1, p.292). A CL is called trivial of first kind if the Noether current $Q$ vanishes on-shell. trivial of second kind if CL vanishes off-shell. trivial if it is a linear combination of CLs of first and second kinds (Ref. 1, p.264-265). V) The most crucial assumption is that the eoms are assumed to be (totally) non-degenerate. Olver writes (Ref. 1, Def. 2.83.): A system of differential equations is called totally non-degenerate if it and all its prolongations are both of maximal rank and locally solvable $^{\ddagger}$ . The non-degeneracy assumption exclude that the action $S$ has a local gauge symmetry. If $N=1$ , i.e. $L=L(q,\dot{q},t)$ , the non-degeneracy assumption means that the Legendre transformation is regular, so that we may easily construct a corresponding Hamiltonian formulation $H=H(q,p,t)$ . The Hamiltonian Lagrangian reads $$ L_H~=~p_i \dot{q}^i-H.\tag{4} $$ VI) For a Hamiltonian action functional $S_H[p,q] = \int\! dt~ L_H$ , there is a canonical way to define an inverse map from a conserved quantity $Q=Q(q,p,t)$ to a transformation of $q^i$ and $p_i$ by using the Noether charge $Q$ as Hamiltonian generator for the transformations, as also explained in e.g. my Phys.SE answer here . Here we briefly recall the proof. The on-shell CL (2) implies $$ \{Q,H\}+\frac{\partial Q}{\partial t}~=~0\tag{5} $$ off-shell, cf. Remark 2 and this Phys.SE post. The corresponding transformation $$ \delta q^i~=~ \{q^i,Q\}\epsilon~=~\frac{\partial Q}{\partial p_i}\epsilon\qquad \text{and}\qquad
\delta p_i~=~ \{p_i,Q\}\epsilon~=~-\frac{\partial Q}{\partial q^i}\epsilon\tag{6} $$ is a QS of the Hamiltonian Lagrangian $$\begin{align} \delta L_H ~\stackrel{(4)}{=}~&\dot{q}^i \delta p_i -\dot{p}_i \delta q^i -\delta H+\frac{d}{dt}(p_i \delta q^i)\cr
~\stackrel{(6)+(8)}{=}& -\dot{q}^i\frac{\partial Q}{\partial q^i}\epsilon
-\dot{p}_i\frac{\partial Q}{\partial p_i}\epsilon
-\{H,Q\}\epsilon + \epsilon \frac{d Q^0}{dt}\cr
~\stackrel{(5)}{=}~& \epsilon \frac{d (Q^0-Q)}{dt}~\stackrel{(9)}{=}~ \epsilon \frac{d f^0}{dt},\end{align}\tag{7} $$ because $\delta L_H$ is a total time derivative. Here $Q^0$ is the bare Noether charge $$ Q^0~=~ \frac{\partial L_H}{\partial \dot{q}^i} \{q^i,Q\}
+ \frac{\partial L_H}{\partial \dot{p}_i} \{p_i,Q\}
~=~ p_i \frac{\partial Q}{\partial p_i},\tag{8} $$ and $$ f^0~=~ Q^0-Q .\tag{9} $$ Hence the corresponding full Noether charge $$ Q~=~Q^0-f^0\tag{10} $$ is precisely the conserved quantity $Q$ that we began with. Therefore the inverse map works in the Hamiltonian case. Example 4: The non-relativistic free particle $L_H=p\dot{q}-\frac{p^2}{2m}$ has e.g. the two conserved charges $Q_1=p$ and $Q_2=q-\frac{pt}{m}$ . The inverse Noether Theorem for non-degenerate systems (Ref. 1, Thm. 5.58) can intuitively be understood from the fact, that: Firstly, there exists an underlying Hamiltonian system $S_H[p,q]$ , where the bijective correspondence between QS and CL is evident. Secondly, by integrating out the momenta $p_i$ we may argue that the same bijective correspondence holds for the original Lagrangian system. VII) Finally, Ref. 3 lists KdV and sine-Gordon as counterexamples to an inverse Noether Theorem. KdV and sine-Gordon are integrable systems with infinitely many conserved charges $Q_n$ , and one can introduce infinitely many corresponding commuting Hamiltonians $\hat{H}_n$ and times $t_n$ . According to Olver, KdV and sine-Gordon are not really counterexamples, but just a result of a failure to properly distinguishing between non-trivial and trivial CL. See also Ref. 4. References: P.J. Olver, Applications of Lie Groups to Differential Equations, 1993. V.I. Arnold, Mathematical methods of Classical Mechanics, 2nd eds., 1989, footnote 38 on p. 88. H. Goldstein, Classical Mechanics; 2nd eds., 1980, p. 594; or 3rd eds., 2001, p. 596. L.H. Ryder, Quantum Field Theory, 2nd eds., 1996, p. 395. $^{\dagger}$ Note that if one abandons real analyticity, say for $C^k$ differentiability instead, the analysis may become very technical and cumbersome. Even if one works with the category of smooth $C^\infty$ functions rather than the category of real analytic functions, one could encounter the Lewy phenomenon , where the equations of motion (eom) have no solutions at all! Such situation would render the notion of a conservation law (CL) a bit academic! However, even without solutions, a CL may formally still exists as a formal consequence of eoms. Finally, let us add that if one is only interested in a particular action functional $S$ (as opposed to all action functionals within some class) most often, much less differentiability is usually needed to ensure regularity. $^{\ddagger}$ Maximal rank is crucial, while locally solvable may not be necessary, cf. previous footnote. | {
"source": [
"https://physics.stackexchange.com/questions/24596",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2146/"
]
} |
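A tiny symbolic check of Example 4 in the answer above: for the free-particle Hamiltonian H = p²/2m, both Q₁ = p and Q₂ = q − pt/m satisfy ∂Q/∂t + {Q, H} = 0, i.e. eq. (5), and eq. (6) then gives the transformation each charge generates. A minimal SymPy sketch for one degree of freedom:

```python
import sympy as sp

q, p, t, m = sp.symbols('q p t m', real=True, positive=True)

def pb(A, B):
    """Canonical Poisson bracket {A, B} for a single (q, p) pair."""
    return sp.diff(A, q) * sp.diff(B, p) - sp.diff(A, p) * sp.diff(B, q)

H  = p**2 / (2 * m)
Q1 = p                   # conserved momentum
Q2 = q - p * t / m       # the second conserved charge of Example 4

for Q in (Q1, Q2):
    print(Q, '->', sp.simplify(sp.diff(Q, t) + pb(Q, H)))   # both print 0, i.e. eq. (5) holds

# Eq. (6): the transformation generated by Q2 (a Galilean-boost-type quasisymmetry)
print('delta q =', sp.diff(Q2, p), ',  delta p =', -sp.diff(Q2, q))
```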
24,895 | Every kid who first looks into a telescope is shocked to see that everything's black and white. The pretty colors, like those in this picture of the Sleeping Beauty Galaxy (M64) , are missing: The person running the telescope will explain to them that the color they see in pictures like those isn't real. They're called "false color images", and the colors usually represent light outside the visual portion of the electromagnetic spectrum. Often you see images where a red color is used for infrared light and purple for ultraviolet. Is this also correct for false color astronomy images? What colors are used for other parts of the spectrum? Is there a standard, or does it vary by the telescope the image was taken from or some other factor? | Part of why you don't see colors in astronomical objects through a telescope is that your eye isn't sensitive to colors when what you are looking at is faint. Your eyes have two types of photoreceptors: rods and cones. Cones detect color, but rods are more sensitive. So, when seeing something faint, you mostly use your rods, and you don't get much color. Try looking at a color photograph in a dimly lit room. As Geoff Gaherty points out, if the objects were much brighter, you would indeed see them in color. However, they still wouldn't necessarily be the same colors you see in the images, because most images are indeed false color. What the false color means really depends on the data in question. What wavelengths an image represents depends on what filter was being used (if any) when the image was taken, and the sensitivity of the detector (e.g. CCD) being used. So, different images of the same object may look very different. For example, compare this image of the Lagoon Nebula (M8) to this one . Few astronomers use filter sets designed to match the human eye. It is more common for filter sets to be selected based on scientific considerations. General purpose sets of filters in common use do not match the human eye: compare the transmission curves for the Johnson-Cousins UBVRI filters and the SDSS filters to the sensitivity of human cone cells . So, a set of images of an object from a given astronomical telescope may have images at several wavelengths, but these will probably not be exactly those that correspond to red, green, and blue to the human eye. Still, the easiest way for humans to visualise this data is to map these images to the red, green, and blue channels in an image, basically pretending that they are. In addition to simply mapping images through different filters to the RGB channels of an image, more complex approaches are sometimes used. See, for example, this paper (2004PASP..116..133L) . So, ultimately, what the colors you see in a false color image actually mean depends both on what data happened to be used to make the image and on the method of mapping preferred by whoever constructed the image. | {
"source": [
"https://physics.stackexchange.com/questions/24895",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
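A minimal sketch of the channel mapping described in the answer above: three co-aligned images taken through different filters are rescaled and stacked into the red, green and blue planes of a display image. The array names, the random placeholder data and the percentile stretch are my own assumptions, not a standard pipeline.

```python
import numpy as np

def stretch(img, lo_pct=1.0, hi_pct=99.5):
    """Clip to percentiles and rescale to 0..1 so faint structure becomes visible."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# Stand-ins for exposures through three different filters (e.g. i, r, g bands)
band_long, band_mid, band_short = (np.random.rand(256, 256) for _ in range(3))

rgb = np.dstack([stretch(band_long),     # longest wavelength  -> red channel
                 stretch(band_mid),      # middle band         -> green channel
                 stretch(band_short)])   # shortest wavelength -> blue channel
# 'rgb' is an (H, W, 3) float array ready for e.g. matplotlib's imshow; the
# colors it displays are a convention chosen by whoever built the image.
```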
24,912 | Is there a possible explanation for the apparent equal size of sun and moon or is this a coincidence? (An explanation can involve something like tide-lock effects or the anthropic principle.) | It just happens to be a coincidence. The current popular theory for how the Moon formed was a glancing impact on the Earth, late in the planet-building process, by a Mars-sized object. This caused the break up of the impactor and debris from both the impactor and the proto-Earth was flung into orbit to later coalesce into the Moon. So the Moon's size just happens to be random. Plus the Moon was formed closer to the Earth and due to tidal interactions is slowly drifting away. Over time (astronomical time, millions and millions of years) it will appear smaller and smaller in the sky. It will still always be roughly the size of the Sun but total solar eclipses will become rarer and rarer (they will be more and more annular or partial). Likewise in the past, it was larger and total eclipses were both longer and more common. | {
"source": [
"https://physics.stackexchange.com/questions/24912",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3415/"
]
} |
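The coincidence in the answer above is just two nearly equal ratios of radius to distance. A quick check with rounded present-day numbers (illustrative values):

```python
import math

R_sun, d_sun   = 6.96e8,  1.496e11    # m
R_moon, d_moon = 1.737e6, 3.84e8      # m, mean Earth-Moon distance

for name, R, d in (("Sun", R_sun, d_sun), ("Moon", R_moon, d_moon)):
    theta = 2 * math.degrees(math.atan(R / d))    # angular diameter
    print(f"{name}: ~{theta:.2f} degrees")
# Both come out near half a degree today. Since the Moon recedes by a few
# centimeters per year, its disk slowly shrinks relative to the Sun's, exactly
# as the answer describes for eclipses.
```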
24,927 | How do scientists measure the distance between objects in space? For example, Alpha Centauri is 4.3 light years away. | There are a variety of methods used to measure distance, each one building on the one before and forming a cosmic distance ladder. The first, which is actually only usable inside the solar system, is basic Radar and LIDAR . LIDAR is really only used to measure distance to the moon. This is done by flashing a bright laser through a big telescope (such as the 3.5 m on Apache Point in New Mexico (USA), see the Apollo Project ) and then measuring the faint return pulse with that telescope from the various corner reflectors placed there by the Apollo moon missions. This allows us to measure the distance to the Moon very accurately (down to centimeters I believe). Radar has been used at least out to Saturn by using the 305 m Arecibo
radio dish as both a transmitter and receiver to bounce radio waves off of Saturn's moons. Round trip radio time is on the order of almost 3 hours. If you want to get distances to things beyond our solar system, the first rung on the distance ladder is, as Wedge described in his answer, triangulation, or as it is called in astronomy, parallax. To measure distance in this manner, you take two images of a star field, one on each side of the Earth's orbit so you effectively have a baseline of 300 million kilometers. The closer stars will shift relative to the more distant background stars and by measuring the size of the shift, you can determine the distance to the stars. This method only works for the closest stars for which you can measure the shift. However, given today's technology, that is actually quite a few stars. The current best parallax catalog is the Tycho-2 catalog made from data observed by the ESA Hipparcos satellite in the late 1980s and early 1990s. Parallax is the only direct distance measurement we have on astronomical scales. (There is another method, the moving cluster method , but it has very limited applicability.) Beyond that everything else is based on data calibrated using stars for which we can determine parallax. And they all rely on some application of the distance-luminosity relationship $m - M = 5\log_{10}\left(\frac{d}{10\,\mathrm{pc}}\right)$ where m = apparent magnitude (brightness) of the object, M = absolute magnitude of the object (brightness at 10 parsecs), and d = distance in parsecs. Given two of the three you can find the third. For the closer objects, for which we know the distance, we can measure the apparent magnitude and thus compute the absolute magnitude. Once we know the absolute magnitude for a given type of object, we can measure the apparent magnitudes of these objects in more distant locations, and since we now have the apparent and absolute magnitudes, we can compute the distance to these objects. It is this relationship that allows us to define a series of "standard candles" that serve as ever more distant rungs on our distance ladder stretching back to the edge of the visible universe. The closest of these standard candles are the Cepheid variable stars. For these stars, the period of their variability is directly related to the absolute magnitude. The longer the period, the brighter the star. These stars can be seen in both our galaxy and in many of the closer galaxies as well. In fact, observing Cepheid variable stars in distant galaxies was one of the original primary missions of the Hubble Space Telescope (named after Edwin Hubble who measured Cepheids in M31 , the Andromeda Galaxy, thus proving that it was an "island universe" itself and not part of the Milky Way). Beyond the Cepheid variables, other standard candles, such as planetary nebulae, the Tully-Fisher relation and especially Type Ia supernovae, allow us to measure the distance to even more distant galaxies and out to the edge of the visible universe. All of these later methods are based on calibrations of distances made using Cepheid variable stars (hence the importance of the Hubble mission to really nail down those observations). | {
"source": [
"https://physics.stackexchange.com/questions/24927",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
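Two rungs of the ladder described above, in number form: a parallax distance and the distance-modulus relation $m - M = 5\log_{10}(d/10\,\mathrm{pc})$. The Alpha Centauri parallax and the Cepheid magnitudes below are illustrative round values.

```python
import math

# Rung 1: parallax. d [parsec] = 1 / p [arcsec]
p = 0.747                                   # approximate parallax of Alpha Centauri
d_pc = 1.0 / p
print(f"d ~ {d_pc:.2f} pc ~ {d_pc * 3.26:.1f} light years")     # ~4.4 ly

# Rung 2: a standard candle. Suppose a Cepheid's period implies M = -4.0
# and we measure m = +21.0 in a distant galaxy:
m, M = 21.0, -4.0
d = 10 ** ((m - M) / 5 + 1)                 # parsecs, inverted from the relation above
print(f"d ~ {d:.2e} pc ~ {d / 1e6:.1f} Mpc")
```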
24,934 | If a large star goes supernova, but not enough mass collapses to form a black hole, it often forms a neutron star. My understanding is that this is the densest object that can exist because of the Pauli exclusion principle: It's made entirely of degenerate matter, each particle of which cannot occupy the same quantum state of any other. So these objects are so massive that they gravitationally lens light. If you make them more massive, they bend the light more. Keep going and going until they bend the light so much that light passing near the surface can barely escape. It's still a neutron star. Add a bit more mass, just enough that light passing just over the surface cannot escape. Now it's a black hole with an event horizon (I think?). Does this mean the neutron star has become a singularity? Isn't it still just a neutron star just beneath the event horizon? Why are black holes treated as having a singularity instead of just an incredibly massive neutron star at its center? Does something happen when an event horizon is "created?" | Short answer is yes. But if you want to nit pick, I could argue that when a star collapses to form a BH, it first forms a horizon before the singularity forms (cannot form a "naked singularity"). And since time inside the horizon is essentially frozen with respect to that of an observer outside, the singularity NEVER forms. Yet from the point of view of the collapsing star, the singularity forms in about a millisecond after the horizon. | {
"source": [
"https://physics.stackexchange.com/questions/24934",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/417/"
]
} |
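A rough numerical version of the thought experiment in the question (a sketch only: the 12 km stellar radius is a typical textbook value, and the real collapse threshold is set by the nuclear equation of state at roughly 2-3 solar masses, not by this simple comparison):

```python
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / c**2

R_ns = 12e3                                  # assumed neutron-star radius, m
for m in (1.4, 2.0, 3.0, 5.0):               # masses in solar masses
    r_s = schwarzschild_radius(m * M_sun)
    tag = "horizon would swallow the star" if r_s > R_ns else "surface still outside r_s"
    print(f"M = {m:.1f} M_sun: r_s ~ {r_s / 1e3:.1f} km   ({tag})")
# In this crude comparison the horizon only overtakes a 12 km star above ~4
# solar masses; real neutron stars collapse earlier because matter cannot stay
# rigid that close to its own horizon, and once the horizon forms no static
# "neutron star underneath" can persist.
```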
24,958 | Considering that a black hole's gravity prevents light from escaping, how can a black hole emit X-rays? Visible light and X-rays are both electromagnetic radiation, shouldn't the black hole's gravity prevent X-rays from escaping? | The X-rays come from hot gas orbiting around the black hole in an accretion disk. As the gas orbits, magnetic stresses cause it to lose energy and angular momentum, thus spiralling slowly in towards the black hole. The orbital energy is transformed into thermal energy, heating up the gas to millions of degrees, so it then emits blackbody radiation in the X-ray band. Once the gas gets closer than a few times the horizon radius, it plunges into the black hole, so while some X-rays can still escape just before the horizon, most are emitted a fair bit outside. | {
"source": [
"https://physics.stackexchange.com/questions/24958",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
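To see why "millions of degrees" means X-rays, a one-line application of Wien's displacement law and E = hc/λ; the 10 million K temperature is an illustrative round number for the inner disk, not a quoted value.

```python
b, h, c, keV = 2.898e-3, 6.626e-34, 2.998e8, 1.602e-16   # Wien constant, Planck, c, J per keV

T = 1.0e7                          # ~10 million K (assumed)
lam_peak = b / T                   # wavelength where the blackbody spectrum peaks
E_peak = h * c / lam_peak          # corresponding photon energy

print(f"peak ~ {lam_peak * 1e9:.2f} nm, photon energy ~ {E_peak / keV:.1f} keV")
# ~0.3 nm and a few keV: squarely in the X-ray band, radiated by gas that is
# still outside the horizon, which is why the photons can escape at all.
```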
25,070 | I understand that we can never see much farther than the farthest galaxies we have observed. This is because, before the first galaxies formed, the universe was opaque--it was a soup of subatomic particles that scattered all light. But before the universe was opaque, the Big Bang happened, which is where the cosmic microwave background (CMB) comes from. If the opaque early universe scattered all light, and the first few galaxies are as far back as we can see, why is the CMB observable? Where is it coming from? | The cosmic microwave background does not originate with the big bang itself. It originates roughly 380,000 years after the big bang, when the temperature dropped far enough to allow electrons and protons to form atoms. When it was released, the cosmic microwave background wasn't microwave at all- the photons had higher energies. Since that time, they have been redshifted due to the expansion of the universe, and are presently in the microwave band. The universe is opaque from 380,000 years and earlier. The galaxies that we can see only formed after that time. Before that, all that is observable is the CMB. | {
"source": [
"https://physics.stackexchange.com/questions/25070",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/417/"
]
} |
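Putting numbers on the redshifting described above; the ~3000 K release temperature and z ≈ 1100 are standard round values used here only for illustration.

```python
T_release = 3000.0        # K, roughly when neutral atoms formed (~380,000 yr)
z = 1100.0                # approximate redshift of that epoch

T_now = T_release / (1 + z)        # blackbody temperature scales as 1/(1+z)
lam_peak = 2.898e-3 / T_now        # Wien's law, m

print(f"T today ~ {T_now:.2f} K, spectrum peaks near {lam_peak * 1e3:.2f} mm")
# ~2.7 K with a peak near 1 mm: light released as visible/near-infrared back
# then arrives today stretched into microwaves, which is the observed CMB.
```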
25,110 | Almost all of the orbits of planets and other celestial bodies are elliptical , not circular . Is this due to gravitational pull by other nearby massive bodies? If this was the case a two body system should always have a circular orbit . Is that true? | No, any ellipse is a stable orbit, as shown by Johannes Kepler . A circle happens to be one kind of ellipse, and it's not any more likely or preferable than any other ellipse. And since there are so many more non-circular ellipses (infinitely many), it's simply highly unlikely for two bodies to orbit each other in a perfect circle. | {
"source": [
"https://physics.stackexchange.com/questions/25110",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1512/"
]
} |
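A minimal numerical experiment backing up the answer above: start a body with a non-circular velocity in an inverse-square field and it settles into a closed, stable ellipse. This is a sketch in dimensionless units (GM = 1) with arbitrary initial conditions, using a velocity-Verlet integrator.

```python
import numpy as np

GM = 1.0
r = np.array([1.0, 0.0])        # initial position
v = np.array([0.0, 0.8])        # slower than the circular speed (1.0) -> an ellipse
dt, steps = 1e-3, 60000         # ~15 orbital periods

def acc(r):
    return -GM * r / np.linalg.norm(r)**3

radii, a = [], acc(r)
for _ in range(steps):          # velocity Verlet: symplectic, so energy barely drifts
    r = r + v * dt + 0.5 * a * dt**2
    a_new = acc(r)
    v = v + 0.5 * (a + a_new) * dt
    a = a_new
    radii.append(np.linalg.norm(r))

E = 0.5 * np.dot(v, v) - GM / np.linalg.norm(r)
print(f"r stays between {min(radii):.3f} and {max(radii):.3f}; energy = {E:.5f}")
# The radius just oscillates between perihelion and aphelion, orbit after orbit,
# and the energy holds its initial value (-0.68): the ellipse is perfectly stable.
```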
25,123 | A couple of years ago my son showed an interest in astronomy and we bought a 6" reflector telescope. We use it pretty regularly and have enjoyed it immensely. Lately we've both been wishing we had something bigger to be able to see more things and to see what we can see now with more detail. How do you determine the size of telescope needed to view a certain object? I understand that there are a lot of other factors that come into play when talking about what you can see and how well you can see it. Ideally I suppose what I'm looking for is some sort of chart/table that gives a general guideline of the scope size and some of the objects that should be viewable (with an average setup). Are there any such resources? | There are lots of mathematical answers to this question, but I'd like to make a few qualitative observations instead, based on 54 years using telescopes of all kinds and sizes, from 40mm refractors to 74-inch reflectors. Unless you have some specialized purpose, don't consider anything smaller than 6 inches aperture. Small telescopes look cute, but don't show you much, especially if you're a beginner. Experienced observers can tease amazing observations out of tiny scopes, but most of us will be happier to give these a pass. Aperture wins. A 10-inch Newtonian on a Dobsonian mount is something of a "sweet spot." It's about the smallest aperture to show significant detail in deep sky objects, yet is compact and light enough to be easily transported to dark sky sites. Above 10-inches, the more aperture the better, provided you can comfortably transport, set up, and operate it . This is crucial! The nicest telescope in the world is useless if it never gets used. I find even a 12-inch Dob becomes bulky, cumbersome, and heavy. Aperture wins, but only if you use it. | {
"source": [
"https://physics.stackexchange.com/questions/25123",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
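If you do want a number to go with the qualitative advice above, the usual rule of thumb is that light grasp scales with the square of the aperture, and every factor of about 2.5 in light is one magnitude. A sketch comparing a 6-inch with larger apertures; it ignores sky brightness, optical quality and the central obstruction, which matter in practice.

```python
import math

def magnitude_gain(d_new_in, d_old_in=6.0):
    """Limiting-magnitude gain from aperture alone: 2.5*log10 of the area ratio."""
    return 2.5 * math.log10((d_new_in / d_old_in) ** 2)

for d in (8, 10, 12, 16):
    ratio = (d / 6.0) ** 2
    print(f'{d}" vs 6": {ratio:.1f}x the light, ~{magnitude_gain(d):.1f} mag fainter')
# A 10" gathers ~2.8x the light of a 6" (about 1.1 magnitudes deeper); a 16"
# about 7x (~2.1 magnitudes). Whether you will actually haul it outside is,
# as the answer says, the part the arithmetic cannot capture.
```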
25,128 | A spiral arm orbiting a central mass should be dispersed quite quickly as the outer elements would move more slowly than the inner ones. The Milky Way is about 59 Galactic Years old , which, one would have thought, would be enough rotations to disperse a spiral structure entirely. Is there, then, something keeping the spiral arms in existence, and if so what could it be? Or are the spiral galaxies monstrous co-incidences? | The material (gas and stars) in the outer part of a galaxy moves with roughly the same velocity as the inner part (for example, see this paper ), which means that the inner portions do indeed have a faster angular speed; this is sometimes referred to as the "winding problem." One important feature of spiral arms is that they are bright more because they have lots of young stars than because they have extra material. Young populations of stars include bright, short-lived, blue stars, which die off over time, leaving the fainter, redder populations. Populations of these young stars are particularly apparent in images like this one . Because of this population of bright, young stars, the density of matter in spiral arms compared to the non-arm disk is not as great as their brightness would suggest. The usual explanation for spiral arms is that they are the result of density waves rather than moving structures. See this paper for a short review. | {
"source": [
"https://physics.stackexchange.com/questions/25128",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/612/"
]
} |
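The "winding problem" mentioned above, in numbers: with a roughly flat rotation curve (similar linear speed at all radii), inner material completes far more turns than outer material, so an arm made of a fixed set of stars would wrap up many times. The speed, radii and age below are typical round values chosen for illustration.

```python
import math

v = 220e3                          # assumed rotation speed, m/s, roughly flat with radius
kpc = 3.086e19                     # m per kiloparsec
t = 10e9 * 3.156e7                 # ~10 Gyr in seconds

for r_kpc in (5, 10, 15):
    turns = v * t / (2 * math.pi * r_kpc * kpc)
    print(f"r = {r_kpc:2d} kpc: ~{turns:.0f} revolutions in 10 Gyr")
# ~72 turns at 5 kpc versus ~24 at 15 kpc: a material arm would have wound up
# dozens of times, yet real spirals show only a turn or two -- which is why the
# density-wave picture in the answer is needed.
```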
25,131 | I have heard multiple estimates on the quantity of stars within our galaxy, anything from 100 to 400 billion of them. The estimates seem to be increasing for the time being. What are the main methods that are used to make these estimates, and why are there such large discrepancies between them? | The estimates I've read are similar to yours: 200 to 400 billion stars. Counting the stars in the galaxy is inherently difficult because, well, we can't see all of them. We don't really count the stars, though. That would take ages: instead we measure the orbit of the stars we can see. By doing this, we find the angular velocity of the stars and can determine the mass of the Milky Way. But the mass isn't all stars. It's also dust, gas, planets, Volvos, and most overwhelmingly: dark matter . By observing the angular momentum and density of stars in other galaxies, we can estimate just how much of our own galaxy's mass is dark matter. That number is close to 90%. So we subtract that away from the mass, and the rest is stars (other objects are more-or-less insignificant at this level). The mass alone doesn't give us a count though. We have to know about how much each star weighs, and that varies a lot. So we have to class different types of stars, and figure out how many of each are around us. We can extrapolate that number and turn the mass into the number of stars. Obviously, there's a lot of error in this method: it's hard to measure the orbit of stars around the galactic center because they move really, really slowly . So we don't know exactly how much the Milky Way weighs, and figuring out how much of that is dark matter is even worse. We can't even see dark matter, and we don't really understand it either. Extrapolating the concentrations of different classes of stars is inexact, and at best we can look at other galaxies to confirm that the far side of the Milky Way is probably the same as this one. Multiply all those inaccuracies together and you get a range on the order of 200 billion. | {
"source": [
"https://physics.stackexchange.com/questions/25131",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3395/"
]
} |
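The last step of the method described above, reduced to arithmetic. Every number here is an assumed round value, which is precisely why the published totals spread from roughly 100 to 400 billion.

```python
M_stellar = 6e10           # assumed stellar (non-dark-matter) mass of the Milky Way, solar masses
mean_star_mass = 0.3       # assumed average stellar mass, solar masses (most stars are small red dwarfs)

N = M_stellar / mean_star_mass
print(f"~{N:.1e} stars")   # ~2e11, i.e. a couple of hundred billion

# Nudging either assumption by a factor of ~2 sweeps the result across the whole
# 100-400 billion range quoted in the question, which is the answer's point.
```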
25,162 | Pluto has been designated a planet in our solar system for years (ever since it was discovered in the last century), but in 2006 it was demoted. What caused this decision? And is there a chance that it could be reversed? Edit: well, http://www.dailygalaxy.com/my_weblog/2017/03/nasas-new-horizon-astronomers-declare-pluto-is-a-planet-so-is-jupiters-ocean-moon-europa.html is interesting; this is science, so anything could (potentially) change. | Pluto is now classified as a dwarf planet . The main difference between a planet and a dwarf planet has to do with the requirement that a planet clear out the material in and near its orbit. Planets do this, dwarf planets do not. The reclassification was triggered by the discovery of many additional objects (the Edgeworth-Kuiper Belt) out beyond the orbit of Neptune. Some of the objects are nearly as big as (and in a few cases, possibly bigger than) Pluto and in very similar orbits. Thus it was realized that Pluto was just the largest of a large number of objects in the outer solar system. This is simply science at work. At the local university, we have an Astronomy textbook from the 1800's that lists the 12 planets: Mercury, Venus, Earth, Mars, Ceres, Pallas, Juno, Vesta, Jupiter, Saturn, Uranus, and Neptune. However, as more objects were detected between Mars and Jupiter, it was realized this was a new class of object and the middle four were downgraded from planet status to asteroids. It is the same process at work today out in the outer solar system. | {
"source": [
"https://physics.stackexchange.com/questions/25162",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1153/"
]
} |
25,209 | The Voyager 1 & 2 spacecraft launched in 1977 with Plutonium as their source of electricity. 34 years later they claim these two spacecraft have enough power to last them until at least 2020. That means they'll have had enough power to last them at least 42 years. It obviously offers enough power to literally send transmissions across the entire solar system. Why don't modern spacecraft use nuclear power if it offers such longevity and power? You would think that 34 years later we would have the technology to make this an even more viable source of electricity than when the Voyagers were designed and built. The New Frontiers spacecraft seems like an excellent candidate for nuclear power. | It's all a question of whether they need it. Most spacecraft that stay within a couple of AU of the sun can get sufficient power from solar panels. It's when they start getting further away that they use an RTG . For example, New Horizons , launched in 2006 (which still counts as 'modern' when you only launch a few probes per year), is going to Pluto, so it won't be able to get sufficient power from solar panels, and uses an RTG. Like anything else, it's a question of risk and cost. If it's cheaper, or lower risk without significantly increased cost, they'll go with the alternative. | {
"source": [
"https://physics.stackexchange.com/questions/25209",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
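To put a number on the longevity mentioned above, here is a minimal sketch of how an RTG's output decays; the Pu-238 half-life is an assumed round value, and real electrical output falls somewhat faster because the thermocouples also degrade.

```python
# Fraction of a Pu-238 RTG's initial thermal power remaining after a given time.
HALF_LIFE_YEARS = 87.7   # approximate half-life of Pu-238 (assumed)

def power_fraction(years: float) -> float:
    """Remaining fraction of the launch-time thermal power."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

for t in (10, 34, 43):
    print(f"{t:2d} years: {power_fraction(t):.2f}")
# 10 years: 0.92, 34 years: 0.76, 43 years: 0.71
```

That slow decline is why the Voyagers could still be transmitting more than 40 years after launch, and also why RTGs are reserved for missions that genuinely need them.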
25,221 | In Frontiers of Astronomy , Fred Hoyle advanced an idea from E.E.R. Holmberg that although the Earth's day was originally much shorter than it is now, and has lengthened owing to tidal friction, further increases in the length of the day would not occur because of resonance effects in the atmosphere caused by the gravitational field of the sun. I haven't heard that suggested anywhere else, though, and wondered whether it was really the case, or is the length of the day increasing? | The length of a day is increasing slowly. The rate is very slow, about 0.0017 seconds per century. The length of the SI day is based upon the mean solar day between 1750 and 1892. A mean day nowadays (!) is about 0.002 seconds longer, which means that we accumulate a difference of about 0.6 seconds every year. This is why we have leap seconds (a leap second is when we stop our clocks for one second to let the earth "catch up"). From the Wikipedia article on leap seconds: The leap second adjustment (which is approximately 0.6 seconds per year) is necessary because of the difference between the length of the SI day (based on the mean solar day between 1750 and 1892) and the length of the current mean solar day (which is about 0.002 seconds longer). The difference between these two will increase with time, but only by 0.0017 seconds per century. In other words, the adjustment is required because we have decoupled the definition of the second from the current rotational period of the Earth. The actual rotational period varies due to unpredictable factors such as the motion of mass within Earth, and has to be observed rather than computed. Also see: ΔT . | {
"source": [
"https://physics.stackexchange.com/questions/25221",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/612/"
]
} |
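The figures quoted above can be cross-checked with one line of arithmetic (a sketch using the approximate numbers from the answer; both the 0.002 s excess and the quoted ~0.6 s/year are rounded, so they only agree to that precision).

```python
# How a mean solar day ~0.002 s longer than 86400 SI seconds accumulates.
excess_per_day_s = 0.002
drift_per_year_s = excess_per_day_s * 365.25
print(f"{drift_per_year_s:.2f} s of accumulated drift per year")   # ~0.73 s/yr
```

That accumulated drift, of order one second per year, is what leap seconds absorb; the much smaller 0.0017 s/century figure is the rate at which the day itself keeps lengthening.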
25,252 | From what I know, the Moon is accelerating away from the Earth. Do we know when it will reach escape velocity? How do we calculate this? | It will never reach such a high velocity. The moon is drifting further from the earth due to tidal acceleration . This process is, at the same time, slowing the rotation of the earth. Once the earth's rotational period matches the moon's orbital period, the earth-moon system will be tidally locked to each other (note: the moon is already tidally locked to the earth), and the acceleration will cease. To briefly explain the mechanism, the gravitational pull between the earth and the moon causes tidal "bulges" to extend out on both bodies (just like the ocean tides, except that the entire surface moves slightly, not just the water). Since the earth rotates faster than the moon completes one orbit, the bulge on the earth lies slightly ahead of the earth-moon line, because the earth is rotating so quickly. This bulge gravitationally pulls on the moon, speeding it up, while at the same time the moon pulls on the bulge, creating a torque on the earth and slowing down its rotation. | {
"source": [
"https://physics.stackexchange.com/questions/25252",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/898/"
]
} |
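A compact way to see why the recession must eventually stop, as the answer above explains, is angular-momentum conservation for the Earth-Moon system. This is a simplified sketch: it treats the lunar orbit as circular and ignores the Moon's own spin and the solar tide.

$$ L_{\rm tot} \;=\; I_\oplus\,\omega_\oplus \;+\; m_{\rm Moon}\sqrt{G M_\oplus\, a} \;\approx\; \text{constant}, $$

where the first term is Earth's spin angular momentum and the second is the Moon's orbital angular momentum at orbital radius $a$. Tidal friction transfers angular momentum from the first term to the second, so as Earth's spin rate $\omega_\oplus$ drops, $a$ must grow. Once $\omega_\oplus$ matches the Moon's orbital angular velocity (mutual tidal locking), the transfer stops and so does the outward drift; the Moon never approaches escape velocity.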
25,275 | This is the Arecibo Observatory in Arecibo , Puerto Rico . Its reflector is spherical, measuring 1,001 ft. in diameter. It is considered the most sensitive radio telescope on Earth, but the fact that its reflector is spherical and not parabolic makes me wonder how much more sensitive it could be if the reflector were parabolic. What are the pros and cons of a sphere vs. parabolic? | It's spherical because the main dish cannot be steered; steering is done by moving the receiver (the big thing hanging over the center of the reflector). A parabolic reflector would produce varying errors when aimed in different directions; a spherical reflector has the same error for all directions. Presumably the receiver is designed to compensate for this. Source: Wikipedia . | {
"source": [
"https://physics.stackexchange.com/questions/25275",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
25,369 | I understand that the event horizon of a black hole forms at the radius from the singularity where the escape velocity is $c$ . But it's also true that you don't have to go escape velocity to escape an object if you can maintain some kind of thrust. You could escape the earth at 1 km/h if you could maintain the proper amount of thrust for enough time. So if you pass just beneath the event horizon, shouldn't you be able to thrust your way back out, despite the $>c$ escape velocity? Or does this restriction have to do solely with relativistic effects (time stopping for outside observers at the event horizon)? | @Florin is absolutely right, but sometimes a picture is worth a thousand words. This website has multiple pictures and explanations about how the future light cone starts to point only to the inside of the black hole once you pass the event horizon. Here is one of the images: Time is vertical, the cylinder represents the event horizon and the cones are the future light cones for the observer as they fall into the black hole. Note that even if the observer could instantaneously accelerate to almost the speed of light they would still be confined to the future light cone. So once the event horizon is crossed they cannot escape. UPDATE: As @Florin says in a comment, rotating black holes are even stranger than than the non rotating Schwarzchild black hole described above. In particular there is a region outside of the event horizon, called the ergosphere where space-time itself is dragged around the black hole at faster than the speed of light (relative to distant stars). It is possible to enter the ergosphere and still escape to infinity. In fact some of the black holes rotational energy can be extracted in this region and may be source of energy for gamma ray bursts. I'd also like to point out that there is no local experiment that can be performed to determine when the observer has crossed the event horizon. It will "feel" perfectly normal - it is only the future destination of light rays that changes when the horizon is crossed and that cannot be determined locally. In fact the "normal Newtonian" concept of the "force" of gravity doesn't have to be particularly strong at the horizon. A more massive black hole has a lower event horizon "gravity force". A hand waving way of seeing this is that the Newtonian force is proportional to $\frac{1}{R^2}$ but the event horizon radius is proportional to the mass, $M$ ; so the surface force is proportional to $\frac{1}{M}$ . See this Wikipedia article for a more rigorous discussion of what it might even mean to talk about the concept of "force" in General Relativity. | {
"source": [
"https://physics.stackexchange.com/questions/25369",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/417/"
]
} |
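The closing remark above, that a more massive black hole has a weaker "Newtonian" pull at its horizon, can be made concrete with the Schwarzschild radius. This is a heuristic sketch; "surface gravity" is defined more carefully in full general relativity.

$$ r_s = \frac{2GM}{c^2}, \qquad g_{\rm horizon} \sim \frac{GM}{r_s^2} = \frac{c^4}{4GM} \;\propto\; \frac{1}{M}, $$

so doubling the mass doubles the horizon radius but halves the naive surface gravity there.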
25,591 | If the big bang was the birth of everything, and the big bang was an event in the sense that it had a location and a time (time 0), wouldn't that mean that our universe has a center? Where was the big bang? What is there now? Are we moving away from the center still? Are these even valid questions? | The big bang was everywhere, because distance didn't exist before it (or, as @forest more accurately commented, "measuring distances was meaningless since the distance between any two discrete points was zero"), so from one perspective everywhere may be the centre (especially as, some theorists think, the universe doesn't have an edge). The real issue is that the question shouldn't matter: we can only gain information from within our visible radius, and as we look toward that limit, what we see gets closer to the big bang, so everything looks closer to the centre. Tricky, eh? | {
"source": [
"https://physics.stackexchange.com/questions/25591",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7395/"
]
} |
25,650 | I understand that the Mollweide projection is used to show the map of the universe. Although I understand how this projection can be interesting for Earth where most populated (and of interest) areas are not at polar latitudes, I imagine that in the sky the distribution of interesting places does not follow the same pattern of accentuating the equatorial plane. Do you think this is the best projection to represent the Universe? Possible tags: map cosmos universe projection | The equatorial plane of the Mollweide projection applied to the whole sky is usually the plane of the Milky Way. We're blatant galactic chauvinists that way. More importantly, this projection preserves area, which is more important than angular relations for this sort of thing. For instance, when studying the CMB, in increasing level of detail, the most important questions are: What is the overall average temperature of the CMB? (Average over area) What direction is the overall dipole moment? (Technical way of asking which direction the Earth is moving relative to the average Universe. Easy to grasp in Mollweide. See http://aether.lbl.gov/www/projects/u2/ ) What is the size distribution of its irregularities? (In other words, how grainy is it? Area, area, area) | {
"source": [
"https://physics.stackexchange.com/questions/25650",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
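For reference, the equal-area property praised in the answer above follows from the defining equations of the Mollweide projection, quoted here in their standard textbook form ($\lambda$ is longitude, $\varphi$ latitude, $R$ the sphere radius, and $\theta$ an auxiliary angle):

$$ x = \frac{2\sqrt{2}}{\pi}\,R\,\lambda\cos\theta, \qquad y = \sqrt{2}\,R\sin\theta, \qquad 2\theta + \sin 2\theta = \pi\sin\varphi, $$

which is constructed precisely so that equal patches of sky map to equal areas on the page, at the cost of distorting shapes near the edges.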
25,759 | How exactly does time slow down near a black hole? I have heard this as a possible way of time traveling, and I do understand that it is due in some way to the massive gravity around a black hole, but how exactly does that massive gravity slow down time? | This web page provides a good explanation . To oversimplify the explanation, you have to understand the curvature of space time around a black hole. The basic principle is that because of the curvature of spacetime around a black hole, the amount of "distance" a beam of light has to cover is greater near a black hole. However, to an observer in that gravitational field, light must appear to always be 300,000 km/sec, time has to slow down for that individual as compared to someone outside that gravitational field as related by the time/distance relationship of speed. Or as the web page says: If acceleration is equivalent to gravitation, it follows that the predictions of Special Relativity must also be valid for very strong gravitational fields. The curvature of spacetime by matter therefore not only stretches or shrinks distances, depending on their direction with respect to the gravitational field, but also appears to slow down the flow of time. This effect is called gravitational time dilation. In most circumstances, such gravitational time dilation is minuscule and hardly observable, but it can become very significant when spacetime is curved by a massive object, such as a black hole. A black hole is the most compact matter imaginable. It is an extremely massive and dense object in space that is thought to be formed by a star collapsing under its own gravity. Black holes are black, because nothing, not even light, can escape from its extreme gravity. The existence of black holes is not yet firmly established. Major advances in computation are only now enabling scientists to simulate how black holes form, evolve, and interact. They are betting on powerful instruments now under construction to confirm that these exotic objects actually exist. This web page provides a large series of links for further research into the subject: http://casa.colorado.edu/~ajsh/relativity.html | {
"source": [
"https://physics.stackexchange.com/questions/25759",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4179/"
]
} |
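The qualitative statement above has a simple quantitative form for a non-rotating (Schwarzschild) black hole; this is the standard textbook expression, added here as a supplement rather than taken from the linked page:

$$ \frac{\mathrm{d}\tau}{\mathrm{d}t} \;=\; \sqrt{1 - \frac{2GM}{r c^2}} \;=\; \sqrt{1 - \frac{r_s}{r}}, $$

so a clock held at radius $r$ ticks slower than a distant observer's clock, and the rate formally goes to zero as $r \to r_s$, which is the "time slowing down" effect described above.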
25,802 | I mean besides the obvious "it has to have finite mass or it would suck up the universe." A singularity is a dimensionless point in space with infinite density, if I'm not mistaken. If something is infinitely dense, must it not also be infinitely massive? How does a black hole grow if everything that falls into it merges into the same singularity, which is already infinitely dense? | If something is infinitely dense, must it not also be infinitely massive? Nope. The singularity is a point where volume goes to zero, not where mass goes to infinity. It is a point with zero volume, but which still holds mass, due to the extreme stretching of space by gravity. The density is $\frac{mass}{volume}$, so we say that in the limit $volume\rightarrow 0$, the density goes to infinity, but that doesn't mean mass goes to infinity. The reason that the volume is zero rather than the mass is infinite is easy to see in an intuitive sense from the creation of a black hole. You might think of a volume of space with some mass which is compressed due to gravity. Normal matter is no longer compressible at a certain point due to Coulomb repulsion between atoms, but if the gravity is strong enough, you might get past that. You can continue compressing it infinitely (though you'll probably have to overcome some other force barriers along the way) - until it has zero volume. But it still contains mass! The mass can't just disappear through this process. The density is infinite, but the mass is still finite. | {
"source": [
"https://physics.stackexchange.com/questions/25802",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/417/"
]
} |
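A related point, consistent with the answer above: if you loosely define a black hole's mean density using the volume enclosed by its event horizon instead of the singularity itself, that density is finite and actually decreases as the hole grows. As a heuristic estimate using the Schwarzschild radius $r_s = 2GM/c^2$:

$$ \bar{\rho} \;\sim\; \frac{M}{\tfrac{4}{3}\pi r_s^{3}} \;=\; \frac{3c^{6}}{32\pi G^{3} M^{2}} \;\propto\; \frac{1}{M^{2}}, $$

so a supermassive black hole can have a mean density lower than that of water, even though the density at the singularity diverges.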
25,806 | We all know that our universe is inflating from what is known as the Big Bang . However, will our universe continue to inflate at the current rate? Or after reaching a maximum size, will it collapse in a Big Crunch ? | If something is infinitely dense, must it not also be infinitely massive? Nope. The singularity is a point where volume goes to zero, not where mass goes to infinity. It is a point with zero volume, but which still holds mass, due to the extreme stretching of space by gravity. The density is $\frac{mass}{volume}$, so we say that in the limit $volume\rightarrow 0$, the density goes to infinity, but that doesn't mean mass goes to infinity. The reason that the volume is zero rather than the mass is infinite is easy to see in an intuitive sense from the creation of a black hole. You might think of a volume of space with some mass which is compressed due to gravity. Normal matter is no longer compressible at a certain point due to Coulomb repulsion between atoms, but if the gravity is strong enough, you might get past that. You can continue compressing it infinitely (though you'll probably have to overcome some other force barriers along the way) - until it has zero volume. But it still contains mass! The mass can't just disappear through this process. The density is infinite, but the mass is still finite. | {
"source": [
"https://physics.stackexchange.com/questions/25806",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/8061/"
]
} |
25,924 | Forgive the elementary nature of this question: Because a new moon occurs when the moon is positioned between the earth and sun, doesn't this also mean that somewhere on the Earth, a solar eclipse (or partial eclipse) is happening? What, then, is the difference between a solar eclipse and a new moon? | Briefly: because the moon's orbit "wobbles" up and down, it isn't always in the plane of the earth's orbit around the sun. There's a 2D plane you can form from the ellipse of the earth's orbit and the sun. This plane is known as the ecliptic . The moon's orbit is not exactly in the ecliptic at all times; see this (slightly overcomplicated) picture from Wikipedia: So the moon has got its own orbital plane, separate from the ecliptic. This orbital plane "wobbles" around - there are two points of the lunar orbital plane which intercept the ecliptic, known as the "nodes," and these nodes rotate around the earth periodically. The moon will only pass right in front of the sun and cause an eclipse when one of the two nodes is along the line of sight to the sun and right in the ecliptic plane (hence the name "ecliptic"). | {
"source": [
"https://physics.stackexchange.com/questions/25924",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1638/"
]
} |
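A rough numerical sketch of the node geometry described above. The ~5.1 degree inclination and the ~1.5 degree latitude limit for at least a partial solar eclipse are approximate, assumed values for illustration, not precise eclipse limits.

```python
import math

# Moon's ecliptic latitude as a function of its angular distance u from a node,
# for an orbit inclined ~5.1 degrees to the ecliptic.
INCLINATION = math.radians(5.1)
ECLIPSE_LIMIT = math.radians(1.5)   # rough: Sun + Moon angular radii plus lunar parallax

for u_deg in (0, 5, 10, 15, 20, 30):
    beta = math.asin(math.sin(INCLINATION) * math.sin(math.radians(u_deg)))
    possible = abs(beta) < ECLIPSE_LIMIT
    print(f"u = {u_deg:2d} deg -> ecliptic latitude {math.degrees(beta):5.2f} deg, eclipse possible: {possible}")
# Only new moons falling within roughly 15-20 degrees of a node can produce a solar eclipse,
# which is why most new moons pass without one.
```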
25,928 | It seems that we are moving relative to the universe at the speed of ~ 600 km/s .
This is the speed of our galaxy relative to the cosmic microwave background. Where does this rest frame come from? Is this special in any way (i.e., an absolute frame)? EDIT: I think the more important question is "where does the CMB rest frame come from?". | I found this answer at Professor Douglas Scott's FAQ page . He researches CMB and cosmology at the University of British Columbia. How come we can tell what motion we have with respect to the CMB? Doesn't this mean there's an absolute frame of reference? The theory of special relativity is based on the principle that there are no preferred reference frames. In other words, the whole of Einstein's theory rests on the assumption that physics works the same irrespective of what speed and direction you have. So the fact that there is a frame of reference in which there is no motion through the CMB would appear to violate special relativity! However, the crucial assumption of Einstein's theory is not that there are no special frames, but that there are no special frames where the laws of physics are different. There clearly is a frame where the CMB is at rest, and so this is, in some sense, the rest frame of the Universe. But for doing any physics experiment, any other frame is as good as this one. So the only difference is that in the CMB rest frame you measure no velocity with respect to the CMB photons, but that does not imply any fundamental difference in the laws of physics. “Where does it come from?” is also answered: Where did the photons actually come from? A very good question. We believe that the very early Universe was very hot and dense. At an early enough time it was so hot, ie there was so much energy around, that pairs of particles and anti-particles were continually being created and annihilated again. This annihilation makes pure energy, which means particles of light - photons. As the Universe expanded and the temperature fell the particles and anti-particles (quarks and the like) annihilated each other for the last time, and the energies were low enough that they couldn't be recreated again. For some reason (that still isn't well understood) the early Universe had about one part in a billion more particles than anti-particles. So when all the anti-particles had annihilated all the particles, that left about a billion photons for every particle of matter. And that's the way the Universe is today! So the photons that we observe in the cosmic microwave background were created in the first minute or so of the history of the Universe. Subsequently they cooled along with the expansion of the Universe, and eventually they can be observed today with a temperature of about 2.73 Kelvin. EDIT: @starwed points out in the comments that there may be some confusion as to whether someone in the rest frame is stationary with respect to the photons in the rest frame. I found a couple more questions on Professor Scott's excellent email FAQ page to clarify the concept. In your answer to the "How come we can tell what motion we have with respect to the CMB?" question, there is one more point that could be mentioned. In an expanding universe, two distant objects that are each at rest with respect to the CMB will typically be in motion relative to each other, right? The expansion of the Universe is certainly an inconvenience when it comes to thinking of simple pictures of how things work cosmologically! Normally we get around this by imagining a set of observers who are all expanding from each other uniformly, i.e. 
they have no "peculiar motions", only the "Hubble expansion" (which is directly related to their distance apart). These observers then define an expanding reference frame. There are many different such frames, all moving with some constant speed relative to each other. But one of them can be picked out explicitly as the one with no CMB dipole pattern on the sky. And that's the absolute (expanding) rest frame! Assumptions: From most points in the universe, one will measure a CMBR dipole. Thus, one would have to accelerate to attain a frame of reference "at rest" relative to the CMBR. Questions: Having attained that "rest frame", would one not have to accelerate constantly to stay at rest (to counter attraction of all the mass scattered around the universe)? [abridged] I think the assumption is wrong, and therefore the question doesn't need to be asked. The fact that there's a CMB dipole (one side of the sky hotter and the other side colder than the average) tells us that we are moving at a certain speed in a certain direction with respect to the "preferred" reference frame (i.e. the one in which there is no observed dipole). To get ourselves into this dipole-free frame we just have to move with a velocity which cancels out the dipole-producing velocity. There's no need to accelerate (except the rapid acceleration you'd need to do to change velocity of course). Our local motion (which makes us move relative to the "CMB frame" and hence gives us a dipole to observe) is caused by nearby clusters and superclusters of galaxies pulling us around. It's true that over cosmological timescales these objects are also moving. And so if we wanted to keep ourselves always in the dipole-free frame we'd have to make small adjustments to our velocity as we moved and got pulled around by different objects. But these changes would be on roughly billion year timescales. And so to get into the frame with no CMB dipole basically just requires the following 3 steps: (1) observe today's dipole; (2) move towards the coldest direction at just the right speed to cancel the dipole; and (3) maintain basically that same velocity forever. | {
"source": [
"https://physics.stackexchange.com/questions/25928",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3641/"
]
} |
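The ~600 km/s figure in the question maps directly onto the size of the CMB dipole through the first-order Doppler formula. As a sketch (note that the dipole actually measured corresponds to the Solar System's smaller ~370 km/s motion; ~600 km/s is the galaxy/Local Group figure):

$$ \frac{\Delta T}{T_0} \simeq \frac{v}{c}\cos\theta, \qquad \Delta T_{\max} \approx \frac{600\ \mathrm{km/s}}{3\times10^{5}\ \mathrm{km/s}} \times 2.725\ \mathrm{K} \approx 5\ \mathrm{mK}, $$

a few-millikelvin hot/cold pattern across the sky, which is exactly the dipole that gets subtracted before making the usual CMB anisotropy maps.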
25,986 | At my new job, it's soon going to be my turn for doing night/graveyard shifts for a fair amount of weeks. Perfect excuse to buy a decent beginners telescope to replace the 4.5" 15 y/o Meade that doesn't work anymore with a 10" Dobsonian reflector! My girlfriend wants to join me (which I'm all for), and having grown up on a (relatively poor) farm, she's never had the pleasure of looking through a telescope. When I asked her what she wanted to look at first, she shrugged and said "I don't know. Show me everything!" The problem is, the stuff I generally stare at are Nebulae and star clusters (since that's all I really could see), despite knowing that there is so much more out there that are equally enticing to the eye. So my question is this: For any category of objects, what are the most popular? If you have gone to many star parties, what are the objects people are generally asking to see most of the time? What were the things that when you first saw them through a telescope made you go "Wow!"? Note: Do understand that the objects generally have to be viewable during Late September (new moon), but having new objects I can show her in the coming months would be awesome as well. I've already told her the journey of the grand tour will probably take 6 months to a year (due to earth rotation and all that,) and will require the joining of us to the local astronomy club. | I have operated public observing nights at various universities for eight or ten years. These are the objects that have stood out for me. Saturn and Jupiter are the "stars" of the night sky. Bar none. Spec. Tac. U. Lar. If you can look at Saturn with nothing between it and your eye besides a few panes of glass, and see it hanging in the sky in all its glory, and NOT want to be an astronomer, there is something deeply wrong with you. You can see both the Galilean moons (good opportunity to give an historical lecture, if you're a yarn-spinner) and the shadows of the Galilean moons pass across the surface of the planet. Some of them are fast enough to change noticeably between the beginning and end of an observing night. The other inner planets , if up. Lots to talk about with them. The Moon itself is a "destination", not just an obstacle to observing. The binary star Albireo (the "head" of Cygnus the Swan) is fairly spectacular if you have developed a sense of what's "normal", because it is one of very few objects in the sky that's multiple colors- i.e., Jupiter is brownish, Mars is red, Saturn yellow, most other things white, but Albireo is bright enough that its member stars are distinctly blue and yellow. Mizar and Alcor (the middle stars of the Big Dipper's handle) are conceptually neat, if visually dull. Mizar and Alcor are a visual binary, while with a backyard grade telescope Mizar resolves into a binary pair (telescopic binary), and with a spectrograph, you can resolve each of those into pairs of stars themselves (spectroscopic binary). So, in one telescopic view, you get all three kinds of binary stars. The Christmas Tree Cluster (near a foot of Gemini) really, really looks like one (with an 8-inch refractor, in a fairly light-polluted area), complete with Christmas lights, a star at the top, a trunk, and a smattering of presents underneath. The Beehive Cluster is pretty spectacular as far as open clusters go. Unusually uniform brightness gives it a real appearance of bees. If you're ready to propose to your girlfriend, it would be hard to beat the Ring Nebula . 
The Messier globular cluster M13 in Hercules is really easy to find. It lies right along one of the lines between the four trapezoid stars. Orion Nebula . Talk about the Trapezium and star formation. Cool stuff Pleiades - Lessons in multiculturalism (Different cultures call it different things), consumerism (Subaru logo), and optometry (How many stars can you see? Good vision, 7, 20-10 vision with experience in night sky observation, 14+) Uranus - Giggles, and then its historical background. Constellations and asterisms are a lot of fun too, because the audience can "take them home," that is, enjoy them again on a different night when they aren't being treated to a star party. Get a green laser pointer . Absolutely indispensable, and also has a HUGE "wow" factor. EDIT: Oh, and the Andromeda galaxy . The observatory where I did most of my observing was too badly light-polluted to see it very well, but if it's dark enough, it's the farthest object any human being in history has ever seen with his own two eyes. The Double cluster in Perseus is pretty impressive in binoculars, but it's so big you can't really appreciate it in a telescope. It's a good object for a n008 to try for her first solo "spot." EDIT: This is a fairly comprehensive list, including planets, double stars, open clusters, a globular cluster, a galaxy, and nebulae. That's close to a Noah's Ark of everything you can possibly see with a backyard telescope, if you add transients like comets and satellites , and maybe a few other odds and ends like Ceres (though I've never attempted that myself. Is it actually pragmatically visible?) | {
"source": [
"https://physics.stackexchange.com/questions/25986",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
26,023 | I was wondering if all the stars that we can see with the unaided eye as distinct point sources are from our own galaxy? In other words, can we see stars from the Andromeda Galaxy or other galaxies without telescopes? | Yes, everything that appears as a point like star is in the Milky Way. The most nearby stars outside of the Milky Way are in the dwarf galaxies that are Milky Way satellites, such as the Large and Small Magellanic Clouds. These appear as fuzzy little blobs to the naked eye, just as Andromeda does. The only exception to this that I can think of is when a supernova occurs in a nearby galaxy. The most recent supernova visible to the naked eye was 1987A, which occurred in the Large Magellanic Cloud. Supernovae in Andromeda could also be visible to the naked eye as point sources. | {
"source": [
"https://physics.stackexchange.com/questions/26023",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7946/"
]
} |
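A quick way to quantify why no ordinary star in Andromeda is visible to the naked eye, using the distance-modulus relation; the absolute magnitude and distance below are round, assumed values for illustration.

```python
import math

def apparent_magnitude(abs_mag: float, distance_pc: float) -> float:
    """Distance modulus: m = M + 5*log10(d / 10 pc)."""
    return abs_mag + 5 * math.log10(distance_pc / 10.0)

D_ANDROMEDA_PC = 7.8e5   # ~780 kpc (assumed round value)
M_SUPERGIANT = -8.0      # a very luminous supergiant (assumed)

m = apparent_magnitude(M_SUPERGIANT, D_ANDROMEDA_PC)
print(f"apparent magnitude ~ {m:.1f}")   # ~ +16.5
```

Even one of the most luminous individual stars imaginable comes out around magnitude +16, roughly ten thousand times fainter than the ~+6 naked-eye limit, so only a transient as bright as a supernova can break through.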
26,083 | I always see pictures of the solar system where our sun is in the middle and the planets surround the sun. All these planets move on orbits on the same layer. Why? | We haven't ironed out all the details about how planets form, but they almost certainly form from a disk of material around a young star . Because the disk lies in a single plane, the planets are broadly in that plane too. But I'm just deferring the question. Why should a disk form around a young star? While the star is forming, there's a lot of gas and dust falling onto it. This material has angular momentum, so it swirls around the central object (i.e. the star) and the flow collides with itself. The collisions cancel out the angular momentum in what becomes the vertical direction and smear the material out in the horizontal direction, leading to a disk. Eventually, this disk fragments and forms planets. Like I said, the details aren't well understood, but we're pretty sure about the disk part, and that's why the planets are co-planar. | {
"source": [
"https://physics.stackexchange.com/questions/26083",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4867/"
]
} |
26,087 | We've all seen the telescope photographs of andromeda galaxy: I'm wondering if it were possible to travel close enough to the andromeda galaxy could you achieve a same perspective with the naked eye? What distance away from andromeda would give you the same vantage point as the photograph to the point that you could take the same photograph with a regular point and shoot camera? I assume you would be in intergalactic space between andromeda and the milky way. On that note, if you were halfway between the two, would you be able to clearly make out both galaxies in their entirety or would you simply see 2 points of light that more resemble stars than galaxies? | We haven't ironed out all the details about how planets form, but they almost certainly form from a disk of material around a young star . Because the disk lies in a single plane, the planets are broadly in that plane too. But I'm just deferring the question. Why should a disk form around a young star? While the star is forming, there's a lot of gas and dust falling onto it. This material has angular momentum, so it swirls around the central object (i.e. the star) and the flow collides with itself. The collisions cancel out the angular momentum in what becomes the vertical direction and smear the material out in the horizontal direction, leading to a disk. Eventually, this disk fragments and forms planets. Like I said, the details aren't well understood, but we're pretty sure about the disk part, and that's why the planets are co-planar. | {
"source": [
"https://physics.stackexchange.com/questions/26087",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
26,236 | Is this photo "real"? Are the stars not super-imposed in the image? | Not quite like in the photo above, which shows more than what the naked eye can see, but yes, absolutely! Our galaxy (well, the chunk of it visible from these parts) is a naked-eye object. The fact that your question even exists shows how much time is now spent by people under light-polluted skies. It will not be visible from the city, however. You need to drive an hour (or two, if you live in a huge urban area) to the countryside, far from city lights. Stay outside in full darkness for a few minutes, then look up. There will be a faint "river" of light crossing the sky. That's the Milky Way. Full dark adaptation occurs after 30 minutes of not seeing any source of light, but this is not required for seeing our galaxy. While you're in a dark sky area, also look up the Andromeda galaxy, a.k.a. M31. http://www.physics.ucla.edu/~huffman/m31.html I mean, if you can see M31 with the naked eye, at 2 million light-years away, then of course you can see the Milky Way, which is basically in our backyard. Here's a light pollution map, not very recent, but still useful: http://www.jshine.net/astronomy/dark_sky/ | {
"source": [
"https://physics.stackexchange.com/questions/26236",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
26,332 | How does "outer space" affect the human body? Some movies show it as the body exploding, imploding or even freezing solid. I know space is essentially a vacuum with 0 pressure and the dispersion of energy makes it very cold. So are the predictions above accurate? | There have actually been cases of (accidental!) exposure to near-vacuum conditions. Real life does not conform to what you see in the movies. (Well, it depends on the movie; Dave Bowman's exposure to vacuum in 2001 was pretty accurate.) Long-term exposure, of course, is deadly, but you could recover from an exposure of, say, 15-30 seconds. You don't explode, and your blood doesn't immediately boil, because the pressure is held in by your skin. In one case involving a leaking space suit in a vacuum chamber in 1965: He remained conscious for about 14 seconds, which is about the time it
takes for O2 deprived blood to go from the lungs to the brain. The
suit probably did not reach a hard vacuum, and we began repressurizing
the chamber within 15 seconds. The subject regained consciousness at
around 15,000 feet equivalent altitude. The subject later reported
that he could feel and hear the air leaking out, and his last
conscious memory was of the water on his tongue beginning to boil (emphasis added) UPDATE: Here's a YouTube video regarding the incident. It includes video of the actual event, and the test subject's own description of bubbling saliva. Another incident: The experiment of exposing an unpressurized hand to near vacuum for a
significant time while the pilot went about his business occurred in
real life on Aug. 16, 1960. Joe Kittinger, during his ascent to
102,800 ft (19.5 miles) in an open gondola, lost pressurization of his
right hand. He decided to continue the mission, and the hand became
painful and useless as you would expect. However, once back to lower
altitudes following his record-breaking parachute jump, the hand
returned to normal. If you attempt to hold your breath, you could damage your lungs. If you're exposed to sunlight you could get a nasty sunburn, because the solar UV isn't blocked by the atmosphere (assuming the exposure happens in space near a star). You could probably remain conscious for about 15 seconds, and survive for perhaps a minute or two. The considerations are about the same in interstellar or interplanetary space, or even in low Earth orbit (or a NASA vacuum chamber). The major difference is the effect of sunlight. As far as temperature is concerned -- well, a vacuum has no temperature. There would be thermal effects as your body cools by radiating heat, but over the short time span that you'd be able to survive, even intergalactic space isn't much different from being in shadow in low Earth orbit. Reference: http://imagine.gsfc.nasa.gov/docs/ask_astro/answers/970603.html | {
"source": [
"https://physics.stackexchange.com/questions/26332",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7395/"
]
} |
26,339 | In my physics lessons, my teachers have always been keen to tell my class that Jupiter is considered a 'failed star' by scientists. Is this true? In my own effort I wondered if maybe this could just be being regurgitated from an outdated physics syllabus that still considers the Solar System to have nine planets. From that thought onward, through my research on the Internet, I haven't found people referring to Jupiter as such and people always call it a planet rather than a brown dwarf . Furthermore, it's my understanding that brown dwarfs possess more mass than Jupiter suggesting to me that Jupiter possesses too little mass for fusion to even be plausible. So am I correct in thinking that Jupiter is 'only' a planet, or are my physics teachers correct in saying it is a failed star (and if so, why)? | The answer kind of depends on how old you are. At a very introductory level, say, maybe middle school or younger, it's "okay" to refer to Jupiter as a failed star to get the idea across that a gas giant planet is sort of similar to a star in composition. But around middle school and above (where "middle school" refers to around 6-8 grade, or age ~12-14), I think you can get into enough detail in science class where this is fairly inaccurate. If you ignore that the solar system is dominated by the Sun and just focus on mass, Jupiter is roughly 80x lighter than the lightest star that undergoes fusion. So it would need to have accumulated 80 times what it already has in order to be a "real star." No Solar System formation model indicates this was remotely possible, which is why I personally don't like to think of it as a "failed star." Below 80 M J (where M J is short for "Jupiter masses"), objects are considered to be brown dwarf stars -- the "real" "failed stars." Brown dwarfs do not have enough mass to fuse hydrogen into helium and produce energy that way, but they do still produce their own heat and glow in the infrared because of that. Their heat is generated by gravitational contraction. And Jupiter also produces heat through both gravitational contraction and differentiation (heavy elements sinking, light elements rising). Astronomers are not very good at drawing boundaries these days, mostly because when these terms were created, we didn't know of a continuum of objects. There were gas giant planets, like Jupiter and Saturn, and there were brown dwarf stars, and there were full-fledged stars. The line between brown dwarf and gas giant - to my knowledge - has not been drawn. Personally, and I think I remember reading somewhere , the general consensus is that around 10-20 M J is the boundary between a gas giant planet and brown dwarf, but I think it's fairly arbitrary, much like what's a planet vs. minor planet, Kuiper belt object (KBO) or asteroid. So during Solar System formation, was there a chance Jupiter could have been a star and it failed ("failed star!") because the mean Sun gobbled up all the mass? Not really, at least not in our solar system. But for getting the very basic concept across of going from a gas giant planet to a star, calling Jupiter a "failed star" can be a useful analogy. | {
"source": [
"https://physics.stackexchange.com/questions/26339",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9119/"
]
} |
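A one-line check of the "80 Jupiter masses" hydrogen-burning threshold mentioned above, assuming the commonly quoted ratio of roughly 1048 Jupiter masses per solar mass:

```python
MSUN_IN_MJUP = 1048          # approximate Sun-to-Jupiter mass ratio (assumed)
BURNING_LIMIT_MSUN = 0.08    # rough minimum mass for sustained hydrogen fusion
print(round(BURNING_LIMIT_MSUN * MSUN_IN_MJUP), "Jupiter masses")   # ~84
```

which is why Jupiter, at 1 Jupiter mass, falls short of stardom by nearly two orders of magnitude rather than by a near miss.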
26,382 | Why is 1 AU defined as the distance between the Sun and the Earth? (approximately if you like to be precise) An astronomical unit (abbreviated as AU, au, a.u., or ua) is a unit of length equal to about 149,597,870.7 kilometres (92,955,807.3 mi) or approximately the mean Earth–Sun distance. Shouldn't astronomical units be defined within metric units (that is, $10^x$), so we can understand massive distances a little easier? | Using the distance between the Sun and the Earth, at least for distances within the Solar system, just gives a better feel for the scales involved. You can't really imagine a distance of, say, 1000000000 kilometers -- or at least I can't. (I deliberately didn't include commas in that number, to illustrate the point.) But using a concrete physical distance creates a kind of mental anchor, and makes the relative scale easier to visualize. Tell me that Neptune is about 4.5 billion kilometers from the Sun, and I think "Wow, that's a really big number". Tell me that it's about 30 AUs from the Sun, and that's something I can fit into a mental image. One AU is still unimaginably long, but the ratio of 30 AUs to 1 AU is easy. On the other hand, if you want to do physical calculations (say, calculating the orbit of some body under the influence of various gravitational fields), then it makes more sense to use metric units (meters, kilometers). The universal gravitational constant G is expressed in units of m$^3$·kg$^{-1}$·s$^{-2}$; it could be expressed with an AU as the length unit, but I've never heard of it being done that way. Basically, AU is used to express distances for a human audience; meters and kilometers are used for calculations. Update : ghoppe comments:
and not in SI units because neither G nor the mass of the sun can be
measured to high accuracy in SI units, but the value of their product
is known very precisely due to Kepler's Third Law. The value of AU
depends on the product. | {
"source": [
"https://physics.stackexchange.com/questions/26382",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
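The "mental anchor versus calculation" point above is easy to see in code; the kilometre value of the AU is the one quoted in the question, and Neptune's ~30 AU is the round figure used in the answer.

```python
AU_KM = 149_597_870.7   # kilometres per astronomical unit (as quoted above)

neptune_au = 30         # rough Sun-Neptune distance in AU
print(f"Neptune: {neptune_au} AU = {neptune_au * AU_KM:,.0f} km")
# Neptune: 30 AU = 4,487,936,121 km  -- an easy ratio in AU, an unreadable number in km
```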
26,390 | Does there exist a Clear Sky Chart with the following enhancements?: 1 - Actual Cloud Cover (Offered Visually and not just Colors with a Legend, Over Time/Past & Predictive) 2 - Simulate/Predict Cloud Cover taking into account the direction from Observer to Observed Object and Angle of view - Close to the horizon (May be helpful to know when you can reasonably start/end tracking something you want to catch that night that is close to the horizon) The reason I'm curious is: A) I wonder if it's just not feasible for any/many reasons. B) It would be great help to know this information. In general, does anyone know of other Earth Weather, Clear Sky Clocks and Charts or anything else that gives more information?...anything related will be helpful. EDIT: I would love to find this lecture "You can do better than Clear Sky Chart" mentioned: http://stjornuskodun.blog.is/blog/stjornuskodun/entry/966941/ | Using the distance between the Sun and the Earth, at least for distances within the Solar system, just gives a better feel for the scales involved. You can't really imagine a distance of, say, 1000000000 kilometers -- or at least I can't. (I deliberately didn't include commas in that number, to illustrate the point.) But using a concrete physical distance creates a kind of mental anchor, and makes the relative scale easier to visualize. Tell me that Neptune is about 4.5 billion kilometers from the Sun, and I think "Wow, that's a really big number". Tell me that it's about 30 AUs from the Sun, and that's something I can fit into a mental image. One AU is still unimaginably long, but the ratio of 30 AUs to 1 AU is easy. On the other hand, if you want to do physical calculations (say, calculating the orbit of some body under the influence of various gravitational fields), then it makes more sense to use metric units (meters, kilometers). The universal gravitational constant G is expressed in units of m 3 ·kg -1 ·s -2 ; it could be expressed with an AU as the length unit, but I've never heard if it being done that way. Basically, AU is used to express distances for a human audience; meters and kilometers are used for calculations. Update : ghoppe comments: Actually, ephemerides have been often calculated in astronomical units
and not in SI units because neither G nor the mass of the sun can be
measured to high accuracy in SI units, but the value of their product
is known very precisely due to Kepler's Third Law. The value of AU
depends on the product. | {
"source": [
"https://physics.stackexchange.com/questions/26390",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6805/"
]
} |
26,397 | Given that antimatter galaxies are theoretically possible,
how would they be distinguishable from regular matter galaxies? That is, antimatter is identical to regular matter in atomic weight and all other properties, except for the opposite charge of its particles. Hence a star composed of antimatter hydrogen would fuse to anti-helium in an analogous way to our own Sun, and it would emit light and radiation at the same wavelengths as any regular matter star and would cause the same gravitational forces for planetary systems to form as in any other star system. Hence, what would be a telltale sign if you were observing a galaxy made up entirely of antimatter? Also, is there any evidence that half of all galaxies are not made of antimatter? General theories currently assume that there is an imbalance of matter over antimatter in the universe, so what is the rationale for not assuming that there is in fact an even balance between the two? | You're right - for isolated galaxies, there is no obvious way of discerning whether they are made of matter or antimatter, since we only observe the light from them. But if there are regions of matter and antimatter in the universe, we would expect to see HUGE amounts of radiation from annihilation at the edges of these regions. But we don't. You could also make the case that galaxies are well-separated in space, and there's not much interaction between them. But there are plenty of observed galaxy collisions even in our own small region of the universe, and even annihilation between dust and antidust in the intergalactic medium would (probably) be observable. | {
"source": [
"https://physics.stackexchange.com/questions/26397",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4882/"
]
} |
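The "HUGE amounts of radiation" in the answer above can be put into numbers with $E = mc^2$; the kilogram figure is just an illustrative amount. Electron-positron annihilation in particular yields a distinctive 511 keV gamma-ray line, which is one of the signatures such boundary regions would be expected to show.

$$ E = (m + \bar{m})\,c^{2} = 2 \times (1\ \mathrm{kg}) \times (3\times10^{8}\ \mathrm{m/s})^{2} \approx 1.8\times10^{17}\ \mathrm{J} $$

per kilogram of matter meeting a kilogram of antimatter, comparable to a multi-megaton explosion and released largely as gamma rays, so even a thin mixing layer between matter and antimatter regions should glow conspicuously.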
26,408 | I frequently hear that Kepler, using his equations of orbital motion, could predict the orbits of all the planets to a high degree of accuracy -- except Mercury . I've heard that Mercury's motion couldn't be properly predicted until general relativity came around. But what does general relativity have to do with Mercury's orbit? | This web page has a nice discussion on it: http://archive.ncsa.illinois.edu/Cyberia/NumRel/EinsteinTest.html Basically the orbit's eccentricity would precess around the sun. Classical celestial mechanics (or Newtonian gravity) couldn't account for all of that. It basically had to do with (and forgive my crude wording) the sun dragging the fabric of space-time around with it. Or as the web page says: Mercury's Changing Orbit In a second test, the theory explained slight alterations in Mercury's orbit around the Sun. [Image: daisy petal effect of precession] Since almost two centuries earlier, astronomers had been aware of a small flaw in Mercury's orbit around the Sun, as predicted by Newton's laws. As the closest planet to the Sun, Mercury orbits a region in the solar system where spacetime is disturbed by the Sun's mass. Mercury's elliptical path around the Sun shifts slightly with each orbit such that its closest point to the Sun (or "perihelion") shifts forward with each pass. Newton's theory had predicted an advance only half as large as the one actually observed. Einstein's predictions exactly matched the observation. For more detail that goes beyond a simple layman answer, you can check this page out and even download an app that lets you play with the phenomenon: http://www.fourmilab.ch/gravitation/orbits/ And of course, the ever handy Wikipedia has this covered as well: http://en.wikipedia.org/wiki/Tests_of_general_relativity#Perihelion_precession_of_Mercury Although, truth be told, I think I said it better (i.e. more elegantly) than the Wiki page does. But then I may be biased. | {
"source": [
"https://physics.stackexchange.com/questions/26408",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/417/"
]
} |
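The "advance only half as large" quoted above is the famous ~43 arcseconds per century that Newtonian perturbations could not supply. As a sketch, the leading-order general-relativistic formula reproduces it from Mercury's orbital elements (the constants below are rounded, assumed values):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
C = 2.998e8          # m/s
A = 5.79e10          # Mercury's semi-major axis, m
E = 0.2056           # Mercury's orbital eccentricity
PERIOD_DAYS = 87.97  # Mercury's orbital period

# GR perihelion advance per orbit, in radians: 6*pi*G*M / (c^2 * a * (1 - e^2))
dphi_per_orbit = 6 * math.pi * G * M_SUN / (C**2 * A * (1 - E**2))
orbits_per_century = 36525 / PERIOD_DAYS
arcsec_per_century = math.degrees(dphi_per_orbit * orbits_per_century) * 3600
print(f"{arcsec_per_century:.0f} arcseconds per century")   # ~43
```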
26,427 | Assume you've come in contact with a tribe of people cut off from the rest of the world, or you've gone back in time several thousand years, or (more likely) you've got a numbskull cousin. How would you prove that the Earth is, in fact, round? | The shadow of the Earth on the Moon during an eclipse and the way masts of ships are still visible when the hulls are out of sight are the classical reasons. | {
"source": [
"https://physics.stackexchange.com/questions/26427",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
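The ships-and-masts argument above can even be made quantitative with the standard horizon-distance approximation (a sketch that neglects atmospheric refraction; Earth's radius is the usual rounded value).

```python
import math

R_EARTH_M = 6.371e6   # mean Earth radius in metres (rounded)

def horizon_distance_km(eye_height_m: float) -> float:
    """Approximate distance to the geometric horizon for an observer at this height."""
    return math.sqrt(2 * R_EARTH_M * eye_height_m) / 1000.0

for h in (2, 20):   # eye level on deck vs. a lookout atop a 20 m mast
    print(f"h = {h:2d} m -> horizon at ~{horizon_distance_km(h):.0f} km")
# ~5 km vs ~16 km: the hull drops below the horizon while the mast is still visible
```

On a flat Earth there would be no such height-dependent cutoff, which is the point of the classical observation.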
26,443 | The Hubble Space Telescope (HST) was launched in 1990, more than 20 years ago, but I know that it was supposed to be launched in 1986, 24 years ago. Since it only took 66 years from the first plane to the first man on the Moon why don't we have a better telescope in space after 24 years? | Money and willpower . With any program (scientific, military, public works, etc.) it all depends on the amount of money someone is willing to put to it, and how much backing and protection that program has from getting money re-prioritized to other projects. You are making a false dichotomy of attempting to present our past actions as a justification for actions we should have been able to take. With the decisions made on many levels (i.e. to fight several wars, cancel various lift vehicle programs, etc.) that just doesn't translate very well. Keep in mind that getting to the moon was all part of the " Space Race " which had many layered motivations, with science perhaps only being a side benefit to the projects. The James Webb Telescope is the next generation telescope that is due to go up, although the JWST is optimized for the infrared spectrum. For visible spectrum telescopes , the most ambitious space based one planned is the Terrestrial Planet Finder . However, the Hubble is still the belle of the ball. This of course doesn't touch on the ground based observatories we have, some of which are truly spectacular! I want to make a family vacation to Chile just to see some of them! | {
"source": [
"https://physics.stackexchange.com/questions/26443",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
26,507 | I was reading this question and got to thinking. Can neutrino detectors give us any clue where the neutrinos came from or when a supernova may occur? | Depends on the detection technology. Yes: Cerenkov based detectors ( SNO and Super-Kamiokande for instance, as well as many cosmic ray neutrino detectors) are direction sensitive, and this is one of the design considerations that drive the use of this tricky technique. The best results come from quasi-elastic reactions like $\nu_l + n \to l^- + p$ . The calorimeter stacks used for beam based work have pretty good direction resolving power (this includes OPERA, if that is what prompted this question BTW). Many reactions contribute, but again charged-current quasi-elastic gives the best direction data. Liquid argon time projection chambers are a fairly new technology for which we have no full scale experiments, but test-beds have been deployed and the results are very good. Sorta: In principle liquid scintillator detectors still get a Cerenkov ring on quasi-elastic events, but in practice they are too washed out by the scintillation signal and especially by absorption/re-emission to be of much use. MiniBooNE made a valiant effort to get some use from this, but the results were disappointing; most LOS detectors don't even try ( KamLAND for instance--where to my knowledge no one has even tried). I think I did see a plot showing that the original Chooz experiment could tell the reactor side of the experiment from the off side, but they needed a large portion of their data set to do it so it would have been of minimal help on a per event basis. A colleague has shown in bench studies that with sufficient time resolution (on order of 0.1 ns) it is possible to resolve the Cerenkov/scintillation ambiguity and to RICH in scintillator. Of course, a sparsely instrumented LOS detector won't ever have a chance, as was the case with Cowan and Reines' instrument and the non-proliferation monitors that people are experimenting with (no link 'cause I've only ever seen a colloquium and don't recall the name of the instrument). The really big detectors like IceCube have some chance to get Cerenkov imaging, but often get better results from time of flight data from high energy events. No: Radio-chemical methods (as in Ray Davis' Homestake Mine experiment ) have no direction sensitivity even in principle. Note that the direction sensitivity is always for the momentum of reaction products rather than the neutrino itself. In the case of high energy neutrinos the direction of the products can be highly correlated with that of the neutrino, but at lower energies this becomes less true and pointing information is increasingly only good in aggregate. The experiments that participate in the Supernova Early Warning System are all direction sensitive in some degree or another as the plan is to both alert the light telescopes that an event may be coming and tell them what part of the sky to search. Non-direction-sensitive detectors also attempt to monitor supernova neutrino pulses, but without direction sensitivity their data will be more useful during the postmortem analysis of the timing difference. Disclosure: I was associated with KamLAND for several years and am currently associated with two LArTPC projects. | {
"source": [
"https://physics.stackexchange.com/questions/26507",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2843/"
]
} |
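The direction sensitivity of the water-Cerenkov detectors discussed above rests on the fixed opening angle of the Cerenkov cone; as a standard textbook sketch (not specific to any one experiment):

$$ \cos\theta_c = \frac{1}{n\beta}, \qquad \theta_c \approx \arccos\!\left(\frac{1}{1.33}\right) \approx 41^{\circ} \ \text{in water for } \beta \to 1, $$

so a relativistic charged lepton from a quasi-elastic interaction projects a ring of light onto the photodetector wall, and the ring's centre points back along the lepton's direction, which at high energy tracks the incoming neutrino's direction reasonably well.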