source_id | question | response | metadata
---|---|---|---|
2,532 | I know that string theory is still under heavy development, and as such it still can't make predictions (or not that many predictions anyway). On the other hand, it is clear from the number of years this theory has been under development and from the large number of theoretical physicists studying it, that it is considered a good and viable candidate as a quantum gravity theory. So, what is the evidence that this is true? Why is it considered such a good candidate as the correct quantum gravity theory? Without wanting to sound inflammatory in the least, it has been under heavy development for a very long time and it's still not able to make predictions, for example, or still makes outlandish statements (like extra dimensions) that would require a high amount of experimental evidence to be accepted. So - if so many people believe it is the way to go, there have to be good reasons, right? What are they? | Some random points in support of ST, with no attempt to be systematic or comprehensive. I will not get into a long discussion; if someone does not find this convincing I'd advise them not to work on the subject. I also don't have the time to elaborate or justify the claims below, just take them at face value, or maybe you can ask more specific follow-up questions. ST incorporated naturally and effortlessly all of the mathematical structure underlying modern particle physics and gravity. It does so many times in surprising and highly non-trivial ways, many times barely surviving difficult consistency checks. Certainly anyone with any familiarity with the subject has a strong feeling that you get out much more than you put in. ST quantizes gravity, and that form of quantization avoids all the difficulties that more traditional approaches encounter. This is also surprising: it was developed originally as a theory of the strong interactions, and when people discovered it contains quantized gravity they spent years trying to get rid of it, with no success. As a theory of quantum gravity it passes many consistency checks; failure of any of them would invalidate the whole framework, for example in providing a microscopic description of a large class of black holes. ST extends the calculational tools available to us to investigate interesting physical systems, many times the only such calculational techniques available. Again, it does so in novel and unexpected ways. It therefore provides a natural language to extend quantum field theory, to models which include quantized gravity, and (relatedly) models with strong interactions. Many calculations using that language are simpler and more natural than other methods; it therefore seems to be the right language to discuss a large class of physical systems. ST contains in principle all the ingredients needed to construct a model for particle physics, though this has proven to be more difficult than originally thought. But, in view of the above, even if it turns out not to provide a model for beyond the standard model physics, it is certainly useful enough for many physicists to decide to spend their time studying it. Of course other people may have their own reasons to find the subject interesting, I'm only speaking for myself. | {
"source": [
"https://physics.stackexchange.com/questions/2532",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/66/"
]
} |
2,554 | I read a lot about the classical twin paradox recently. What confuses me is that some authors claim that it can be resolved within SRT, others say that you need GRT. Now, what is true (and why)? | To understand this paradox it's best to forget about everything you know (even from SR), because all of that just causes confusion, and start with just a few simple concepts. The first of them is that the space-time carries a metric that tells you how to measure distance and time. In the case of SR this metric is extremely simple and it's possible to introduce simple $x$, $t$ coordinates (I'll work in 1+1 and $c = 1$) in which the space-time interval looks like this $$ ds^2 = -dt^2 + dx^2$$ Let's see how this works on this simple doodle I put together. The vertical direction is time-like and the horizontal is space-like. E.g. the blue line has "length" $ds_1^2 = -20^2 = -400$ in the square units of the picture (note the minus sign that corresponds to the time-like direction) and each of the red lines has length zero (they represent the trajectories of light). The length of the green line is $ds_2^2 = -20^2 + 10^2 = -300$. To compute proper times along those trajectories you can use $d\tau^2 = -ds^2$. We can see that the trip will take the green twin a shorter proper time than the blue twin. In other words, the green twin will be younger. More generally, any kind of curved path you might imagine between top and bottom will take a shorter time than the blue path. This is because time-like geodesics (which are just upward pointing straight lines in Minkowski space) between two points maximize the proper time. Essentially this can be seen to arise because any deviation from the straight line will induce unnecessary space-like contributions to the space-time interval. You can see that there was no paradox because we treated the problem as what it really was: a computation of the proper time of the general trajectories. Note that this is the only way to approach this kind of problem in GR. In SR there are other approaches because of its homogeneity and flatness and, if done carefully, they lead to the same results. It's just that people often aren't careful enough and that is what leads to paradoxes. So in my opinion, it's useful to take the lesson from GR here and forget about all those ad-hoc SR calculations. Just to give you a taste of what an SR calculation might look like: because of globally nice coordinates, people are tempted to describe also distant phenomena (which doesn't really make sense, physics is always only local). So the blue twin might decide to compute the age of the green twin. This will work nicely because it is in an inertial frame of reference, so it'll arrive at the same result we did. But the green twin will come to strange conclusions. Both straight lines of its trajectory will work just fine and if it weren't for the turn, the blue twin would need to be younger from the green twin's viewpoint too. So the green twin has to conclude that the fact that the blue twin was in a strong gravitational field (which is equivalent to the acceleration that makes the green twin turn) makes it older. This gives a mathematically correct result (if computed carefully), but of course, physically it's complete nonsense. You just can't expect that your local acceleration has any effect on a distant observer. The point that has to be taken here (and that GR makes clear only too well) is that you should never try to talk about distant objects. | {
"source": [
"https://physics.stackexchange.com/questions/2554",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/171/"
]
} |
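As a small supplement to the answer above, here is a minimal numerical sketch (not from the original post; the coordinates are the ones used in the answer, with $c = 1$) that evaluates $d\tau^2 = -ds^2$ along the two worldlines and reproduces $\sqrt{300} \approx 17.3$ for the travelling twin:

```python
# Proper time along piecewise-straight worldlines in 1+1 Minkowski space (c = 1).
# The event lists below are the ones described in the answer; the function name is mine.
from math import sqrt

def proper_time(events):
    """Sum d(tau) = sqrt(dt^2 - dx^2) over straight segments of a worldline."""
    tau = 0.0
    for (t1, x1), (t2, x2) in zip(events, events[1:]):
        dt, dx = t2 - t1, x2 - x1
        ds2 = -dt**2 + dx**2            # spacetime interval of the segment
        assert ds2 <= 0, "segment must be timelike or null"
        tau += sqrt(-ds2)               # d(tau)^2 = -ds^2
    return tau

blue  = [(0, 0), (20, 0)]               # stays home: straight vertical worldline
green = [(0, 0), (10, 5), (20, 0)]      # travels out and back

print(proper_time(blue))   # 20.0
print(proper_time(green))  # 17.32... = sqrt(300): the green twin is younger
```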
2,597 | I understand that energy conservation is not a rule in general relativity, but I'd like to know under what circumstances it can still be possible. In other words, when is it possible to associate a potential energy to the gravitational field, so that the energy is constant in the evolution of the system? Here are some examples; is there a convenient way to define energy in these scenarios? Just a system of gravitational waves. A point mass moving in a static (but otherwise arbitrary) space-time. Equivalent (if I'm not mistaken) to a test mass moving in the field of a second, much larger mass; the larger mass wouldn't move. Two rotating bodies of similar mass. Overall, I'm trying to understand what keeps us from associating a potential energy to the metric. When we break the time translation symmetry of a system by introducing an electromagnetic field, we can still conserve energy by defining an electromagnetic potential energy. Why can't we do the same when we break TT symmetry by making space-time curved? | There are a few different ways of answering this one. For brevity, I'm going to be a bit hand-wavey. There is actually still some research going on with this. Certain spacetimes will always have a conserved energy. These are the spacetimes that have what is called a global timelike (or, if you're wanting to be super careful and pedantic, perhaps null) Killing vector. Math-types will define this as a vector whose lowered form satisfies the Killing equation: $\nabla_{a}\xi_{b} + \nabla_{b} \xi_{a} = 0$. Physicists will just say that $\xi^{a}$ is a vector that generates time (or null) translations of the spacetime, and that Killing's equation just tells us that these translations are symmetries of the spacetime's geometry. If this is true, it is pretty easy to show that all geodesics will have a conserved quantity associated with the time component of their translation, which we can interpret as the gravitational potential energy of the observer (though there are some new relativistic effects--for instance, in the case of objects orbiting a star, you see a coupling between the mass of the star and the orbiting object's angular momentum that does not show up classically). The fact that you can define a conserved energy here is strongly associated with the fact that you can assign a conserved energy in any Hamiltonian system in which the time does not explicitly appear in the Hamiltonian: time translation being a symmetry of the Hamiltonian means that there is a conserved energy associated with that symmetry. If time translation is a symmetry of the spacetime, you get a conserved energy in exactly the same way. Secondly, you can have a surface in the spacetime (but not necessarily the whole spacetime) that has a conserved Killing tangent vector. Then, the argument from above still follows, but that energy is a charge living on that surface. Since integrals over a surface can be converted to integrals over a bulk by Gauss's theorem, we can, in analogy with Gauss's Law, interpret these energies as the energy of the mass and energy inside the surface. If the surface is conformal spacelike infinity of an asymptotically flat spacetime, this is the ADM Energy. If it is conformal null infinity of an asymptotically flat spacetime, it is the Bondi energy. You can associate similar charges with Isolated Horizons, as well, as they have null Killing vectors associated with them, and this is the basis of the quasi-local energies worked out by York and Brown amongst others. 
What you can't have is a tensor quantity that is globally defined that one can easily associate with 'energy density' of the gravitational field, or define one of these energies for a general spacetime. The reason for this is that one needs a time with which to associate a conserved quantity conjugate to time. But if there is no unique way of specifying time, and especially no way to specify time in such a way that it generates some sort of symmetry, then there is no way to move forward with this procedure. For this reason, a great many general spacetimes have quite pathological features. Only a very small proportion of known exact solutions to Einstein's Equation are believed to have much to do with physics. | {
"source": [
"https://physics.stackexchange.com/questions/2597",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/724/"
]
} |
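The Killing-vector criterion in the answer above can be checked explicitly for a concrete case. The sympy sketch below is my own illustration (not part of the original answer): it verifies that $\xi = \partial_t$ satisfies $\nabla_a \xi_b + \nabla_b \xi_a = 0$ in the Schwarzschild metric, which is why a static spacetime of this kind has a conserved orbital energy.

```python
# Verify Killing's equation for xi = d/dt in the Schwarzschild metric (G = c = 1).
import sympy as sp

t, r, th, ph, M = sp.symbols("t r theta phi M", positive=True)
x = [t, r, th, ph]
f = 1 - 2 * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)   # metric components g_{ab}
ginv = g.inv()

def Gamma(c, a, b):
    """Christoffel symbol Gamma^c_{ab} from the metric."""
    return sp.Rational(1, 2) * sum(
        ginv[c, d] * (sp.diff(g[d, a], x[b]) + sp.diff(g[d, b], x[a]) - sp.diff(g[a, b], x[d]))
        for d in range(4)
    )

xi_up = [1, 0, 0, 0]                                   # xi^a = (1, 0, 0, 0), i.e. d/dt
xi_low = [sum(g[a, b] * xi_up[b] for b in range(4)) for a in range(4)]

def cov_d(a, b):                                       # nabla_a xi_b
    return sp.diff(xi_low[b], x[a]) - sum(Gamma(c, a, b) * xi_low[c] for c in range(4))

ok = all(sp.simplify(cov_d(a, b) + cov_d(b, a)) == 0 for a in range(4) for b in range(4))
print("Killing equation satisfied for d/dt in Schwarzschild:", ok)   # True
```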
2,605 | When your body burns calories and you lose weight, obviously mass is leaving your body. In what form does it leave? In other words, what is the physical process by which the body loses weight when it burns its fuel? Somebody said it leaves the body in the form of heat but I know this is wrong, since heat is simply the internal kinetic energy of a lump of matter and doesn't have anything to do with mass. Obviously the chemical reactions going on in the body cause it to produce heat, but this alone won't reduce its mass. | There's a lot of detail you could go into with regard to this question, as is done in the other answers and comments, but I think the answer itself is pretty simple. Imagine a surface that just barely surrounds your body, as if you shrink-wrapped a body in plastic. By the law of conservation of mass (valid in non-relativistic physics), the only way your body can lose any amount of mass is for that amount of mass to pass out through the surface. So you just have to consider what bodily functions cause that to happen. I think they've all been identified in the comments: exhaling; sweating; excretion (in the nontechnical sense of, roughly, things you do in the bathroom). Actually, any dead skin cells, strands of hair, etc. that fall off you would also count, although my guess is that those represent a minor contribution. As a bonus, the "shrink-wrap view" also makes it easy to identify the ways in which you gain mass, by looking for all processes that cause matter to be drawn in through the invisible surface: eating & drinking - solids and liquids through the esophagus and gastrointestinal tract; inhaling - gas through the trachea and lungs. The thing is, when most people talk about losing weight, they're referring to a long-term average loss of mass, which means that the processes in the first list have to remove more mass over some extended period of time than the ones in the second list bring in. This clearly requires some of the preexisting mass in your body to be converted into the waste forms that you can dispose of through excretion, exhaling, and sweating. This preexisting mass generally tends to be body fat. The other answers do a pretty good job filling in the details of how the fat gets converted to waste products. | {
"source": [
"https://physics.stackexchange.com/questions/2605",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1114/"
]
} |
2,625 | What is the difference between thermodynamics and statistical mechanics ? | Statistical Mechanics is the theory of the physical behaviour of macroscopic systems starting from a knowledge of the microscopic forces between the constituent particles. The theory of the relations between various macroscopic observables such as temperature, volume, pressure, magnetization and polarization of a system is called thermodynamics . first page from this good book | {
"source": [
"https://physics.stackexchange.com/questions/2625",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/202/"
]
} |
2,627 | I got a question concerning the band structure of solids. The reference I'm using is the book on solid state physics by Ashcroft/Mermin. My problem is that I don't completely understand the reason why there exists a band index n. On the one hand Ashcroft gives a quite plausible explanation saying that the general solution of the given eigenvalue problem $H \psi_k(r) = E_{k} \psi_k(r)$ can be decomposed into $\psi_k(r) = e^{i k \cdot r} u_{k}(r)$. Plugging this ansatz into the Schrödinger equation and applying $H$ to the $e^{i k \cdot r}$ first, we obtain a new Hamiltonian $H_{k}$ that depends on the "wavevector" $k$ and a new eigenvalue problem $H_{k} u_{k}(r) = E_{k} u_{k}(r)$. Now we fix $k$. The operator $H_{k}$ acts on $r$ and therefore produces a certain number of solutions $u_{n,k}(r)$. These solutions can be counted using the index $n$. So far so good. Then Ashcroft states that the second proof of Bloch's theorem shows that $u_{k}(r)=\sum_{G} c_{k-G} e^{i G r}$. The sum is taken over all reciprocal lattice vectors $G$. He says that from this way of writing the function $u_{k}$ it is obvious that multiple solutions do exist. I don't really understand why that is the case. First I thought that you may think of the n-th solution to be $u_{k,n}(r)=c_{k-G_{n}} e^{i G_{n} r}$, where $G_{n}$ denotes the n-th reciprocal lattice vector. However this doesn't seem plausible to me, since you need the sum to prove that $\psi_{k} = \psi_{k+G}$ and therefore $E(k)=E(k+G)$. As you can see I'm quite confused about all this. Actually I'm also a bit confused about why the reduction to the first Brillouin zone doesn't produce more than one energy value for a given $k$. Anyway, I'd be more than happy if someone could help me. Thanks in advance!! See you. | Statistical Mechanics is the theory of the physical behaviour of macroscopic systems starting from a knowledge of the microscopic forces between the constituent particles. The theory of the relations between various macroscopic observables such as temperature, volume, pressure, magnetization and polarization of a system is called thermodynamics . first page from this good book | {
"source": [
"https://physics.stackexchange.com/questions/2627",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1156/"
]
} |
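Since the question above is about where the band index $n$ comes from, here is a small numerical sketch of the standard plane-wave ("central equation") construction the Ashcroft/Mermin argument refers to. All units and the potential are my own illustrative choices ($\hbar = m = 1$, $a = 1$, $V(x) = 2V_0\cos(2\pi x/a)$): for each fixed $k$ in the first Brillouin zone, the matrix built from the coefficients $c_{k-G}$ has many eigenvalues, one per band.

```python
# 1D nearly-free-electron band structure via the central equation (illustrative units).
import numpy as np

a = 1.0                                       # lattice constant
V0 = 1.0                                      # strength of V(x) = 2*V0*cos(2*pi*x/a)
G_vals = 2 * np.pi / a * np.arange(-5, 6)     # truncated set of reciprocal lattice vectors

def bands(k, n_bands=4):
    """Diagonalize H_k in the plane-wave basis e^{i(k+G)x}; eigenvalues = E_n(k)."""
    N = len(G_vals)
    H = np.zeros((N, N))
    for i, Gi in enumerate(G_vals):
        H[i, i] = 0.5 * (k + Gi) ** 2         # kinetic term for the plane wave k + G_i
        for j, Gj in enumerate(G_vals):
            if abs(abs(Gi - Gj) - 2 * np.pi / a) < 1e-9:
                H[i, j] += V0                 # Fourier component of the periodic potential
    return np.sort(np.linalg.eigvalsh(H))[:n_bands]

for k in np.linspace(-np.pi / a, np.pi / a, 5):   # k restricted to the first Brillouin zone
    print(f"k = {k:+.3f}:  E_n(k) = {bands(k)}")  # several energies per k -> band index n
```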
2,628 | And expansion of space is equal to expansion of vacuum? | Statistical Mechanics is the theory of the physical behaviour of macroscopic systems starting from a knowledge of the microscopic forces between the constituent particles. The theory of the relations between various macroscopic observables such as temperature, volume, pressure, magnetization and polarization of a system is called thermodynamics . first page from this good book | {
"source": [
"https://physics.stackexchange.com/questions/2628",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/238/"
]
} |
2,641 | This story about the use of battery/freewheel based Frequency Regulators confused me about how the 60 Hz frequency of the North American power grid was set--saying that it was kept at that frequency by balancing load and supply. I used to think that it was only voltage which was affected by this balance, and that the frequency was determined by the speed of the rotors in the generator. E.g. see Wikipedia's page on Alternator Synchronization. Can someone help me understand the physics behind this? | The physics is actually much easier than it seems at first glance. Power generators are engines just like the everyday ones we see all around in our cars, lawnmowers, snowblowers, etc. Except for new power sources like some wind and solar systems with electronic inverters, the vast majority of power is supplied by large rotating AC generators turning in synch with the frequency of the grid. The frequency of all these generators will be identical and is tied directly to the RPM of the generators themselves, generally 3600 RPM for gas turbines and 1800 RPM for nuclear plants. If there is sufficient power in the generators then the frequency can be maintained at the desired rate (i.e. 50 Hz or 60 Hz depending on the locale). The power from the individual generators will lead the grid in phase slightly by an amount roughly corresponding to the power they deliver to the grid. An increase in the power load is accompanied by a concurrent increase in the power supplied to the generators, generally by the governors automatically opening a steam or gas inlet valve to supply more power to the turbine. However, if there is not sufficient power, even for a brief period of time, then generator RPM and the frequency drop. This is much like what happens to a car on cruise control when you start going up a hill: if the hill is not too steep you can maintain speed, but once you reach the limits of the torque supplied by the engine, the car and engine slow down. If the combined output of all the generators cannot supply enough power then the frequency will drop for the entire grid. All the generators slow down just like your car engine on a hill. For large grids the presence of many generators and a large distributed load makes frequency management easier because any given load is a much smaller percentage of the combined capacity. For smaller grids, there will be a much larger fluctuation in capacity as delays in matching power supplied are harder to manage when the loads represent a relatively larger percentage of the generated power. So a battery system like the one in the article is really designed to keep short-term fluctuations in power requirements from dropping the frequency because of lags in the governors and generators which require a finite time to adjust to the new power requirements. These "frequency regulator" power stations can supply very high power for short bursts to keep the power requirements even so that the other generators don't see too much load faster than they can respond due to mechanical limitations. | {
"source": [
"https://physics.stackexchange.com/questions/2641",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/961/"
]
} |
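A toy "swing equation" model makes the load/frequency coupling described above concrete. Everything below is an illustrative sketch with assumed round numbers (inertia constant, droop, governor time constant), not data from the article: a step increase in load makes the frequency sag until the governor opens the valve enough to rebalance supply and demand.

```python
# Toy single-machine frequency dynamics: inertia plus a droop governor (assumed numbers).
f0 = 60.0      # nominal frequency, Hz
H  = 5.0       # inertia constant, s (assumed)
R  = 0.05      # governor droop, per-unit frequency per per-unit power (assumed)
Tg = 2.0       # governor time constant, s (assumed)

dt, T = 0.01, 30.0
f, Pm, Pl = f0, 1.00, 1.00                  # start balanced at 1.0 per-unit
for step in range(int(T / dt)):
    t = step * dt
    if t >= 1.0:
        Pl = 1.05                           # 5 % step increase in load at t = 1 s
    dfdt  = (Pm - Pl) * f0 / (2 * H)        # swing equation: frequency follows the imbalance
    dPmdt = (1.0 - (f - f0) / (R * f0) - Pm) / Tg   # governor slowly opens the valve
    f  += dfdt * dt
    Pm += dPmdt * dt
    if step % 500 == 0:
        print(f"t = {t:5.1f} s   f = {f:7.3f} Hz   Pm = {Pm:5.3f} pu")
# The frequency dips below 60 Hz, then settles slightly low while Pm rises to meet the load.
```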
2,644 | I've always assumed/been told that Newton's 2nd law is an empirical law — it must be discovered by experiment. If this is the case, what experiments did Newton do to discover this? Is it related to his studies of the motion of the moon and earth? Was he able to analyze this data to see that the masses were inversely related to the acceleration, if we assume that the force the moon exerts on the earth is equal to the force the earth exerts on the moon? According to Wikipedia, the Principia reads: Lex II: Mutationem motus proportionalem esse vi motrici impressae, et fieri secundum lineam rectam qua vis illa imprimitur. Translated as: Law II: The alteration of motion is ever proportional to the motive force impress'd; and is made in the direction of the right line in which that force is impress'd. My question is how did Newton come to this conclusion? I get that he knew from Galileo Galilei the idea of inertia, but this doesn't instantly tell us that the change in momentum must be proportional to the net force. Did Newton just assume this, or was there some experiment he performed to tell him this? | First of all, it would be preposterous to think that there was a simple recipe that Newton followed and that anyone else can use to deduce the laws of a similar caliber. Newton was a genius, and arguably the greatest genius in the history of science. Second of all, Newton was inspired by the falling apple - or, more generally, by the gravity observed on the Earth. Kepler understood the elliptical orbits of the planets. One of Kepler's laws, deduced by a careful testing of simple hypotheses against the accurate data accumulated by Tycho Brahe, said that the area drawn in a unit time remains constant. Newton realized that this is equivalent to the fact that the first derivative of the velocity, i.e. the second derivative of the position - something that he already understood intuitively - has to be directed radially. In modern terms, the constant-area law is known as the conservation of the angular momentum. That's how he knew the direction of the acceleration. He also calculated the dependence on the distance - by seeing that the acceleration of the apple is 3,600 times bigger than that of the Moon. So he systematically thought about the second derivatives of the position - the acceleration - in various contexts he had encountered - both celestial and terrestrial bodies. And he was able to determine that the second derivative could have been computed from the coordinates of the objects. He surely conjectured very quickly that all Kepler's laws can be derived from the laws for the second derivatives - and because it was true, it was straightforward for him to prove this conjecture. Obviously, he had to discover the whole theory - both $F=ma$ (or, historically more accurately, $F=dp/dt$) as well as a detailed prescription for the force - e.g. $F=Gm_1m_2/r^2$ - at the same moment because a subset of these laws is useless without the rest. The appearance of the numerical constant in $F=ma$ or $p=mv$ is a trivial issue. The nontrivial part was of course to invent the mathematical notion of a derivative - especially because the most important one was the second derivative - and to see from the observations that the second derivative has the direction it has (from Kepler's law) and the dependence on the distance it has (from comparing the acceleration of the Moon and the apple falling from the tree). 
It wasn't a straightforward task that could have been solved by anyone but it was manifestly simple enough to be solved by Newton. So he had to invent the differential calculus, $F=ma$, as well as the formula for the gravitational force at the same moment to really appreciate what any component is good for in physics. | {
"source": [
"https://physics.stackexchange.com/questions/2644",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/807/"
]
} |
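The "3,600 times" comparison in the answer is easy to reproduce; the short check below uses standard textbook values for the Earth-Moon distance and the sidereal month (my own numbers, used only for illustration):

```python
# Apple vs. Moon: the Moon's centripetal acceleration is roughly g / 3600,
# consistent with an inverse-square force law over ~60 Earth radii.
from math import pi

g       = 9.81            # surface gravity, m/s^2
R_earth = 6.371e6         # Earth radius, m
r_moon  = 3.844e8         # Earth-Moon distance, m
T_moon  = 27.32 * 86400   # sidereal month, s

a_moon = 4 * pi**2 * r_moon / T_moon**2      # centripetal acceleration of the Moon
print(g / a_moon)                 # ~3600
print((r_moon / R_earth) ** 2)    # ~3640 -- consistent with a 1/r^2 force law
```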
2,658 | I'm doing magnification and lenses in class currently, and I really don't get why virtual and real images are called what they are. A virtual image occurs when the object is less than the focal length of the lens from the lens, and a real image occurs when an object is further than the focal length. But why virtual and real? What's the difference? You can't touch an image no matter what it's called, because it's just light. | You can project a real image onto a screen or wall, and everybody in the room can look at it. A virtual image can only be seen by looking into the optics and can not be projected. As a concrete example, you can project a view of the other side of the room using a convex lens, and can not do so with a concave lens. I'll steal some images from Wikipedia to help here: First consider the line optics of real images (from http://en.wikipedia.org/wiki/Real_image): Notice that the lines that converge to form the image point are all drawn solid. This means that there are actual rays, composed of photons originating at the source object. If you put a screen in the focal plane, light reflected from the object will converge on the screen and you'll get a luminous image (as in a cinema or an overhead projector). Next examine the situation for virtual images (from http://en.wikipedia.org/wiki/Virtual_image): Notice here that the image is formed by one or more dashed lines (possibly with some solid lines). The dashed lines are drawn off the back of the solid lines and represent the apparent path of light rays from the image to the optical surface, but no light from the object ever moves along those paths. The light energy from the object is dispersed, not collected, and can not be projected onto a screen. There is still an "image" there, because those dispersed rays all appear to be coming from the image. Thus, a suitable detector (like your eye) can "see" the image, but it can not be projected onto a screen. | {
"source": [
"https://physics.stackexchange.com/questions/2658",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4/"
]
} |
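A hedged thin-lens illustration of the distinction drawn above (the sign convention and numbers are my own choices): with $1/f = 1/d_o + 1/d_i$ for a converging lens, the image distance comes out positive (a real, projectable image) when the object is beyond the focal length, and negative (a virtual image) when it is inside it.

```python
# Thin-lens equation: positive d_i means a real image on the far side of the lens.
def image_distance(f, d_o):
    return 1.0 / (1.0 / f - 1.0 / d_o)

f = 10.0                                  # focal length in cm (illustrative)
for d_o in (30.0, 15.0, 5.0):
    d_i = image_distance(f, d_o)
    kind = "real (projectable)" if d_i > 0 else "virtual (seen only through the lens)"
    print(f"object at {d_o:4.1f} cm -> image at {d_i:6.1f} cm : {kind}")
```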
2,670 | Consider that we have a system described by a wavefunction $\psi(x)$. We then make an exact copy of the system, and anything associated with it (including the inner cogs and gears of the elementary particles, if any, as well as the fabric of spacetime), but where all distances are multiplied by a number $k$, so $\psi(x) \to \psi(kx)$; we consider the case $k>1$ (if $k=-1$ this is just the parity operation, so for $k<0$, from the little I read about this, we could express it as a product of P and "k" transformations). Consider then that all observables associated with the new system are identical to the original, i.e. we find that the laws of the universe are invariant to a scale transformation $x\to kx$. According to Noether's theorem then, there will be a conserved quantity associated with this symmetry. My question is: what would this conserved quantity be? Edit:
An incomplete discussion regarding the existence of this symmetry is mentioned here: What if the size of the Universe doubled? Edit2:
I like the answers, but I am missing the answer for NRQM! | The symmetry you are asking about is usually called a scale transformation or dilation and it, along with Poincare transformations and conformal transformations is part of the group of conformal isometries of Minkowski space. In a large class of theories one can construct an "improved" energy-momentum tensor $\theta^{\mu \nu}$ such that the Noether current corresponding to scale transformations is given by $s^\mu=x_\nu \theta^{\mu \nu}$. The spatial
integral of the time component of $s^\mu$ is the conserved charge. Clearly $\partial_\mu s^\mu = \theta^\mu_\mu$ so the conservation of $s^\mu$ is equivalent to the vanishing of the trace of the energy-momentum tensor. It should be noted that most quantum field theories are not invariant under scale and conformal transformations. Those that are are called conformal field theories and they have been studied in great detail in connection with phase transitions (where the theory becomes scale invariant at the transition point), string theory (the two-dimensional theory on the string world-sheet is a CFT) and some parts of mathematics (the study of Vertex Operator Algebras is the study of a particular kind of CFT). | {
"source": [
"https://physics.stackexchange.com/questions/2670",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1708/"
]
} |
2,690 | According to Noether's theorem, all conservation laws originate from invariance of a system to shifts in a certain space. For example, conservation of energy stems from invariance to time translation. What kind of symmetry creates the conservation of mass? | Mass is only conserved in the low-energy limit of relativistic systems. In relativistic systems, mass can be converted into energy, and you can have processes like massive electron-positron pairs annihilating to form massless photons. What is conserved (in theories obeying special relativity, at least) is mass-energy--this conservation is enforced by the time and space translation invariance of the theory. Since the amount of energy in the mass dominates the amount of energy in kinetic energy ($mc^{2}$ means a lot of energy is stored even in a small mass) for nonrelativistic motion, you get a very good approximation of mass conservation out of energy conservation. | {
"source": [
"https://physics.stackexchange.com/questions/2690",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/72/"
]
} |
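A short numerical footnote to the answer above (standard constants, my own choice of illustration): in $e^+e^- \to 2\gamma$ the rest mass disappears, but the corresponding mass-energy $2m_ec^2 \approx 1.02\ \mathrm{MeV}$ shows up in the photons, which is exactly the "mass-energy is conserved, mass alone is not" point.

```python
# Energy released when an electron-positron pair annihilates into photons.
m_e, c = 9.109e-31, 2.998e8                     # kg, m/s
E_joule = 2 * m_e * c**2                        # rest-mass energy of the pair
print(f"{E_joule / 1.602e-13:.3f} MeV")         # ~1.022 MeV carried away by the photons
```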
2,708 | What are the main practical applications that a Bose-Einstein condensate can have? | I assume you mean the relatively recent phenomenon of Bose-Einstein Condensation in dilute atomic vapors (first produced in 1995 in Colorado). The overall phenomenon of Bose-Einstein Condensation is closely related to superconductivity (in a very loose sense, you can think of the superconducting transition in a metal as the formation of a BEC of pairs of electrons), and that application would trump everything else. The primary application of atomic BEC systems is in basic research areas at the moment, and will probably remain so for the foreseeable future. You sometimes hear people talk about BEC as a tool for lithography, or things like that, but that's not likely to be a real commercial application any time soon, because the throughput is just too low. Nobody has a method for generating BEC at the sort of rate you would need to make interesting devices in a reasonable amount of time. As a result, most BEC applications will be confined to the laboratory. One of the hottest areas in BEC at the moment is the use of Bose condensates (and the related phenomenon of degenerate Fermi gases) to simulate condensed matter systems. You can easily make an "optical lattice" from an interference pattern of multiple laser beams that looks to the atoms rather like a crystal lattic in a solid looks to electrons: a regular array of sites where the particles could be trapped, with all the sites interconnected by tunneling. The big advantage BEC/ optical lattice systems have over real condensed matter systems is that they are more easily tunable. You can easily vary the lattice spacing, the strength of the interaction between atoms, and the number density of atoms in the lattice, which allows you to explore a range of different parameters with essentially the same sample, which is very difficult to do with condensed matter systems where you need to grow all new samples for every new set of values you want to explore. As a result, there is a great deal of work in using BEC systems to explore condensed matter physics, essentially making cold atoms look like electrons. There's a good review article, a couple of years old now, by Immanuel Bloch, Jean Dalibard, and Wilhelm Zwerger ( RMP paper , arxiv version ) that covers a lot of this work. And people continue to expand the range of experiments-- there's a lot of work ongoing looking at the effect of adding disorder to these systems, for example, and people have begun to explore lattice structures beyond the really easy to make square lattices of the earliest work. There is also a good deal of interest in BEC for possible applications in precision measurement. At the moment, some of the most sensitive detectors ever made for things like rotation, acceleration, and gravity gradients come from atom interferometry, using the wavelike properties of atoms to do interference experiments that measure small shifts induced by these effects. BEC systems may provide an improvement beyond what you can do with thermal beams of atoms in these sorts of systems. There are a number of issues to be worked out in this relating to interatomic interactions, but it's a promising area. Full Disclosure: My post-doc research was in this general area, though what I did was more a proof-of-principle demonstration than a real precision measurement. My old boss, Mark Kasevich, now at Stanford, does a lot of work in this area. 
The other really hot area of BEC research is in looking for ways to use BEC systems for quantum information processing. If you want to build a quantum computer, you need a way to start with a bunch of qubits that are all in the same state, and a BEC could be a good way to get there, because it consists of a macroscopic number of atoms occupying the same quantum state. There are a bunch of groups working on ways to start with a BEC, and separate the atoms in some way, then manipulate them to do simple quantum computing operations. There's a lot of overlap between these sub-sub-fields-- one of the best ways to separate the qubits for quantum information processing is to use an optical lattice, for example. But those are what I would call the biggest current applications of BEC research. None of these are likely to provide a commercial product in the immediate future, but they're all providing useful information about the behavior of matter on very small scales, which helps feed into other, more applied, lines of research. This is not by any stretch a comprehensive list of things people are doing with BEC, just some of the more popular areas over the last couple of years. | {
"source": [
"https://physics.stackexchange.com/questions/2708",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/592/"
]
} |
2,717 | I have been taught that the $\pi^0$ particle contains either an up quark and an anti-up quark or a down and an anti-down. How can these exist without annihilating? Also, it is its own antiparticle, but it doesn't make sense that the up version and down version would annihilate when they meet. Or is what I've been taught a simplification - if so, in what state does this particle exist? | Actually, the quark and antiquark do annihilate with each other. It just takes some amount of time for them to do so. The actual time that it takes for any given pion is random, and follows an exponential distribution, but the average time it takes is $8.4\times 10^{-17}\,\mathrm{s}$ according to Wikipedia , which we call the lifetime of the neutral pion. What you've learned is a simplification, in fact (it pretty much always is in physics). The actual state of a pion is a linear combination of the up state and the down state, $$\frac{1}{\sqrt{2}}(u\bar{u} - d\bar{d})$$ This is how it's able to be its own antiparticle: there aren't separate up and down versions of the neutral pion. Each one is a combination of both flavors. The orthogonal linear combination, $$\frac{1}{\sqrt{2}}(u\bar{u} + d\bar{d})$$ doesn't correspond to a real particle. (In a sense it "contributes" to the $\eta$ and $\eta'$ mesons, but I won't go into detail on that.) | {
"source": [
"https://physics.stackexchange.com/questions/2717",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/954/"
]
} |
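A tiny illustration of the "random, exponentially distributed decay time" statement above (the mean lifetime is the Wikipedia figure quoted in the answer; the sampling itself is just for illustration):

```python
# Sample decay times for neutral pions; the sample mean approaches the quoted lifetime.
import random

tau = 8.4e-17                                   # mean lifetime of the neutral pion, s
times = [random.expovariate(1 / tau) for _ in range(100_000)]
print(sum(times) / len(times))                  # close to 8.4e-17 s
```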
2,743 | This question appeared quite a time ago and was inspired, of course, by all the fuss around "LHC will destroy the Earth". Consider a small black hole that has somehow got inside the Earth. Under "small" I mean small enough not to destroy the Earth instantaneously, but large enough not to evaporate due to Hawking radiation. I need this because I want the black hole to "consume" the Earth. I think reasonable values for the mass would be $10^{15} - 10^{20}$ kilograms. Also let us suppose that the black hole is at rest relative to the Earth. The question is: How can one estimate the speed at which the matter would be consumed by the black hole in these circumstances? | In the LHC, we are talking about mini black holes of mass around $10^{-24}kg$, so when you talk about $10^{15}-10^{20}kg$ you talk about something in the range from the mass of Deimos (the smallest moon of Mars) up to $1/100$ the mass of the Moon. So we are talking about something really big. The Schwarzschild radius of such a black hole (using the $10^{20}$ value) would be $$R_s=\frac{2GM}{c^2}=1.46\times 10^{-7}m=0.146\mu m$$ We can consider that radius to be a measure of the cross section that we can use to calculate the rate that the BH accretes mass. So, the accretion would be a type of Bondi accretion (spherical accretion) that would give an accretion rate $$\dot{M}=\sigma\rho u=(4\pi R_s^2)\rho_{earth} u,$$ where $u$ is a typical velocity, which in our case would be the speed of sound and $\rho_{earth}$ is the average density of the earth interior.
The speed of sound in the interior of the earth can be evaluated to be on average something like $$c_s^2=\frac{GM_e}{3R_e}.$$ So, the accretion rate is $$\dot{M}=\frac{4\pi}{\sqrt{3}}\frac{G^2M_{BH}^2}{c^4}\rho_{earth}\sqrt{\frac{GM_e}{R_e}}.$$ That is an order of magnitude estimation that gives something like $\dot{M}=1.7\times10^{-6}kg/s$. If we take that at face value, it would take something like $10^{23}$ years for the BH to accrete $10^{24}kg$. If we factor in the change in radius of the BH, that time is probably much smaller, but even then it would be something much larger than the age of the universe. But that is not the whole picture. One should also take into account the possibility of having a smaller accretion rate due to the Eddington limit. As the matter accretes to the BH it gets hotter since the gravitational potential energy is transformed to thermal energy (virial theorem). The matter then radiates with some characteristic luminosity. The radiation exerts some back-force on the matter that is accreting, lowering the accretion rate. In this case I don't think that this particular effect plays any part in the evolution of the BH. | {
"source": [
"https://physics.stackexchange.com/questions/2743",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/386/"
]
} |
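For convenience, here is a short script (my own, using standard SI constants) that evaluates the formulas quoted in the answer above; the numbers land in the same ballpark as the $R_s \approx 1.5\times10^{-7}\,\mathrm{m}$ and $\dot M \sim 10^{-6}\,\mathrm{kg/s}$ estimates.

```python
# Order-of-magnitude evaluation of the Schwarzschild radius and Bondi-style accretion rate.
from math import pi, sqrt

G, c = 6.674e-11, 2.998e8
M_bh = 1e20                      # black hole mass, kg (upper end of the question's range)
M_e, R_e = 5.972e24, 6.371e6     # Earth mass and radius
rho = 5514.0                     # mean density of the Earth, kg/m^3

R_s  = 2 * G * M_bh / c**2                 # Schwarzschild radius
c_s  = sqrt(G * M_e / (3 * R_e))           # rough "speed of sound" estimate used above
Mdot = 4 * pi * R_s**2 * rho * c_s         # accretion rate through the cross-section

print(f"Schwarzschild radius: {R_s:.3e} m")          # ~1.5e-7 m
print(f"Accretion rate:       {Mdot:.2e} kg/s")      # a few 1e-6 kg/s, same ballpark as above
print(f"Years to eat 1e24 kg: {1e24 / Mdot / 3.15e7:.1e}")   # far longer than the age of the universe
```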
2,767 | Electromagnetic waves can be shielded by a perfect conductor. What about gravitational fields or waves? | In a consistent theory of gravity, there can't exist any objects that can shield the gravitational field in the same way as conductors shield the electric field. It follows from the positive-energy theorems and/or energy conditions (roughly saying that the energy density cannot be negative). To see why, just use the conductor to shield an ordinary electric field - which is what your problem reduces to temporarily for very low frequencies of the electromagnetic waves. The basic courses of electromagnetism allow one to calculate the electric field of a point-like charged source and a planar conductor: the electric field is identical to the original charge plus a "mirror charge" on the opposite side from the conductor's boundary. Importantly, the mirror charge has the opposite sign. In this way, one may guarantee that the electric field is transverse to the plane of the conductor. This fact makes the electromagnetic waves bounce off the mirror if you consider time-dependent fields. If you wanted to do an "analogous" thing for gravity, you would first have to decide what boundary conditions you want to be imposed for the gravitational field by the mysterious new object - what is your gravitational analogy of "$\mathbf E$ is orthogonal to the conductor". The metric field has many more components. Most of them will require you to consider a negative "mirror mass" - but the mass in a region can't be negative, otherwise the vacuum would be unstable (one could produce regions of negative and positive energy in pairs out of vacuum, without violating any conservation laws). Microscopically, one may also see why there can't be any counterpart of the conductor. The conductor allows to change the electric field discontinuously because it can support charges distributed over the boundary. The profile of the charge density goes like $\sigma\delta(z)$ if the conductor boundary sits at $z=0$. However, you would need a similar singular mass distribution to construct a gravitational counterpart. If the distribution failed to be positively definite, it would violate the positive-energy theorem or energy conditions, if you wish. If it were positively definite, it would create a huge gravitational field. Locally, the boundary of the gravitational "conductor" would have to look like an event horizon. But we know that the event horizons cannot shield the interior from the gravitational waves - the waves as well as everything else falls inside the black hole. Moreover, the price for such "non-shielding" is that you have to be killed by the singularity in a finite proper time. One may probably write some formal solutions that have some properties but they can't really work when all the behavior of gravity and the related consistency conditions are taken into account. One can't fundamentally construct a "gravitational conductor" because gravity means that the space itself remains dynamical and this fact can't be undone. In particular, in a consistent theory of gravity - in string/M-theory - you won't find any objects generalizing conductors to gravity. By the way, there is one object that comes close to a "gravitational conductor" in M-theory - the Hořava-Witten domain wall, a possible boundary of the 11-dimensional spacetime of M-theory. 
It can be placed at $x_{10}=0$ and about $1/2$ of the components of the metric tensor become unphysical near this domain wall (boundary of the world that carries the $E_8$ gauge multiplet) because the domain wall also acts as an orientifold plane. But such an orientifold plane is not just some object (like a conductor) that is "inserted" into a pre-existing space that doesn't change. Quite on the contrary, the character of the underlying spacetime is changed qualitatively: the world behind the orientifold plane is literally just a mirror copy of the world in front of the plane. So there is a lot of fundamentally incorrect thinking about gravity in all the answers that try to say "Yes". Gravity is not another force that is inserted into a pre-existing geometrical space; gravity is the curvature and dynamics of the spacetime itself. Once we say that the space is dynamical, we can't find any objects that would "undo" this fundamental assumption of general relativity. There are also some more detailed confusions in the other solutions. First, a white hole corresponds to a time-reversed black hole with the same (positive) mass but it is an unphysical time reversal because it violates the second law of thermodynamics: entropy has to increase which means that the (large entropy) black hole can be formed, but it cannot be "unformed". But a white hole, which is forbidden thermodynamically (and microscopically, it corresponds to the same microstates as black hole microstates, they just never behave in any "white hole" way), is still something else than a negative-mass black hole. A negative mass black hole isn't related to a positive-mass black hole by any "reflection". It is a solution without horizons, with a naked singularity, and can't occur in a consistent theory of gravity because it would cause instability of the vacuum. (Also, naked singularities can't be "produced" by any generic evolution in 3+1 dimensions because of Penrose's Cosmic Censorship Conjecture.) So white holes and negative-mass black holes are forbidden for different reasons. However, even if you considered them, it would still fail to be enough to create a "gravitational conductor" which is nothing else than the denial of the fact that the geometry of the spacetime is dynamical, and one can't freely construct infinite, delta-function-like mass densities without completely changing the shape of spacetime. | {
"source": [
"https://physics.stackexchange.com/questions/2767",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1218/"
]
} |
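The image-charge construction invoked in the answer can be checked numerically. The sketch below (my own, in units where $1/4\pi\varepsilon_0 = 1$) places a charge above a grounded plane together with its mirror charge and confirms that the total field on the plane has no tangential component, which is the boundary condition a conductor enforces and which, as argued above, has no gravitational counterpart.

```python
# Field of a point charge plus its image charge, evaluated on the conductor plane z = 0.
import numpy as np

q, d = 1.0, 1.0                              # charge and its height above the plane

def E(r, r0, charge):
    """Coulomb field of a point charge at r0, evaluated at r (Gaussian-style units)."""
    s = r - r0
    return charge * s / np.linalg.norm(s) ** 3

for x in (0.5, 1.0, 2.0, 5.0):
    r = np.array([x, 0.0, 0.0])              # a point on the plane z = 0
    field = E(r, np.array([0, 0, d]), q) + E(r, np.array([0, 0, -d]), -q)
    print(f"x = {x}:  E_tangential = {field[:2]}   E_normal = {field[2]:+.4f}")
# The tangential components vanish identically; only the normal component survives.
```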
2,774 | Imagine a bar spinning like a helicopter propeller at $\omega$ rad/s. Because the extremes of the bar go at speed $$V = \omega r$$ we could then reach near $c$ (the speed of light) by applying some finite amount of energy, just by choosing $$\omega = V / r$$ The bar should be long, low-density, and strong, to minimize the amount of energy needed. For example, for a $2000\,\mathrm{m}$ bar, $$\omega = 300 000 \frac{\mathrm{rad}}{\mathrm{s}} = 2864789\,\mathrm{rpm}$$ (a dental drill can commonly rotate at $400000\,\mathrm{rpm}$), and with the dental-drill rate $V$ = 14% of the speed of light. Then I say this experiment could really be made and the bar extremes could approach $c$. What do you say? EDIT: Our planet is orbiting the Sun, and it's orbiting the Milky Way, and who knows what else; then any point on Earth has a speed of 500 km/s or more against the CMB. I wonder: if we are orbiting something at that speed, then there would be detectable relativistic effects in different directions of measurement; simply by extending a long bar or any directional mass in different galactic directions we should measure a mass change due to relativity, simply because $V = \omega r$. What do you think? | Imagine a rock on a rope. As you rotate the rope faster and faster, you need to pull stronger and stronger to provide the centripetal force that keeps the stone in orbit. The increasing tension in the rope would eventually break it. The very same thing would happen with the bar (just replace the rock with the bar's center of mass). And naturally, all of this would happen at speeds far below the speed of light. Even if you imagined that there exists a material that could sustain the tension at relativistic speeds, you'd need to take into account that a signal can't travel faster than the speed of light. This means that the bar can't be rigid. It would bend and the far end would trail around. So it's hard to even talk about rotation at these speeds. One thing that is certain is that strange things would happen. But to describe this fully you'd need a relativistic model of solid matter. People often propose arguments similar to yours to show Special Relativity fails. In reality what fails is our intuition about materials, which is completely classical. | {
"source": [
"https://physics.stackexchange.com/questions/2774",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
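A rough classical estimate backs up the "tension breaks it first" point made above. For a thin uniform bar of half-length $R$ spinning about its centre, the stress at the centre is $\rho\,v_{\mathrm{tip}}^2/2$, so the required specific strength (stress divided by density) is $v_{\mathrm{tip}}^2/2$ whatever the material. The "best fibre" figure below is an assumed, generously rounded value used only for comparison.

```python
# Specific strength needed to hold a 2000 m bar together at the dental-drill spin rate.
from math import pi

c = 2.998e8                      # m/s
R = 1000.0                       # half-length of the 2000 m bar, m
omega = 400_000 * 2 * pi / 60    # the dental-drill figure from the question, in rad/s

v_tip = omega * R
required = v_tip ** 2 / 2        # J/kg of specific strength needed at the centre
best_fibre = 5e7                 # J/kg, an assumed round value for the strongest fibres

print(f"tip speed: {v_tip:.2e} m/s  ({v_tip / c:.0%} of c)")
print(f"required specific strength: {required:.1e} J/kg")
print(f"shortfall factor: {required / best_fibre:.0e}")   # ~1e7: the bar breaks long before this
```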
2,838 | In popular science books and articles, I keep running into the claim that the total energy of the Universe is zero, "because the positive energy of matter is cancelled out by the negative energy of the gravitational field". But I can't find anything concrete to substantiate this claim. As a first check, I did a calculation to compute the gravitational potential energy of a sphere of uniform density of radius $R$ using Newton's Laws and threw in $E=mc^2$ for the energy of the sphere, and it was by no means obvious that the answer is zero! So, my questions: What is the basis for the claim – does one require General Relativity, or can one get it from Newtonian gravity? What conditions do you require in the model, in order for this to work? Could someone please refer me to a good paper about this? | On my blog, I published a popular text on why energy conservation becomes trivial (or is violated) in general relativity (GR). To summarize four of the points: In GR, spacetime is dynamical, so in general, it is not time-translation invariant. One therefore can't apply Noether's theorem to argue that energy is conserved. One can see this in detail in cosmology: the energy carried by radiation decreases as the universe expands since every photon's wavelength increases. The cosmological constant has a constant energy density while the volume increases, so the total energy carried by the cosmological constant (dark energy), on the contrary, grows. The latter increase is the reason why the mass of the universe is large - during inflation, the total energy grew exponentially for 60+ $e$-foldings, before it was converted to matter that gave rise to early galaxies. If one defines the stress-energy tensor as the variation of the Lagrangian with respect to the metric tensor, which is okay for non-gravitating field theories, one gets zero in GR because the metric tensor is dynamical and the variation — like all variations — has to vanish because this is what defines the equations of motion. In translationally invariant spaces such as Minkowski space, the total energy is conserved again because Noether's theorem may be revived; however, one can't "canonically" write this energy as the integral of energy density over the space; more precisely, any choice to distribute the total energy "locally" will depend on the chosen coordinate system. | {
"source": [
"https://physics.stackexchange.com/questions/2838",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
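A tiny numerical illustration of the two scalings mentioned in the answer (toy units of my own): in a comoving box of side $a$, each photon's energy falls as $1/a$ while the total energy in a constant vacuum energy density grows as $a^3$.

```python
# Photon energy vs. vacuum energy in an expanding comoving box (arbitrary units).
rho_vac = 1.0          # constant vacuum energy density
E_photon0 = 1.0        # photon energy when a = 1
for a in (1.0, 2.0, 4.0, 8.0):
    E_photon = E_photon0 / a          # wavelength stretches with the expansion
    E_vacuum = rho_vac * a ** 3       # same density, bigger volume
    print(f"a = {a:3.0f}:  E_photon = {E_photon:6.3f}   E_vacuum(box) = {E_vacuum:7.1f}")
```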
3,005 | I've started reading Peskin and Schroeder on my own time, and I'm a bit confused about how to obtain Maxwell's equations from the (source-free) lagrangian density $L = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$ (where $F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$ is the field tensor). Substituting in for the definition of the field tensor yields $L = -\frac{1}{2}[(\partial_\mu A_\nu)(\partial^\mu A^\nu) - (\partial_\mu A_\nu)(\partial^\nu A^\mu)]$. I know I should be using $A^\mu$ as the dynamical variable in the Euler-Lagrange equations, which become $\frac{\partial L}{\partial A_\mu} - \partial_\mu\frac{\partial L}{\partial(\partial_\mu A_\nu)} = - \partial_\mu\frac{\partial L}{\partial(\partial_\mu A_\nu)}$, but I'm confused about how to proceed from here. I know I should end up with $\partial_\mu F^{\mu\nu} = 0$, but I don't quite see why. Since $\mu$ and $\nu$ are dummy indices, I should be able to change them: how do the indices in the lagrangian relate to the indices in the derivatives in the Euler-Lagrange equations? | Well, you are almost there. Use the fact that
$$ {\partial (\partial_{\mu} A_{\nu}) \over \partial(\partial_{\rho} A_{\sigma})} = \delta_{\mu}^{\rho} \delta_{\nu}^{\sigma}$$
which is valid because $\partial_{\mu} A_{\nu}$ are $d^2$ independent components. | {
"source": [
"https://physics.stackexchange.com/questions/3005",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
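The index bookkeeping the question struggles with can be done symbolically. The sympy sketch below (my own, not from the original exchange) treats each $\partial_\mu A_\nu$ as an independent symbol, uses the identity given in the answer, and confirms that $\partial L/\partial(\partial_\rho A_\sigma) = -F^{\rho\sigma}$, so the Euler-Lagrange equations reduce to $\partial_\rho F^{\rho\sigma} = 0$.

```python
# Symbolic Euler-Lagrange step for L = -1/4 F_{mu nu} F^{mu nu}.
import sympy as sp

eta = sp.diag(1, -1, -1, -1)                                   # Minkowski metric
dA = sp.Matrix(4, 4, lambda m, n: sp.Symbol(f"dA_{m}{n}"))     # dA[m, n] stands for d_m A_n

F_low = sp.Matrix(4, 4, lambda m, n: dA[m, n] - dA[n, m])      # F_{mu nu}
F_up  = eta * F_low * eta                                      # F^{mu nu} (both indices raised)
L = -sp.Rational(1, 4) * sum(F_low[m, n] * F_up[m, n] for m in range(4) for n in range(4))

# dL / d(d_rho A_sigma) should equal -F^{rho sigma} for every pair of indices.
for rho in range(4):
    for sigma in range(4):
        assert sp.simplify(sp.diff(L, dA[rho, sigma]) + F_up[rho, sigma]) == 0
print("dL/d(d_rho A_sigma) = -F^{rho sigma}, so the E-L equations read d_rho F^{rho sigma} = 0")
```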
3,009 | I understand that people explain (in layman's terms at least) that the presence of mass "warps" space-time geometry, and this causes gravity. I have also of course heard the analogy of a blanket or trampoline bending under an object, which causes other objects to come together, but I always thought this was a hopelessly circular explanation because the blanket only bends because of "real" gravity pulling the object down and then pulling the other objects down the sloped blanket.
In other words, to me, it seems that curved space wouldn't have any actual effect on objects unless there's already another force present. So how is curved space-time itself actually capable of exerting a force (without some source of a fourth-dimensional force)? I apologize for my ignorance in advance, and a purely mathematical explanation will probably go over my head, but if it's required I'll do my best to understand. | Luboš's answer is of course perfectly correct. I'll try to give you some examples of why the straightest line is physically motivated (besides being mathematically exceptional as an extremal curve). Imagine a 2-sphere (the surface of a ball). If an ant lives there and he just walks straight, it should be obvious that he'll come back where he came from with his trajectory being a circle. Imagine a second ant and suppose he'll start to walk from the same point as the first ant and at the same speed but in a different direction. He'll also produce a circle and the two circles will cross at two points (you can imagine those circles as meridians and the crossing points as the north and south poles, respectively). Now, from the perspective of the ants, who aren't aware that they are living in a curved space, it will seem that there is a force between them because their distance will be changing in time non-linearly (think about those meridians again). This is one of the effects of the curved space-time on the movement of the particles (these are actually tidal forces). You might imagine that if the surface wasn't a sphere but instead was curved differently, the straight lines would also look different. E.g. for a trampoline you'll get ellipses (well, almost, they do not close completely, leading e.g. to the precession of the perihelion of Mercury). So much for the explanation of how curved space-time affects motion (the discussion above was just about space; if you introduce special relativity into the picture, you'll also get new effects of mixing of space and time, as usual). But how does the space-time know it should be curved in the first place? Well, it's because it obeys Einstein's equations (why it obeys these equations is a separate question, though). These equations describe precisely how matter affects space-time. They are of course compatible with Newtonian gravity in the low-velocity, small-mass regime, so e.g. for the Sun you'll obtain that trampoline curvature and the planets (which will also produce little dents, catching moons, for example; but forget about those for a moment because they are not that important for the movement of the planet around the Sun) will follow straight lines, moving in ellipses (again, almost ellipses). | {
"source": [
"https://physics.stackexchange.com/questions/3009",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1358/"
]
} |
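The two-ant picture above can be made quantitative on a unit sphere (my own parameters below): both ants leave the same point along great circles whose initial directions differ by $\alpha$, and their separation grows like $\sin s$ rather than linearly, which an ant unaware of the curvature would read as an attractive force.

```python
# Geodesic separation of two ants walking along great circles from the same point.
from math import cos, sin, acos, pi

alpha = pi / 6                                    # angle between the two initial directions
for s in [0.1 * k for k in range(0, 11)]:         # arc length walked, in units of the radius
    # spherical law of cosines for the angular separation of the two ants
    sep = acos(cos(s) ** 2 + sin(s) ** 2 * cos(alpha))
    flat = 2 * s * sin(alpha / 2)                 # what flat geometry would predict
    print(f"s = {s:4.2f}   separation = {sep:6.4f}   (flat-space prediction: {flat:6.4f})")
# The separation falls increasingly short of the flat-space value: the paths converge.
```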
3,014 | The simplistic undergrad explanation aside, I've never really understood what energy really is. I've been told that it's something when converted from one kind of something to another kind, or does some "work", as defined by us, but what is that something? Moreover, if the total amount of energy in the universe is finite and we cannot create energy. Then, where did it come from? I've learned from thermodynamics where it goes , but where does it come from? I know this sounds like something trivially simple, but there is so much going on in the physical universe and I just can't grasp what it is. Maybe it is because I lack the mathematical understanding that I can't grasp the subtle things the universe is doing. Still, I want to understand what it is doing. How do I get to the point of understanding what it's doing? ( Note: What prompted me to ask this was this answer . I'm afraid that it just puzzled me further and I sat there staring at the screen for a good 10 minutes.) | Energy is any quantity - a number with the appropriate units (in the SI system, Joules) - that is conserved as the result of the fact that the laws of physics don't depend on the time when phenomena occur, i.e. as a consequence of the time-translational symmetry. This definition, linked to Emmy Noether's fundamental theorem , is the most universal among the accurate definitions of the concept of energy. What is the "something"? One can say that it is a number with units, a dimensionful quantity. I can't tell you that energy is a potato or another material object because it is not (although, when stored in the gasoline or any "fixed" material, the amount of energy is proportional to the amount of the material). However, when I define something as a number , it is actually a much more accurate and rigorous definition than any definition that would include potatoes. Numbers are much more well-defined and rigorous than potatoes which is why all of physics is based on mathematics and not on cooking of potatoes. Centuries ago, before people appreciated the fundamental role of maths in physics, they believed e.g. that the heat - a form of energy - was a material called the phlogiston . But, a long long time ago experiments were done to prove that such a picture was invalid. Einstein's $E=mc^2$ partly revived the idea - energy is equivalent to mass - but even the mass in this formula has to be viewed as a number rather than something that is made out of pieces that can be "touched". Energy has many forms - terms contributing to the total energy - that are more "concrete" than the concept of energy itself. But the very strength of the concept of energy is that it is universal and not concrete: one may convert energy from one form to another. This multiplicity of forms doesn't make the concept of energy ill-defined in any sense. Because of energy's relationship with time above, the abstract definition of energy - the Hamiltonian - is a concept that knows all about the evolution of the physical system in time (any physical system). This fact is particularly obvious in the case of quantum mechanics where the Hamiltonian enters the Schrödinger or Heisenberg equations of motion, being put equal to a time-derivative of the state (or operators). The total energy is conserved but it is useful because despite the conservation of the total number, the energy can have many forms, depending on the context. 
Energy is useful and allows us to say something about the final state from the initial state even without solving the exact problem how the system looks at any moment in between. Work is just a process in which energy is transformed from one form (e.g. energy stored in sugars and fats in muscles) to another form (furniture's potential energy when it's being brought to the 8th floor on the staircase). That's when "work" is meant as a qualitative concept. When it's a quantitative concept, it's the amount of energy that was transformed from one form to another; in practical applications, we usually mean that it was transformed from muscles or the electrical grid or a battery or another "storage" to a form of energy that is "useful" - but of course, these labels of being "useful" are not a part of physics, they are a part of the engineering or applications (our subjective appraisals). | {
"source": [
"https://physics.stackexchange.com/questions/3014",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1065/"
]
} |
3,081 | Updated: In order to fold anything in half, it must be $\pi$ times longer than its thickness, and that depending on how something is folded, the amount its length decreases with each fold differs. – Britney Gallivan , the person who determined that the maximum number of times a paper or other finite thickness materials can be folded = 12. Mathematics of paper folding explains the mathematical aspect of this. I would like to know the physical explanation of this. Why is it not possible to fold a paper more than $N$ (=12) times? | I remember that the question in your title was busted in Mythbusters episode 72 .
A simple Google search also gives many other examples. As for single- vs alternate-direction folding, I'm guessing that the latter would allow for more folds. It is the thickness vs length along a fold that basically tells you if a fold is possible, since there is always going to be a curvature to the fold. Alternate-direction folding uses both flat directions of the paper, so you run out of length slightly slower. This would be a small effect since you have the linear decrease in length vs the exponential increase in thickness. Thanks to gerry for the key word (given in a comment above). I can now make my above guess more concrete. The limit on the number of folds (for a given length) does follow from the necessary curvature on the fold. The type of image you see for this makes it clear what's going on. For a piece of paper with thickness $t$, the length $L$ needed to make $n$ folds is (OEIS) $$ L/t = \frac{\pi}6 (2^n+4)(2^n-1) \,.$$
This formula was originally derived by (the then junior high school student) Britney Gallivan in 2001. I find it amazing that it was not known before that time... (and full credit to Britney).
For alternate folding of a square piece of paper, the corresponding formula is
$$ L/t = \pi 2^{3(n-1)/2} \,.$$ Both formulae give $L=t\,\pi$ as the minimum length required for a single fold. This is because, assuming the paper does not stretch and the inside of the fold is perfectly flat, a single fold uses up the length of a semicircle with outside diameter equal to the thickness of the paper. So if $L < t\,\pi$ then you don't have enough paper to go around the fold. Let's ignore a lot of the subtleties of the linear folding problem and say that each time you fold the paper you halve its length and double its thickness:
$ L_i = \tfrac12 L_{i-1} = 2^{-i}L_0 $ and $ t_i = 2 t_{i-1} = 2^{i} t_0 $,
where $L=L_0$ and $t=t_0$ are the original length and thickness respectively.
On the final fold (to make it $n$ folds) you need
$L_{n-1} \geq \pi t_{n-1}$, which implies $L \geq \frac14\pi\,2^{2n} t$.
Qualitatively this reproduces the linear folding result given above.
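For concreteness, the two exact formulae are easy to invert numerically; a small Python sketch (the 300 mm length and 0.1 mm thickness are just illustrative values) gives the maximum number of folds for each folding strategy:

```python
import math

def max_folds_linear(L, t):
    """Largest n with L >= (pi*t/6)*(2**n + 4)*(2**n - 1): folding in one direction."""
    n = 0
    while (math.pi * t / 6) * (2**(n + 1) + 4) * (2**(n + 1) - 1) <= L:
        n += 1
    return n

def max_folds_alternate(L, t):
    """Largest n with L >= pi*t*2**(3*(n-1)/2): alternate-direction folding of a square sheet."""
    n = 0
    while math.pi * t * 2**(3 * n / 2) <= L:  # length needed for fold number n+1
        n += 1
    return n

print(max_folds_linear(300, 0.1), max_folds_alternate(300, 0.1))  # lengths in mm -> 6 7
```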
The difference comes from the fact you lose slightly over half of the length on each fold. These formulae can be inverted and plotted to give the logarithmic graphs where $L$ is measured in units of $t$. The linear folding is shown in red and the alternate direction folding is given in blue. The boxed area is shown in the inset graphic and details the point where alternate folding permanently gains an extra fold over linear folding. You can see that there exist certain length ranges where you get more folds with alternate than linear folding. After $L/t = 64\pi \approx 201$ you always get one or more extra folds with alternate compared to linear. You can find similar numbers for two or more extra folds, etc... Looking back on this answer, I really think that I should ditch my theorist tendencies and put some approximate numbers in here. Let's assume that the 8 alternating fold limit for a "normal" piece of paper is correct. Normal office paper is approximately 0.1mm thick . This means that a normal piece of paper must be
$$ L \approx \pi\,(0.1\text{mm}) 2^{3\times 7/2} \approx 0.3 \times 2^{10.5}\,\text{mm}
\approx 0.3 \times 1450 \, \text{mm} \approx 450 \, \text{mm} \,.
$$
This is the same order of magnitude as ordinary office paper, e.g. A4 is 210mm * 297mm. The last range where you get the same number of folds for linear and alternate folding is $L/t \in (50\pi,64\pi) \approx (157,201)$,
where both methods yield 4 folds. For a square piece of paper 0.1mm thick, this corresponds to squares roughly 16mm and 20mm on a side respectively; below this range linear folding manages only three folds, while above it alternating folding reaches five. Some simple experiments show that this is approximately correct. | {
"source": [
"https://physics.stackexchange.com/questions/3081",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1378/"
]
} |
3,158 | From everything I've read about quantum mechanics and quantum entanglement phenomena, it's not obvious to me why quantum entanglement is considered to be an active link. That is, it's stated every time that measurement of one particle affects the other. In my head, there is a less magic explanation: the entangling measurement affects both particles in a way which makes their states identical, though unknown. In this case measuring one particle will reveal information about state of the other, but without a magical instant modification of remote entangled particle. Obviously, I'm not the only one who had this idea. What are the problems associated with this view, and why is the magic view preferred? | Entanglement is being presented as an "active link" only because most people - including authors of popular (and sometimes even unpopular, using the very words of Sidney Coleman) books and articles - don't understand quantum mechanics. And they don't understand quantum mechanics because they don't want to believe that it is fundamentally correct: they always want to imagine that there is some classical physics beneath all the observations. But there's none. You are absolutely correct that there is nothing active about the connection between the entangled particles. Entanglement is just a correlation - one that can potentially affect all combinations of quantities (that are expressed as operators, so the room for the size and types of correlations is greater than in classical physics). In all cases in the real world, however, the correlation between the particles originated from their common origin - some proximity that existed in the past. People often say that there is something "active" because they imagine that there exists a real process known as the "collapse of the wave function". The measurement of one particle in the pair "causes" the wave function to collapse, which "actively" influences the other particle, too. The first observer who measures the first particle manages to "collapse" the other particle, too. This picture is, of course, flawed. The wave function is not a real wave. It is just a collection of numbers whose only ability is to predict the probability of a phenomenon that may happen at some point in the future. The wave function remembers all the correlations - because for every combination of measurements of the entangled particles, quantum mechanics predicts some probability. But all these probabilities exist a moment before the measurement, too. When things are measured, one of the outcomes is just realized. To simplify our reasoning, we may forget about the possibilities that will no longer happen because we already know what happened with the first particle. But this step, in which the original overall probabilities for the second particle were replaced by the conditional probabilities that take the known outcome involving the first particle into account, is just a change of our knowledge - not a remote influence of one particle on the other. No information may ever be answered faster than light using entangled particles. Quantum field theory makes it easy to prove that the information cannot spread over spacelike separations - faster than light. 
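The "correlation without signalling" point is easy to check numerically for a spin singlet: whichever axis is measured on the first particle, the outcome statistics of the second particle are untouched. A small NumPy sketch (the measurement angles below are arbitrary illustrative choices):

```python
import numpy as np

def spin_projector(theta, outcome):
    """Projector onto spin +1/-1 along an axis at angle theta in the x-z plane."""
    n_dot_sigma = np.array([[np.cos(theta), np.sin(theta)],
                            [np.sin(theta), -np.cos(theta)]])  # n.sigma for n = (sin, 0, cos)
    return (np.eye(2) + outcome * n_dot_sigma) / 2

# Singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho = np.outer(psi, psi)

def joint_prob(a, b, ra, rb):
    """P(first particle gives ra along angle a, second gives rb along angle b)."""
    M = np.kron(spin_projector(a, ra), spin_projector(b, rb))
    return np.real(np.trace(M @ rho))

# The second observer's marginal probability of +1 is 1/2 whatever axis the first one picks:
for a in (0.0, 0.7, 2.1):
    p_second_plus = sum(joint_prob(a, 0.3, ra, +1) for ra in (+1, -1))
    print(round(p_second_plus, 6))  # 0.5 every time -> no signal from the distant choice
```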
An important fact in this reasoning is that the results of the correlated measurements are still random - we can't force the other particle to be measured "up" or "down" (and transmit information in this way) because we don't have this control even over our own particle (not even in principle: there are no hidden variables, the outcome is genuinely random according to the QM-predicted probabilities). I recommend the late Sidney Coleman's excellent lecture Quantum Mechanics In Your Face, in which he discusses this and other conceptual issues of quantum mechanics and the question of why people keep on saying silly things about it: http://motls.blogspot.com/2010/11/sidney-coleman-quantum-mechanics-in.html | {
"source": [
"https://physics.stackexchange.com/questions/3158",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1414/"
]
} |
3,168 | While this concept is widely used in physics, it is really puzzling (at least for beginners) that you just have to multiply two functions (or the function by itself) at different values of the parameter and then average over the domain of the function keeping the difference between those parameters: $$C(x)=\langle f(x'+x)g(x')\rangle$$ Is there any relatively simple illustrative examples that gives one the intuition about correlation functions in physics? | The correlation function you wrote is a completely general correlation of two quantities,
$$\langle f(X) g(Y)\rangle$$
You just use the symbol $x'$ for $Y$ and the symbol $x+x'$ for $X$. If the environment - the vacuum or the material - is translationally invariant, it means that its properties don't depend on overall translations. So if you change $X$ and $Y$ by the same amount, e.g. by $z$, the correlation function will not change. Consequently, you may shift by $z=-Y=-x'$ which means that the new $Y$ will be zero. So
$$\langle f(X) g(Y)\rangle = \langle f(X-Y)g(0)\rangle = \langle f(x)g(0) \rangle$$
As you can see, for translationally symmetric systems, the correlation function only depends on the difference of the coordinates i.e. separation of the arguments of $f$ and $g$, which is equal to $x$ in your case. So this should have explained the dependence on $x$ and $x'$. Now, what is a correlator? Classically, it is some average over the probabilistic distribution
$$\langle S \rangle = \int D\phi\,\rho(\phi) S(\phi)$$
This holds for $S$ being the product of several quantities, too. The integral goes over all possible configurations of the physical system and $\rho(\phi)$ is the probability density of the particular configuration $\phi$. In quantum mechanics, the correlation function is the expectation value in the actual state of the system - usually the ground state and/or a thermal state. For a ground state which is pure, we have
$$\langle \hat{S} \rangle = \langle 0 | \hat{S} | 0 \rangle$$
where the 0-ket-vector is the ground state, while for a thermal state expressed by a density matrix $\rho$, the correlation function is defined as
$$\langle \hat{S} \rangle = \mbox{Tr}\, (\hat{S}\hat{\rho})$$
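As a toy illustration of the classical ensemble average above, one can build a statistically translation-invariant field out of a few Fourier modes with random phases and estimate the two-point function by Monte Carlo; up to sampling noise it depends only on the separation, not on the base point (the wavenumbers and amplitudes below are arbitrary illustrative choices):

```python
import numpy as np

rng  = np.random.default_rng(0)
ks   = np.array([1.0, 2.0, 3.0])    # mode wavenumbers (illustrative)
amps = np.array([1.0, 0.5, 0.25])   # mode amplitudes (illustrative)

def field(x, phases):
    """One realization of a translation-invariant random field at position x."""
    return np.sum(amps * np.cos(ks * x + phases), axis=-1)

def correlator(x, xp, n=200_000):
    """Monte-Carlo estimate of <f(x + x') f(x')> over random-phase realizations."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=(n, len(ks)))
    return np.mean(field(x + xp, phases) * field(xp, phases))

# The estimate depends (up to sampling noise) only on the separation x, not on x':
print(correlator(0.7, 0.0), correlator(0.7, 5.0))  # both close to the exact value below
print(0.5 * np.sum(amps**2 * np.cos(ks * 0.7)))    # exact two-point function of this toy ensemble
```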
Well, correlation functions are functions that know about the correlation of the physical quantities $f$ and $g$ at two points. If the correlation is zero, it looks like the two quantities are independent of each other. If the correlation is positive, it looks like the two quantities are likely to have the same sign; the more positive it is, the more they're correlated. They're correlated with opposite signs if the correlation function is negative. In quantum field theory, the correlation function of two operators - just like the one you wrote - is known as the propagator and it is the mathematical expression that replaces all internal lines of Feynman diagrams. It tells you the probability amplitude for the corresponding particle to propagate from the point $x+x'$ to the point $x'$. It is usually nonzero inside the light cone only and depends on the difference of the coordinates only.
An exception to this is the Feynman Propagator in QED. It is nonzero outside the light cone as well, but invokes anti-particles, which cancel this nonzero contribution outside the light cone, and preserve causality. Correlation functions involving an arbitrary positive number of operators are known as the Green's functions or $n$-point functions if a product of $n$ quantities is in between the brackets. In some sense, the $n$-point functions know everything about the calculable dynamical quantities describing the physical system. The fact that everything can be expanded into correlation functions is a generalization of the Taylor expansions to the case of infinitely many variables. In particular, the scattering amplitude for $n$ external particles (the total number, including incoming and outgoing ones) may be calculated from the $n$-point functions. The Feynman diagrams mentioned previously are a method to do this calculation systematically: a complicated correlator may be rewritten into a function of the 2-point functions, the propagators, contracted with the interaction vertices. There are many words to physically describe a correlation function in various contexts - such as the response functions etc. The idea is that you insert an impurity or a signal into $x'$, that's your $g(x')$, and you study how much the field $f(x+x')$ at point $x+x'$ is affected by the impurity $g(x')$. | {
"source": [
"https://physics.stackexchange.com/questions/3168",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/386/"
]
} |
3,390 | Does anybody give a good textbook description of a quantum computer algorithm and how its different from an ordinary algorithm? | I'll avoid talking about Shor's algorithm and leave it for you to read on your own -- Shor's algorithm is the quantum algorithm, and is the reason quantum computing has become such a hot topic, and is not only a must read, but Scott Aaronson has provided a better introduction than I could ever manage, http://www.scottaaronson.com/blog/?p=208 Instead, I will guide you through Deutsch's algorithm, which solves a fantastically useless problem exponentially faster than any classical algorithm: Imagine I have a function $f$ on a configuration of $n$ bits, $C=\{0,1\}^n$ that takes each configuration to a single bit. I promise you beforehand that this function is either: Constant - all configurations are mapped to the same value. Balanced - exactly half of the configurations map to $1$, the other half to $0$. So classically, in the worst case you must evaluate the function for $2^{n-1}+1$ configurations ($O(2^n)$) to verify which category $f$ falls into. Now, a quantum computer isn't just a parallel processor, where you can give it a superposition of the configurations and get back $f$ evaluated on all of them. At the end of the day, you have to make a measurement which destroys our carefully crafted superposition -- so we have to be clever! The fundamental feature of a quantum algorithm is that we use unitary operations to transform our state and use interference between states so that when we measure the state at the end, it tells us something unambiguous. So without further ado, the Deutsch algorithm for $n=2$ qubits (you can do this for many more qubits, of course, but you wanted a simple example). The "quantum version" of our function is a unitary operation (a "quantum gate") that takes me from $|a\rangle|b\rangle$ to $|a\rangle|b+f(a)\rangle$ (where the addition is modulo 2). Also at my disposal is a gate that takes any single bit and puts it in a particular superposition: $$|1\rangle\rightarrow(|0\rangle-|1\rangle)/\sqrt{2}$$ $$|0\rangle\rightarrow(|0\rangle+|1\rangle)/\sqrt{2}$$ The first step in the algorithm is to prepare my two qubits as $|0\rangle|1\rangle$ and then apply this transformation, giving me the state $(|0\rangle+|1\rangle)(|0\rangle-|1\rangle)/2$. Now I evaluate the function by applying the gate I described earlier, taking my state to $$[|0\rangle(|0+f(0)\rangle-|1+f(0)\rangle)+|1\rangle(|0+f(1)\rangle-|1+f(1)\rangle)]/2$$ If we stare at this carefully, and think about arithmetic mod 2, we see that if $f(0)=1$ the first term picks up a minus sign relative to its state before, and if $f(0)=0$ nothing happens. So we can rewrite the state as $$[(-1)^{f(0)}|0\rangle(|0\rangle-|1\rangle)+(-1)^{f(1)}|1\rangle(|0\rangle-|1\rangle)]/2$$ or regrouping $$(|0\rangle+(-1)^{f(0)+f(1)}|1\rangle)(|0\rangle-|1\rangle)/2$$ Now we shed the second qubit (we don't need it anymore, and it's not entangled with the first bit), and apply the second transformation I listed once more (and regrouping, the algebra is simple but tedious) -- at last, our final state is $$[(1+(-1)^{f(0)+f(1)})|0\rangle+((1-(-1)^{f(0)+f(1)})|1\rangle]/2$$ And lastly, we measure! It should be clear from the state above that we've solved the problem... If $f$ is constant, then $f(0)=f(1)$ and we will always measure $|0\rangle$, and if $f$ is balanced then $f(0) \neq f(1)$ and we always measure $|1\rangle$. 
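Since everything above is linear algebra on a four-component state vector, the whole circuit fits in a few lines of NumPy. In the sketch below, the superposition-making gate described above is the Hadamard gate H, and the oracle is the $|a\rangle|b\rangle \to |a\rangle|b+f(a)\rangle$ gate written as a 4x4 permutation matrix:

```python
import numpy as np

H  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # the |0>,|1> -> (|0> +/- |1>)/sqrt(2) gate
I2 = np.eye(2)

def oracle(f):
    """U_f |a>|b> = |a>|b + f(a) mod 2>, as a 4x4 permutation matrix (basis index 2a + b)."""
    U = np.zeros((4, 4))
    for a in (0, 1):
        for b in (0, 1):
            U[2 * a + (b ^ f(a)), 2 * a + b] = 1
    return U

def deutsch(f):
    state = np.kron([1, 0], [0, 1])               # prepare |0>|1>
    state = np.kron(H, H) @ state                 # put both qubits into the superpositions above
    state = oracle(f) @ state                     # a single oracle call
    state = np.kron(H, I2) @ state                # interfere the first qubit
    p_first_is_1 = state[2] ** 2 + state[3] ** 2  # probability of measuring |1> on qubit one
    return "balanced" if p_first_is_1 > 0.5 else "constant"

print(deutsch(lambda a: 0))      # constant
print(deutsch(lambda a: 1))      # constant
print(deutsch(lambda a: a))      # balanced
print(deutsch(lambda a: 1 - a))  # balanced
```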
Some final comments: hopefully this gives you some taste for the structure of a quantum algorithm, even if it seems like kind of a useless thing -- I strongly recommend going and reading the article I linked to at the beginning of this answer. | {
"source": [
"https://physics.stackexchange.com/questions/3390",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1383/"
]
} |
3,436 | I'm a big fan of the podcast Astronomy Cast and a while back I was listening to a Q&A episode they did. A listener sent in a question that I found fascinating and have been wondering about ever since. From the show transcript : Arunus Gidgowdusk from Lithuania asks: "If you took a one kilogram mass and accelerated it close to the speed of light would it form into a black hole? Would it stay a black hole if you then decreased the speed?" Dr. Gay, an astrophysicist and one of the hosts, explained that she'd asked a number of her colleagues and that none of them could provide a satisfactory answer. I asked her more recently on Facebook if anyone had come forward with one and she said they had not. So I thought maybe this would be a good place to ask. | The answer is no. The simplest proof is just the principle of relativity: the laws of physics are the same in all reference frames. So you can look at that 1-kg mass in a reference frame that's moving along with it. In that frame, it's just the same 1-kg mass it always was; it's not a black hole. | {
"source": [
"https://physics.stackexchange.com/questions/3436",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1479/"
]
} |
3,500 | It is an usual practice that any quantum field theory starts with a suitable Lagrangian density. It has been proved enormously successful. I understand, it automatically ensures valuable symmetries of physics to be preserved. But nevertheless the question about the generality of this approach keeps coming to my mind. My question is how one can be sure that this approach has to be always right and fruitful. Isn't it possible, at least from the mathematical point of view that a future theory of physics does not subscribe to this approach? | That's an excellent question, which has a few aspects: Can you quantize any given Lagrangian? The answer is no. There are classical Lagrangians which do not correspond to a valid field theory, for example those with anomalies. Do you have field theories with no Lagrangians? Yes, there are some field theories which have no Lagrangian description. You can calculate using other methods, like solving consistency conditions relating different observables. Does the quantum theory fix the Lagrangian? No, there are examples of quantum theories which could result from quantization of two (or more) different Lagrangians, for example involving different degrees of freedom. The way to think about it is that a Lagrangian is not a property of a given quantum theory, it also involves a specific classical limit of that theory. When the theory does not have a classical limit (it is inherently strongly coupled) it doesn't need to have a Lagrangian. When the theory has more than one classical limit, it can have more than one Lagrangian description. The prevalence of Lagrangians in studying quantum field theory comes because they are easier to manipulate than other methods, and because usually you approach a quantum theory by "quantizing" - meaning you start with a classical limit and include quantum corrections systematically. It is good to keep in mind though that this approach has its limitations. | {
"source": [
"https://physics.stackexchange.com/questions/3500",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
3,503 | Consider a complex scalar field $\phi$ with the Lagrangian: $$L = \partial_\mu\phi^\dagger\partial^\mu\phi - m^2 \phi^\dagger\phi.$$ Consider also two real scalar fields $\phi_1$ and $\phi_2$ with the Lagrangian: $$L = \frac12\partial_\mu\phi_1\partial^\mu\phi_1 - \frac12m^2 \phi_1^2
+\frac12\partial_\mu\phi_2\partial^\mu\phi_2 - \frac12m^2 \phi_2^2.$$ Are these two systems essentially the same? If not -- what is the difference? | There are some kind of silly answers here, except for QGR who correctly says they are identical. The two Lagrangians are isomorphic, the fields have just been relabeled. So anything you can do with one you can do with the other. The first has manifest $U(1)$ global symmetry, the second manifest $SO(2)$ but these two Lie algebras are isomorphic. If you want to gauge either global symmetry you can do it in the obvious way. You can use a complex scalar to represent a single charged field, but you could also use it to represent two real neutral fields. If you don't couple to some other fields in a way that allows you to measure the charge there is no difference. | {
"source": [
"https://physics.stackexchange.com/questions/3503",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/386/"
]
} |
3,518 | If the space-time metric is expanding with the expansion of the universe, if I could travel back in time, would I be less dense than the matter in that previous era? | | {
"source": [
"https://physics.stackexchange.com/questions/3518",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
3,534 | The common understanding is that, setting air resistance aside, all objects dropped to Earth fall at the same rate. This is often demonstrated through the thought experiment of cutting a large object in half—the halves clearly don't fall more slowly just from having been sliced into two pieces. However, I believe the answer is that when two objects fall together, attached or not, they do "fall" faster than an object of less mass alone does. This is because not only does the Earth accelerate the objects toward itself, but the objects also accelerate the Earth toward themselves. Considering the formula: $$
F_{\text{g}} = \frac{G m_1 m_2}{d^2}
$$ Given $F = ma$ thus $a = F/m$ , we might note that the mass of the small object doesn't seem to matter because when calculating acceleration, the force is divided by $m$ , the object's mass. However, this overlooks that a force is actually applied to both objects, not just to the smaller one. An acceleration on the second, larger object is found by dividing $F$ , in turn, by the larger object's mass. The two objects' acceleration vectors are exactly opposite, so closing acceleration is the sum of the two: $$
a_{\text{closing}} = \frac{F}{m_1} + \frac{F}{m_2}
$$ Since the Earth is extremely massive compared to everyday objects, the acceleration imparted on the object by the Earth will radically dominate the equation. As the Earth is $\sim 5.972 \times {10}^{24} \, \mathrm{kg} ,$ a falling object of $5.972 \times {10}^{1} \, \mathrm{kg}$ (a little over 13 pounds) would accelerate the Earth about $\frac{1}{{10}^{24}}$ as much, which is one part in a trillion trillion. Thus, in everyday situations we can for all practical purposes treat all objects as falling at the same rate because this difference is so small that our instruments probably couldn't even detect it. But I'm hoping not for a discussion of practicality or what's measurable or observable, but of what we think is actually happening . Am I right or wrong? What really clinched this for me was considering dropping a small Moon-massed object close to the Earth and dropping a small Earth-massed object close to the Moon. This thought experiment made me realize that falling isn't one object moving toward some fixed frame of reference, and treating the Earth as just another object, "falling" consists of multiple objects mutually attracting in space . Clarification: one answer points out that serially lifting and dropping two objects on Earth comes with the fact that during each trial, the other object adds to the Earth's mass. Dropping a bowling ball (while a feather waits on the surface), then dropping the feather afterward (while the bowling ball stays on the surface), changes the Earth's mass between the two experiments. My question should thus be considered from the perspective of the Earth's mass remaining constant between the two trials (such as by removing each of the objects from the universe, or to an extremely great distance, while the other is being dropped). | Using your definition of "falling," heavier objects do fall faster, and here's one way to justify it: consider the situation in the frame of reference of the center of mass of the two-body system (CM of the Earth and whatever you're dropping on it, for example). Each object exerts a force on the other of $$F = \frac{G m_1 m_2}{r^2}$$ where $r = x_2 - x_1$ (assuming $x_2 > x_1$) is the separation distance. So for object 1, you have $$\frac{G m_1 m_2}{r^2} = m_1\ddot{x}_1$$ and for object 2, $$\frac{G m_1 m_2}{r^2} = -m_2\ddot{x}_2$$ Since object 2 is to the right, it gets pulled to the left, in the negative direction. Canceling common factors and adding these up, you get $$\frac{G(m_1 + m_2)}{r^2} = -\ddot{r}$$ So it's clear that when the total mass is larger, the magnitude of the acceleration is larger, meaning that it will take less time for the objects to come together. If you want to see this mathematically, multiply both sides of the equation by $\dot{r}\mathrm{d}t$ to get $$\frac{G(m_1 + m_2)}{r^2}\mathrm{d}r = -\dot{r}\mathrm{d}\dot{r}$$ and integrate, $$G(m_1 + m_2)\left(\frac{1}{r} - \frac{1}{r_i}\right) = \frac{\dot{r}^2 - \dot{r}_i^2}{2}$$ Assuming $\dot{r}_i = 0$ (the objects start from relative rest), you can rearrange this to $$\sqrt{2G(m_1 + m_2)}\ \mathrm{d}t = -\sqrt{\frac{r_i r}{r_i - r}}\mathrm{d}r$$ where I've chosen the negative square root because $\dot{r} < 0$, and integrate it again to find $$t = \frac{1}{\sqrt{2G(m_1 + m_2)}}\biggl(\sqrt{r_i r_f(r_i - r_f)} + r_i^{3/2}\cos^{-1}\sqrt{\frac{r_f}{r_i}}\biggr)$$ where $r_f$ is the final center-to-center separation distance. Notice that $t$ is inversely proportional to the total mass, so larger mass translates into a lower collision time. 
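Plugging round numbers into the closed-form expression makes the size of the effect explicit; the drop height, masses and constants below are just illustrative values:

```python
import math

G  = 6.674e-11       # m^3 kg^-1 s^-2
M  = 5.972e24        # mass of the Earth, kg
R  = 6.371e6         # radius of the Earth, m
h  = 10.0            # drop height, m
ri, rf = R + h, R    # initial and final center-to-center separations

def fall_time(m2):
    """Meeting time from the closed-form expression above (released from relative rest)."""
    pref = 1.0 / math.sqrt(2.0 * G * (M + m2))
    return pref * (math.sqrt(ri * rf * (ri - rf)) + ri**1.5 * math.acos(math.sqrt(rf / ri)))

print(fall_time(7.0))             # ~1.43 s for a 7 kg ball, essentially sqrt(2h/g)
# The dropped mass enters only through the 1/sqrt(2G(M + m2)) prefactor, so the leading-order
# fractional difference in fall time between a 7 kg ball and a 10 g feather is about
print((7.0 - 0.01) / (2.0 * M))   # ~6e-25, far below double precision, let alone any experiment
```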
In the case of something like the Earth and a bowling ball, one of the masses is much larger, $m_1 \gg m_2$. So you can approximate the mass dependence of $t$ using a Taylor series, $$\frac{1}{\sqrt{2G(m_1 + m_2)}} = \frac{1}{\sqrt{2Gm_1}}\biggl(1 - \frac{1}{2}\frac{m_2}{m_1} + \cdots\biggr)$$ The leading term is completely independent of $m_2$ (mass of the bowling ball or whatever), and this is why we can say, to a leading order approximation, that all objects fall at the same rate on the Earth's surface. For typical objects that might be dropped, the first correction term has a magnitude of a few kilograms divided by the mass of the Earth, which works out to $10^{-24}$. So the inaccuracy introduced by ignoring the motion of the Earth is roughly one part in a trillion trillion, far beyond the sensitivity of any measuring device that exists (or can even be imagined) today. | {
"source": [
"https://physics.stackexchange.com/questions/3534",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1483/"
]
} |
3,541 | I've read a number of the helpful Q&As on photons that mention the mass/mass-less issue. Do I understand correctly that the idea of mass-less (a rest mass of 0) may be just a convention to make the equations work? From a layperson's view, it's difficult to understand how a particle of light (photon) can be mass-less. A physical object (everyday world-large or quantum-small) must have a mass. Yet, if my understanding is correct, the mass of a moving object/particle increases in proportion to its speed/velocity...so that at the speed of light, its mass would be infinite. A photon travels at the speed of light, but it obviously doesn't have infinite mass, right? Can someone formulate a practical explanation that can be understood by middle-school to high school kids? Much thanks for the help. Wow--your answers to my original Q below clear up much of my confusion. I now have the daunting task of going over these nuggets and working up an equation-less (hopefully) explanation of the mass-less photon for non-physicist types. Yes, from a layperson's view , it does seem remarkable that an existing piece of matter --
which has to be made of physical substance --could have zero mass at rest (though a photon is never at rest). It would be almost understandable if a piece of matter made of nothing had zero mass, but that seems to be an oxymoron, and "nothing" would equate to nonexistent, right? In case you might find it interesting: I'm working on a writing project that posits we inhabit a universe that consists of matter (physical stuff) only, and that the NON-physical (aka supernatural) does not (and cannot) exist. For instance, if a purported supernatural phenomenon is found to actually exist, then by definition, its existence is proof that it is mundane/natural. All it would take to disprove this premise is reliable proof that ONE supernatural event has occurred. Despite thousands of such claims, that's never yet happened. Who else better than physicists to confirm my premise? However, I do wish the TV physicists would explain the terms they throw about, some of which mislead/confuse their lay viewers. Case in point: "The universe is made up of matter and energy" (without properly defining the term "energy" as a property of matter). The result is that laypersons are left with the impression that energy must therefore be something apart from or independent of matter (ie, nonphysical). Their use of the term "pure energy" without specifying exactly what that means adds to the confusion. (Thanks to your replies on this forum, I now understand that "pure energy" refers to photon particles.) However, "psychics" and other charlatans take advantage of such confusion by hijacking terms like energy (as in "psychic energy"), frequencies, vibrations, etc to give perceived scientific legitimacy to their claims that a supernatural spirit world, etc., exists. As you may realize, the majority of people in the US (per 2009 Harris Poll) and around the world believe in the existence of nonphysical/supernatural stuff such as ghosts and spirits. My purpose is to give laypersons the information they need to distinguish what's real from what's not. Thanks so much for help...And, PLEASE, add any further comments you think might be helpful/insightful to better inform laypersons. | There is absolutely nothing conventional about the mass of different particle species. For any particle moving in the vacuum, you may measure the total energy $E$ (including the latent energy) and the momentum $p$. It turns out experimentally - and Einstein's special theory of relativity guarantees - that the combination
$$E^2 - p^2 c^2$$
doesn't depend on the velocity but only on the type of the particle. It is a quantity describing the particle type and we call it
$$E^2 - p^2 c^2 = m_0^2 c^4$$
This determines the rest mass $m_0$ of the particle. The formula above works for any particle in the vacuum, any speed, and is always non-singular. Photons have $E=pc$ which implies that $m_0=0$. The rest mass of a photon is equal to zero. Indeed, that's also the reason why one can't really have a photon at rest, $v=0$. If a speed of something is $c$ in one reference frame, it will stay $c$ in any (non-singular) reference frame - that's another postulate of the special theory of relativity. So one can't ever make the speed of the photon zero by switching to another (non-singular) reference frame. But if you want to see some values for all quantities, you may imagine that a photon at rest could exist and its total mass would be zero. At speed $v$, the mass is increased to
$$m_{total} = \frac{m_{0}}{\sqrt{1-v^2/c^2}}$$
For $m_0=0$ and $v=c$, the expression above is clearly a $0/0$ indeterminate form and its proper result may be anything. In particular, the correct value is any finite number. At the right speed, $v=c$, the massless photons can have any finite energy. | {
"source": [
"https://physics.stackexchange.com/questions/3541",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1500/"
]
} |
3,618 | As an exercise I sat down and derived the magnetic field produced by moving charges for a few contrived situations. I started out with Coulomb's Law and Special Relativity. For example, I derived the magnetic field produced by a current $I$ in an infinite wire. It's a relativistic effect; in the frame of a test charge, the electron density increases or decreases relative to the proton density in the wire due to relativistic length contraction, depending on the test charge's movement. The net effect is a frame-dependent Coulomb field whose effect on a test charge is exactly equivalent to that of a magnetic field according to the Biot–Savart Law. My question is: Can Maxwell's equations be derived using only Coulomb's Law and Special Relativity? If so, and the $B$-field is in all cases a purely relativistic effect, then Maxwell's equations can be re-written without reference to a $B$-field. Does this still leave room for magnetic monopoles? | Maxwell's equations do follow from the laws of electricity combined with the principles of special relativity. But this fact does not imply that the magnetic field at a given point is less real than the electric field. Quite on the contrary, relativity implies that these two fields have to be equally real. When the principles of special relativity are imposed, the electric field $\vec{E}$ has to be incorporated into an object that transforms in a well-defined way under the Lorentz transformations - i.e. when the velocity of the observer is changed. Because there exists no "scalar electric force", and for other technical reasons I don't want to explain, $\vec{E}$ can't be a part of a 4-vector in the spacetime, $V_{\mu}$. Instead, it must be the components $F_{0i}$ of an antisymmetric tensor with two indices,
$$F_{\mu\nu}=-F_{\nu\mu}$$
Such objects, generally known as tensors, know how to behave under the Lorentz transformations - when the space and time are rotated into each other as relativity makes mandatory. The indices $\mu,\nu$ take values $0,1,2,3$ i.e. $t,x,y,z$. Because of the antisymmetry above, there are 6 inequivalent components of the tensor - the values of $\mu\nu$ can be
$$01,02,03;23,31,12.$$
The first three combinations correspond to the three components of the electric field $\vec{E}$ while the last three combinations carry the information about the magnetic field $\vec{B}$. When I was 10, I also thought that the magnetic field could have been just some artifact of the electric field but it can't be so. Instead, the electric and magnetic fields at each point are completely independent of each other. Nevertheless, the Lorentz symmetry can transform them into each other and both of them are needed for their friend to be able to transform into something in a different inertial system, so that the symmetry under the change of the inertial system isn't lost. If you only start with the $E_z$ electric field, the component $F_{03}$ is nonzero. However, when you boost the system in the $x$-direction, you mix the time coordinate $0$ with the spatial $x$-coordinate $1$. Consequently, a part of the $F_{03}$ field is transformed into the component $F_{13}$ which is interpreted as the magnetic field $B_y$, up to a sign. Alternatively, one may describe the electricity by the electric potential $\phi$. However, the energy density from the charge density $\rho=j_0$ has to be a tensor with two time-like indices, $T_{00}$, so $\phi$ itself must carry a time-like index, too. It must be that $\phi=A_0$ for some 4-vector $A$. This whole 4-vector must exist by relativity, including the spatial components $\vec{A}$, and a new field $\vec{B}$ may be calculated as the curl of $\vec{A}$ while $\vec{E}=-\nabla\phi-\partial \vec{A}/\partial t$. You apparently wanted to prove the absence of the magnetic monopoles by proving the absence of the magnetic field itself. Well, apologies for having interrupted your research plan: it can't work. Magnets are damn real. And if you're interested, the existence of magnetic monopoles is inevitable in any consistent theory of quantum gravity. In particular, two poles of a dumbbell-shaped magnet may collapse into a pair of black holes which will inevitably possess the (opposite) magnetic monopole charges. The lightest possible (Planck mass) black holes with magnetic monopole charges will be "proofs of concept" heavy elementary particles with magnetic charges - however, lighter particles with the same charges may sometimes exist, too. | {
"source": [
"https://physics.stackexchange.com/questions/3618",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1247/"
]
} |
3,767 | John Cramer’s transactional interpretation of quantum mechanics (TIQM) is billed as resolving the fuzzy agnosticism of the Copenhagen interpretation while avoiding the alleged ontological excesses of the Many Worlds Interpretation. Yet it has a low profile. Is this because no-one care anymore about ontology in physics, or is there something about TIQM which undermines belief in it? | Nobody has explained to me how Shor's quantum factorization algorithm works under the transactional interpretation, and I expect this is because the transactional interpretation cannot actually explain this algorithm. If it can't, then chances are the transactional interpretation doesn't actually work. (I have looked at some of the papers that purport to explain the transactional interpretation, and have found them exceedingly vague about the details of this interpretation, but assuming this interpretation is actually valid, maybe somebody else with more determination could figure these details out.) | {
"source": [
"https://physics.stackexchange.com/questions/3767",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1394/"
]
} |
3,799 | Cold fusion is being mentioned a lot lately because of some new setup that apparently works. This is an unverified claim. See for example: http://hardware.slashdot.org/story/11/01/24/1550205/Italian-Scientists-Demonstrate-Cold-Fusion http://www.physorg.com/news/2011-01-italian-scientists-cold-fusion-video.html http://www.wipo.int/pctdb/en/wo.jsp?WO=2009125444&IA=IT2008000532&DISPLAY=DOCS http://www.journal-of-nuclear-physics.com/files/Rossi-Focardi_paper.pdf ( Archived copy of that last link in the Wayback Machine, given frequent http 403 errors from that page.) While we should give the scientific community time to evaluate the set up and eventually replicate the results, there is undoubtedly some skepticism that cold fusion would work at all, because the claim is quite extraordinary. In the past, after Fleischmann and Pons announced their cold fusion results, in perfectly good faith, they were proven wrong by subsequent experiments. What are the experimental realities that make Fleischmann and Pons style cold fusions experiments easy to get wrong? Would the same risks apply to this new set up? | This was beautifully answered theoretically right away at the 1989 APS session in NY, I think by Koonin. Theoretically, for any sort of fusion one needs to overcome the Coulomb repulsion of the relevant nuclei, on the order of MeV in order to allow the nuclei to get close enough for their wave functions to overlap and fuse. Because of the phenomenom of quantum mechanical tunnelling, this can be reduced to tens to hundreds of kev. So temperatures of >> 10^5 K, or cold muons (which outweigh electrons by 200x) are required to reduce the internuclear distance (as in muon catalyzed cold fusion, a real phenomenon), or some other special mechanism is required to allow this close approach. However, for any sort of chemically catalyzed fusion, i.e. via the valence electrons, to take place, the binding energy of the two H atoms to the catalyst would have to be so high, that the particular configuration of the low energy valence electrons, etc. would necessarily be entirely irrelevant to the problem, i.e. whatever their arrangement they could not possibly catalyze the fusionable nuclei to approach close enough to fuse. So no clever packing arrangement, quasiparticles, special adsorbtion, special crystal lattice structures, etc. could ever alter this conclusion. Whatever was happening at such low energy scales would appear as a kind of irrelevant fluff compared to the energy scale of the internuclear distance necessary for fusion. Therefore valence electron catalyzed cold fusion would violate the fundamental laws of quantum mechanics, nuclear physics, etc. Leggett and Baym also published an argument like this around the same time (summarized for free here ). Koonin and Nauenberg published an accurate calculation here , showing that if the mass of the electron were 5-10 times larger than it really is, chemically calalyzed fusion could work. Note however, that the reaction rate depends on the electron mass very, very strongly, so that this remains impossible in our universe. | {
"source": [
"https://physics.stackexchange.com/questions/3799",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/66/"
]
} |
4,010 | Yesterday I looked underwater with my eyes open (and no goggles) and I realized I can't see anything clearly. Everything looks very, very blurry. My guess is that the eye needs direct contact with air in order to work properly. With water, the refraction index is different, and the eye lens are not able to compensate for correct focalization on the retina. Am I right ? If so, what lenses should one wear in order to see clearly while under water ? | You can't see clearly underwater for a couple of reasons. One is the thickness of your lens, but the main one is the index of refraction of your cornea. For reference, here's the Wikipedia picture of a human eye. According to Wikipedia , two-thirds of the refractive power of your eye is in your cornea, and the cornea's refractive index is about 1.376. The refractive index of water ( according to Google ) is 1.33. In water, your cornea bends light as much as a lens in air whose refractive index is $$\frac{1.376-1.33}{1.33} + 1 = 1.034$$ That means you're losing about 90% of your cornea's refractive power, or 60% of your total refractive power, when you enter the water. The question becomes whether your lens can compensate for that. I didn't find a direct quote on how much you can change the focal distance of your lens, but we can estimate that your cornea is doing essentially nothing, and ask whether your lens ought to be able to do all the focusing itself. For a spherical lens with index of refraction $n$ sitting in a medium with index of refraction $n_0$, the effective focal length is $$f = \frac{nD}{4(n-n_0)}$$ The refractive index of your vitreous humor is about 1.33 (like water), and the refractive index of your lens, according to Wikipedia , varies between 1.386 and 1.406. Let's take 1.40 as an average. Then, plugging in the numbers, the effective focal distance of a spherical eye lens would be five times its diameter. The Wikipedia picture of a human eye makes this look reasonable - a spherical lens might be able to do all the focusing a human eye needs, even without the cornea. The problem is that your eye's lens isn't spherical. From the same Wikipedia article In many aquatic vertebrates, the lens is considerably thicker, almost spherical, to increase the refraction of light. This difference compensates for the smaller angle of refraction between the eye's cornea and the watery medium, as they have similar refractive indices. [2] Even among terrestrial animals, however, the lens of primates such as humans is unusually flat.[3] So, the reason you can't see well underwater is that your eye lens is too flat. If you wear goggles, the light is refracted much more as it enters the cornea - the same amount as normal. If you want to wear some sort of corrective lenses directly on your eye like contact lenses, they should have a refractive index as low as possible. Googling for "underwater contact lens", I found an article about contact lenses made with a layer of air , allowing divers to see sharply underwater. | {
"source": [
"https://physics.stackexchange.com/questions/4010",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/217/"
]
} |
4,102 | Why is the Lagrangian a function of the position and velocity (possibly also of time) and why are dependences on higher order derivatives (acceleration, jerk,...) excluded? Is there a good reason for this or is it simply "because it works". | I reproduce a blog post I wrote some time ago: We tend to not use higher derivative theories. It turns out that there
is a very good reason for this, but that reason is rarely discussed in
textbooks. We will take, for concreteness, $L(q,\dot q, \ddot
q)$ , a Lagrangian which depends on the 2nd derivative in an
essential manner. Inessential dependences are terms such as $q\ddot q$ which may be partially integrated to give ${\dot q}^2$ . Mathematically,
this is expressed through the necessity of being able to invert the
expression $$P_2 = \frac{\partial L\left(q,\dot q, \ddot
q\right)}{\partial \ddot q},$$ and get a closed form for $\ddot q
(q, \dot q, P_2)$ . Note that usually we also require a
similar statement for $\dot q (q, p)$ , and failure in this
respect is a sign of having a constrained system, possibly with gauge
degrees of freedom. In any case, the non-degeneracy leads to the Euler-Lagrange equations in
the usual manner: $$\frac{\partial L}{\partial q} -
\frac{d}{dt}\frac{\partial L}{\partial \dot q} +
\frac{d^2}{dt^2}\frac{\partial L}{\partial \ddot q} = 0.$$ This is then
fourth order in $t$, and so requires four initial conditions, such as $q$, $\dot q$, $\ddot q$, $q^{(3)}$. This is twice as many as usual, and
so we can get a new pair of conjugate variables when we move into a
Hamiltonian formalism. We follow the steps of Ostrogradski, and choose
our canonical variables as $Q_1 = q$ , $Q_2 = \dot q$ , which leads to \begin{align} P_1 &= \frac{\partial L}{\partial \dot q} -
\frac{d}{dt}\frac{\partial L}{\partial \ddot q}, \\ P_2 &=
\frac{\partial L}{\partial \ddot q}. \end{align} Note that the
non-degeneracy allows $\ddot q$ to be expressed in terms of $Q_1$ , $Q_2$ and $P_2$ through the second equation, and the first one is only
necessary to define $q^{(3)}$ . We can then proceed in the usual fashion, and find the Hamiltonian
through a Legendre transform: \begin{align} H &= \sum_i P_i \dot{Q}_i -
L \\ &= P_1 Q_2 + P_2 \ddot{q}\left(Q_1, Q_2, P_2\right) - L\left(Q_1,
Q_2,\ddot{q}\right). \end{align} Again, as usual, we can take the time
derivative of the Hamiltonian to find that it is time independent if the
Lagrangian does not depend on time explicitly, and thus can be
identified as the energy of the system. However, we now have a problem: $H$ has only a linear dependence on $P_1$ , and so can be arbitrarily negative. In an interacting system this
means that we can excite positive energy modes by transferring energy
from the negative energy modes, and in doing so we would increase the
entropy — there would simply be more particles, and so a need to
put them somewhere. Thus such a system could never reach equilibrium,
exploding instantly in an orgy of particle creation. This problem is in
fact completely general, and applies to even higher derivatives in a
similar fashion. | {
"source": [
"https://physics.stackexchange.com/questions/4102",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/418/"
]
} |
4,116 | I know that the rotation period of the moon equals its revolution period. It's just so astonishing that these 2 values have such a small difference. I mean, what is the probability of these 2 values to be practically the same? I don't believe this to be a coincidence. It's just too much for a coincidence. What could have caused this? | This is a gravitational phenomenon known as tidal lock . It is closely related to the phenomenon of tides on Earth, hence the name. Tidal locking is an effect caused by the gravitational gradient from the near side to the far side of the moon. (That is, the continuous variation of the gravitational field strength across the Moon.) The end result is that the Moon rotates around its own axis with the same period as which it rotates around the Earth, causing the face of one hemisphere always to point towards the Earth. | {
"source": [
"https://physics.stackexchange.com/questions/4116",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1654/"
]
} |
4,156 | Recently I talked about QFT with another physicist and mentioned that the Quantum Field Theory of a fermion is a quantisation of its one-particle quantum mechanical theory. He denied this and responded that he rather sees the single particle QM as the non-relativistic limit of a QFT. He elaborated that the energies encountered are all much smaller than the particles mass, so we can ignore all multi-particle excitations in Fock Space and get an effective Hilbert space consisting of all single-particle excitations. In turn, I asked what the corresponding limit of QED quantum mechanics of the massless photon is, and he of course responded that there can't be a nonrelativistic limit of QED exactly because of the masslessness. But there is classical ED, the classical limit of QED. So is taking the classical or the nonrelativistic limit the same, or does one include the other, or is there some deep difference? The question What does a Field Theory mean? has something to do with it, but does not fully answer my question. | Dear Turion, the Dirac quantum field may be formally obtained by quantizing the Dirac equation which is a relativistic but single-particle quantum mechanical equation. The non-relativistic limit of the single-particle Dirac equation is the Pauli equation which is essentially the non-relativistic Schrödinger equation for a wave function with an extra 2-fold degeneracy describing the spin. To get from the non-relativistic Schrödinger equation for an electron to a quantum field theory with a quantized Dirac field, you therefore Have to add the spin and going to the Pauli equation - easy Guess the right relativistic generalization of the Pauli equation - it's the single-particle Dirac equation that also has negative-energy solutions Realize that the negative-energy solutions are inconsistent in the single-particle framework, so you have to second-quantize the wave function and obtain the Dirac quantum field This sequence of steps is formal. One can't really "deduce" things in this order, at least not in a straightforward way. After all, the step 1 needed a creative genius of Pauli's caliber, the step 2 needed a creative genius of Dirac's caliber, and the step 3 needed a collaboration of dozens of top physicists who developed quantum field theory. Quite on the contrary, as you were correctly told, the meaningful well-defined operations go exactly in the opposite order - but it doesn't follow the steps above. You begin with the quantum field theory, including the Dirac field, which is the right full theory, and you may take various limits of it. The non-relativistic limit is of course something totally different than the classical limit. The nonrelativistic limit is still a quantum theory, with probabilities etc. - but it doesn't respect the special role of the speed of light. On the other hand, the classical limit is something totally different - a classical deterministic theory that respects the Lorentz symmetry, and so on. Let us discuss the limits of quantum electrodynamics separately. Classical limits The classical, $\hbar\to 0$, limit of QED acts differently on fermions and bosons. The bosons like to occupy the same state. However, to "actually" send $\hbar$ to zero, you need quantities with the same units that are much larger than $\hbar$: $\hbar$ goes to zero relatively to them. What quantities may you find? Well, the electromagnetic field may carry a lot of energy in strong fields. So you get a classical limit by having many photons in the same state. 
They combine into classical electromagnetic fields that are governed by classical Maxwell's equations; note that classical Maxwell's equations are "already" relativistic although people before Einstein hadn't fully appreciated this symmetry (although Lorentz wrote the "redefinition" down without realizing its relationship with the different inertial frames or symmetry groups, for that matter). You just erase hats from the similar Heisenberg equations for the electromagnetic fields. Well, for extremely high frequencies, the number of photons won't be large because they carry a huge energy. So for high frequencies, you may also derive another classical limit - based on "pointlike particles", the photons. Fermions, e.g. electrons described by the Dirac equation, obey the exclusion principle. So you can't have many of them. There is at most one particle per state. In the quantum mechanical theories, it has an approximate position and momentum that don't commute. The classical limit is where they do commute. So the classical limit must inevitably produce mechanics for the fermions - with positions and momenta of individual particles. As I mentioned, this picture may be relevant for high-energy bosons, too. Nonrelativistic limit The non-relativistic, $c\to\infty$, limit of QED is something completely different. It is still a quantum theory. Because the photons propagate by the speed of light and the speed is sent to infinity, the electromagnetic waves propagate infinitely quickly in the non-relativistic limit. That means that the charged (and spinning or moving charged) objects instantly influence each other by electric (and magnetic) fields. When it comes to fermions, you undo one of the steps at the beginning: you reduce the speed of the electrons. Assuming there are no positrons for a while, the non-relativistic limit where the velocities are small will prevent you from the creation of fermion-antifermion pairs. So the number of particles will be conserved. So it makes sense to decompose the Hilbert space into sectors with $N$ particles for various values of $N$ and you're back in multi-body quantum mechanics. They will also have the spin, as in the Pauli equation, and they will interact via instantaneous interactions - the Coulomb interaction and its magnetic counterparts (combine the Ampére and Biot-Savart laws for $B$ induced by currents with the usual magnetic forces acting on moving charges and spins). You will get the usual non-relativistic quantum mechanical Hamiltonian used for atomic physics. There will be no waves because they move by the infinite speed. You won't be able to see them. But they won't destroy the conservation of energy etc. because in the non-relativistic limit, the power emitted by accelerating charges goes to zero because it contains $1/c^3$ or another negative power. So in the non-relativistic limit, the photons just disappear from the picture, and their only trace will be the instantaneous interactions of the Coulomb type. Classical non-relativistic limit Of course, you may apply both limiting procedures at the same moment. Then you get non-relativistic point-like classical electrons interacting via the Coulomb and similar instantaneous interactions. | {
"source": [
"https://physics.stackexchange.com/questions/4156",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1592/"
]
} |
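To make the non-relativistic limit described above concrete: for electrons of mass $m$ and charge $-e$ in the field of fixed nuclei of charge $Z_A e$, the instantaneous-interaction Hamiltonian that survives the $c\to\infty$ limit is (schematically, suppressing the spin-dependent and magnetic terms mentioned above)
$$ H \;=\; \sum_i \frac{\mathbf p_i^2}{2m} \;-\; \sum_{i,A}\frac{Z_A e^2}{4\pi\epsilon_0\,|\mathbf r_i-\mathbf R_A|} \;+\; \sum_{i<j}\frac{e^2}{4\pi\epsilon_0\,|\mathbf r_i-\mathbf r_j|}, $$
which is exactly the Hamiltonian of non-relativistic atomic physics; the photons survive only through these instantaneous Coulomb-type interactions.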
4,168 | I have a pot of vigorously boiling water on a gas stove. There's some steam, but not a lot. When I turn off the gas, the boiling immediately subsides, and a huge waft of steam comes out. This is followed by a steady output of steam that's greater than the amount of steam it was producing while it was actually boiling. Why is there more steam after boiling than during boiling? Also, what's with the burst of steam when it stops boiling? | I have read that true steam is clear (transparent) water vapor. According to this theory, the white "steam" you see is really a small cloud of condensed water vapor droplets, a fine mist in effect. So what you are seeing is not more steam, but more condensation and more mist. The speed with which the steam/vapor/mist rises and disperses may also change. | {
"source": [
"https://physics.stackexchange.com/questions/4168",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1666/"
]
} |
4,184 | Effective theories like Little Higgs models or the Nambu-Jona-Lasinio model are non-renormalizable, and there is no problem with that, since an effective theory does not need to be renormalizable. These theories are valid up to a scale $\Lambda$ (the ultraviolet cutoff); beyond this scale an effective theory needs a UV completion, a more "general" theory. The Standard Model, as an effective theory of particle physics, needs a more general theory that addresses the phenomena beyond the Standard Model (a UV completion?). So, why should the Standard Model be renormalizable? | The short answer is that it doesn't have to be, and it probably isn't. The modern way of understanding any quantum field theory is as an effective field theory. The theory includes all renormalizable (relevant and marginal) operators, which give the largest contribution to any low energy process. When you are interested in either high precision or high energy processes, you have to systematically include non-renormalizable terms as well, which come from some more complete theory. Back in the days when the standard model was constructed, people did not have a good appreciation of effective field theories, and thus renormalizability was imposed as a deep and not completely understood principle. This is one of the difficulties in studying QFT: it has a long history including ideas that were superseded (plenty of other examples: relativistic wave equations, second quantization, and a whole bunch of misconceptions about the meaning of renormalization). But now we know that any QFT, including the standard model, is expected to have these higher dimensional operators. By measuring their effects you get some clue about the high energy scale at which the standard model breaks down. So far, it looks like a really high scale. | {
"source": [
"https://physics.stackexchange.com/questions/4184",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1545/"
]
} |
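Schematically, the effective-field-theory statement made above can be written as
$$ \mathcal{L}_{\rm eff} \;=\; \mathcal{L}^{(d\le 4)}_{\rm renormalizable} \;+\; \sum_i \frac{c_i}{\Lambda^{\,d_i-4}}\,\mathcal{O}_i^{(d_i)}, \qquad d_i>4, $$
where the sum runs over the higher-dimensional (non-renormalizable) operators suppressed by powers of the cutoff $\Lambda$; at energies $E\ll\Lambda$ their contributions scale like $(E/\Lambda)^{d_i-4}$, which is why the renormalizable piece dominates low-energy observables and why measuring the small deviations constrains the scale $\Lambda$.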
4,238 | Why do we have an elementary charge $e$ in physics but no elementary mass? Is an elementary mass ruled out by experiment or is an elementary mass forbidden by some theoretical reason? | Let me add two references to points already mentioned in this discussion: Today, there is no reason known why the electric charge has to be quantized. It is true that the quantization follows from the existence of magnetic monopoles and the consistency of the quantized electromagnetic field, which was first shown by Dirac; you'll find a very nice exposition of this in Gregory L. Naber: "Topology, geometry and gauge fields." (2 books; off the top of my head I don't know if the relevant part is in the first or the second one). AFAIK there is no reason to believe that magnetic monopoles do exist: there is no experimental evidence and there is no compelling theoretical argument using a well-established framework like QFT. There are of course more speculative ideas (Lubos mentioned those). AFAIK there is no reason why mass should or should not be quantized (in QFT models this is an assumption/axiom that is put in by hand; even the positivity of the energy-momentum operator is an axiom in AQFT), but a mass gap is considered to be an essential feature of a full-fledged rigorous theory of QCD, for reasons that are explained in the problem description of the Millennium Problem of the Clay Institute that you can find here: Yang-Mills and Mass Gap | {
"source": [
"https://physics.stackexchange.com/questions/4238",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1648/"
]
} |
4,243 | I can understand that on small scales (within an atom/molecule), the other forces are much stronger, but on larger scales, it seems that gravity is a far stronger force; e.g. planets are held to the sun by gravity. So what does it mean to say that "gravity is the weakest of the forces" when in some cases, it seems far stronger? | When we ask "how strong is this force?" what we mean in this context is "How much stuff do I need to get a significant amount of force?" Richard Feynman summarized this the best in comparing the strength of gravity - which is generated by the entire mass of the Earth - versus a relatively tiny amount of electric charge: And all matter is a mixture of
positive protons and negative
electrons which are attracting and
repelling with this great force. So
perfect is the balance however, that
when you stand near someone else you
don't feel any force at all. If there
were even a little bit of unbalance
you would know it. If you were
standing at arm's length from someone
and each of you had one percent more
electrons than protons, the repelling
force would be incredible. How great?
Enough to lift the Empire State
building? No! To lift Mount Everest?
No! The repulsion would be enough to
lift a "weight" equal to that of the
entire earth! Another way to think about it is this: a proton has both charge and mass. If I hold another proton a centimeter away, how strong is the gravitational attraction? It's about $10^{-60}$ newtons. How strong is the electric repulsion? It's about $10^{-24}$ newtons. How much stronger is the electric force than the gravitational? We find that it's about $10^{36}$ times stronger, as in 1,000,000,000,000,000,000,000,000,000,000,000,000 times more powerful! | {
"source": [
"https://physics.stackexchange.com/questions/4243",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/158/"
]
} |
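A quick order-of-magnitude check of the proton-proton comparison above, using $G\approx 6.7\times10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}}$, $m_p\approx 1.7\times10^{-27}\,$kg, $e\approx 1.6\times10^{-19}\,$C and $r=1\,$cm:
$$ F_{\rm grav}=\frac{G m_p^2}{r^2}\approx 2\times10^{-60}\ \mathrm{N}, \qquad F_{\rm elec}=\frac{e^2}{4\pi\epsilon_0 r^2}\approx 2\times10^{-24}\ \mathrm{N}, $$
so the ratio $F_{\rm elec}/F_{\rm grav}=e^2/(4\pi\epsilon_0\,G m_p^2)\approx 1.2\times10^{36}$, independent of the separation.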
4,245 | The alkali earth metals form the two left-most column of the periodic table of the elements, other than hydrogen. See wikipedia articles: http://en.wikipedia.org/wiki/Alkali_metal http://en.wikipedia.org/wiki/Alkaline_earth_metal Now "alkali" means that these are in a sense the opposite of "acid". One of the ancient uses of alkali materials is in the conversion of corn into hominy (or, for those in the southern US, grits similar to posole, polenta or farina ). The conversion of corn to hominy is of ancient origin. From a nutrition point of view it is useful because it increases the availability of the lysine and tryptophan proteins in the corn. See http://www.practicallyedible.com/edible.nsf/pages/hominy This helps prevent the nutritional disease pellagra http://en.wikipedia.org/wiki/Pellagra In ancient times, hominy was manufactured by soaking corn in a mixture of water and ashes. This is equivalent to the modern method, which is to soak the corn in lye. So my question is this: why is it that ashes are alkali? | | {
"source": [
"https://physics.stackexchange.com/questions/4245",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1272/"
]
} |
4,247 | This is more of a math question and one, furthermore, that I know the final answer to. What I am asking is more of a "how do I get there" question as this question was generated during a self study situation. So, for a flat plate with a meniscus on it at some contact angle the form is: $a\,y''/(1+(y')^2)^{3/2} + b g y = 0$ where a is surface tension, b is density, and g is the usual 9.8 m/s$^2$. Now, I know that when you integrate once you get: "A first integration, together with the boundary condition $dy/dx=y=0$ as $x$ goes to infinity, yields: $1/(1+y'^2)^{1/2} = 1-(bg/2a)y^2$" --From de Gennes on Menisci My problem is with the integration. How do I integrate something like the 1st equation? I find myself running in circles. Worse, I know that this type of integration is something I've run across numerous times, but I can't seem to find the methodology in the current stack of books. So I beg, can someone either tell me where to look to find the appropriate methodology to attack this problem or take me through the process? I would be grateful for as long as it would be useful for you.
Thanks,
Sam | | {
"source": [
"https://physics.stackexchange.com/questions/4247",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1037/"
]
} |
4,256 | I am wondering: The noble 'gases' are inert because they have closed shells and don't want to give that up. But the noble metals, such as Copper, Silver, Rhodium, Gold, don't seem to have this. For example, Rhodium has electron configuration $4d^{8}\, 5s^1$. To me, this looks very similar to the alkali metals, which we all know are 'very' reactive. So why doesn't Rhodium happily give up its s-electron? Same with Silver, $4d^{10}\, 5s^1$. Why not give up the s electron? | | {
"source": [
"https://physics.stackexchange.com/questions/4256",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/661/"
]
} |
4,284 | Is there a tutorial explanation as to how decoherence transforms a wavefunction (with a superposition of possible observable values) into a set of well-defined specific "classical" observable values without the concept of the wavefunction undergoing "collapse"? I mean an explanation which is less technical than that decoherence is the decay or rapid vanishing of the off-diagonal elements of the partial trace of the joint system's density matrix, i.e. the trace, with respect to any environmental basis, of the density matrix of the combined system and its environment [...] ( Wikipedia ). | The Copenhagen interpretation consists of two parts, unitary evolution (in which no information is lost) and measurement (in which information is lost). Decoherence gives an explanation of why information appears to be lost when it in actuality is not. "The decay of the off-diagonal elements of the wave function" is the process of turning a superposition:
$$\sqrt{{1}/{3}} |{\uparrow}\rangle + \sqrt{{2}/{3}} |{\downarrow}\rangle$$ into a probabilistic mixture of the state $|\uparrow\rangle$ with probability $1/3$, and the state $|\downarrow\rangle$ with probability $2/3$. You get the probabilistic mixture when the off-diagonal elements go to 0; if they just go partway towards 0, you get a mixed quantum state which is best represented as a density matrix. This description of decoherence is basis dependent. That is, you need to write the density matrix in some basis, and then decoherence is the process of reducing the off-diagonal elements in that basis. How do you decide which basis? What you have to do is look at the interaction of the system with its environment. Quite often (not always), this interaction has a preferred basis, and the effect of the interaction on the system can be represented by multiplying the off-diagonal elements in the preferred basis by some constant. The information contained in the off-diagonal elements does not actually go away (as it would in the Copenhagen interpretation) but gets taken into the environment, where it is experimentally difficult or impossible to recover. | {
"source": [
"https://physics.stackexchange.com/questions/4284",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1394/"
]
} |
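In matrix form, the example above reads as follows: the pure superposition corresponds to the density matrix
$$ \rho_{\rm pure}=\begin{pmatrix} \tfrac13 & \tfrac{\sqrt2}{3} \\ \tfrac{\sqrt2}{3} & \tfrac23 \end{pmatrix}, $$
and decoherence in the $\{|{\uparrow}\rangle,|{\downarrow}\rangle\}$ basis multiplies the off-diagonal entries by a factor $\gamma$ that decays towards zero, giving a mixed state with off-diagonals $\gamma\sqrt2/3$ and, in the limit $\gamma\to 0$, the probabilistic mixture $\rho_{\rm mix}=\mathrm{diag}(\tfrac13,\tfrac23)$.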
4,349 | W and Z bosons are observed/discovered. But as force-carrying bosons they should be virtual particles, unobservable? And also they are required to have mass, but if they are virtual they may be off-shell, so are they virtual or not? | [Edit June 2, 2016:
A significantly updated version of the material below can be found in
the two articles https://www.physicsforums.com/insights/misconceptions-virtual-particles/ and https://www.physicsforums.com/insights/physics-virtual-particles/ ] Let me give a second, more technical answer. Observable particles. In QFT, observable (hence real) particles of mass $m$ are conventionally
defined as being associated with poles of the S-matrix at energy
$E=mc^2$ in the rest frame of the system
(Peskin/Schroeder, An introduction to QFT, p.236). If the pole is at a
real energy, the mass is real and the particle is stable; if the pole
is at a complex energy (in the analytic continuation of the S-matrix
to the second sheet), the mass is complex and the particle is unstable.
At energies larger than the real part of the mass, the imaginary part
determines its decay rate and hence its lifetime
(Peskin/Schroeder, p.237); at smaller energies, the unstable particle
cannot form for lack of energy, but the existence of the pole is
revealed by a Breit-Wigner resonance in certain cross sections.
From its position and width, one can estimate the mass and the lifetime
of such a particle before it has ever been observed.
Indeed, many particles listed in the tables http://pdg.lbl.gov/2011/reviews/contents_sports.html by the Particle
Data Group (PDG) are only resonances. Stable and unstable particles. A stable particle can be created and annihilated, as there are
associated creation and annihilation operators that add or remove
particles to the state. According to the QFT formalism, these
particles must be on-shell. This means that their momentum $p$ is
related to the real rest mass $m$ by the relation $p^2=m^2$. More precisely, it means that the 4-dimensional Fourier transform of the time-dependent single-particle wave function associated with it has a support that satisfies the on-shell relation $p^2=m^2$. There is no need for this wave function to be a plane wave, though plane waves are taken as the basis functions between which the scattering matrix elements are computed. An unstable particle is represented quantitatively by a so-called
Gamov state (see, e.g., http://arxiv.org/pdf/quant-ph/0201091.pdf ),
also called a Siegert state
(see, e.g., http://www.cchem.berkeley.edu/millergrp/pdf/235.pdf )
in a complex deformation of the Hilbert space of a QFT, obtained by
analytic continuation of the formulas for stable particles.
In this case, as $m$ is complex, the mass shell consists of all complex
momentum vectors $p$ with $p^2=m^2$ and $v=p/m$ real, and states are
composed exclusively of such momentum vectors. This is the
representation in which one can take the limit of zero decay, in which
the particle becomes stable (such as the neutron in the limit of
negligible electromagnetic interaction), and hence the representation
appropriate in the regime where the unstable particle can be observed
(i.e., resolved in time). A second representation in terms of normalizable states of real mass
is given by a superposition of scattering states of their decay
products, involving all energies in the range of the Breit-Wigner
resonance. In this standard Hilbert space representation, the unstable
particle is never formed; so this is the representation appropriate in
the regime where the unstable particle reveals itself only as a
resonance. The 2010 PDG description of the Z boson, http://pdg.lbl.gov/2011/reviews/rpp2011-rev-z-boson.pdf discusses both descriptions in quantitative detail (p.2: Breit-Wigner
approach; p.4: S-matrix approach). (added March 18, 2012):
All observable particles are on-shell, though the mass shell is real
only for stable particles. Virtual (or off-shell) particles. On the other hand, virtual particles are defined as internal lines in
a Feynman diagram (Peskin/Schroeder, p.5, or
Zeidler, QFT I Basics in mathematics and physiics, p.844).
and this is their only mode of being. In diagram-free approaches
to QFT such as lattice gauge theory, it is even impossible to make
sense of the notion of a virtual particle. Even in orthodox QFT one
can dispense completely with the notion of a virtual particle, as
Vol. 1 of the QFT book of Weinberg demonstrates. He represents the
full empirical content of QFT, carefully avoiding mentioning the
notion of virtual particles. As virtual particles have real mass but off-shell momenta, and
multiparticle states are always composed of on-shell particles only,
it is impossible to represent a virtual particle by means of states.
States involving virtual particles cannot be created for lack of
corresponding creation operators in the theory. A description of decay requires an associated S-matrix, but the in-
and out- states of the S-matrix formalism are composed of on-shell
states only, not involving any virtual particle. (Indeed, this is the
reason for the name ''virtual''.) For lack of a state, virtual particles cannot have any of the usual
physical characteristics such as dynamics, detection probabilities,
or decay channels. How then can one talk about their probability of
decay, their life-time, their creation, or their decay? One cannot,
except figuratively! Virtual states. (added on March 19, 2012):
In nonrelativistic scattering theory, one also meets the concept
of virtual states, denoting states of real particles on the second
sheet of the analytic continuation, having a well-defined but purely
imaginary energy, defined as a pole of the S-matrix. See, e.g., Thirring,
A course in Mathematical Physics, Vol 3, (3.6.11). The term virtual state is used with a different meaning in virtual
state spectroscopy (see, e.g., http://people.bu.edu/teich/pdfs/PRL-80-3483-1998.pdf ), and denotes
there an unstable energy level above the dissociation threshold.
This is equivalent with the concept of a resonance. Virtual states have nothing to do with virtual particles, which have
real energies but no associated states, though sometimes the name
''virtual state'' is associated to them. See, e.g., https://researchspace.auckland.ac.nz/bitstream/handle/2292/433/02whole.pdf ;
the author of this thesis explains on p.20 why this is a misleading
terminology, but still occasionally uses this terminology in his work. Why are virtual particles often confused with unstable particles? As we have seen, unstable particles and resonances are observable and
can be characterized quantitatively in terms of states.
On the other hand, virtual particles lack a state and hence have no
meaningful physical properties. This raises the question why virtual particles are often confused with
unstable particles, or even identified. The reason, I believe, is that in many cases, the dominant contribution
to a scattering cross section exhibiting a resonance comes from the
exchange of a corresponding virtual particle in a Feynman diagram
suggestive of a collection of world lines describing particle creation
and annihilation. (Examples can be seen on the Wikipedia page for W and
Z bosons, http://en.wikipedia.org/wiki/Z-boson .) This space-time interpretation of Feynman diagrams is very tempting
graphically, and contributes to the popularity of Feynman diagrams
both among researchers and especially laypeople, though some authors
- notably Weinberg in his QFT book - deliberately resist this
temptation. However, this interpretation has no physical basis. Indeed, a single
Feynman diagram usually gives an infinite (and hence physically
meaningless) contribution to the scattering cross section. The finite,
renormalized values of the cross section are obtained only by summing
infinitely many such diagrams. This shows that a Feynman diagram
represents just some term in a perturbation calculation, and not a
process happening in space-time. Therefore one cannot assign physical
meaning to a single diagram but at best to a collection of infinitely
many diagrams. The true meaning of virtual particles. For anyone still tempted to associate a physical meaning to virtual
particles as a specific quantum phenomenon, let me note that
Feynman-type diagrams arise in any perturbative treatment of
statistical multiparticle properties, even classically, as any textbook
of statistical mechanics witnesses. More specifically, the paper http://homepages.physik.uni-muenchen.de/~helling/classical_fields.pdf shows that the perturbation theory for any classical field theory
leads to an expansion into Feynman diagrams very similar to those for quantum field theories, except that only tree diagrams occur. If the picture of virtual particles derived from Feynman diagrams had any intrinsic validity, one should conclude that associated to every classical field there are classical virtual particles behaving just like their quantum analogues, except that (due to the lack of loop diagrams) there are no virtual creation/annihilation patterns.
But in the literature, one can find not the slightest trace of a suggestion that classical field theory is sensibly interpreted in terms of virtual particles. The reason for this similarity in the classical and the quantum case is that Feynman diagrams are nothing other than a graphical notation
for writing down products of tensors with many indices summed via the
Einstein summation convention. The indices of the results are the
external lines aka ''real particles'', while the indices summed over
are the internal lines aka ''virtual particles''. As such sums of
products occur in any multiparticle expansion of expectations,
they arise irrespective of the classical or quantum nature of the
system. (added September 29, 2012) Interpreting Feynman diagrams. Informally, especially in the popular literature, virtual particles are
viewed as transmitting the fundamental forces in quantum field theory.
The weak force is transmitted by virtual Zs and Ws. The strong force
is transmitted by virtual gluons. The electromagnetic force is
transmitted by virtual photons. This ''proves'' the existence of
virtual particles in the eyes of their aficionados. The physics underlying this figurative speech are Feynman diagrams,
primarily the simplest tree diagrams that encode the low order
perturbative contributions of interactions to the classical limit of
scattering experiments. (Thus they are really a manifestation of
classical perturbative field theory, not of quantum fields.
Quantum corrections involve at least one loop.) Feynman diagrams describe how the terms in a series expansion of the
S-matrix elements arise in a perturbative treatment of the interactions
as linear combinations of multiple integrals. Each such multiple
integral is a product of vertex contributions and propagators, and each
propagator depends on a 4-momentum vector that is integrated over.
In addition, there is a dependence on the momenta of the ingoing
(prepared) and outgoing (in principle detectable) particles. The structure of each such integral can be represented by a Feynman
diagram. This is done by associating with each vertex a node of the
diagram and with each momentum a line; for ingoing momenta an external
line ending in a node, for outgoing momenta an external line starting
in a node, and for propagator momenta an internal line between two
nodes. The resulting diagrams can be given a very vivid but superficial
interpretation as the worldlines of particles that undergo a
metamorphosis (creation, deflection, or decay) at the vertices.
In this interpretation, the in- and outgoing lines are the worldlines
of the prepared and detected particles, respectively, and the others
are dubbed virtual particles, not being real but required by this
interpretation. This interpretation is related to - and indeed
historically originated with - Feynman's 1945 intuition that all
particles take all possible paths with a probability amplitude given
by the path integral density. Unfortunately, such a view is naturally
related only to the formal, unrenormalized path integral. But there all
contributions of diagrams containing loops are infinite, defying a
probability interpretation. According to the definition in terms of Feynman diagrams, a virtual
particle has specific values of 4-momentum, spin, and charges,
characterizing the form and variables in its defining propagator.
As the 4-momentum is integrated over all of $R^4$, there is no mass
shell constraint, hence virtual particles are off-shell. Beyond this, formal quantum field theory is unable to assign any
property or probability to a virtual particle. This would require to
assign to them states, for which there is no place in the QFT formalism.
However, the interpretation requires them to exist in space and time,
hence they are attributed by inmagination with all sorts of miraculous
properties that complete the picture to something plausible. (See, for
example, the Wikipedia article on virtual particles .)
Being dressed with a fuzzy notion of quantum fluctuations, where the
Heisenberg uncertainty relation allegedly allows one to borrow for a
very short time energy from the quantum bank, these properties have a
superficial appearance of being scientific.
But they are completely unphysical as there is neither a way to test
them experimentally nor one to derive them from formal properties of
virtual particles. The long list of manifestations of virtual particles mentioned in the
Wikipedia article cited are in fact manifestations of computed
scattering matrix elements.
They manifest the correctness of the formulas for the multiple
integrals associated with Feynman diagrams, but not the validity of
the claims about virtual particles. Though QFT computations generally use the momentum representation,
there is also a (physically useless) Fourier-transformed complementary
picture of Feynman diagrams using space-time positions in place of
4-momenta. In this version, the integration is over all of space-time,
so virtual particles now have space-time positions but no dynamics,
hence no world lines. (In physics, dynamics is always tied to states
and an equation of motion. No such thing exists for virtual particles.) Can one distinguish real and virtual photons? There is a widespread view that external legs of Feynman diagrams are
in reality just internal legs of larger diagrams. This would blur the
distinction between real and virtual particles, as in reality, every
leg is internal. The basic argument behind this view is the fact that the photons that
hit an eye (and this give evidence of something real) were produced by
excitation form some distant object. This view is consistent with
regarding the creation or destruction of photons as what happens at a
vertex containing a photon line. In this view, it follows that the
universe is a gigantic Feynman diagram with many loops of which we and
our experiments are just a tiny part. But single Feynman diagrams don't have a technical meaning. Only the
sum of all Feynman diagrams has predictive value, and the small ones
contribute most - otherwise we couldn't do any perturbative
calculations. Moreover, this view contradicts the way QFT computations are actually
used. Scattering matrix elements are always considered between on-shell
particles. Without exception, comparisons of QFT results with
scattering experiments are based on these on-shell results. It must necessarily be so, as off-shell matrix elements don't make
formal sense:
Matrix elements are taken between states, and all physical states are
on-shell by the basic structure of QFT. Thus the structure of QFT itself
enforces a fundamental distinction between real particles representable
by states and virtual particles representable by propagators only. The basic problem invalidating the above argument is the assumption
that creation and destruction of particles in space and time can be
identified with vertices in Feynman diagrams. They cannot. For Feynman
diagrams lack any dynamical properties, and their interpretation in
space and time is sterile. Thus the view that in reality there are no external lines is based on
a superficial, tempting but invalid identification of theoretical
concepts with very different properties. The conclusion is that, indeed, real particles (represented by external legs)
and virtual particles (represented by internal legs) are completely separate conceptual entities, clearly distinguished by their meaning. In particular, one never turns into the other, nor does one affect the other. | {
"source": [
"https://physics.stackexchange.com/questions/4349",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1702/"
]
} |
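A compact formula summarizing the pole description of an unstable particle used in the answer above (a sketch; conventions for defining the width vary): for $\Gamma\ll m$ the propagator near the resonance takes the Breit-Wigner form
$$ \frac{1}{p^2-m^2+i\,m\Gamma}, \qquad p^2=\Big(m-\tfrac{i}{2}\Gamma\Big)^2\approx m^2-i\,m\Gamma, $$
so the S-matrix pole sits at the complex mass $m-\tfrac{i}{2}\Gamma$, the lifetime is $\tau=1/\Gamma$ in units with $\hbar=c=1$, and scanning the real energy across the pole produces the Breit-Wigner bump in cross sections described above.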
4,384 | I would like to know the physical meaning of the Legendre transformation , if there is any? I've used it in thermodynamics and classical mechanics and it seemed only a change of coordinates? | Legendre transformations are commonly used in thermodynamics (to switch between different independent variables) and classical mechanics (to switch between the Lagrange and Hamilton formalisms). But you rightly ask: what exactly is a Legendre transformation? Where does it come from? What makes it work? In (1D) classical mechanics, for example: if we have a Lagrangian $L(q,\dot{q}[,t])$, why can we define a variable $$p = \frac{\partial L}{\partial\dot{q}}$$ and expect to be able to construct a new function (the Hamiltonian ) $$H(q,p[,t]) = p\dot{q}-L(q,\dot{q}[,t])$$ that behaves well? What's the relationship between both functions? Let's look at the Lagrangian and Hamiltonian as a guiding example. I'll keep it fairly abstract/general, but the notation of Lagrangian/Hamiltonian can help make things more concrete and clearer. One thing I will do, however, is leave out the explicit time dependence. It's not important to our analysis and more often than not there will indeed be no explicit time dependence. Furthermore, I'll denote $v\equiv\dot{q}$ to put less emphasis on the relation to $q$, since it is not important for the Legendre transformation. So what do we need for a Legendre transformation? Well, first of all we need two variables $v$, $p$ that are single-valued functions of each other. Another way to put this is that $p$ must be a monotone function of $v$ and vice versa. Figure 1 shows an example of such a function. Figure 1. Example of a single-valued relation between $v$ and $p$. For such variables it is always possible to construct a pair of functions with the property that differentiation of one of the functions with respect to one of the variables yields the second variable. Equivalently, the derivative of the second function with respect to this second variable yields the first variable. In our example of classical mechanics, the functions we can construct for our two variables $v$ and $p$ are the Lagrangian $L(q,v)$ and the Hamiltonian $H(q,p)$.$^1$ They satisfy (by definition) the differential relations $$\begin{align}
\frac{\partial L}{\partial v} &= p \\
\frac{\partial H}{\partial p} &= v
\end{align}$$ Why does it work? Indeed, why can we construct such functions? Take another look at figure 1. The way the graph is set up, it looks like a graph of $p$ as a function of $v$. So if we integrate this function between $0$ and some value $v$ (shown on the graph), the answer we get is the orange area under the curve. This integral is our first function! Indeed, if we return to the notation of our classical example (I'm going to leave out the $q$ dependence from now on): $$L(v) = \int_0^v{p(v')dv'}$$ because $$\frac{\partial L}{\partial v} = \frac{\partial}{\partial v}\int_0^v{p(v')dv'} = p.$$ Now if we consider the curve in Figure 1 to be $v$ as a function of $p$ (rotate the graph around if that makes it clearer to you), we can make a similar reasoning. This time we integrate between $0$ and $p$ where $p$ has been chosen to correspond to our earlier $v$.$^2$ This integral is our second function; so in terms of our 1D classical example: $$H(p) = \int_0^p{v(p')dp'}.$$ You may have noticed that we've described a rectangle with the integrals (and therefore the two functions $L$ and $H$). This rectangle has a total surface of $p\cdot v$. But we've also calculated its surface in two parts: the green and the orange. The sum of both must therefore be equal to $pv$. This yields the Legendre transformation $$L(v) + H(p) = pv$$ or $$H(p) = pv - L(v)$$. How does a Legendre transformation work in practice? Here's a 3 step plan: Start with your first function, e.g. $L(v)$. $\left[\right.$or $U(S)$ for a thermodynamical example$\left.\right]$ Find the conjugate variable by differentiation: $$p = \frac{\partial L}{\partial v} \hspace{2cm} \left[T = \frac{\partial U}{\partial S}\right]$$ Construct the second function $$H(p) = p\cdot v - L(v) \hspace{2cm} \left[\left(-F(T)\right) = T\cdot S - U(S)\right]$$ and insert the conjugate variable wherever you can, i.e. replace $v$ $[S]$ with the expression $v(p)$ $[S(T)]$ throughout the entire expression. Partly from Figure 1, it should now be clear that the two functions are not only generally different from each other, they describe things from a different perspective (we had to view the curve in Figure 1 once as a function $p(v)$ and once as a function $v(p)$). The functions are complementary and their close relation is governed by a Legendre transformation. $^1$ These are also functions of $q$, but that's not important. They could be functions of any number of distinct variables, though their list of variables will obviously be the same except for $v$ and $p$. Indeed, the Legendre transform doesn't change any of the other dependencies. If this is not clear now, it should become so throughout the rest of this explanation. $^2$ Note that this is where the single-valuedness of the relation between $v$ and $p$ is required. If $v(p)$ was a parabola for example, then there would be ambiguity about which $p$ corresponds to the $v$ we used. | {
"source": [
"https://physics.stackexchange.com/questions/4384",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1579/"
]
} |
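As a minimal worked example of the three-step recipe above, take the one-dimensional Lagrangian $L(q,v)=\tfrac12 m v^2 - V(q)$:
$$ p=\frac{\partial L}{\partial v}=mv \;\;\Rightarrow\;\; v(p)=\frac{p}{m}, \qquad H(q,p)=p\,v(p)-L\big(q,v(p)\big)=\frac{p^2}{2m}+V(q). $$
The thermodynamic version works the same way: starting from $U(S)$ with $T=\partial U/\partial S$, the transform produces $U-TS$, i.e. the free energy $F(T)$ up to the sign convention used above.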
4,731 | 1. In classical mechanics, using Newton's laws, the ellipticity of orbits is derived. It is also said that the center of mass is at one of the foci. 2. Each body will orbit the center of mass of the system. My question is: Are the assumptions in 1 and 2 correct? Follow-up question: Assuming the distance from the centre of mass to each body remains the same, do we have two bodies orbiting the centre of mass of the system in an elliptical or circular orbit? Finally: With elliptical orbits, if the heavier mass is supposed to be at one of the foci, and if there is any significance to the second focus, what is it? Is it a Lagrange point by any chance, or does it have some other mathematical property? | The second (empty) focus is relevant in the theory of tides. In an elliptical orbit, the line joining the planet and the empty focus rotates at (to a good approximation) the same frequency as the mean motion of the planet; therefore, if the spin rotation period is equal to the orbital period (the planet is locked in synchronous rotation), the planet rotates with one face pointing to the empty focus. Importantly, a tidal bulge will try to point to the massive object (the occupied focus) while the planet itself will be pointing to the empty focus, causing a "librational tide". | {
"source": [
"https://physics.stackexchange.com/questions/4731",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/622/"
]
} |
4,784 | One thing I've heard stated many times is that "most" or "many" physicists believe that, despite the fact that they have not been observed, there are such things as magnetic monopoles. However, I've never really heard a good argument for why this should be the case. The explanations I've heard tend to limit themselves to saying that "it would create a beautiful symmetry within Maxwell's equations" or similar. But I don't see how the fact that "it would be beautiful" is any kind of reason for why they should exist! That doesn't sound like good science to me. So obviously there must be at least another reason apart from this to believe in the existence of natural magnetic monopoles. (I'm aware that recent research has shown that metamaterials (or something similar) can emulate magnetic monopole behaviour, but that's surely not the same thing.) Can anyone provide me with some insight? | Aqwis, it would help in the future if you mentioned something about your background because it helps to know what level to aim at in the answer. I'll assume you know E&M at an undergraduate level. If you don't then some of this explanation probably won't make much sense. Part one goes back to Dirac. In E&M we need to specify a vector potential $A_\mu$. Classically the electric and magnetic fields suffice, but when quantum mechanics is included you need $A_\mu$. The vector potential is only defined up to gauge transformations $A_\mu \rightarrow g(x)(A_\mu + \frac{i}{e} \partial_\mu ) g^{-1}(x)$ where $g(x)=\exp(i \alpha(x))$. The group involved in these gauge transformations is the real line (that is the space of possible values of $\alpha$) if electric charge is not quantized, but if charge is quantized, as all evidence points to experimentally, then the group is compact, that is it is topologically a circle, $S^1$. So to specify a gauge field we specify an element of $S^1$ at every point in spacetime. Now suppose we don't know for sure what goes on inside
some region (because we don't know physics at short distances). Surround this region with a sphere. We can define our gauge transformation at every point outside this region, but now we have to specify it on two-spheres which cannot be contracted to a point. At a fixed radial distance the total space of angles plus the gauge transformation can be a simple product, $S^2 \times S^1$ but it turns out there are other possibilities. In particular you can make what is called a principal fibre bundle where the $S^1$ twists in a certain way as you move around the $S^2$. These are characterized by an integer $n$, and a short calculation which you can find various places in the literature shows that the integer $n$ is nothing but the magnetic monopole charge of the configuration you have defined. So charge quantization leads to the ability to define configurations which are magnetic monopoles. So far there is no guarantee that there are finite energy objects which correspond to these fields. To figure out if they are finite energy we need to know what goes on all the way down to the origin inside our region. Part two is that in essentially all models that try to unify the Standard Model you find that there are in fact magnetic monopoles of finite energy. In grand unified theories this goes back to work of 't Hooft and Polyakov. It also turns out to be true in Kaluza-Klein theory and in string theory. So there are three compelling reasons to expect that magnetic monopoles exist. The first is the beauty of a deep symmetry of Maxwell's equations called electric-magnetic duality, the second is that electric charge appears to be quantized experimentally and this allows you to define configurations with quantized magnetic monopole charge, and the third is that when you look into the interior of these objects in essentially all unified theories you find that the monopoles have finite energy. | {
"source": [
"https://physics.stackexchange.com/questions/4784",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1803/"
]
} |
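For reference, the "short calculation" alluded to above leads to the Dirac quantization condition; in units with $\hbar=c=1$ (the exact numerical factor depends on how the charges are normalized) it reads
$$ q_e\,q_m=\frac{n}{2},\qquad n\in\mathbb{Z}, $$
so a single monopole of charge $q_m$ anywhere in the universe forces every electric charge to be an integer multiple of $1/(2q_m)$, and conversely quantized electric charge is what makes the integer $n$ labelling the bundles well defined.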
4,789 | I am doing some self-study in between undergrad and grad school and I came across the beastly Wigner-Eckart theorem in Sakurai's Modern Quantum Mechanics. I was wondering if someone could tell me why it is useful and perhaps just help me understand a bit more about it. I have had two years of undergrad mechanics and I think I have a reasonably firm grasp of the earlier material out of Sakurai, so don't be afraid to get a little technical. | I will not get into theoretical details -- Luboš ad Marek did that better than I'm able to. Let me give an example instead: suppose that we need to calculate this integral: $\int d\Omega (Y_{3m_1})^*Y_{2m_2}Y_{1m_3}$ Here $Y_{lm}$ -- are spherical harmonics and we integrate over the sphere $d\Omega=\sin\theta d\theta d\phi$. This kind of integrals appear over and over in, say, spectroscopy problems. Let us calculate it for $m_1=m_2=m_3=0$: $\int d\Omega (Y_{30})^*Y_{20}Y_{10} = \frac{\sqrt{105}}{32\sqrt{\pi^3}}\int d\Omega \cos\theta\,(1-3\cos^2\theta)(3\cos\theta-5\cos^3\theta)=$ $ = \frac{\sqrt{105}}{32\sqrt{\pi^3}}\cdot 2\pi \int d\theta\,\left(3\cos^2\theta\sin\theta-14\cos^4\theta\sin\theta+15\cos^6\theta\sin\theta\right)=\frac{3}{2}\sqrt{\frac{3}{35\pi}}$ Hard work, huh? The problem is that we usually need to evaluate this for all values of $m_i$. That is 7*5*3 = 105 integrals. So instead of doing all of them we got to exploit their symmetry. And that's exactly where the Wigner-Eckart theorem is useful: $\int d\Omega (Y_{3m_1})^*Y_{2m_2}Y_{1m_3} = \langle l=3,m_1| Y_{2m_2} | l=1,m_3\rangle = C_{m_1m_2m_3}^{3\,2\,1}(3||Y_2||1)$ $C_{m_1m_2m_3}^{j_1j_2j_3}$ -- are the Clebsch-Gordan coefficients $(3||Y_2||1)$ -- is the reduced matrix element which we can derive from our expression for $m_1=m_2=m_3=0$: $\frac{3}{2}\sqrt{\frac{3}{35\pi}} = C_{0\,0\,0}^{3\,2\,1}(3||Y_2||1)\quad \Rightarrow \quad (3||Y_2||1)=\frac{1}{2}\sqrt{\frac{3}{\pi}}$ So the final answer for our integral is: $\int d\Omega(Y_{3m_1})^*Y_{2m_2}Y_{1m_3}=\sqrt{\frac{3}{4\pi}}C_{m_1m_2m_3}^{3\,2\,1}$ It is reduced to calculation of the Clebsch-Gordan coefficient and there are a lot of, tables , programs, reduction and summation formulae to work with them. | {
"source": [
"https://physics.stackexchange.com/questions/4789",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1807/"
]
} |
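If you want to check such integrals symbolically or numerically, SymPy ships the relevant coefficients; a minimal sketch in Python (assuming the sympy.physics.wigner module, whose gaunt function returns the integral over the sphere of the product of three spherical harmonics):
from sympy import sqrt, pi
from sympy.physics.wigner import gaunt
# Integral over the sphere of Y_{3,0}*Y_{2,0}*Y_{1,0}; for m1=m2=m3=0 the harmonics are real,
# so this is exactly the integral evaluated by hand in the answer above.
val = gaunt(3, 2, 1, 0, 0, 0)
print(val, float(val))                     # exact value, numerically ~0.2478
print(float(3 * sqrt(3 / (35 * pi)) / 2))  # the hand-computed (3/2)*sqrt(3/(35*pi)), also ~0.2478
The same module also provides clebsch_gordan and wigner_3j, so the reduced matrix element and the full table of 105 integrals can be generated in a few lines.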
4,930 | I've read some papers recently that talk about gapped Hamiltonians or gapless systems, but what does it mean? Edit: Is an XX spin chain in a magnetic field gapped? Why or why not? | This is actually a very tricky question, mathematically. Physicists may think this question to be trivial. But it takes me one hour in a math summer school to explain the notion of gapped Hamiltonian. To see why it is tricky, let us consider the following statements. Any physical
system has a finite number of degrees of freedom (assuming the universe is finite). Such a physical
system is described by a Hamiltonian matrix with a finite dimension.
Any Hamiltonian matrix with a finite dimension has a discrete spectrum.
So all physical systems (or all Hamiltonians) are gapped. Certainly, the above is not what we mean by "gapped Hamiltonian" in physics.
But what does it mean for a Hamiltonian to be gapped? Since a gapped system may have gapless excitations at the boundary,
to define a gapped Hamiltonian we need to put the Hamiltonian on a space with no boundary. Also, a system with certain sizes may contain non-trivial excitations
(such as a spin-liquid state of spin-1/2 spins on a lattice with an ODD number of sites), so we have to specify that the system has a certain sequence of sizes as we take the thermodynamic limit. So here is a definition of "gapped Hamiltonian" in physics:
Consider a system on a closed space: if there is a sequence of sizes
of the system $L_i$, $L_i\to\infty$ as $i \to \infty$,
such that the size-$L_i$ system on a closed space has the following "gap property", then the system is said to be gapped.
Note that the notion of "gapped Hamiltonian" cannot even be defined for a single Hamiltonian. It is a property of a sequence of Hamiltonians
in the large-size limit. Here is the definition of the "gap property":
There is a fixed $\Delta$ (i.e., independent of $L_i$) such that the
size-$L_i$ Hamiltonian has no eigenvalue in an energy window of size $\Delta$.
The number of eigenstates below the energy window does not depend on
$L_i$, and the energy splitting of those eigenstates below the energy window
approaches zero as $L_i\to \infty$. The number of eigenstates below the energy window becomes the ground-state degeneracy of the gapped system.
This is how the ground-state degeneracy of a topologically ordered state is defined.
I wonder: if someone had considered the definition of a gapped many-body system very carefully, he/she might have discovered the notion of topological order mathematically. | {
"source": [
"https://physics.stackexchange.com/questions/4930",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/873/"
]
} |
4,959 | Noether's theorem is one of those surprisingly clear results of mathematical calculations, for which I am inclined to think that some kind of intuitive understanding should or must be possible. However I don't know of any, do you? *Independence of time $\leftrightarrow$ energy conservation. *Independence of position $\leftrightarrow$ momentum conservation. *Independence of direction $\leftrightarrow$ angular momentum conservation. I know that the mathematics leads in the direction of Lie-algebra and such but I would like to discuss whether this theorem can be understood from a non-mathematical point of view also. | It's intuitively clear that the energy most accurately describes how much the state of the system is changing with time. So if the laws of physics don't depend on time, then the amount how much the state of the system changes with time has to be conserved because it's still changing in the same way. In the same way, and perhaps even more intuitively, if the laws don't depend on position, you may hit the objects, and hit them a little bit more, and so on. The momentum measures how much the objects depend on space, so if the laws themselves don't depend on the position on space, the momentum has to be conserved. The angular momentum with respect to an axis is determining how much the state changes if you rotate it around the axis - how much it depends on the angle (therefore "angular" in the name). So the symmetry is linked to the conservation law once again. If your intuition doesn't find the comments intuitive enough, maybe you should train your intuition because your current intuition apparently misses the most important properties of time, space, angles, energy, momentum, and angular momentum. ;-) | {
"source": [
"https://physics.stackexchange.com/questions/4959",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/139/"
]
} |
5,005 | In my opinion, the Grassmann number "apparatus" is one of the least intuitive things in modern physics. I remember that it took a lot of effort when I was studying this. The problem was not in the algebraic manipulations themselves -- it was rather psychological: "Why would one want to consider such a crazy stuff!?" Later one just gets used to it and have no more such strong feelings for the anticommuting numbers. But this psychological barrier still exists for newbies. Is there a way to explain or motivate one to learn the Grassmann numbers? Maybe there is a simple yet practical problem that demonstrates the utility of those? It would be great if this problem would provide the connection to some "not-so-advanced" areas of physics, like non-relativistic QM and/or statistical physics. | I don't have an answer to the question "why would one want to consider such crazy stuff in physics ?" since I don't know much physics, but as a mathematics student I do have an answer to the question "why would one want to consider such crazy stuff in mathematics ?" What physicists call Grassmann numbers are what mathematicians call elements of the exterior algebra $\Lambda(V)$ over a vector space $V$. The exterior algebra naturally arises as the solution to the following geometric problem. Say that $V$ has dimension $n$ and let $v_1, ... v_n$ be a basis of it. We would like a nice natural definition of the $n$-dimensional volume of the paralleletope defined by the vectors $\epsilon_1 v_1 + ... + \epsilon_n v_n, e_i \in \{ 0, 1 \}$. When $n = 2$ this is the standard parallelogram defined by two linearly independent vectors, and when $n = 3$ this is the standard paralellepiped defined by three linearly independent vectors. The thing about the naive definition of volume is that it is very close to having really nice mathematical properties: it is almost multilinear. That is, if we denote the volume we're looking at by $\text{Vol}(v_1, ... v_n)$, then it is almost true that $\text{Vol}(v_1, ... v_i + cw, ... v_n) = \text{Vol}(v_1, ... v_n) + c \text{Vol}(v_1, ... v_{i-1}, w, v_{i+1}, ... v_n)$. You can draw nice diagrams to see this readily. However, it isn't actually completely multilinear: depending on how you vary $w$ you will find that sometimes the volume shrinks to zero and then goes back up in a non-smooth way when really it ought to keep getting more negative. (You can see this even in two dimensions, by varying one of the vectors until it goes past the other.) To fix that, we need to look instead at oriented volume, which can be negative, but which has the enormous advantage of being completely multilinear and smooth. The other major property it satisfies is that if any of the two vectors $v_i$ agree (that is, the vectors are linearly dependent) then the oriented volume is zero, which makes sense. It turns out (and this is a nice exercise) that this is equivalent to oriented volume coming from a "product" operation, the exterior product, which is anticommutative. Formally, these two conditions define an element of the top exterior power $\Lambda^n(V)$ defined by the exterior product $v_1 \wedge v_2 ... \wedge v_n$, and choosing an element of this top exterior power (a volume form) allows us to associate an actual number to an $n$-tuple of vectors which we can call its oriented volume in the more naive sense. 
If $V$ is equipped with an inner product, then there are two distinguished elements of $\Lambda^n(V)$ given by a wedge product of an orthonormal basis in some order, and it's natural to pick one of these as a volume form. Alright, so what about the rest of the exterior powers $\Lambda^p(V)$ that make up the exterior algebra? The point of these is that if $v_1, ... v_p, p < n$ is a tuple of vectors in $V$, we can consider the subspace they span and talk about the $p$-dimensional oriented volume of the paralleletope given by the $v_i$ in this subspace. But the result of this computation shouldn't just be a number: we need a way to do this that keeps track of what subspace we're in. It turns out that mathematically the most natural way to do this is to keep in mind the requirements we really want out of this computation (multilinearity and the fact that if the $v_i$ are not linearly independent then the answer should be zero), and then just define the result of the computation to be the universal thing that we get by imposing these requirements and nothing else, and this is nothing more than the exterior power $\Lambda^p(V)$. This discussion hopefully motivated for you why the exterior algebra is a natural object from the perspective of geometry. Since Einstein, physicists have been aware that geometry has a lot to say about physics, so hopefully the concept makes a little more sense now. Let me also say something about how modern mathematicians think about "space" in the abstract sense. The inspiration for the modern point of view actually derives at least partially from physics: the only thing you can really know about a space are observables defined on it. In classical physics, observables form a commutative ring, so one might say roughly speaking that the study of commutative rings is the study of "classical spaces." In mathematics this study, in the abstract, is called algebraic geometry . It is a very sophisticated theory that encompasses classical algebraic geometry, arithmetic geometry, and much more, and it is in large part because of the success of this theory and related commutative ring approaches to geometry (topological spaces, manifolds, measure spaces) that mathematicians have gotten used to the slogan that "commutative rings are rings of observables on some space." Of course, quantum mechanics tells us that the actual universe around us doesn't work this way. The observables we care about don't commute, and this is a big issue. So mathematically what is needed is a way to think about noncommutative rings as "quantum spaces" in some sense. This subject is very broad, but roughly it goes by the name of noncommutative geometry . The idea is simple: if we want to take quantum mechanics completely seriously, our spaces shouldn't have "points" at all because points are classical phenomena that implicitly require a commutative ring of observables, which we know is not what we actually have. So our spaces should be more complicated things coming from noncommutative rings in some way. 
Grassmann numbers satisfy one of the most tractable forms of noncommutativity (actually they are commutative if one alters the definition of "commutative" very slightly, but never mind that...), and even better it is a form of noncommutativity that is clearly related to something physicists care about (the properties of fermions), so anticommuting observables are a natural step up from commuting observables in order to get our mathematics to align more closely with reality while still being able to think in an approximately classical way. | {
"source": [
"https://physics.stackexchange.com/questions/5005",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/386/"
]
} |
5,029 | Recently, non-abelian anyons have been studied in some solid-state systems. These states are being studied for the creation and manipulation of qubits in quantum computing. But how can these non-abelian anyons arise if all the particles involved are fermions? In other words, how can electronic states have statistics different from fermionic statistics if all the electrons are fermions? | The realization of non-Abelian statistics in condensed matter systems was first proposed in the following two papers.
G. Moore and N. Read, Nucl. Phys. B 360, 362 (1991)
X.-G. Wen, Phys. Rev. Lett. 66, 802 (1991) Zhenghan Wang and I wrote a review article to explain FQH states (including non-Abelian FQH states)
to mathematicians, which includes explanations of some basic but important concepts, such as gapped states, phases of matter, universality, etc.
It also explains topological quasiparticles, quantum dimension, non-Abelian statistics, topological order, etc. The key point is the following: consider a non-Abelian FQH state that contains quasi-particles
(which are topological defects in the FQH state), even when
all the positions of the quasi-particles are fixed, the FQH state still has nearly degenerate ground states. The energy splitting between
those nearly degenerate ground states approaches zero as the quasi-particle separation approaches infinity. The degeneracy is topological, as there is no local perturbation, near or away from the quasi-particles, that can lift the degeneracy.
The appearance of such quasi-particle induced topological degeneracy is the key to non-Abelian statistics. (for more details, see direct sum of anyons?) When there is quasi-particle induced topological degeneracy, as we exchange
the quasi-particles, a non-Abelian geometric phase will be induced which
describes how those topologically degenerate ground states rotate into each other.
People usually refer to such a non-Abelian geometric phase as non-Abelian statistics. But the appearance of quasi-particle induced topological degeneracy
is more important, and is the precondition that a non-Abelian geometric phase
can even exist. How does quasi-particle-induced topological degeneracy arise in solid-state systems?
To make a long story short, in "X.-G. Wen, Phys. Rev. Lett. 66, 802 (1991)",
a particular FQH state
$$ \Psi(z_i) = [\chi_k(z_i)]^n $$
was constructed, where $\chi_k(z_i)$ is the IQH wave function with $k$ filled Landau levels. Such a state has a low energy effective theory which is
the $SU(n)$ level $k$ non-Abelian Chern-Simons theory. When $k >1,\ n>1$, it leads to quasi-particle-induced-topological-degeneracy and non-Abelian statistics.
In "G. Moore and N. Read, Nucl. Phys. B 360, 362 (1991)", the FQH wave function
is constructed as a correlation function in a CFT. The conformal blocks correspond to
quasi-particle-induced-topological-degeneracy. | {
"source": [
"https://physics.stackexchange.com/questions/5029",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1545/"
]
} |
5,031 | One thing I know about black holes is that an object gets closer to the event horizon, gravitation time dilation make it move more slower from an outside perspective, so that it looks like it take an infinite amount of time for the object to reach the event horizon. It seems like a similar process should slow the formation of the black hole itself: As the star collapses, its gravitational time dilation make itself collapse more slowly. This make me wonder, are what astronomers claim to be black holes really black holes, or are they stars that progressively make themselves more similar to one without actually reaching the stage of having an event horizon? EDIT: Contemplating one answer, I realize the question is ambiguous. What does finite time mean in general relativity. Here is a less ambiguous question: Is there a connected solution of 3+1 dimensional general relativity with one space-like slice not have a singularity, and another space-like slice having one. | The conceptual key here is that time dilation is not something that happens to the infalling matter. Gravitational time dilation, like special-relativistic time dilation, is not a physical process but a difference between observers. When we say that there is infinite time dilation at the event horizon we don't mean that something dramatic happens there. Instead we mean that something dramatic appears to happen according to an observer infinitely far away. An observer in a spacesuit who falls through the event horizon doesn't experience anything special there, sees her own wristwatch continue to run normally, and does not take infinite time on her own clock to get to the horizon and pass on through. Once she passes through the horizon, she only takes a finite amount of clock time to reach the singularity and be annihilated. (In fact, this ending of observers' world-lines after a finite amount of their own clock time, called geodesic incompleteness, is a common way of defining the concept of a singularity.) When we say that a distant observer never sees matter hit the event horizon, the word "sees" implies receiving an optical signal. It's then obvious as a matter of definition that the observer never "sees" this happen, because the definition of a horizon is that it's the boundary of a region from which we can never see a signal. People who are bothered by these issues often acknowledge the external unobservability of matter passing through the horizon, and then want to pass from this to questions like, "Does that mean the black hole never really forms?" This presupposes that a distant observer has a uniquely defined notion of simultaneity that applies to a region of space stretching from their own position to the interior of the black hole, so that they can say what's going on inside the black hole "now." But the notion of simultaneity in GR is even more limited than its counterpart in SR. Not only is simultaneity in GR observer-dependent, as in SR, but it is also local rather than global. Is there a connected solution of 3+1 dimensional general relativity with one space-like slice not have a singularity, and another space-like slice having one. This is a sophisticated formulation, but I don't think it succeeds in getting around the fundamental limitations of GR's notion of "now." 
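(A quantitative aside on the "finite amount of clock time" mentioned above — this is my own back-of-the-envelope addition, assuming a non-rotating Schwarzschild black hole, which the answer does not commit to: the longest possible proper time between crossing the horizon and reaching the singularity is $\tau_\mathrm{max} = \pi G M/c^3$.)

```python
import math

# Back-of-the-envelope estimate (my own addition): maximum proper time from the
# horizon to the singularity of a non-rotating (Schwarzschild) black hole,
# tau_max = pi * G * M / c**3.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def max_proper_time(mass_kg):
    """Longest possible wristwatch time (s) between horizon crossing and the singularity."""
    return math.pi * G * mass_kg / c**3

print(max_proper_time(M_sun))          # ~1.5e-5 s for a stellar-mass black hole
print(max_proper_time(4.0e6 * M_sun))  # ~60 s for a black hole like Sgr A* (~4 million solar masses)
```

For a stellar-mass black hole this is tens of microseconds; even for a supermassive one it is only about a minute.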
Figure 1 is a Penrose diagram for a spacetime that contains a black hole formed by gravitational collapse of a cloud of dust.[Seahra 2006] On this type of diagram, light cones look just like they would on a normal spacetime diagram of Minkowski space, but distance scales are highly distorted. The upright line on the left represents an axis of spherical symmetry, so that the 1+1-dimensional diagram represents 3+1 dimensions. The quadrilateral on the bottom right represents the entire spacetime outside the horizon, with the distortion fitting this entire infinite region into that finite area on the page. Despite the distortion, the diagram shows lightlike surfaces as 45-degree diagonals, so that's what the event horizon looks like. The triangle is the spacetime inside the event horizon. The dashed line is the singularity, which is spacelike. The green shape is the collapsing cloud of dust, and the only reason it looks smaller at early times is the distortion of the scales; it's really collapsing the whole time, not expanding and then recontracting. In figure 2, E is an event on the world-line of an observer. The red spacelike slice is one possible "now" for this observer. According to this slice, no dust particle has ever fallen in and reached the singularity; every such particle has a world-line that intersects the red slice, and therefore it's still on its way in. The blue spacelike slice is another possible "now" for the same observer at the same time. According to this definition of "now," none of the dust particles exists anymore. (None of them intersect the blue slice.) Therefore they have all already hit the singularity. If this was SR, then we could decide whether red or blue was the correct notion of simultaneity for the observer, based on the observer's state of motion. But in GR, this only works locally (which is why I made the red and blue slices coincide near E). There is no well-defined way of deciding whether red or blue is the correct way of extending this notion of simultaneity globally. So the literal answer to the quoted part of the question is yes, but I think it should be clear that this doesn't establish whether infalling matter has "already" hit the singularity at some "now" for a distant observer. Although it may seem strange that we can't say whether the singularity has "already" formed according to a distant observer, this is really just an inevitable result of the fact that the singularity is spacelike. The same thing happens in the case of a Schwarzschild spacetime, which we think of as a description of an eternal black hole, i.e., one that has always existed and always will. On the similar Penrose diagram for an eternal black hole, we can still draw a spacelike surface like the red one, representing a definition of "now" such that the singularity doesn't exist yet. Figure 3 shows the situation if we take into account black hole evaporation. For the observer at event E$_1$, we still have spacelike surfaces like the blue one according to which the matter has "already" hit the singularity, and others like the red according to which it hasn't. However, suppose the observer lives long enough to be at event E$_2$. There is no spacelike surface through E$_2$ that intersects the cloud of infalling dust. Therefore the observer can infer at this time that all the infalling matter has hit the singularity. 
This makes sense, of course, because the observer has seen the Hawking radiation begin and eventually cease, meaning that the black hole no longer exists and its history is over. Seahra, "An introduction to black holes," http://www.math.unb.ca/~seahra/resources/notes/black_holes.pdf | {
"source": [
"https://physics.stackexchange.com/questions/5031",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1859/"
]
} |
5,046 | Can I use a normal metal antenna to emit visible light? | with your question you tackle one of the most rapidly evolving fields at the moment: Nanophotonics . Let me explain in the following why this is indeed the case. Dipole antennas It was Heinrich Hertz , student of Kirchhoff and Helmholtz who tried to verify Maxwells equations by generating electromagnetic radiation building a dipole antenna out of a somehow transformed RLC-circuit : This is the physical ground on which we have to investigate your question to get an idea if optical antennas are possible and if you could use some of your ones to create red light. So let's look at the characteristic value of such an antenna: its length. Length Scales Roughly speaking, to be able to generate radiation with the mentioned form of antennas, one needs to impose a standing wave on the antenna. Assuming that the reflection at the ends of the antenna is somehow perfect, we find that an antenna will have a length of approximately $$L \approx \frac\lambda2\,.$$ If you look at the electromagnetic spectrum , red light has a wavelength of about 600 to 700nm, your antenna will have $$L\approx 300nm$$ which is incredibly small. This is the reason, antennas in optical frequencies are subject to current research , see e.g. Resonant Optical Antennas by Mühlschlegel et al. So, from this point of view it seems very unlikely that a Layman will be able to produce an antenna that can emit light at optical frequencies. Sincerely | {
"source": [
"https://physics.stackexchange.com/questions/5046",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/58/"
]
} |
5,265 | During breakfast with my colleagues, a question popped into my head: What is the fastest method to cool a cup of coffee, if your only available instrument is a spoon? A qualitative answer would be nice, but if we could find a mathematical model, or even better do the experiment (we don't have the means here :-s), it would be great! :-D So far, the options we have considered are (any other creative methods are also welcome): Stir the coffee with the spoon: Pros: The whirlpool has a greater surface than the flat coffee, so it is better for heat exchange with the air. Due to the difference in speed between the liquid and the surrounding air, the Bernoulli effect should lower the pressure, and that would cool it too in order to keep the atmospheric pressure constant. Cons: The Joule effect should heat the coffee. Leave the spoon inside the cup: As the metal is a good heat conductor (and we are not talking about a wooden spoon!), and there is some part inside the liquid and another outside, it should help with the heat transfer, right? A side question about this: what is better, to put the spoon in the normal way, or reversed, with the handle inside the cup? (I think it is better reversed, as there is more surface in contact with the air, as in CPU heat sinks.) Insert and remove the spoon repeatedly: The reasoning for this is that the spoon cools off faster when it's outside. (I personally think it doesn't pay off compared with keeping it always inside, since as the spoon gets cooler, the smaller the temperature gradient and the worse the heat transfer.) | We did the experiment. (Early results indicate that dipping may win, though the final conclusion remains uncertain.) Equipment: an $\mathrm{H_2O}$ ice bath, a canning jar, a thermometer, a pot of boiling water, and a stop watch. There were four trials, each lasting 10 minutes. Boiling water was poured into the canning jar, and the spoon was taken from the ice bath and placed into the jar. A temperature reading was taken once every minute. After each trial the water was poured back into the pot of boiling water and the spoon was placed back into the ice bath. Method: Final Temp.
1. No Spoon 151 F
2. Spoon in, no motion 149 F
3. Spoon stirring 147 F
4. Spoon dipping 143 F

Temperature readings have an uncertainty of $\pm1\,\mathrm{^\circ F}$.

[Plot: water temperature vs. time for the four trials — red line: no spoon; green line: spoon in, no motion; aqua line: stirring; blue line: dipping.]
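As a rough cross-check (my own addition, not part of the original experiment), one can fit Newton's law of cooling, $T(t) = T_\mathrm{amb} + (T_0 - T_\mathrm{amb})\,e^{-kt}$, to the no-spoon column of the minute-by-minute data tabulated below; the ambient temperature $T_\mathrm{amb}$, the initial temperature $T_0$ and the rate constant $k$ are free fit parameters here, not measured values.

```python
# Fit Newton's law of cooling to the "no spoon" trial (data from the table below).
# T_amb, T0 and k are fit parameters, not measured quantities.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(1, 11)                                   # minutes
T_no_spoon = np.array([180, 174, 171, 168, 164,
                       161, 158, 155, 153, 151])       # deg F

def newton_cooling(t, T_amb, T0, k):
    return T_amb + (T0 - T_amb) * np.exp(-k * t)

params, _ = curve_fit(newton_cooling, t, T_no_spoon, p0=(70.0, 190.0, 0.03))
T_amb, T0, k = params
print(f"fitted ambient ~ {T_amb:.0f} F, initial ~ {T0:.0f} F, rate ~ {k:.3f} /min")
```

If the single-exponential fit is good, it suggests that plain convective cooling dominates all four trials, with the spoon contributing only a modest extra heat-loss channel.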
$$\begin{array}{|c|cl|cl|cl|cl|}
\hline
\text{Min} & \text{No Spoon} & & \text{Spoon} & & \text{Stirring} & & \text{Dipping} \\ \hline
& \text{°F} & \text{°C} & \text{°F} & \text{°C} & \text{°F} & \text{°C} & \text{°F} & \text{°C} \\ \hline
1' & 180 & 82.22 & 175 & 79.44 & 175 & 79.44 & 177 & 80.56 \\
2' & 174 & 78.89 & 172 & 77.78 & 171 & 77.22 & 173 & 78.33 \\
3' & 171 & 77.22 & 168 & 75.56 & 167 & 75 & 168 & 75.56 \\
4' & 168 & 75.56 & 165 & 73.89 & 164 & 73.33 & 164 & 73.33 \\
5' & 164 & 73.33 & 162 & 72.22 & 161 & 71.67 & 160 & 71.11 \\
6' & 161 & 71.67 & 160 & 71.11 & 158 & 70 & 156 & 68.89 \\
7' & 158 & 70 & 156 & 68.89 & 155 & 68.33 & 152 & 66.67 \\
8' & 155 & 68.33 & 153 & 67.22 & 152 & 66.67 & 149 & 65 \\
9' & 153 & 67.22 & 151 & 66.11 & 150 & 65.56 & 146 & 63.33 \\
10' & 151 & 66.11 & 149 & 65 & 147 & 63.89 & 143 & 61.67 \\ \hline
\end{array}$$ | {
"source": [
"https://physics.stackexchange.com/questions/5265",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1942/"
]
} |
5,277 | As I remembered, at the 2 poles of a battery, positive or negative electric charges are gathered. So there'll be electric field existing inside the battery. This filed is neutralized by the chemical power of the battery so the electric charges will stay at the poles. Since there are electric charges at both poles, there must also be electric fields outside the battery. What happens when we connect a metal wire between the 2 poles of a battery? I vaguely remembered that the wire has the ability to constrain and reshape the electric field and keep it within the wire , maybe like an electric field tube. But is that true? | Yes Sam, there definitely is electric field reshaping in the wire. Strangely, it is not talked about in hardly any physics texts, but there are surface charge accumulations along the wire which maintain the electric field in the direction of the wire. (Note: it is a surface charge distribution since any extra charge on a conductor will reside on the surface.) It is the change in, or gradient of, the surface charge distribution on the wire that creates, and determines the direction of, the electric field within a wire or resistor. For instance, the surface charge density on the wire near the negative terminal of the battery will be more negative than the surface charge density on the wire near the positive terminal. The surface charge density, as you go around the circuit, will change only slightly along a good conducting wire (Hence the gradient is small, and there is only a small electric field). Corners or bends in the wire will also cause surface charge accumulations that make the electrons flow around in the direction of the wire instead of flowing into a dead end. Resistors inserted into the circuit will have a more negative surface charge density on one side of the resistor as compared to the other side of the resistor. This larger gradient in surface charge distribution near the resistor causes the relatively larger electric field in the resistor (as compared to the wire). The direction of the gradients for all the aforementioned surface charge densities determine the direction of the electric fields. This question is very fundamental, and is often misinterpreted or disregarded by people. We are all indoctrinated to just assume that a battery creates an electric field in the wire. However, when someone asks "how does the field get into the wire and how does the field know which way to go?" they are rarely given a straight answer. A follow up question might be, "If nonzero surface charge accumulations are responsible for the size and direction of the electric field in a wire, why doesn't a normal circuit with a resistor exert an electric force on a nearby pith ball from all the built up charge in the circuit?" The answer is that it does exert a force, but the surface charge and force are so small for normal voltages and operating conditions that you don't notice it. If you hook up a 100,000V source to a resistor you would be able to measure the surface charge accumulation and the force it could exert. Here's one more way to think about all this (excuse the length of this post, but there is so much confusion on this question it deserves appropriate detail). We all know there is an electric field in a wire connected to a battery. But the wire could be as long as desired, and so as far away from the battery terminals as desired. 
The charge on the battery terminals can't be directly and solely responsible for the size and direction of the electric field in the part of the wire miles away since the field would have died off and become too small there. (Yes, an infinite plane of charge, or other suitably exotic configurations, can create a field that does not decrease with distance, but we are not talking about anything like that.) If the charge near the terminals does not directly and solely determine the size and direction of the electric field in the part of the wire miles away, some other charge must be creating the field there (Yes, you can create an electric field with a changing magnetic field instead of a charge, but we can assume we have a steady current and non-varying magnetic field). The physical mechanism that creates the electric field in the part of the wire miles away is a small gradient of the nonzero surface charge distribution on the wire. And the direction of the gradient of that charge distribution is what determines the direction of the electric field there. For a rare and absolutely beautiful description of how and why surface charge creates and shapes the electric field in a wire refer to the textbook:
"Matter and Interactions: Volume 2 Electric and Magnetic Interactions"
by Chabay and Sherwood,
Chapter 18 "A Microscopic View of Electric Circuits"
pg 631-640. | {
"source": [
"https://physics.stackexchange.com/questions/5277",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1513/"
]
} |
5,304 | $$\mathbf D = \varepsilon \mathbf E$$
I don't understand the difference between $\mathbf D$ and $\mathbf E$.
When I have a plate capacitor, a different medium inside will change $\mathbf D$, right?
$\mathbf E$ depends only on the charges, right? | $\mathbf E$ is the fundamental field in the Maxwell equations, so it depends on all charges. But materials have lots of internal charges you usually don't care about. You can get rid of them by introducing the polarization $\mathbf P$ (which is the material's response to the applied $\mathbf E$ field). Then you can subtract the effect of the internal charges and you'll obtain equations for the free charges alone. These equations will look just like the original Maxwell equations, but with $\mathbf E$ replaced by $\mathbf D$ and the charges by just the free charges.
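In symbols, the standard definitions are
$$\mathbf D = \varepsilon_0\mathbf E + \mathbf P, \qquad \nabla\cdot\mathbf E = \frac{\rho_\mathrm{free}+\rho_\mathrm{bound}}{\varepsilon_0}, \qquad \nabla\cdot\mathbf D = \rho_\mathrm{free},$$
so $\mathbf D$ is the field whose sources are the free charges alone; for a linear medium $\mathbf P = \varepsilon_0\chi\mathbf E$, which gives back $\mathbf D = \varepsilon\mathbf E$ with $\varepsilon = \varepsilon_0(1+\chi)$.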
Similar arguments hold for currents and magnetic fields. With this in mind, you see that you need to take $\mathbf D$ in your example because $\mathbf E$ is sensitive also to the polarized charges inside the medium (about which you don't know anything). So the $\mathbf E$ field inside will be $\varepsilon$ times that for the conductor in vacuum. | {
"source": [
"https://physics.stackexchange.com/questions/5304",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/534/"
]
} |
5,332 | The phenomenon of high temperature superconductivity has been known for decades, particularly layered cuprate superconductors. We know the precise lattice structure of the materials. We know the band theory of electrons and how electronic orbitals mix. But yet, theoreticians still haven't solved high Tc superconductivity yet. What is the obstacle to solving it? What are we missing? | One problem is that band theory isn't everything! Crucially, band theory completely neglects the interactions between electrons. The fact that often one can do this and obtain near correct results is actually amazing, and worth several lecture courses to flesh out the reasons. However, it cannot always be correct. In many materials the electron-electron interaction dominates --- a good example is the so-called Mott insulator, where by band structure calculations you would think you get a half-filled band and so a conductor, but because the electrons repel each other so strongly you actually get a grid-locked lattice of electrons which cannot move, because moving any of them would put two electrons on top of each other! The cuprates are known to be Mott insulating when they are undoped; this is good evidence that interactions are very important. Unfortunately, without the massively simplifying assumption that electrons are independent (i.e. non-interacting) it is an almost intractable problem to describe their behaviour; indeed, we know from other strongly interacting systems such as fractional quantum hall systems that it's possible to end up with no electrons at all, but fractions of them --- the possibilities for novel electronic structure are really unimaginable. The 2nd problem which has plagued the field is more technical, which simply that the materials don't behave in universal ways! Although we can point to many superficially similar aspects to many cuprates, it's actually not the case that quantitatively they are the same. For instance, the fabled "linear scaling" is actually incredibly hard to really get --- it's very sensitive on impurities, precise doping levels, etc. The flip-side to this is that if we just look at the qualitative features and ask "what theories predict these?" we actually have quite a few --- marginal Fermi liquid theories, quantum critical theories, strongly coupled gauge theories, Gutzwiller projection theory, etc. All of these will give a superconducting dome, with conducting behaviour at high doping, insulating at low, and some form of anomolous transport. However, experimental signatures are actually very hard to pin down without controversy about what really has been measured, so the debate continues. In addition, the historically long argument has created some unpleasant sociology; some (many?) would claim that actually things are pretty settled, and that their favourite theory is clearly superior. This hasn't helped a consensus to form. | {
"source": [
"https://physics.stackexchange.com/questions/5332",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1978/"
]
} |
5,456 | A thought experiment: Imagine the Sun is suddenly removed. We wouldn't notice a difference for 8 minutes, because that's how long light takes to get from the Sun's surface to Earth. However, what about the Sun's gravitational effect ? If gravity propagates at the speed of light, for 8 minutes the Earth will continue to follow an orbit around nothing. If however, gravity is due to a distortion of spacetime, this distortion will cease to exist as soon as the mass is removed, thus the Earth will leave through the orbit tangent, so we could observe the Sun's disappearance more quickly. What is the state of the research around such a thought experiment? Can this be inferred from observation? | Since general relativity is a local theory just like any good classical field theory, the Earth will respond to the local curvature which can change only once the information about the disappearance of the Sun has been communicated to the Earth's position (through the propagation of gravitational waves). So yes, the Earth would continue to orbit what should've been the position of the Sun for 8 minutes before flying off tangentially. But I should add that such a disappearance of mass is unphysical anyway since you can't have mass-energy just poofing away or even disappearing and instantaneously appearing somewhere else. (In the second case, mass-energy would be conserved only in the frame of reference in which the disappearance and appearance are simultaneous - this is all a consequence of GR being a classical field theory). A more realistic situation would be some mass configuration shifting its shape non-spherically in which case the orbits of satellites would be perturbed but only once there has been enough time for gravitational waves to reach the satellite. | {
"source": [
"https://physics.stackexchange.com/questions/5456",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/217/"
]
} |
5,614 | Can you suggest some references for rigorous treatment of thermodynamics? I want things like reversibility, equilibrium to be clearly defined in terms of the basic assumptions of the framework. | The pioneer of the rigorous treatment of thermodynamics is Constantin Carathéodory. His article (Carathéodory, C., Untersuchung über die Grundlagen der Thermodynamik, Math. Annalen
67, 355-386) is cited everywhere in this context, but probably you want some newer and more modern things. Buchdahl wrote a lot of papers about this subject in the 40's, 50's and 60's. He summarized these in the book:
H.A. Buchdahl, The Concepts of Classical Thermodynamics (Cambridge Monographs on Physics), 1966. There was a recent series of articles on this subject by Lieb and Yngvason which became famous. You can find the online versions of these here , here , here and here :). Finally, I have come across the book T. Matolcsi, "Ordinary Thermodynamics" (since a few friends of mine went to the author's class), which treats thermodynamics in a mathematically very rigorous way. I hope some of these will help you. | {
"source": [
"https://physics.stackexchange.com/questions/5614",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2138/"
]
} |
5,670 | Electricity takes the path of least resistance! Is this statement correct? If so, why is it the case? If there are two paths available, and one, for example, has a resistor, why would the current run through the other path only, and not both? | It's not true. To see this, you can try an experiment with some batteries and light bulbs. Hook up two bulbs of different wattages (that is, with different resistances) in parallel with a single battery: ------------------------------------------
   |                   |                    |
Battery             Bulb 1               Bulb 2
   |                   |                    |
------------------------------------------ Both bulbs will light up, although with different brightnesses. That is, current is flowing through the one with more resistance as well as through the one with less resistance. | {
"source": [
"https://physics.stackexchange.com/questions/5670",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2165/"
]
} |
5,839 | Does magnetic propagation follow the speed of light? E.g. if you had some magnet of incredible strength and attached an iron wire that is one light year long to it, would the other end of the iron wire begin to show a magnetic field in a year, or could it possibly be faster than a year? | In your example, the relevant speed isn't the speed of propagation of disturbances in the magnetic field, but rather the speed of the alignment of iron atoms. You are really asking "Does magnetization of a wire/metal propagate at the speed of light?" The answer is no ; it propagates at the speed at which each individual iron atom can align its polarity. If you are asking, "Do changes in the magnetic field propagate at the speed of light?" The answer is yes ; if a giant, huge, powerful magnet appeared one light year away out of nowhere, then it would take exactly one year for magnets on Earth to feel its pull (however small it may be). That is, it would take one year for the "magnetic force" to reach the Earth. | {
"source": [
"https://physics.stackexchange.com/questions/5839",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2217/"
]
} |
According to the first law of thermodynamics, as sourced from Wikipedia:
"In any process in an isolated system, the total energy remains the same." So when lasers are used for cooling in traps, similar to the description here: http://optics.colorado.edu/~kelvin/classes/opticslab/LaserCooling3.doc.pdf where is the heat transferred? From what I gather of a cursory reading on traps, whether laser, magnetic, etc the general idea is to isolate the target, then transfer heat from it, thereby cooling it. I don't understand how sending photons at an atom or more can cause that structure to shed energy, and this mechanism seems to be the key to these systems. | There are two thermodynamic aspects to laser cooling that are worth mentioning. The first, as others have noted, has to do with the frequency of the light that is absorbed and emitted. In Doppler cooling, the laser is tuned slightly below the frequency that the atom wants to absorb. An atom moving toward the laser sees that light shifted slightly up in frequency, and is thus more likely to absorb it than an atom at rest or moving away from the laser is. When it absorbs the light, it also picks up the momentum associated with that photon, which is directed opposite the momentum of the atom (since the atom is headed in the opposite direction from the light), and thus slows down. When the atom spontaneously emits a photon a short time later, dropping back to the ground state, the emitted photon has exactly the resonant frequency of the atomic transition (in the atom frame). That means that its frequency is a little higher (in the lab frame) than the frequency of the absorbed light. Higher frequency means higher energy, so the atom has absorbed a low-energy photon and emitted a high-energy photon. The difference in energy between the two has to come from somewhere, and it comes out of the kinetic energy of the moving atom. (You might reasonably ask what happens with the momentum of the emitted photon; the emission process also gives the atom a kick in the direction opposite the emitted photon's direction. If this happens to be exactly the same as the direction of the laser, the resulting kick sends the atom back to the same initial velocity; however, the direction of spontaneous emission is random, and as a result, it tends to average out over many repeated absorption-emission cycles. On average, the atom loses one photon's worth of momentum with each cycle, and a corresponding amount of kinetic energy.) The other aspect, which you didn't ask about, but is worth mentioning, is the entropy. On the surface, it might seem that even though energy is conserved, there would be a thermodynamics problem here because the system moves from a higher entropy (lots of fast-moving atoms) to a lower entropy (lots of slow-moving atoms). The difference is made up in the entropy of the light field. Initially, you have a single monochromatic beam of photons, all with the same frequency, all headed in the same direction, which is about as low-entropy as it gets. After the cooling, those photons have been scattered out into all different directions, and with a range of frequencies, so the entropy of the light is much greater, and more than makes up for the decrease in the entropy of the atoms. That's probably way more detail than you wanted/needed, but there you go. | {
"source": [
"https://physics.stackexchange.com/questions/5851",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2226/"
]
} |
5,888 | I heard on a podcast recently that the supermassive black holes at the centre of some galaxies could have densities less than water, so in theory, they could float on the substance they were gobbling up... can someone explain how something with such mass could float? Please see the link below for the podcast in question: http://www.universetoday.com/83204/podcast-supermassive-black-holes/ | Well, it can't (float), since a black hole is not a solid object that has any kind of surface. When someone says that a supermassive black hole has less density than water, one probably means that, since the density goes like
$\frac{M}{R^3}$
where M is the mass and R is the typical size of the object, then for a black hole the typical size is the Schwarzschild radius which is $2M$, which gives for the density the result $$\rho\propto M^{-2}$$ You can see from that, that for very massive black holes you can get very small densities (all these are in units where the mass is also expressed in meters). But that doesn’t mean anything, since the Black Hole doesn’t have a surface at the Schwarzschild radius. It is just curved empty space. | {
"source": [
"https://physics.stackexchange.com/questions/5888",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2247/"
]
} |
6,108 | I am looking for a good source on group theory aimed at physicists. I'd prefer one with a good general introduction to group theory, not just focusing on Lie groups or crystal groups but one that covers "all" the basics, and then, in addition, talks about the specific subjects of group theory relevant to physicists, i.e. also some stuff on representations etc. Is Wigner's text a good way to start? I guess it's a "classic", but I fear that its notation might be a bit outdated? | There is a book titled "Group theory and Physics" by Sternberg that covers the basics, including crystal groups, Lie groups, representations. I think it's a good introduction to the topic. To quote a review on Amazon (albeit the only one): "This book is an excellent introduction to the use of group theory in
physics, especially in crystallography, special relativity and
particle physics. Perhaps most importantly, Sternberg includes a
highly accessible introduction to representation theory near the
beginning of the book. All together, this book is an excellent place
to get started in learning to use groups and representations in
physics." | {
"source": [
"https://physics.stackexchange.com/questions/6108",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/661/"
]
} |
6,157 | I'm trying to amass a list of physics books with open-source licenses, like Creative Commons, GPL, etc. The books can be about a particular field in physics or about physics in general. What are some freely available great physics books on the Internet?
edit: I'm aware that there are tons of freely available lecture notes online. Still, it'd be nice to be able to know the best available free resources around. As a starter: http://www.phys.uu.nl/~thooft/theorist.html jump to list sorted by medium / type Table of contents sorted by field (in alphabetical order): Chaos Theory Earth System Physics Mathematical Physics Quantum Field Theory | Books Galileo and Einstein very interesting book, 200 pages, by Michael Fowler , Text for Physics 109, Fall 2009 (from Babylonians and Greeks to Einstein) Physics Made Easy Karura notes Classical and quantum mechanics via Lie algebras by Arnold Neumaier, Dennis Westra , 502 pages, (arxiv) by Hans de Vries: 'Physics Quest' Understanding Relativistic Quantum Field Theory - I love this 'book in progress' to understand Special Relativity , and beyond. To see how a real Lorentz contraction do happen (ch. 4) and how magnetic field is induced by electrostactic field and Non-simultaneity (it is like a Coriollis effect) by Benjamin Crowell: 'Light and Matter' - General Relativity explore other physics topics here http://www.lightandmatter.com/ by Bo Thidé: Electromagnetic Field Theory - advanced Electrodynamics textbook Elecromagnetic Fields and Energy MIT Hypermedia Teaching Facility, by Herman A. Haus and James R. Melcher (with media) HyperPhysics - everything , in short. the physics hypertextbook detailed online book, very interesting Work in Progress. Relativity - The Special and General Theory by Albert Einstein (1920) Feynman Lectures (pdfs) ( final index ) Wikipedia Physics a portal to start digging. A colaborative gigantic work. WikiBooks -SR a textbook on Relativity. WikiSource - Relativity Portal find here "The Measure of Time" by Henri Poincaré and many other original sources. Stanford Encyclopedia of Philosophy a plethora of info related to physics (for ex. singularities ) EXPLORING THE BIOFLUIDDYNAMICS OF SWIMMING AND FLIGHT David Lentink The Physics of Waves by HOWARD GEORGI of Harvard The Physics of Ocean Waves (for physicists and surfers) , by Michael Twardos at UCI Photonics - The Basics and Applications 92 pages , University of Pennsylvania Photonic Crystals: Molding the Flow of Light 305 pages, Joannopoulos et al, Princeton Univ Press Computational Genomics Algorithms in Molecular Biology, Lecture notes by Ron Shamir (pdfs) Motion Mountain by Christoph Schiller Journals open access and online collections PSE-list-open-access-journals Directory of Open Access Journals free, full text, quality controlled scientific and scholarly journals (6286, been 2735 searchable at article level) MathPages - lectures on various subjects in physics and mathematics. livingreviews journal articles by invitaton on relativity and beyond livingreviews blog about the journal articles Calphysics research on the electromagnetic quantum vacuum (with care, controversial material) MIT - OpenCourseWare Several courses available MIT OCW fundamentals-of-photonics-quantum-electronics download pdfs Sources to use with precaution Preprints ARXIV door to papers that I cannot afford (sometimes good ideas) -- Cornell univ controlled I follow this archive thru this MIT's blog The Physics arXiv Blog VIXRA free to post the ugly, the bad, the crazy, and sometimes good ideas Independent researchers can publish here. The arXiv is usually closed to authors without academic affiliation. Portals Archive.org Access to a world of original digitized books, and much more. 
NASA ADS Abstract Data Service search Scribd - a generic social publishing site where I find books (scientific/technical) with full or partial access. scholar.google.com from the giant that is changing the observable universe of Human beings Cosmos Portal from Digital Library Encyclopedia of Earth from Digital Library NanoHub - A resource for nanoscience and technology Multimedia youtube Berkeley Channel with courses Richard Feynman - Science Videos - 4 original videos (recorded at Auckland) arguably the greatest science lecturer ever. Videos for Shiraz's lectures on String Theory Leonard Susskind - Modern Theoretical Physics from his "physics for everyone" blog Fundamentals of Nanoelectronics NanoHub - Lectures (Purdue Univ ref. ECE 495N)
including Lecture 10: Shrödinger's Equation in 3-D (mp4) Math Multivariable calculus and vector analysis A set of on-line readings (Interactive click and drag with LiveGraphics3D) ; explore tab Topics KhanAcademy (videos) mission: to deliver a world-class education to anyone anywhere Math and physics online tools: Online Latex Equation Editor (right click the result and apply anywhere) wolframalpha - computational knowledge engine, do you want to calculate or know about? sage online - support a viable open source alternative to Magma, Maple, Mathematica, and MATLAB. Euler Math Toolbox free software for numerical and for symbolic computations geogebra - Free mathematics software for learning and teaching Modeling and simulation OpenModelica Physical modeling and simulation environment Elmer Open Source Finite Element Software for Multiphysical Problems ( examples ) Mason Multiagent based simulation (IA) ECJ Evolutionary Computation (IA) Breve A 3d Simulation Environment for Multi-Agent Simulations and Artificial Life
(IA) NanoHub-Periodic Potential Lab solves the time independent Schroedinger Equation in a 1-D spatial potential variation demonstrations.wolfram 7050 applets Astronomy and astrophysics Books and reviews Cosmology today-A brief review (arxiv 2011)
This is a brief review of the standard model of cosmology. We first introduce the FRW models and their flat solutions for energy fluids playing an important role in the dynamics at different epochs. We then introduce different cosmological lengths and some of their applications. The later part is dedicated to the physical processes and concepts necessary to understand the early and very early Universe and observations of it. review of Big Bang Nucleosynthesis (BBN) (arxiv 2008) Portals astro-canada Introduction to astronomy, light, instruments, etc. Data simbad search data on celestial bodies with the proper tools. NASA PDS: The Planetary Data System data related to Nasa missions Sky viewers Skyview , Nasa SkyView is a Virtual Observatory on the Net WWT World Wide Telescope, Microsoft Simulation and presentations Celestia free space simulation that lets you explore our universe in three dimensions. astrolab presentations_astronomiques (FR) Other resources Kirk McDonald page at Princeton.edu a handful of resources on EM,QED,QM (+-5Gb ;-) Springerlink's LaTeXsearch you can search articles by using latex formulas input | {
"source": [
"https://physics.stackexchange.com/questions/6157",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1814/"
]
} |
6,197 | In general relativity, light is subject to gravitational pull. Does light generate gravitational pull, and do two beams of light attract each other? | The general answer is "it depends." Light has energy, momentum, and puts a pressure in the direction of motion, and these are all equal in magnitude (in units of c = 1). All of these things contribute to the stress-energy tensor, so by the Einstein field equation, it is unambiguous to say that light produces gravitational effects. However, the relationship between energy, momentum, and pressure in the direction of propagation leads to some effects which might not otherwise be expected. The most famous is that the deflection of light by matter happens at exactly twice the amount predicted by a massive particle, at least in the sense that in linearized GTR, ignoring the pressure term halves the effect (one can also compare it a naive model of a massive particle at the speed of light in Newtonian gravity, and again the GTR result is exactly twice that). Similarly, antiparallel (opposite direction) light beams attract each other by four times the naive (pressureless or Newtonian) expectation, while parallel (same direction) light beams do not attract each other at all. A good paper to start with is: Tolman R.C., Ehrenfest P., and Podolsky B., Phys. Rev. 37 (1931) 602 . Something one might worry about is whether the result is true to higher orders as well, but the light beams would have to be extremely intense for them to matter. The first order (linearized) effect between light beams is already extremely small. | {
"source": [
"https://physics.stackexchange.com/questions/6197",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2328/"
]
} |
6,202 | Photons are massless, but if $m = 0$ and $E=mc^2$ , then $E = 0c^2 = 0$. This would say that photons have no energy, which is not true. However, given the formula $E = ℎf$, a photon does have energy as $ℎ$ is a non-zero constant and photons with a zero frequency do not exist. What am I missing here, or does the famous formula not apply to photons? | There are two explanations possible stemming from the fact that the definition of $m$ in the formula is ambiguous. Well, perhaps Einstein had only one of the two meanings in mind in his original paper but I'm afraid I wouldn't know that as I haven't read it. Be that as it may one should recall that special relativity mixes space and time and therefore also momentum and energy and the full formula relating all these fundamental quantities has to be $$E^2 = m_0^2c^4 + p^2c^2$$ where $m_0$ is the rest mass of the particle (zero for photons) whose importance lies in the fact that it is invariant w.r.t. Lorentz transformations (rotations and boosts).
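Setting $m_0 = 0$ in this relation leaves
$$E = pc,$$
so a photon carries energy through its momentum rather than through a rest mass; combining this with $E = hf$ gives the photon momentum
$$p = \frac{hf}{c} = \frac{h}{\lambda},$$
so the two formulas in the question are perfectly compatible once the full energy–momentum relation is used.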
So the first explanation is that the famous formula only holds in the object's rest frame ($p=0$). But such a rest frame isn't available for photons, so that formula indeed isn't valid for them. The other explanation is through the concept of relativistic mass. In that case the $m$ takes on the meaning of apparent mass, because the faster the object goes the harder it will be to accelerate it (because of the finite speed of light). So formally one can still talk about the photon having a relativistic (or effective) mass $m = E/c^2$. But this concept of mass runs into all kinds of problems, so its usage is discouraged. | {
"source": [
"https://physics.stackexchange.com/questions/6202",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
6,220 | Two towns are at the same elevation and are connected by two roads of the same length. One road is flat, the other road goes up and down some hills. Will an automobile always get the best mileage driving between the two towns on the flat road versus the hilly one, if operated by a perfect driver with knowledge of the route? Is there any hilly road on which a better mileage can be achieved? | I believe, the answer is a small but quantifiable, yes , there is a non flat road configuration that would lead to better gas mileage between any two points at the same height. I have numerically solved for such an optimal path. I believe I can give a nice explanation of why that is, but it will take some work, so bear with me. Granted, you can only expect to save about $0.01 on fuel by taking such an optimal path, more or less independent of the distance you need to travel. tl;dr: There is an optimal velocity to travel at to minimize fuel consumption over a fixed distance. If we next ask what the optimal path is between two fixed points assuming we start and end with zero velocity, the answer is roughly, to speed up to the optimal speed rather quickly, maintain that speed for most of the distance, then slow down at the end. If we allow hills, the best road will have a downward slope to start to help us get up to speed, and an upward slope at the end to help us slow down. This allows for better fuel efficiency. I found this question very interesting, albeit challenging. In what follows I hope to walk you through my thought process and hopefully introduce some clever techniques and results. As an overall strategy, I thought the original question was too hard. So, in the grand tradition of science, I started to break down the problem into easier ones. Trying to determine the optimal path was too difficult, as it seemed like I would need to know the optimal velocity profile first. So, then I tried to figure out the optimal velocity profile for a flat path. That itself was difficult, so I instead started with trying to figure out the optimal speed for a car travelling a fixed distance at constant speed. That is where we will begin. Optimal Fixed Speed for Fixed Distance We have all heard that there is an optimal speed to travel at to maximize fuel efficiency. But why is that? There ultimately has to be some kind of trade off, wherein we pay a penalty for going too slow, and pay a penalty for going too fast, so the optimum can be somewhere in between. It is easy to see why going to fast is bad, the faster we go, the more we lose out to air resistance. On the other end, if we go very very slowly, this is also inefficient, since it will take a very long time for us to reach our goal, during which we will lose fuel to having the car on and running. As I explored in my answer to How effective is speeding? , a very simple model that does a decent job of fitting measured fuel efficiency curves is to assume that the power drawn by the car takes the form $$ P = A v^3 + P_0 $$ here the $\propto v^3$ term is the cost of air resistance, since $F_{\text{air}} \propto v^2, P \propto F v \propto v^3$ , and the $P_0$ denotes our constant power losses due to having the car on. As I showed in that other answer, this does a decent job of modelling the observed fuel efficiencies. Trying to be a bit more accurate this time, let's include rolling friction and rotational losses in our car, using as our model of the force $$ F = A + Bv + Cv^2 $$ as per this EPA report (p. 8). 
The benefit then is that we can use their parameters for a 2004 Honda Civic DX (p. 99) $$ A = 105.47 \text{ N} \quad B = 5.4276 \text{ N/mps} \quad C = 0.2670 \text{ N/mps$^2$} \quad m = 1239 \text{ kg} $$ and obtain some level of modest realism. It remains to determine $P_0$ , which I set to $P_0 = 6 \text{ kW}$ in order to obtain a sane value for our optimal speed. Our resulting model gives, for the fuel efficiency of our model car: with its peak at 41 mph in this case. This also qualitatively agrees with observed fuel efficiency curves, i.e. this wikipedia figure or this post from automatic Now it should be clear that if we are interested in going a decent distance, our optimal trajectory will be to speed up to around 41 mph, then maintain that speed for most of the trip, and slow down at the end. This ensures the optimal tradeoff between the time we need the car to be on and the loses due to the various friction forces our car feels. This is going to give a very different character to my answer than to Edwards, as his strategy was to go as slow as possible to avoid the frictional losses. Unfortunately, taken to the extreme, this would mean we waste a lot of fuel just having the car on. Granted, one could imagine throwing the car in neutral and turning the engine off, but this isn't a safe or realistic strategy. In what follows, I will try to see what kind of answer we get, assuming we don't turn off the car or do any particularly clever hypermiling tricks. Optimal Control for Fixed Distance and Flat Terrain Having figured out what the optimal fixed speed for a fixed distance, we would like next to figure out what the optimal velocity profile is for a fixed distance, assuming we start and end at zero velocity. This turns out to be a problem of optimal control . To formally state the problem, we have the state of our car, $v(x)$ the velocity as a function of distance between two fixed points $x=0, x=X$ . We have control variable $u(x)$ which will roughly correspond to how much we are pressing on the accelerator or brake. We are searching for the $u(x)$ that minimizes our fuel consumption, which we will take as: $$ F = \int dt\, r(v,u) = \int_0^X dx\, \frac{r(v,u)}{v} $$ where $r(u,v)$ is the rate at which we consume fuel for a given $v,u$ . Our car is described by the given dynamics. $$ \dot v = a(u,v) = u - \frac{A}{m} (v>0) - \frac{B}{m} v - \frac{C}{m} v^2 $$ here we have the $A,B,C$ terms representing rolling resistance, rotational friction and air resistance, and $u$ is the extra acceleration provided by the car. What shall we take as our fuel consumption rate? I had a lot of trouble coming up with this, but I think a good definition is $$ r(v,u) = m v u (u>0) + P_0 $$ The intuition is that the fuel consumption should be proportional to the power our car needs to provide, our car provides the acceleration $u$ , so power $P=mvu$ . But, we only consume fuel if we have positive acceleration. When we brake, unless we have regenerative brakes, that energy is lost and not recovered, so I threshold our fuel consumption function to positive values of $u$ . Another important caveat is that cars have a maximum and minimum value of $u$ they can apply. For our model car, I will take $u$ to be bounded in the interval $[-b,a] = [-7.2, 3.2]$ which I obtained from 0-60mph acceleration times ( $u \leq a$ ) and braking distance measurements for honda civics. ( $u \geq -b$ ) Optimal Control Problem At this point, we have phrased our problem as a problem in the calculus of variations . 
Minimize the fuel use over all possible accelerator profiles $u(x)$ subject to the physical constraints and the dynamical constraint. $$ \min_{u(x) \in [-b, a]} \int dt\, r(v,u) \quad \text{subject to } \dot v = a(v,u), v(0) = 0, v(X) = 0$$ at this point, we could proceed by adding a Lagrange multiplier for our contraints, taking the functional derivative and finding things corresponding to the Euler Lagrange equations for this system. But that was too hard for me. Didn't work out. Next we could try to find conditions our optimum must satisfy by applying Pontryagin's minimum principle and working things out that way, but again this proved too difficult for me. So, instead, let's just opt for a numerical solution, which we will find by imploying dynamic programming . Dynamic Programming The gist here is that we don't have to solve the whole problem at once, instead let's do something akin to a proof by induction , we will try to break the problem down into its smallest piece, and write the solution to that small problem in terms of the solution to a slightly smaller problem. This will set up a sort of recurrence , which in combination with a solution to the smallest possible problem will allow us to brute force the solution to any problem we want. We start by replacing our problem with an even larger one. Let's seek optimal paths starting from any intermediate $x$ with any intermediate velocity $v$ and write $$ F_{x,v} [ u(x) ] = \int_x^X dx\, \frac{ r(u,v) }{v} \quad v(x) = v, v(X) = 0 $$ so that $F_{x,v} [ u(x) ]$ is the fuel consumed for a policy $u(x)$ starting at $x=x$ with $v(x)=v$ and ending at $x=X$ with $v(X)=0$ . In terms of this we can define the cost for these optimal partial paths. $$ C(x,v) = \min_{ u(x) \in [-b,a] } F_{x,v} [ u(x) ] $$ So that $C(x,v)$ gives us the fuel used for an optimal path starting at $x$ with velocity $v$ going to $x=X$ with velocity $v=0$ . It seems we've just made our lives worse. Instead of determining the cost for a single optimal path ( $C(0,0)$ in this case), we've created the problem of solving a whole set of optimal trajectories. The magic happens next. We imagine partitioning $x$ and $v$ into a discrete grid, $x_i$ , $v_i = v(x_i)$ , and try to write the solution at one of our grid points in terms of the solution at the next grid point. $$ C(x_i, v_i) = \min \left\{ \int_{x_i}^{X} dx\, \frac{r(u,v)}{v} \right\} = \min \left\{ \int_{x_i}^{x_{i+1}} dx\, \frac{r(u,v)}{v} + \int_{x_{i+1}}^{X} dx\, \frac{r(u,v)}{v} \right\} $$ but that second term is just $C(x_{i+1}, v_{i+1})$ , and our integral is a single step so we have: $$ C(x_i, v_i) = \min \left\{ \frac{r(u,v)}{v} \Delta x + C(x_{i+1}, v_{i+1}) \right\} $$ Alas, we've made progress. We have the value of $C(x_i, v_i)$ as a minimum of the fuel we use this step plus the optimal value at the next grid point. Knowing this, and that at the very end of our journey we have $$ C(X, v) = \begin{cases} 0 & v = 0 \\ \infty & v \neq 0 \end{cases} $$ we can proceed to compute $C(x,v)$ for all values of $x$ . We just need to define what we mean by $v$ and $u$ . For that, we need to ensure that we satisfy our dynamical contraints and do a decent job of approximating the integral by taking $r$ at a single point. 
So we will take $$ v = \frac 12 \left( v_{i-1} + v_{i} \right) $$ $$ \Delta x = x_{i} - x_{i-1} = \left( v + \frac 12 \dot v \Delta t \right) \Delta t $$ $$ \dot v = \frac{ v_{i} - v_{i-1} }{ \Delta t} = a(v,u) $$ These allow us to treat $\Delta x$ as a constant, and determine $u, r(v,u), a(v,u)$ all in terms of $v_i$, and then take the minimum over all values of $v_i$ on the right. This approach is similar to the one adopted in this paper (doi) , which coincidentally does a more complicated model with gear ratios and switching and the like and is worth checking out. Dynamic Programming Results for Flat Terrain We can then code the whole thing up and solve for the optimal profiles for a fixed distance. I'll take as our distance 1 mile. This is what we get for our velocity and accelerator profiles: Notice that we have basically three stages. In the beginning, we accelerate rather rapidly up to nearly optimal speed. Then in the middle portion we travel at nearly optimal speed in order to cover ground, and then at some point, we coast to reduce our speed, and finally brake to end up with zero velocity at the end. We can also see our fuel use as a function of distance and compute the total fuel use and fuel economy for this trip: In this case, we use 0.029 gallons , for an average fuel efficiency of 34.3 mpg . Notice however that the majority of our fuel use comes from the main stretch at near optimal speed. There isn't anything we are going to be able to do about that portion. If we hope to make any strides towards savings, it is going to have to be in reducing the fuel use in the initial portion of the trajectory. Just that first part where we accelerate up to speed takes up 0.008 gallons of gas or about $0.02 worth at current prices. This is really the only place we should hope to see some kind of benefit, so even if we do manage to do better, it isn't going to be by much. Intuition for Hill Answer Alright, having worked out what we should do on flat ground, now we are really in a position to tackle the question at hand. What would be the optimal design for
the height of the road in between our two endpoints? Is there any benefit to be gained? Now that we have set up the framework, it is straightforward to just run the numerical optimization problem again, this time with another dimension to our grid corresponding to road height, which we will do shortly, but let's see if we can
extract some intuition based on our answer for flat terrain. Notice that the optimal control path has basically three stages. Stage I is to accelerate up to nearly optimal speed, in which we incur our greatest fuel hit. Once up to speed, in Stage II we maintain that speed, which will cost us some small but constant and nearly optimal fuel consumption rate. And finally, as we approach our target, in Stage III we decelerate, first by nearly coasting and finishing up with some brakes. So, can we do better with hills? Well, since we are dealing with a car, it hurts us to accelerate using the engine, while we don't recover anything for braking, so if we are going to do better we need to do something about that initial acceleration. But, we could just use a downward slope to achieve that initial acceleration. Let gravity do the work for us and we won't incur any fuel penalty. The physics is very similar to a classic demo they show in intro mechanics classes. Consider the following setup: We have two ramps. One with a gentle constant slope the whole way and one with a dip in the middle. Written in big red letters across the thing is "CONSERVATION OF ENERGY". The question that is asked of students is: Which ball will reach the end first? Think about it a moment before scrolling down. Here is the result: Notice that the ball that dips down handily beats the first ball. Now, this demo is usually fun because students all say that the balls will take the same amount of time, largely because of the prompting caused by the "CONSERVATION OF ENERGY" written across the apparatus. But, while energy is conserved, the time it takes to travel between the two points is not. Since some of the fuel loss we incur is some constant power $P_0$ for having our car on in our model, we could benefit from essentially the same design. Use gravity to reduce the time it takes for us to go from A to B, and assist in coming up to nearly optimal speed, and we might be able to save on some fuel. In particular, as we suggested a moment ago, the real trick would be to try to mitigate the large fuel cost of the acceleration at the beginning. If only we could use a sloping road to speed us up, then just maintain constant nearly optimal speed for most of the trajectory and then slope up at the end, wherein again gravity would help us brake without having to apply the brakes ourselves. In fact, let's look at the integrated acceleration of our optimal path as a function of distance: Here I have even divided through by $g$ , so that this is a path on which our car would feel the same acceleration profile it takes in the optimal trajectory for the flat terrain case. Our truly optimal road should look qualitatively like this. We will use an initial downward slope to speed the car up to nearly optimal speed, then we will maintain that speed for most of the trajectory. When we get to the end, we'll use the upward slope to help slow the car down. Notice that our integrated acceleration doesn't quite make it back up to $h=0$ . This is due to our frictional losses, and it will mean that if we try to use a downward dip, we'll have to throw in some acceleration at the end in order to make it back up. In our flat trajectory, we didn't need to use extra fuel at the end, since we applied our brakes. Here we'll have to give it some oomph at the end, but it should only take some fraction of the fuel it took in the flat case to get us up to speed.
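To back up the ramp-demo intuition with numbers, here is a toy frictionless simulation of my own (a level track versus a track that dips and comes back up, with made-up dimensions — not the apparatus in the missing figure): a ball enters each track at the same speed, and we compare arrival times.

```python
# Quick numerical check of the two-ramp demo: same endpoints, same entry speed,
# frictionless, but one track dips down in the middle.
import math

g, L, v0 = 9.81, 2.0, 1.0          # gravity, track length (m), entry speed (m/s)
depth = 0.3                        # how far the second track dips (m)

def travel_time(height, n=20000):
    """Integrate dt = ds / v along the track, using energy conservation for v(x)."""
    t, dx = 0.0, L / n
    for i in range(n):
        x = (i + 0.5) * dx
        h = height(x)
        dh = (height(x + 1e-6) - height(x - 1e-6)) / 2e-6   # numerical slope
        v = math.sqrt(v0**2 - 2 * g * h)                    # faster wherever the track is lower
        t += math.sqrt(1 + dh**2) * dx / v
    return t

flat = travel_time(lambda x: 0.0)
dipped = travel_time(lambda x: -depth * math.sin(math.pi * x / L))
print(f"flat track: {flat:.2f} s, dipped track: {dipped:.2f} s")
# The dipped track arrives first: it trades height for speed in the middle section.
```

Same endpoints and the same energy budget, but the dipped track spends its middle section moving faster and arrives sooner — which is exactly the time-saving effect we want to use against the constant $P_0$ overhead.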
In the language of this hill thing, the downward slope gets us up to speed, and while it's true we won't come all the way back up, we will certainly make it some or most of the way back up the final hill. Let's now actually compute the optimal path for the road, by extending our dynamic program. Optimal Hill Design We'll use the same numeric technique we used before, but with a new control variable $h(x)$ , the height of our road as a function of distance. We have to modify our dynamical constraint; this time we will have $$ \dot v = a(v,u,h) = u - \frac{A}{m} (v>0) - \frac{B}{m} v - \frac{C}{m} v^2 - g \frac{dh}{dx} $$ where we have added another term to our acceleration given by the shape of our road. Our fuel use will stay the same, as we only incur a fuel cost for using our accelerator. But now we will need to solve the larger dynamic program: $$ C(x_i,v_i,h_i) = \min \left\{ \frac{r(v,u,h) }{v} \Delta x + C(x_{i+1}, v_{i+1}, h_{i+1}) \right\} $$ We will impose our dynamical constraints in much the same way as above, but now our minimization will be a two-dimensional minimization over candidate velocities and heights. In either case, the program still runs — it just takes some more time — and we find our optimal road, which honestly looks pretty similar to what our intuition suggested. As before, we can look at the optimal $v(x), u(x)$: our velocity profile looks pretty similar. Our accelerator shows some goofy behavior, but overall has the kinds of trends we expected. We let the car more or less glide down the hill, then maintain a constant nearly optimal speed, and then let the hill slow us down but add some juice at the end. The real test will be to compute the fuel use in this case: and TADA, we use slightly less fuel, 0.027 gallons , corresponding to an average efficiency of 37.6 mpg if we allow the road to find its own optimal configuration. This is 0.002 gallons less than the flat terrain case, or about half a cent's worth of gas at current prices. This saving should also be more or less independent of the distance we are hoping to drive, as in either case we have a long stretch of time where we are travelling at nearly optimal speed; the difference is just in how we mitigate the initial speed-up relative to the flat terrain case. Conclusion Yes, you can do slightly better, about 1 cent better. The trick is to let the road speed the car up, then just maintain a nearly optimal speed for most of the journey, and let the hill mostly slow you down at the end. This lets you come out slightly ahead. All of the code used to generate these results is available as an IPython Notebook . Very fun question. References: Previous Answer on fuel efficiency at constant speed; EPA report on car efficiency (pdf); Notes on Optimal Control (pdf); Optimal Control of Automobiles for Fuel Economy (doi); IPython Notebook of code for this answer (ipynb viewer). Appendix A: Hypermiling As Floris suggests in the comments, we might be interested in how the story changes if we allow our car driver to try to be maximally efficient and turn off their engine when it is not in use. We can solve this case as well. In fact, we can model this scenario by just modifying our fuel consumption function $$ r(u,v) = \left( m v u + P_0 \right) (u>0) $$ Now, we actually incur no penalty if we are not drawing any acceleration from our engine. For our flat trajectory, the optimal strategy becomes one where our driver just turns the engine on in pulses, and otherwise coasts for most of the journey.
This actually cuts our fuel consumption nearly in half: using only 0.013 gallons , or an average fuel efficiency for this route of 76.8 mpg . | {
"source": [
"https://physics.stackexchange.com/questions/6220",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2072/"
]
} |
6,329 | "Heat rises" or "warm air rises" is a widely used phrase (and widely accepted phenomenon). Does hot air really rise? Or is it simply displaced by colder (denser) air pulled down by gravity? | The mechanism responsible for the rising of hot air is flotation: Hot air is less dense than cold air and hence air pressure will exert an upwards force, in the same way air rises in water.
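As a rough back-of-the-envelope check (my own numbers, treating air as an ideal gas), the upward acceleration of a parcel of warm air surrounded by cooler air is set by the density contrast:

```python
# A rough, self-contained estimate (ideal-gas approximation, illustrative temperatures):
# buoyant acceleration of a parcel of warm air surrounded by cooler air.
g, P, M, R = 9.81, 101325.0, 0.02896, 8.314   # gravity, pressure, molar mass of air, gas constant

def density(T_kelvin):
    return P * M / (R * T_kelvin)             # ideal-gas density at pressure P

rho_cold = density(293.0)   # surrounding air at 20 C
rho_hot  = density(323.0)   # parcel heated to 50 C
a_up = g * (rho_cold - rho_hot) / rho_hot     # net of weight and the pressure (buoyant) force
print(f"upward acceleration ~ {a_up:.2f} m/s^2")   # about 1 m/s^2 for this 30 C contrast
```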
Now if cold air were magically unaffected by gravity, then it would not be able to exert pressure on the hot air and thus it would not rise. The statement that "heat rises", by the way, is not universally true.
Look at water. Here, it is the cold water that is less dense than warm water (at least in the temperature regime of importance to freezing). In winter, when water gets colder, the cold water rises to the top and eventually will freeze, while the water below remains liquid for the moment. | {
"source": [
"https://physics.stackexchange.com/questions/6329",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2375/"
]
} |
6,438 | A question in four parts. What are the main problems which supersymmetry purports to solve? What would constitute lack of evidence for SUSY at the proposed LHC energy scales (e.g. certain predicted superpartners are not in fact observed)? Are there alternative theoretical approaches which would address the SUSY problem set and which would still be credible in such an LHC no-SUSY-scenario? Where would LHC-disconfirmation of SUSY leave String Theory? I would like to think that these four points could be taken together as one question. | First, let me emphasize something that is being covered by a thick layer of misinformation in the media these days: it is totally premature to conclude whether the LHC will see SUSY or not. The major detectors have only collected 45/pb (and evaluated 35/pb) of the data. The "slash pb" should be pronounced as "inverse picobarns". The LHC is designed to collect hundreds or thousands times more data than what it has recorded so far, and it should eventually run at a doubled energy (14 TeV total energy instead of the current 7 TeV total energy). Each multiplication of the integrated luminosity (number of collisions) by 10 corresponds to the access of new particles whose masses are approximately 2 times larger or so. It means that the LHC will be able to decide about the existence of new particles at masses that are 4-16 times higher than the current lower bounds (16 also includes the likely upgrade from 2x 3.5 TeV to 7 TeV). There are at least two "mostly independent" parameters with the dimension of mass in SUSY - I mean $m_0$ and $m_{1/2}$. So the number from the previous sentence should really be squared, and in some sensible counting and with a reasonable measure, the LHC has only probed about 1/16 - 1/256 of the parameter space that is accessible to the LHC over its lifetime. So the only thing we can say now is that SUSY wasn't discovered at an extremely early stage of the experiment - which many people have hoped for but this possibility was never supported by anything else than a wishful thinking. Whether the LHC may see SUSY may remain an open question for several years - unless the LHC will see it much sooner than that. It's an experiment that may continue to 2020 and beyond. We don't really know where the superpartner masses could be - but they may sit at a few TeV and this would still mean that they're accessible by the LHC. Now, your questions: What SUSY helps to solve First, SUSY is a natural - and mostly inevitable - consequence of string theory, the only consistent quantum theory that includes gravity as well as the Yang-Mills forces as of 2011. See http://motls.blogspot.com/2010/06/why-string-theory-implies-supersymmetry.html In this context, supersymmetry is needed for the stability of the vacuum and other things, at least at a certain level. For other reasons, to be discussed below, it's natural to expect that SUSY should be unbroken up to LHC-like energy scales (i.e. that it should be visible at the LHC) - but there's no sharp argument that might calculate the superpartner scale. Some string theorists even say that it should be expected that supersymmetry is broken at a very high scale (near the GUT scale or Planck scale) - because this is a "generic behavior" in the stringy landscape (the "majority" of the minima have a high-scale SUSY breaking which would make SUSY unavailable to any doable experiments) - so these proponents of the anthropic reasoning don't expect SUSY to be seen at the LHC. 
However, more phenomenological considerations make it more natural for SUSY to be accessible by the LHC. Why? There are several main arguments: SUSY may offer a very natural dark matter particle candidate, namely the LSP (lightest supersymmetric particle), most likely the neutralino (the superpartner of the photon or Z-boson or the Higgs bosons, or their mixture), that seems to have the right approximate mass, strength of interactions, and other things to play the role of the majority of the dark matter in the Universe (so that the Big Bang theory with this extra particle ends up with a Universe similar to ours after 13.7 billion years). See an article about SUSY and dark matter: http://motls.blogspot.com/2010/07/susy-and-dark-matter.html Also, SUSY with superpartners not far from the TeV or LHC energy scale improves the gauge coupling unification so that the strengths of the couplings get unified really nicely near the GUT scale (and maybe incorporated into a single and simple group at a higher energy scale not far from the Planck scale), see: http://motls.blogspot.com/2010/06/susy-and-gauge-coupling-unification.html The unification in the simplest supersymmetric models is only good if the superpartners are not too far from the TeV scale - but if they're around 10 TeV, it's still marginally OK. The same comment with the same value 10 TeV also holds for the dark matter job of the neutralinos discussed above. Finally and most famously, SUSY with superpartner masses not far from the TeV or LHC scale stabilizes the Higgs mass - it explains why the Higgs mass (and, consequently, the masses of W-bosons and Z-bosons, among other particles) is not driven towards a huge energy scale such as the Planck scale by the quantum corrections (with loops of particle-antiparticle pairs in the Feynman diagrams). Those otherwise expected quantum corrections get canceled at the TeV accuracy if the superpartner masses are near a TeV - and the resulting Higgs mass may then be naturally in the expected 100 GeV - 200 GeV window with an extra 10:1 luck (which is not bad). The lighter the superpartner masses are, the more "naturally" SUSY explains why the Higgs mass remains light. But there is no strict argument that the superpartners have to be lighter than 1 TeV or 10 TeV. It just "sounds strange" if they were much higher than that because a non-negligible portion of the hierarchy problem would remain. See a text on SUSY and the hierarchy problem: http://motls.blogspot.com/2010/07/susy-and-hierarchy-problem.html One may say that experiments already do disprove 99.999999999+ percent of the natural a priori interval for a conceivable Higgs mass in the Standard Model. SUSY changes this counting - the probability that the Higgs mass ends up being approximately as low as suggested by the electroweak observations becomes comparable to 100 percent according to a SUSY theory. To agree with other available experiments, SUSY needs to adjust some other parameters but at good points of the parameter space, none of the adjustments are as extreme as the adjustment of the Higgs mass in the non-supersymmetric Standard Model. Can we decide whether SUSY is there at the LHC? SUSY may hide for some time but the LHC is simply scheduled to perform a certain number of collisions at a certain energy, and those collisions may eventually be studied by the most up-to-date methods and the evidence for SUSY will either be there in the data or not. 
Some phenomenologists often want to stay very modest and they talk about numerous complex ways how SUSY may keep on hiding - or remain de facto indistinguishable from other models. However, sometimes the very same people are capable of reverse-engineering a randomly constructed man-made model (fictitiously produced collision data) within a weekend: these are the games played at the LHC Olympics. So I don't really expect too much hiding. With the data, the fate of the LHC-scale SUSY will ultimately be decided. Obviously, if SUSY is there at the LHC scale, the LHC will eventually be discovering fireworks of new effects (SUSY is also the most attractive realistic possibility for the experimenters) - all the superpartners of the known particles, among other things (such as an extended Higgs sector relatively to the Standard Model). Their spins and couplings will have to be checked to (dis)agree with those of the known particles, and so on. All the masses may be surprising for us - we don't really know any of them although we have various models of SUSY breaking which predict various patterns. Alternatives in the case of SUSY non-observation The dark matter may be composed of ad hoc particles that don't require any grand structures - but such alternatives would be justified by nothing else than the simple and single job that they should play. Of course that there are many alternatives in the literature but none of them seem to be as justified by other evidence - i.e. not ad hoc - as SUSY. I think that in the case of no SUSY at the LHC, the LHC will remain some distance away from "completely disproving" SUSY particles as the source of dark matter because this role may work up to 10 TeV masses or so, and much of this interval will remain inaccessible to the LHC. So the LHC is a great gadget which is stronger than the previous one - but one simply can't guarantee that it has to give definitive answers about all the questions we want to be answered. This fact may be inconvenient (and many laymen love to be promised that all questions will inevitably be answered for those billions of dollars - whether it's true or not) but it's simply a fact that the LHC is not a machine to see every face of God. There are various alternatives how to solve the hierarchy problem - the little Higgs model, the Randall-Sundrum models (which may be disproved at the end of the LHC, too - the LHC is expected to decide about the fate of each solution to the hierarchy problem although they may always remain some uncertainty), etc. - but I am convinced that even in the case that SUSY is not observed at the LHC, superpartners with slightly higher masses than those accessible by the LHC will remain the most well-motivated solution of the problems above. Of course, if someone finds some better new models, or some amazing experimental LHC (or other) evidence for some existing models, the situation may change. But right now, away from SUSY, there are really no alternative theories that naturally explain or solve the three problems above at the same moment. This ability of SUSY to solve many things simultaneously is surely no proof it has to be the right solution of all of them - but it is a big hint. It's the reason why particle physicists think it's the most likely new physics at this point - a conclusion that may change but only if new (theoretical or experimental) evidence arrives. 
While it is clear that the absence of SUSY at the LHC would weaken the case for SUSY and all related directions, I am convinced that unless some spectacular new alternatives or spectacular new proofs of other theories are found in the future, SUSY will still remain the single most serious direction in phenomenology. In formal theory, its key role is pretty much guaranteed to remain paramount regardless of the results of LHC or any conceivably doable experiments. The more formal portions of high-energy theory a theorist studies, obviously, the less dependent his or her work is on the LHC findings. I don't have to explain that the absence of SUSY at the LHC would mean a sharper splitting of the particle physics community. Absence of SUSY and string theory Clearly, if no SUSY were seen until 2012 or 2015 or 2020, the critics of string theory would be louder than ever before. Within string theory, the anthropic voices and attempts to find a sensible vacuum with the SUSY breaking at a high-energy scale would strengthen. But nothing would really change qualitatively. The LHC is great but it is just moving the energy frontier of the Tevatron at most by 1-1.5 order(s) of magnitude or so. If there is some non-SUSY new physics found at the LHC, most particle physicists will obviously be interested in whatever models that can be relevant for the observations. If the LHC sees no new physics, e.g. if it only sees a single Higgs boson, and nothing else ever appears, the current situation will qualitatively continue and the tensions will only get amplified. Serious physicists will have to continue their predominantly theoretical and ever more careful studies (based on the observations that have been incorporated into theories decades ago) without any guidance about new physics from the available new experiments (simply because there wouldn't be any new data!) - while the not so serious physicists and people around science will strengthen their hostile and utterly irrational claims that physics is no longer science. Sociologically, the situation would almost certainly become unpleasant for good physicists and pleasant for populist and uneducated critics of science who are not really interested in the truth about the physical world. But Nature works in whatever way She does. She is not obliged to regularly uncover a part of Her secrets. A paper with the same question in the title Amusingly, there exists a 2-week-old preprint by 8 authors: http://arxiv.org/abs/arXiv:1102.4693 What if the LHC does not find supersymmetry in the sqrt(s)=7 TeV run? You may see that the question in their title is almost identical to your question at Physics Stack Exchange. Their answer is much like my answer above: if the LHC is not found during the 7 TeV run (that should continue until the end of 2012), SUSY would still remain an acceptable solution to all the problems I mentioned above; just our idea about the masses of the strongly interacting superpartners (gluinos and squarks) would have to be raised above 1 TeV or so. It's pretty natural for those strongly interacting superpartners to be the heaviest ones among the superpartners - which automatically makes them harder to be seen at hadron colliders such as the LHC. | {
"source": [
"https://physics.stackexchange.com/questions/6438",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1394/"
]
} |
6,443 | In general, it seems cosmological theories that encompass more and more of the phenomena of the universe are expected to be more and more mathematically elegant, in conception if not in detail. Our experience seems to teach that it is a reasonable expectation to assume more fundamental theories will be more "elegant" or "beautiful" than previous theories. Two questions: It seems like a similar expectation is what triggered Einstein to reject Quantum Mechanics. We know his experience led him astray. How do we know our experience isn't causing a similar result in our search for a Theory of Everything? A correlation between "mathematical elegance" and explanatory power would seem to imply that "elegance" is more than just a human construct. How can that be? Why should there be a correlation between what we find pleasing, and how the Universe works? | {
"source": [
"https://physics.stackexchange.com/questions/6443",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2226/"
]
} |
6,446 | I'm a layman without a university background in physics / math. Since I don't have a background, reading a paper is more of an effort.
Consequently, when I come across an interesting paper, I can't really just give it a glance, and see if it is science, or pseudoscience. This question is about this paper: The Schwarzschild Proton. Nassim Haramein. Originally at The Resonance Project ; archived here by the Wayback Machine on 20/02/2012. [Edit by bcrowell, 19/08/2013: this web page provides a very detailed discussion, and offers heavy criticism.] If the advent of the internet has taught me anything, it's that there is an inverse relationship between cost to publish and the need to vet what is published. On the other hand, Lisi's E8 paper has taught me that a paper doesn't necessarily need to be affiliated with an establishment like the Perimeter Institute or the Institute for Advanced Study (two examples off the top of my head) to be worth reading. Is the paper worth reading? | I'm going with "Nonsense." on account of: mixing physics 101 mechanics with special relativity with no evident effort made to tell which case is applicable. In particular, modeling the proton as a black hole, and asserting that he can use physics 101 circular motion to describe the acceleration of two such objects whose event horizons are in contact! Claiming that the proton's mass arises from "cohering" some of the vacuum, and making no attempt to explain whence the charge comes and why it is always the same. No effort to explain where the neutron comes from, or why it is chargeless, or why it is unstable. No hint of an explanation of how or why nucleons can bind together, and why some states are stable and some are not. Frankly, I gave up looking at the alleged physics at this point... there is some verbiage that purports to relate the anomalous magnetic moment of the proton, and the author finds it necessary to use scare quotes on anomalous. Not promising. A general sense of "snake oil" in the web site. A side note that may be of interest: AIP conferences accept a few papers from authors whose theories are... ehm... not well regarded. I don't know why. I do know that people flock to see these talks as comic relief (and they are generally scheduled very late in the day). Personally I find myself intensely embarrassed on the presenters' behalf: I figure they must know how the audience feels, and don't know why they do it. | {
"source": [
"https://physics.stackexchange.com/questions/6446",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2226/"
]
} |
6,561 | I'm a statistician with a little training in physics and would just like to know the general consensus on a few things. I'm reading a book by John Moffat which basically tries to state how GR makes failed predictions in certain situations. I know GR is extremely well tested, but I imagine all physicists are aware it doesn't always hold up. The book tries to put forth modified theories of gravity that make do without the need of dark matter and dark energy to make GR match real world observations (i.e. speed of galaxy rotations, etc.). Are modified theories of gravity credible? Is dark energy/matter the 'ether' of the 20th/21st century? Is it likely scientists are looking for something that simply doesn't exist and there are unknown fundamental forces at work? What's the best evidence for its existence other than observations based on the 'bullet' cluster? | Excellent question! In short, there are two logical possibilities to explain the data: (1) there is dark matter and a cosmological constant (the standard model), or (2) gravity needs to be modified. Interestingly, both possibilities have historical precedent: The discovery of Neptune (by Johann Gottfried Galle and Heinrich Louis d'Arrest) one year after its prediction by Urbain le Verrier was a success-story for the dark matter idea. (Of course, after its discovery by astronomers it was no longer dark...) The non-discovery of Vulcan was a failure of the dark matter idea - instead, gravity had to be modified from Newton to Einstein. (Funnily, Vulcan actually WAS observed by Lescarbault a year after its prediction by Urbain le Verrier, but this observation was never confirmed by anyone else.) So basically you are asking: are we in a Neptune or a Vulcan scenario? And could not the Vulcan scenario be more credible? The likely answer appears to be no. Modifications of gravity that seem to explain galactic rotation curves are usually either in conflict with solar system precision tests (where Einstein's theory works extraordinarily well) or they are complicated and less predictive than Einstein's theory (like TeVeS) or they are not theories to begin with (like MOND). Besides the gravitational evidence for dark matter, there is also indirect evidence from particle physics. For instance, if you believe in Grand Unification then you must also accept supersymmetry so that the coupling constants merge at one point at the GUT scale. Then you have a natural dark matter candidate, the lightest supersymmetric particle. There are also other particle physics predictions that lead to dark matter candidates, like axions. So the point is, there is no lack of dark matter candidates (rather, there is an abundance of them) that may be responsible for the galactic rotation curves, the dynamics of clusters, the structure formation etc. Note also that the Standard Model of Cosmology is a rather precise model (at the percent level), and it requires around 23% of dark matter. There are a lot of independent measurements that have scrutinized this model (CMB anisotropies, supernovae data, clusters etc.), so we do have reasonable confidence in its validity. In some sense, the best evidence for dark matter is perhaps the lack of good alternatives. Still, as long as dark matter is not detected directly through some particle/astro-particle physics experiment it is scientifically sound to try to look for alternatives (I plead guilty in this regard). It just seems doubtful that some ad-hoc alternative passes all the observational tests. | {
"source": [
"https://physics.stackexchange.com/questions/6561",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1880/"
]
} |
6,588 | Could someone explain in simple terms (let's say, limited to a high school calculus vocabulary) why decibels are measured on a logarithmic scale? (This isn't homework, just good old fashioned curiosity.) | Nearly all human senses work in a manner that obeys the Weber–Fechner law : the response of the sense machinery is the logarithm of the input. This is true at least for hearing, but also for eye sensitivity, the temperature sense, etc. Of course, this holds in the range where the sense works normally; at the extremes, other processes such as pain take over. So in the case of hearing, what you experience is the logarithm of the power of the sound wave, by the biological, natural construction of the hearing sense. So, it is natural to use logarithmic units. (A short numerical sketch follows below this entry.) | {
"source": [
"https://physics.stackexchange.com/questions/6588",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2456/"
]
} |
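As a small numerical footnote to the preceding answer on the decibel scale (a sketch of my own; the reference intensity $I_0 = 10^{-12}\ \mathrm{W/m^2}$ is the conventional threshold of hearing):

```python
# Decibels compress a huge range of sound intensities into a small range of
# numbers, mirroring the logarithmic response described in the answer above.
import math

I0 = 1e-12                      # reference intensity, W/m^2 (threshold of hearing)

def decibels(intensity):
    return 10 * math.log10(intensity / I0)

for name, I in [("whisper", 1e-10), ("conversation", 1e-6), ("rock concert", 1e-1)]:
    print(f"{name:12s}: {I:.0e} W/m^2 -> {decibels(I):.0f} dB")
# A factor of 10 in intensity is always the same step (+10 dB), which matches
# how changes in loudness are perceived.
```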
6,687 | Why massless particles have zero chemical potential? | Massless particles don't always have zero chemical potential. Suppose that you have a box full of photons and other particles, and it's possible for the photons to exchange energy with other particles, but the number of photons cannot change. Then the system will reach a thermal equilibrium in which the photons are described by a Bose-Einstein distribution with (in general) a nonzero chemical potential. The reason this doesn't usually happen with photons is that the number of photons is often not conserved in situations like this. If there are photon-number-changing processes, then the equilibrium state for the photons will have zero chemical potential (since otherwise entropy could go up by creating or destroying a photon). In summary, the rules are that the chemical potential must be zero if particle-number-changing interactions are possible, but not otherwise. That distinction often coincides with the massless or massive nature of the particles, but not always. By the way, there was a period of time in the early Universe when we were in precisely this situation: photons could thermalize via Compton scattering with electrons, but at the temperature and density at the time, photon-number-changing processes essentially did not occur. That means that the cosmic microwave background radiation today could have a nonzero chemical potential. People have tried to measure the chemical potential, but as it turns out it's consistent with zero to quite good precision. This makes sense, as long as the photons and electrons came into thermal equilibrium at an earlier epoch (when photon-number-changing processes did occur), and nothing happened during the later epoch to mess up that equilibrium. Various theories in which particle decays inject energy into the Universe during the constant-photon-number epoch are ruled out by this observation. | {
"source": [
"https://physics.stackexchange.com/questions/6687",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/592/"
]
} |
6,720 | A sign of a tsunami is that the water rushes away from the shore, then comes back to higher levels. It seems that waves should be both + and - polarized and that some tsunamis should go in the opposite direction. That is, the first indication of them would be that the water begins rising. However, other than situations very close to the source, it seems that the wave always begins with the water drawing away from the coast. For example, the wikipedia article on tsunamis states that:
"In the 2004 Indian Ocean tsunami drawback was not reported on the African coast or any other eastern coasts it reached. This was because the wave moved downwards on the eastern side of the fault line and upwards on the western side. The western pulse hit coastal Africa and other western areas."
The above is widely repeated. However, when you search the scientific literature, you find that this is not the case. Proc. IASPEI General Assembly 2009, Cape Town, South Africa; Hermann M. Fritz, Jose C. Borrero, "Somalia Field Survey after the December 2004 Indian Ocean Tsunami":
"The Italian-speaking vice council, Mahad X. Said, standing at the waterfront outside the mosque upon the arrival of the tsunami (Figure 10a), gave a very detailed description of the initial wave sequence. At first, a 100-m drawback was noticed, followed by a first wave flooding the beach. Next, the water withdrew again by 900 m before the second wave partially flooded the town. Finally, the water withdrew again by 1,300 m offshore before the third and most powerful wave washed through the town. These drawbacks correspond to 0.5-m, 4-m, and 6-m depths. The detailed eyewitness account of the numerous drawbacks is founded on the locations of the offshore pillars."
So is there a physical reason why tsunamis, perhaps over longer distances, tend to be oriented so that the first effect is a withdrawal of the water? | Water waves are rather complicated, and the differential equations which describe them are called Boussinesq equations. A tsunami is not a transverse wave. It is a pressure wave with a longitudinal mode. It also travels very fast, at about 700 km/hr. What happens is that this travels as a pressure wave in the open ocean, but when it reaches a continental shelf the wave is reflected partially upwards. This has the effect of converting it into a transverse wave, as water moving along is now pushed upwards. This is a very nonlinear process and nontrivial to model. This pushing up of the water does initially cause water at the shore to recede outwards. The wave which seconds later reaches shore is much slower moving, and a lot of that wave energy is converted into the towering wave front that sweeps in. | {
"source": [
"https://physics.stackexchange.com/questions/6720",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1272/"
]
} |
6,912 | I understand that information cannot be transmitted at a velocity greater than the speed of light. I think of this in terms of the radio broadcast: the station sends out carrier frequencies $\omega_c$ but the actual content is carried in the modulated wave, the sidebands $\omega_c\pm\omega_m$. The modulation envelope has its group velocity and this is the speed at which information is being transmitted. We also know, for example, that x-rays in glass have a phase velocity which is greater than the speed of light in vacuum. My question is, what exactly is it that is travelling faster than the speed of light? EDIT : I know it couldn't be anything physical as it's superluminal. My problem is, what is it that has absolutely no information content yet we associate a velocity with it? | Shine a flashlight on a wall. Rotate the flashlight so the illuminated spot moves. Q: How fast does the spot move? A: It depends how far away the wall is. Q: How fast can the spot possibly move? A: There is no limit. Put the wall far enough away, and the spot can move with any speed. Q: What is moving across the wall? A: Nothing. The light that makes up the spot at one instant is unrelated to the light that makes up the spot an instant later. This is how a wave can be apparently superluminal: we interpret a series of unrelated events as a continuous 'wave'. Group velocity can also be superluminal ; even though the individual chunks of energy are going at roughly $c$, the region where they superpose constructively (the 'crest of the wave') goes faster than $c$. (A small numerical illustration of the moving-spot example follows below this entry.) | {
"source": [
"https://physics.stackexchange.com/questions/6912",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1671/"
]
} |
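To put numbers on the flashlight example in the preceding answer (illustrative values of my own): the spot's speed across the wall is the sweep rate times the distance, $v = \omega d$, so for a distant enough wall it exceeds $c$ even though nothing physical moves that fast.

```python
# Speed of the illuminated spot for a flashlight swept at a fixed angular rate.
# Nothing physical moves at this speed -- each instant the spot is made of
# different photons -- so there is no conflict with relativity.
import math

c = 2.998e8                     # speed of light, m/s
omega = 2 * math.pi * 1.0       # sweep rate: one full turn per second (rad/s)

for d in [1e3, 1e7, 1e8]:       # distance to the "wall" in metres
    v_spot = omega * d          # transverse speed of the spot
    print(f"wall at {d:.0e} m: spot speed = {v_spot:.2e} m/s ({v_spot / c:.2f} c)")
# Beyond d ~ 4.8e7 m (about an eighth of the way to the Moon), the spot's speed
# exceeds c for this sweep rate.
```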
7,060 | So I have learned in class that light can get red-shifted as it travels through space. As I understand it, space itself expands and stretches out the wavelength of the light. This results in the light having a lower frequency which equates to lowering its energy. My question is, where does the energy of the light go? Energy must go somewhere! Does the energy the light had before go into the mechanism that's expanding the space? I'm imagining that light is being stretched out when its being red-shifted. So would this mean that the energy is still there and that it is just spread out over more space? | Dear QEntanglement, the photons - e.g. cosmic microwave background photons - are increasing their wavelength proportionally to the linear expansion of the Universe, $a(t)$, and their energy correspondingly drops as $1/a(t)$. Where does the energy go? It just disappears. Energy is not conserved in cosmology. Much more generally, the total energy conservation law becomes either invalid or vacuous in general relativity unless one guarantees that physics occurs in an asymptotically flat - or another asymptotically static - Universe. That's because the energy conservation law arises from the time-translational symmetry, via Noether's theorem, and this symmetry is broken in generic situations in general relativity. See http://motls.blogspot.com/2010/08/why-and-how-energy-is-not-conserved-in.html Why energy is not conserved in cosmology Cosmic inflation is the most extreme example - the energy density stays constant (a version of the cosmological constant with a very high value) but the total volume of the Universe exponentially grows, so the total energy exponentially grows, too. That's why Alan Guth, the main father of inflation, said that "the Universe is the ultimate free lunch". This mechanism (inflation) able to produce exponentially huge masses in a reasonable time frame is the explanation why the mass of the visible Universe is so much greater than the Planck mass, a natural microscopic unit of mass. | {
"source": [
"https://physics.stackexchange.com/questions/7060",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2578/"
]
} |
7,131 | In all the discussions about how the heavy elements in the universe are forged in the guts of stars and especially during a star's death, I usually hear that once the star begins fusing lighter atoms to produce iron (Fe) that's the end of the star's life and the whole system collapses onto itself; and based on how massive the star was initially, it has different outcome - like a white dwarf, a neutron star or a black hole. I have rarely heard a detailed explanation of how the elements heavier than iron are produced. I would appreciate a convincing explanation of this process. | Elements heavier than iron are produced mainly by neutron-capture inside stars, although there are other more minor contributors (cosmic ray spallation, radioactive decay). They are not only produced in stars that explode as supernovae. This has now been established fact since the detection of short-lived Technetium in the atmospheres of red giant and AGB stars in the 1950s (e.g. Merrill 1952 ), and it is tiresome to have to continue correcting this egregious pop-sci claim more than 60 years later (e.g. here ). The r-process Neutron capture can occur rapidly (the r-process ) and this process occurs mostly inside and during supernova explosions (though other mechanisms such as merging neutron stars have been mooted). The free neutrons are created by electron capture in the final moments of core collapse. At the same time this can lead to the build up of neutron-rich nuclei and the decay products of these lead to many of the chemical elements heavier than iron once they are ejected into the interstellar medium during the supernova explosion. The r-process is almost exclusively responsible for elements heavier than lead and contributes to the abundances of many elements between iron and lead. There is still ongoing debate about the site of the primary r-process. My judgement from a scan of recent literature is that whilst core-collapse supernovae proponents were in the majority, there is a growing case to be made that neutron star mergers may become more dominant, particularly for the r-process elements with $A>110$ (e.g. Berger et al. 2013 ; Tsujimoto & Shigeyama 2014 ). In fact some of the latest research I have found suggests that the pattern of r-process elemental abundances in the solar system could be entirely produced by neutron star mergers (e.g. Wanajo et al. 2004 ), though models of core-collapse supernovae that incorporate magneto-rotational instabilities or from rapidly-rotating "collapsar" models, also claim to be able to reproduce the solar-system abundance pattern ( Nishimura et al. 2017 ) and may be necessary to explain the enhanced r-process abundances found in some very old halo stars (see for example Brauer et al. 2020 ). Significant new information on this debate comes from observations of kilonovae and in particular, the spectacular confirmation, in the form of GW170817 , that kilonovae can be produced by the merger of two neutron stars. Observations of the presumably neutron-rich ejecta, have confirmed the opacity signature (rapid optical decay, longer IR decay and the appearance of very broad absorption features) that suggest the production of lanthanides and other heavy r-process elements (e.g. Pian et al. 2017 ; Chornock et al. 2017 ). Whether neutron star mergers are the dominant source of r-process elements awaits an accurate assessment of how frequently they occur and how much r-process material is produced in each event - both of which are uncertain by factors of a few at least. 
A paper by Siegel (2019) reviews the merits of neutron star merger vs production of r-process elements in rare types of core collapse supernovae (aka "collapsars"). Their conclusion is that collapsars are responsible for the majority of the r-process elements in the Milky Way and that neutron star mergers, whilst probably common enough, do not explain the r-process enhancements seen in some very old halo stars and dwarf galaxies and the falling level of europium (an r-process element) to Iron with increased iron abundance - (i.e. the Eu behaves like "alpha" elements like oxygen and neon that are produced in supernovae). The s-process However, many of the chemical elements heavier than iron are also produced by slow neutron capture; the so-called s-process . The free neutrons for these neutron-capture events come from alpha particle reactions with carbon 13 (inside asymptotic giant branch [AGB] stars with masses of 1-8 solar masses) or neon 22 in giant stars above 10 solar masses. After a neutron capture, a neutron in the new nucleus may then beta decay, thus creating a nucleus with a higher mass number and proton number. A chain of such events can produce a range of heavy nuclei, starting with iron-peak nuclei as seeds. Examples of elements produced mainly in this way include Sr, Y, Rb, Ba, Pb and many others. Proof that this mechanism is effective is seen in the massive overabundances of such elements that are seen in the photospheres of AGB stars. A clincher is the presence of Technetium in the photospheres of some AGB stars, which has a short half life and therefore must have been produced in situ. According to Pignatari et al. (2010) , models suggests that the s-process in high mass stars (that will become supernovae) dominates the s-process production of elements with $A<90$ , but for everything else up to and including Lead the s-process elements are mainly produced in modest sized AGB stars that never become supernovae. The processed material is simply expelled into the interstellar medium by mass loss during thermal pulsations during the AGB phase. The overall picture As a further addition, just to drive home the point that not all heavy elements are produced by supernovae, here is a plot from the epic review by Wallerstein et al. (1997) , which shows the fraction of the heavy elements in the solar system that are produced in the r-process (i.e. an upper limit to what is produced in supernovae explosions). Note that this fraction is very small for some elements (where the s-process dominates), but that the r-process produces everything beyond lead. A more up-to-date visualisation of what goes on (produced by Jennifer Johnson ) and which attempts to identify the sites (as a percentage) for each chemical element is shown below. It should be stressed that the details are still subject to a lot of model-dependent uncertainty. An even more recent version of this diagram is provided by Arcones & Thielemann (2022) . If you look carefully there are some minor differences between these two diagrams (e.g. Bi). | {
"source": [
"https://physics.stackexchange.com/questions/7131",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2616/"
]
} |
7,172 | I was not able to find an answer for this question... Some radioactive elements have half-life measured in thousands of years and some others even in millions, but over 4.5 billion years all the radioactive material that was part of the initial material that formed the planet earth should have decayed by now? However, there is still radioactive material with short half-life to be found in nature. How is this possible and if the answer is that the new radioactive material is constantly being generated somehow, can you explain the mechanism of how this happens? Thanks. | The half-life of Uranium 238 is about the age of the Earth, so only about half of the original supply should have decayed by now. Also, there are some radioactive nuclei that get created by interactions with cosmic rays in the upper atmosphere (carbon-14) or decay from more stable nuclei (all of the daughter nuclei between U-238 and lead, for example). | {
"source": [
"https://physics.stackexchange.com/questions/7172",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2628/"
]
} |
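To make the half-life statement above concrete, here is a minimal Python sketch (the inputs are assumed standard values not quoted in the answer: a U-238 half-life of about 4.468 billion years and an Earth age of about 4.54 billion years). The surviving fraction after a time $t$ is $(1/2)^{t/t_{1/2}}$, which for these numbers indeed comes out close to one half.

half_life_yr = 4.468e9      # assumed U-238 half-life, in years
age_of_earth_yr = 4.54e9    # assumed age of the Earth, in years

remaining = 0.5 ** (age_of_earth_yr / half_life_yr)
print(f"surviving fraction of primordial U-238: {remaining:.2f}")   # roughly 0.49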
7,179 | According to Hubble's law, light and other kinds of electromagnetic radiation emitted from distant objects are redshifted. The more distant the source, the more intense is the redshift. Now, the expansion of the universe is expected to explain the redshift and its nearly linear dependence on distance between source and observer. But isn't there an other source influencing the redshift? We know that a light beam passing the Sun is deflected by the Sun's gravity in accordance with predictions made by Einstein's general theory of relativity. This deflection is dependant on a gravitational interaction between the Sun and the light beam. Thus, the position of the Sun is affected by the light beam, though by such a tiny amount that it is impossible to detect the disturbance of the Sun's position. Now, during its journey to the Earth a light beam, originating from a distant source in the Universe, is passing a certain amount of elementary particles and atoms. If the light beam interacts gravitationally with those elementary particles and atoms, affecting the microscopic mechanical properties of the individual elementary particles and atoms at issue, can this interaction be detected as a redshift of the light beam? If so, could we use this gravity redshift to measure the mean density of matter and energy in space? The beginning of the sentence "If the light beam interacts gravitationally with those elementary particles and atoms..." should be interpreted to say "If the light beam interacts gravitationally with those elementary particles and atoms by way of leaving them in a state of acceleration different from their initial state of acceleration...." This clarification seems to necessitate the additional question: "why would a gravitationally interacting object (a cluster of photons) passing another gravitationally interacting object (a mass) leave that mass in the same state as before the passage?" | The half-life of Uranium 238 is about the age of the Earth, so only about half of the original supply should have decayed by now. Also, there are some radioactive nuclei that get created by interactions with cosmic rays in the upper atmosphere (carbon-14) or decay from more stable nuclei (all of the daughter nuclei between U-238 and lead, for example). | {
"source": [
"https://physics.stackexchange.com/questions/7179",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2630/"
]
} |
7,231 | Recently there have been some interesting questions on standard QM and especially on uncertainty principle and I enjoyed reviewing these basic concepts. And I came to realize I have an interesting question of my own. I guess the answer should be known but I wasn't able to resolve the problem myself so I hope it's not entirely trivial. So, what do we know about the error of simultaneous measurement under time evolution? More precisely, is it always true that for $t \geq 0$
$$\left<x(t)^2\right>\left<p(t)^2\right> \geq \left<x(0)^2\right>\left<p(0)^2\right>$$
(here the argument $(t)$ denotes the expectation in the evolved state $\psi(t)$, or equivalently for the operator in the Heisenberg picture). I tried to get general bounds from the Schrödinger equation and the decomposition into energy eigenstates, etc., but I don't see any way of proving this. I know this statement is true for a free Gaussian wave packet. In this case, we obtain equality, in fact (because the packet stays Gaussian and because it minimizes the HUP). I believe this is, in fact, the best we can get, and for other distributions we would obtain strict inequality. So, to summarize the questions: Is the statement true? If so, how does one prove it? And is there an intuitive way to see it is true? | The question asks about the time dependence of the function $$f(t) := \langle\psi(t)|(\Delta \hat{x})^2|\psi(t)\rangle
\langle\psi(t)|(\Delta \hat{p})^2|\psi(t)\rangle,$$ where $$\Delta \hat{x} := \hat{x} - \langle\psi(t)|\hat{x}|\psi(t)\rangle, \qquad
\Delta \hat{p} := \hat{p} - \langle\psi(t)|\hat{p}|\psi(t)\rangle, \qquad
\langle\psi(t)|\psi(t)\rangle=1.$$ We will here use the Schrödinger picture where operators are constant in time, while the kets and bras are evolving. Edit : Spurred by remarks of Moshe R. and Ted Bunn let us add that (under assumption (1) below) the Schroedinger equation itself is invariant under the time reversal operator $\hat{T}$, which is a conjugated linear operator, so that $$\hat{T} t = - t \hat{T}, \qquad \hat{T}\hat{x} = \hat{x}\hat{T}, \qquad \hat{T}\hat{p} = -\hat{p}\hat{T}, \qquad \hat{T}^2=1.$$ Here we are restricting ourselves to Hamiltonians $\hat{H}$ so that $$[\hat{T},\hat{H}]=0.\qquad (1)$$ Moreover, if $$|\psi(t)\rangle = \sum_n\psi_n(t) |n\rangle$$ is a solution to the Schrödinger equation in a certain basis $|n\rangle$, then $$\hat{T}|\psi(t)\rangle := \sum_n\psi^{*}_n(-t) |n\rangle$$ will also be a solution to the Schrödinger equation with a time reflected function $f(-t)$. Thus if $f(t)$ is non-constant in time, then we may assume (possibly after a time reversal operation) that there exist two times $t_1<t_2$ with $f(t_1)>f(t_2)$. This would contradict the statement in the original question. To finish the argument, we provide below an example of a non-constant function $f(t)$. Consider a simple harmonic oscillator Hamiltonian with the zero point energy $\frac{1}{2}\hbar\omega$ subtracted for later convenience. $$\hat{H}:=\frac{\hat{p}^2}{2m}+\frac{1}{2}m\omega^{2}\hat{x}^2
-\frac{1}{2}\hbar\omega=\hbar\omega\hat{N},$$ where $\hat{N}:=\hat{a}^{\dagger}\hat{a}$ is the number operator. Let us put the constants $m=\hbar=\omega=1$ to one for simplicity. Then the annihilation and creation operators are $$\hat{a}=\frac{1}{\sqrt{2}}(\hat{x} + i \hat{p}), \qquad
\hat{a}^{\dagger}=\frac{1}{\sqrt{2}}(\hat{x} - i \hat{p}), \qquad
[\hat{a},\hat{a}^{\dagger}]=1,$$ or conversely, $$\hat{x}=\frac{1}{\sqrt{2}}(\hat{a}^{\dagger}+\hat{a}), \qquad
\hat{p}=\frac{i}{\sqrt{2}}(\hat{a}^{\dagger}-\hat{a}), \qquad
[\hat{x},\hat{p}]=i,$$ $$\hat{x}^2=\hat{N}+\frac{1}{2}\left(1+\hat{a}^2+(\hat{a}^{\dagger})^2\right), \qquad
\hat{p}^2=\hat{N}+\frac{1}{2}\left(1-\hat{a}^2-(\hat{a}^{\dagger})^2\right).$$ Consider Fock space $|n\rangle := \frac{1}{\sqrt{n!}}(\hat{a}^{\dagger})^n |0\rangle$
such that $\hat{a}|0\rangle = 0$. Consider initial state $$|\psi(0)\rangle := \frac{1}{\sqrt{2}}\left(|0\rangle+|2\rangle\right), \qquad
\langle \psi(0)| = \frac{1}{\sqrt{2}}\left(\langle 0|+\langle 2|\right).$$ Then $$|\psi(t)\rangle = e^{-i\hat{H}t}|\psi(0)\rangle
= \frac{1}{\sqrt{2}}\left(|0\rangle+e^{-2it}|2\rangle\right),$$ $$\langle \psi(t)| = \langle\psi(0)|e^{i\hat{H}t}
= \frac{1}{\sqrt{2}}\left(\langle 0|+\langle 2|e^{2it}\right),$$ $$\langle\psi(t)|\hat{x}|\psi(t)\rangle=0, \qquad
\langle\psi(t)|\hat{p}|\psi(t)\rangle=0.$$ Moreover, $$\langle\psi(t)|\hat{x}^2|\psi(t)\rangle=\frac{3}{2}+\frac{1}{\sqrt{2}}\cos(2t), \qquad
\langle\psi(t)|\hat{p}^2|\psi(t)\rangle=\frac{3}{2}-\frac{1}{\sqrt{2}}\cos(2t),$$ because $\hat{a}^2|2\rangle=\sqrt{2}|0\rangle$. Therefore, $$f(t) = \frac{9}{4} - \frac{1}{2}\cos^2(2t),$$ which is non-constant in time, and we are done. Or alternatively, we can complete the counter-example without the use of above time reversal argument by simply performing an appropriate time translation $t\to t-t_0$. | {
"source": [
"https://physics.stackexchange.com/questions/7231",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/329/"
]
} |
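The counter-example above is easy to verify numerically. Below is a minimal Python/NumPy sketch (assuming $\hbar=m=\omega=1$ as in the answer, and a 12-level Fock-space truncation, which is more than enough for a state supported on $|0\rangle$ and $|2\rangle$): it evolves $(|0\rangle+|2\rangle)/\sqrt{2}$ with the diagonal Hamiltonian $\hat{H}=\hat{N}$ and checks that $\langle\hat{x}^2\rangle\langle\hat{p}^2\rangle$ follows $f(t) = \frac{9}{4} - \frac{1}{2}\cos^2(2t)$.

import numpy as np

dim = 12                                  # truncated Fock space; plenty for |0> and |2>
n = np.arange(dim)
a = np.diag(np.sqrt(n[1:]), k=1)          # annihilation operator
ad = a.conj().T                           # creation operator
x = (ad + a) / np.sqrt(2)
p = 1j * (ad - a) / np.sqrt(2)

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = psi0[2] = 1 / np.sqrt(2)        # (|0> + |2>)/sqrt(2)

for t in [0.0, np.pi / 4, np.pi / 2]:
    psi_t = np.exp(-1j * n * t) * psi0    # H = N is diagonal in the Fock basis
    x2 = np.vdot(psi_t, x @ x @ psi_t).real
    p2 = np.vdot(psi_t, p @ p @ psi_t).real
    print(f"t={t:.3f}  <x^2><p^2>={x2 * p2:.4f}  formula={9 / 4 - 0.5 * np.cos(2 * t)**2:.4f}")

Since $f(\pi/4)=9/4$ is larger than $f(\pi/2)=7/4$, there are indeed times $t_1<t_2$ with $f(t_1)>f(t_2)$, which is exactly the non-constancy the answer uses to contradict the conjectured inequality.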
7,290 | The no hair theorem says that a black hole can be characterized by a small number of parameters that are visible from distance - mass, angular momentum and electric charge. For me it is puzzling why local quantities are not included, i.e. quantum numbers different from electrical charge. Lack of such parameters means breaking of the conservations laws (for a black hole made of baryons, Hawking radiation then is 50% baryonic and 50% anti-baryonic). The question is: If lack of baryonic number as a black hole parameter is a well established relation? OR It is (or may be) only an artifact of lack of unification between QFT and GR? | The no hair theorem is proven in classical gravity, in asymptotically flat 4 dimensional spacetimes, and with particular matter content. When looking at more general circumstances, we are starting to see that variations of the original assumptions give the black hole more hair. For example, for asymptotically AdS one can have scalar hair (a fact which is used to build holographic superconductors). For five dimensional spaces black holes (and black rings) can have dipole moments of gauge charges. Maybe there are more surprises. But, the basic intuition behind the no hair theorem is still valid. The basic fact used in all these constructions is that when the object falls into a black hole, it can imprint its existence on the black hole exterior only if it associated with a long ranged field. So for example an electron will change the charge of the black hole which means that black hole will have a Coulomb field. You'd be able to measure the total charge by an appropriate Gauss surface. Note that gravity has no conserved local currents (see this discussion ), the only thing you'd be able to measure is the total charge. As for baryon number, it is not associated with long ranged force, when it falls into black hole there is nothing to remember that fact, and the baryon number is not conserved. This is just one of the reasons there is a general belief that global charges (those quantities which are not accompanied by long ranged forces) are not really conserved. For the Baryon number we know that for a fact: our world has more baryons than anti baryons, so the observed baryon number symmetry must only be approximate. It must have been violated in the early universe when all baryons were generated (look for a related discussion here ), a process which is referred to as baryogenesis. | {
"source": [
"https://physics.stackexchange.com/questions/7290",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/184/"
]
} |
7,437 | Once I asked this question from my teacher and he replied "Because it passes light.". "And why does it pass light?" I asked and he said, "Because it is transparent.". The same question again, Why glass is transparent? Why does light pass through it, while for opaque objects, it does not? | Photons pass through glass because they are not absorbed. And they are not absorbed because there is nothing which "absorbs" light in visual frequencies in glass. You may have heard that ultraviolet photons are absorbed by glass, so glass is not transparent for them. Exactly the same happens with X-rays for which our body is nearly transparent whilst a metal plate absorbs it. This is experimental evidence. Any photon has certain frequency - which for visible light is related to the colour of light, whilst for lower or upper frequencies in the electromagnetic spectrum it is simply a measure of the energy transported by photon. A material's absorption spectrum (which frequencies are absorbed and how much so) depends on the structure of the material at atomic scale. Absorption may be from atoms which absorb photons (remember - electrons go to upper energetic states by absorbing photons), from molecules, or from lattices.
There are important differences in these absorption possibilities:
1. Atoms absorb well-defined discrete frequencies. Usually a single atom absorbs only a few frequencies - it depends on the energy spectrum of its electrons. Regarding atomic absorption, the graph of absorption (plotted as a function of the frequency of light) contains well-defined peaks at the frequencies where absorption occurs, and no absorption at all between them.
2. Molecules absorb discrete frequencies, but there are many more absorption lines because even a simple molecule has many more energy levels than any atom. So molecules absorb much more light.
3. Crystalline lattices may absorb not only discrete frequencies but also continuous bands of frequencies, mainly because of imperfections in the crystalline structure.
As glass is a non-crystalline, supercooled fluid consisting of molecules, its absorption occurs in the 1st and 2nd ways, but because of the matter it is composed of, it absorbs outside our visible spectrum. | {
"source": [
"https://physics.stackexchange.com/questions/7437",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/847/"
]
} |
7,446 | Let's say I fire a bus through space at (almost) the speed of light in vacuum. If I'm inside the bus (sitting on the back seat) and I run up the aisle of the bus toward the front, does that mean I'm traveling faster than the speed of light? (Relative to Earth that I just took off from.) | Your question has to do with the addition of velocities in special relativity. For objects moving at low speeds, your intuition is correct: say the bus moves at speed $v$ relative to Earth, and you run at speed $u$ on the bus; then the combined speed is simply $u+v$. But when objects start to move fast, this is not quite the way things work. The reason is that time measurements start depending on the observer as well, so the way you measure time is just a bit different from the way it is measured on the bus, or on Earth. Taking this into account, your speed compared to the Earth will be $\frac{u+v}{1+ uv/c^2}$, where $c$ is the speed of light. This formula is derived from special relativity. Some comments on this formula provide a direct answer to your question:
1. If both speeds are small compared with the speed of light, they approximately add up as your intuition tells you.
2. If one of the speeds is the speed of light $c$, you can see that adding any other speed to it does not in fact change it: the speed of light is the same in all reference frames.
3. If you add up any two speeds below $c$, you still end up below the speed of light. So any material object which has a mass (unlike light, which doesn't) moves at a speed less than $c$. Adding to it according to the correct rule brings it closer to the speed of light, but you can never exceed it, or in fact even reach it.
I'd recommend Wheeler and Taylor's "Spacetime Physics" to read about this. Unlike the reputation of the subject, it is actually pretty intuitive (I learned that formula in high school). | {
"source": [
"https://physics.stackexchange.com/questions/7446",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2721/"
]
} |
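The formula in the answer above is short enough to play with directly. Here is a minimal Python sketch (the specific speeds are made-up illustrations, in units where $c=1$); however fast the bus goes, composing it with any running speed below $c$ never reaches $c$.

def add_velocities(u, v, c=1.0):
    """Relativistic velocity addition: (u + v) / (1 + u*v/c**2)."""
    return (u + v) / (1 + u * v / c**2)

print(add_velocities(1e-7, 2e-7))   # everyday speeds: essentially the Galilean sum, ~3e-7
print(add_velocities(0.99, 0.20))   # bus at 0.99c, runner at 0.20c: ~0.9933, still below c
print(add_velocities(1.00, 0.20))   # adding anything to the speed of light gives exactly 1.0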
7,462 | When trying to find the Fourier transform of the Coulomb potential $$V(\mathbf{r})=-\frac{e^2}{r}$$ one is faced with the problem that the resulting integral is divergent. Usually, it is then argued to introduce a screening factor $e^{-\mu r}$ and take the limit $\lim_{\mu \to 0}$ at the end of the calculation. This always seemed somewhat ad hoc to me, and I would like to know what the mathematical justification for this procedure is. I could imagine the answer to be along the lines of: well, the Coulomb potential doesn't have a proper FT, but it has a weak FT defined like this … Edit: Let me try another formulation: What would a mathematician (being unaware of any physical meanings) do when asked to find the Fourier transform of the Coulomb potential? | I really appreciate the physical explanations made in other answers, but I want to add that Fourier transform of the Coulomb potential makes mathematical sense, too . This answer is meant to clarify on what sense the standard calculation is valid mathematically. Firstly, and maybe more importantly, I want to emphasize that The Fourier transform of f is not simply just $\int{f(x)e^{-ikx}dx}$ . For an $L^1$ function (a function which is norm integrable), this is always the case but Coulomb potential is definitely not in $L^1$ . So the Fourier transform of it, if it ever exists, is not expected to be the integral above. So here comes the second question: can Fourier transformation be defined on functions other than $L^1$ ? The answer is "yes", and there are many Fourier transformations. Here are two examples. Fourier transformation on $L^2$ functions (i.e., square integrable functions) . It turns out that the Fourier transform behaves more nicely on $L^2$ than on $L^1$ , thanks to the Plancherel's theorem. However, as we mentioned above, if an $L^2$ function is not in $L^1$ , then the above integral may not exist and Fourier transform is not given by that integral, either. (However, it has a simple characterization theorem, saying that in this case the Fourier transform is given by the principle-value integration of the above integral.) Fourier transform of distributions (generalized functions) It is in this sense that the Forier transform of Coulomb potential holds. The Coulomb potential, although not an $L^1$ or $L^2$ function, is a distribution . So we need to use the definition of the Fourier transform to distributions in this case. Indeed, one can check the definition and directly calculate the Fourier transform of it. However, the physicists' calculation illustrates another point. Fourier transformation on distributions (however it is defined) is continuous (under a certain topology on the distribution space, but let's not be too specific about it). Remember that if f is continuous, then $x_\epsilon\rightarrow x$ implies that $f(x_\epsilon)\rightarrow f(x)$ . Now $\frac{1}{r}e^{-\mu r}\rightarrow\frac{1}{r}$ when $\mu\rightarrow 0$ (again, under the "certain topology" mentioned above), and therefore continuity implies $$\operatorname{Fourier}\left\{\frac{1}{r}e^{-\mu r}\right\}\rightarrow \operatorname{Fourier}\left\{\frac{1}{r}\right\}.$$ However, $\frac{1}{r}e^{-\mu r}$ is in $L^1$ and therefore its Fourier transformation can be computed using the integral $\int{f(x)e^{-ikx}dx}$ . Therefore those physicists' computations make perfect mathematical sense, but it's on Fourier transform of distributions, which is much more general than that on $L^1$ functions. 
I hope this answer builds people's confidence that the Fourier transformation of the Coulomb potential is not only physically reasonable but also mathematically justifiable. | {
"source": [
"https://physics.stackexchange.com/questions/7462",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2726/"
]
} |
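The screening procedure described in the question, and justified above as continuity of the Fourier transform on distributions, can be checked symbolically. Here is a minimal SymPy sketch (it assumes the standard reduction of the 3D transform of $e^{-\mu r}/r$ to a radial integral, $\frac{4\pi}{k}\int_0^\infty e^{-\mu r}\sin(kr)\,dr$):

import sympy as sp

r, k, mu = sp.symbols('r k mu', positive=True)
radial = sp.integrate(sp.exp(-mu * r) * sp.sin(k * r), (r, 0, sp.oo))
ft = sp.simplify(4 * sp.pi / k * radial)
print(ft)                    # 4*pi/(k**2 + mu**2), the screened (Yukawa) result
print(sp.limit(ft, mu, 0))   # 4*pi/k**2, recovered as the screening is removed

The $\mu \to 0$ limit exists and equals $4\pi/k^2$, which is the distributional Fourier transform of $1/r$ discussed above.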
7,470 | This is something probably very basic but I was led back to this issue while listening to a recent seminar by Allan Adams on holographic superconductors. He seemed very worried to have a theory at hand where the chemical potential is negative. (why?) For fermions, isn't the sign of the chemical potential a matter of definition? The way we normally write our equations for the Fermi-Dirac distribution the chemical potential happens to that value of energy at which the corresponding state has a occupation probability of half. And within this definition the holes in a semiconductor have a negative chemical potential. It would be helpful if someone can help make a statement about the chemical potential which is independent of any convention. {Like one argues that negative temperature is a sign of instability of the system.} Also isn't it possible for fermions in an interacting theory to have a negative chemical potential? Also if there is a "physical argument" as to why bosons can't have a positive chemical potential? (Again, can an interacting theory of bosons make a difference to the scenario?) And how do these issues change when thinking in the framework of QFT?
(No one draws the QCD phase diagram with the chemical potential on the negative X-axis!) In QFT does the chemical potential get some intrinsic meaning since relativistically there is a finite lower bound of the energy of any particle given by its rest mass? | consider the grand canonical ensemble,
$$ \rho \sim \exp[-\beta (E-\mu N)] $$
In the exponent, the inverse temperature $\beta = 1/kT$ is the coefficient in front of one conserved quantity, the (minus) energy, while another coefficient, $\beta\mu$, is in front of the number of particles $N$. The chemical potential is therefore the coefficient in front of the number of particles - except for the extra $\beta$. The number of particles has to be conserved as well if $\mu$ is non-zero. As long as the sign of $N$ is well-defined, the sign of the chemical potential is well-defined, too. Now, for bosons, $\mu$ can't be positive because the distribution would be an exponentially increasing function of $N$. Note that in the grand canonical ensemble - which is really the ensemble in which $\mu$ is sharply defined - the dual variable to $\mu$, namely $N$, is not sharply defined. However, the probability for ever larger - infinite $N$ would be larger, so the distribution would be peaked at $N=\infty$. Such a distribution couldn't be well-defined. We want, in the thermodynamic limit, the grand canonical ensemble, while assuming a fixed $\mu$, also generate a finite and almost well-defined $N$, within an error margin that goes to zero in the thermodynamic limit. That couldn't happen for bosons and a positive $\mu$. This catastrophe would be possible because $N=\sum_i n_i$, a summation over microstates $i$, and every $n_i$ can be an arbitrarily high integer for bosons. For fermions, the problem doesn't occur because $n_i=0$ or $1$ for each state $i$. So for fermions, we can't argue that $\mu$ has to be positive. Note that $E-\mu N$ in the exponent is the sum $\sum_i N_i (e_i-\mu)$. For bosons, the problem occurred for states for which $e_i-\mu$ was negative i.e. $e_i$ was low enough. For fermions, however, the number of such states - and therefore the maximum number of fermions in them - is finite, so the divergence doesn't occur if the chemical potential is positive. For fermions, positive $\mu$ is OK. In fact, for fermions, both positive and negative $\mu$ is OK. Also, it is easy to see that if both particles and antiparticles exist, $\mu$ of the antiparticle has to be minus $\mu$ of the particle because only the difference $N_{\rm particles} - N_{\rm antiparticles}$ is conserved; this is true both for bosons and fermions. So if the potential for electrons is positive, the potential for positrons or holes (which play the very same role) has to be negative, and vice versa. Nothing changes about the meaning of the chemical potential when one switches from classical physics to quantum physics: in fact, above, I was assuming that there are "discrete states" for the particles, just like in quantum physics - otherwise we wouldn't be talking about bosons and fermions which are only relevant in the quantum setup. Classical physics is a limit of quantum physics in which the number of states is infinite because $\hbar$ goes to zero, so a finite number of particles never ends up in "exactly the same state". In some sense, Ludwig Boltzmann, while working in the context of classical statistical physics, was inherently using the thinking and intuition of quantum statistical physics - he was a truly ingenious "forefather" of quantum physics. In relativity, one has to be careful how we define the energy of a state. Note that the physically meaningful combination that appears in the exponent is $e_i-\mu$, so if one shifts $e_i$ e.g. by $mc^2$, the latent energy, one has to shift $\mu$ in the same direction by the same amount. The notions of chemical potential obviously work in relativity, too. 
Relativistic physics is not a "completely new type of physics". It is just a type of the old physics that happens to respect a symmetry - the Lorentz symmetry. Again, in quantum field theory which combines both quantum mechanics and relativity, statistical physics including the notion of the chemical potential also works but one must be careful that particle-antiparticle pairs may be created with enough energy. That implies $\mu_{p}=-\mu_{np}$, as I said. There can't be any general ban on a negative chemical potential of fermions: fermionic $\mu$ can have both signs. However, in the particular theory that Allan wanted to describe, he could have had more detailed reasons why $\mu$ should have been positive for his fermions. I am afraid that this would be an entirely different, more specific question - one about superconductors. As stated, your question above was one about statistical physics and I tend to believe that the text above exhausts all universal facts about the sign of the chemical potential in statistical physics. | {
"source": [
"https://physics.stackexchange.com/questions/7470",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1637/"
]
} |
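A small numerical illustration of the boson argument above (assuming a single mode of energy $\epsilon=1$ and $kT=1$ in arbitrary units): the Bose-Einstein occupation $1/(e^{(\epsilon-\mu)/kT}-1)$ grows without bound as $\mu$ approaches $\epsilon$ from below, and for $\mu>\epsilon$ the grand-canonical sum over $N$ no longer converges - the same divergence described above for states with $e_i-\mu<0$.

import numpy as np

def bose_occupation(eps, mu, kT=1.0):
    # Mean occupation of one bosonic mode of energy eps at chemical potential mu.
    return 1.0 / (np.exp((eps - mu) / kT) - 1.0)

eps = 1.0
for mu in [-1.0, 0.0, 0.9, 0.99, 0.999]:
    print(f"mu = {mu:6.3f}   <n> = {bose_occupation(eps, mu):10.3f}")
# The occupation blows up as mu -> eps from below; pushing mu above eps makes the formula
# negative, signalling that exp[-beta(E - mu N)] grows with N and cannot be normalized.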
7,584 | If a radioactive material takes a very long time to decay, how is its half life measured or calculated? Do we have to actually observe the radioactive material for a very long time to extrapolate its half life? | No, one doesn't need to measure the material for years - or even millions or billions of years. It's enough to watch it for a few minutes (for time $t$) and count the number of atoms $\Delta N$ (convention: a positive number) that have decayed. The lifetime $T$ is calculated from
$$ \exp(-t/T) = \frac{N - \Delta N}{N}$$
where $N$ is the total number of atoms in the sample. This $N$ can be calculated as
$$N={\rm mass}_{\rm total} / {\rm mass}_{\rm atom}.$$
If we know that the lifetime is much longer than the time of the measurement, it's legitimate to Taylor-expand the exponential above and only keep the first uncancelled term:
$$ \frac{t}{T} = \frac{\Delta N}{N}.$$
The decay of the material proceeds atom-by-atom and the chances for individual atoms to decay are independent and equal. To get some idea about the number of decays, consider 1 kilogram of uranium 238. Its atomic mass is $3.95\times 10^{-25}$ kilograms and its lifetime is $T=6.45$ billion years. By inverting the atomic mass, one sees that there are $2.53\times 10^{24}$ atoms in one kilogram. So if you take one kilogram of uranium 238, it will take $2.53\times 10^{24}$ times shorter a time for an average decay, e.g. the typical separation between two decays is
$$t_{\rm average} = \frac{6.45\times 10^9\times 365.2422\times 86400}{2.53\times 10^{24}}{\rm seconds} = 8.05\times 10^{-8} {\rm seconds}. $$
So one gets about 12.4 million decays during one second. (Thanks for the factor of 1000 fix.) These decays may be observed on an individual basis. Just to be sure, $T$ was always a lifetime in the text above. The half-life is simply $\ln(2) T$, about 69 percent of the lifetime, because of some simple maths (switching from the base $e$ to the base $2$ and vice versa). If we observe $\Delta N$ decays, the typical relative statistical error of the number of decays is proportional to $1/(\Delta N)^{1/2}$. So if you want the accuracy "1 part in 1 thousand", you need to observe at least 1 million decays, and so on. | {
"source": [
"https://physics.stackexchange.com/questions/7584",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2766/"
]
} |
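The arithmetic above can be re-run in a few lines of Python (the inputs are just the numbers quoted in the answer):

atomic_mass_kg = 3.95e-25        # mass of one U-238 atom, as quoted above
lifetime_yr = 6.45e9             # mean lifetime, i.e. half-life divided by ln 2
sample_kg = 1.0

atoms = sample_kg / atomic_mass_kg               # about 2.53e24 atoms in one kilogram
lifetime_s = lifetime_yr * 365.2422 * 86400
gap_s = lifetime_s / atoms                       # typical time between two decays
print(f"atoms in 1 kg     : {atoms:.3e}")
print(f"seconds per decay : {gap_s:.3e}")        # about 8.05e-8 s
print(f"decays per second : {1 / gap_s:.3e}")    # about 1.24e7, i.e. 12.4 million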
7,668 | In dimensional analysis, it does not make sense to, for instance, add together two numbers with different units together. Nor does it make sense to exponentiate two numbers with different units (or for that matter, with units at all) together; these expressions make no sense: $$(5 \:\mathrm{m})^{7 \:\mathrm{s}}$$ $$(14 \:\mathrm{A})^{3 \:\mathrm{A}}$$ Now my question is plainly this: why do they not make sense? Why does only multiplying together numbers with units make sense, and not, for instance, exponentiating them together? I understand that raising a number with a unit to the power of another number with a unit is quite unintuitive - however, that's not really a good reason, is it? | A standard argument to deny possibility of inserting dimensionful quantities into transcendental functions is the following expression for Taylor expansion of e.g. $\exp(\cdot)$: $$ e^x = \sum_n \frac{x^{n}}{n!} = 1 + x +\frac{x^2}{2} + \dots\,.\tag1$$ Here we'd add quantities with different dimensions, which you have already accepted makes no sense. OTOH, there's an argument (paywalled paper), that in the Taylor expansion where the derivatives are taken "correctly", you'd get something like the following for a function $f$: \begin{multline}
f(x+\delta x)=f(x)+\delta x\frac{df(x)}{dx}+\frac{\delta x^2}2\frac{d^2f(x)}{dx^2}+\frac{\delta x^3}{3!}\frac{d^3f(x)}{dx^3}+\dots=\\
=f(x)+\sum_{n=1}^\infty\frac{\delta x^n}{n!}\frac{d^nf(x)}{dx^n},\tag2
\end{multline} and the dimensions of derivatives are those of $1/dx^n$, which cancel those of $\delta x^n$ terms, making the argument above specious. | {
"source": [
"https://physics.stackexchange.com/questions/7668",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2802/"
]
} |
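One way to see the bookkeeping behind both the question's examples and the expansion above is to track units as vectors of exponents. The toy Python class below is only a sketch (not a real units library): multiplication adds exponent vectors, but there is no consistent exponent vector to assign to a quantity raised to a dimensionful power such as $(5 \:\mathrm{m})^{7 \:\mathrm{s}}$, so the class refuses.

class Quantity:
    # dims is a tuple of unit exponents, here (length, time).
    def __init__(self, value, dims):
        self.value, self.dims = value, dims

    def __mul__(self, other):
        # Multiplication is unproblematic: the exponents simply add.
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

    def __pow__(self, other):
        if other.dims != (0, 0):
            raise TypeError(f"exponent must be dimensionless, got dims {other.dims}")
        return Quantity(self.value ** other.value,
                        tuple(d * other.value for d in self.dims))

length = Quantity(5.0, (1, 0))   # 5 m
time = Quantity(7.0, (0, 1))     # 7 s
print((length * time).dims)      # (1, 1): metre-seconds, perfectly meaningful
try:
    length ** time               # the question's (5 m)**(7 s)
except TypeError as err:
    print("refused:", err)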
7,777 | This is not a question pertaining to interpretations, after the last one I realized I should not open Pandora's Box ;) For theories to be consistent, they must reduced to known laws in the classical domains. The classical domain can be summed up as: $$\hbar=0 ; c=\infty$$ Which is OK. I need to know, however, is that if QM is an independent and fundamental theory , why does it rely so heavily on the classical formalism. Is it necessary for a classical formalism to exist in order to have a quantum formalism? From as far as I have read, it does not seem so, and I find this puzzling. Suppose you have a dissipative system, or an open system when you cannot write an autonomous Hamiltonian in the classical case, how then do we approach these quantum mechanically, when we cannot even write down the corresponding Hamiltonian. | Quantum mechanics and quantum mechanical theories are totally independent of the classical ones. The classical theories may appear and often do appear as limits of the quantum theories. This is the case of all "textbook" theories - because the classical limit was known before the full quantum theory, and the quantum theory was actually "guessed" by adding the hats to the classical one. In a category of cases, the full quantum theory may be "reverse engineered" from the classical limit. However, one must realize that this situation is just an artifact of the history of physics on Earth and it is not generally true. There are classical theories that can't be quantized - e.g. field theories with gauge anomalies - and there are quantum theories that have no classical limits - e.g. the six-dimensional $(2,0)$ superconformal field theory in the unbroken phase. Moreover, it's typical that the quantum versions of classical theories lead to new ordering ambiguities (the identity of all $O(\hbar^k)$ terms in the Hamiltonian is undetermined by the classical limit in which all choices of this form vanish, anyway), divergences, and new parameters and renormalization of them that have to be applied. Also, the predictions of quantum mechanics don't need any classical crutches. Quantum mechanics works independently of its classical limits, and the classical behavior may be deduced from quantum mechanics and nothing else in the required limit. Historically, people discussed quantum mechanics as a tool to describe the microscopic world only, assuming that the large objects followed the classical logic. The Copenhagen folks divided the world in these two subworlds, in an ad hoc way, and that simplified their reasoning because they didn't need to study quantum physics of the macroscopic measurement devices etc. But these days, we fully understand the actual physical mechanism - decoherence - that is responsible for the emergence of the classical logic in the right limits. Because of decoherence, which is a mechanism that only depends on the rules of quantum mechqnics, we know that quantum mechanics applies to small as well as large objects, to all objects in the world, and the classical behavior is an approximate consequence, an emergent law. To know the evolution in time, one needs to know the Hamiltonian - or something equivalent that determines the dynamics. The previous sentence is true both in classical physics and quantum mechanics, for similar reasons, but independently. If a classical theory is a limit of a quantum theory, it of course also means that its classical Hamiltonian may be derived as a limit of the quantum Hamiltonian. 
Of course, if you don't know the Hamiltonian operator, you won't be able to determine the dynamics and evolution with time. Guessing the quantum Hamiltonian from its classical limit is one frequent, but in no way "universally inevitable", way to find a quantum Hamiltonian of a quantum theory. | {
"source": [
"https://physics.stackexchange.com/questions/7777",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2599/"
]
} |
7,781 | Whenever I read about the curvature of spacetime as an explanation for gravity, I see pictures of a sheet (spacetime) with various masses indenting the sheet to form "gravity wells." Objects which are gravitationally attracted are said to roll down the curved sheet of spacetime into the gravity well. This is troubling to me, because, in order for objects on the locally slanted spacetime sheet to accelerate, gravity must be assumed. Therefore I ask; does the explanation of gravity as the curvature of spacetime assume gravity? If yes, what is the point of the theory? If No, what am I missing? | I greatly sympathize with your question. It is indeed a very misleading analogy given in popular accounts. I assure you that curvature or in general, general relativity (GR) describe gravity, they don't assume it. As you appear to be uninitiated I shall try to give you some basic hints about how gravity is described by GR. In the absence of matter/energy the spacetime (space and time according to the relativity theories are so intimately related with each other it makes more sense to combine them in a 4 dimensional object called space-time) is flat like a table top. This resembles closely with (not completely) Euclidean geometry of plane surfaces. We call this spacetime, Minkowski space. In this space the shortest distance between any two points are straight lines. However as soon as there is some matter/energy the geometry of the surrounding spacetime is affected. It no longer remains Minkowski space, it becomes a (pseudo) Riemannian manifold. By this I mean the geometry is no longer like geometries of a plane surface but rather like geometries of a curved surface. In this curved spacetime the shortest distance between any two points are not straight lines in general, rather they are curved lines. It is not very hard to understand. Our Earth is a curved surface and the shortest distance between any two points are great circles rather than straight lines. Similarly the shortest distance between any two points in the 4 dimensional spacetime are curved lines. An object like sun makes the geometry of spacetime curved in such a way that the shortest distance between any two points are curved. This is called a geodesic. A particle follows this curved geometry by moving along this geodesic. Einstein's equations are mathematical descriptions of the relation of the geometry to the matter/energy. This is how gravity is described in general relativity. | {
"source": [
"https://physics.stackexchange.com/questions/7781",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2843/"
]
} |
7,905 | Are there any massless (zero invariant mass) particles carrying electric charge? If not, why not? Do we expect to see any or are they a theoretical impossibility? | There's no problem in writing down a theory that contains massless charged particles. A simple $\mathcal{L} = \partial_{\mu} \phi \partial^{\mu} \phi^*$ for a complex field $\phi$ will do the job. You might run into problems with renormalization but I don't want to get into that here (mostly because there are better people here who can fill in the details if necessary). Disregarding theory, those particles would be easy to observe, assuming a high enough density. Also, as you probably know, particles in the Standard Model necessarily decay (sooner or later) into lighter particles as long as conservation laws (such as the electric charge conservation law) are satisfied. So assuming massless charged particles exist would immediately make all the charged matter (in particular electrons) unstable, unless those new particles differed in some other quantum numbers. Now, if you hadn't mentioned electric charge in particular, the answer would be simpler, as we have massless (color-)charged gluons in our models. So it's definitely nothing strange to consider massless charged particles. It's up to you whether you consider electric charge more important than color charge. Another take on this issue is that Standard Model particles (and in particular charged ones) were massless before electroweak symmetry breaking (at least disregarding other mechanisms of mass generation). So at some point in the past, this was actually quite common. | {
"source": [
"https://physics.stackexchange.com/questions/7905",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/824/"
]
} |
8,049 | Suppose Stanford Research Systems starts selling a two-level atom factory. Your grad student pushes a button, and bang, he gets a two-level atom. Half the time the atom is produced in the ground state, and half the time the atom is produced in the excited state, but other than that you get the exact same atom every time. National Instruments sells a cheap knockoff two-level atom factory that looks the same, but doesn't have the same output. In the NI machine, if your grad student pushes a button, he gets the same two-level atom the SRS machine makes, but the atom is always in a 50/50 superposition of ground and excited states with a random relative phase between the two states. The "random relative phase between the two states" of the NI knockoff varies from atom to atom, and is unknown to the device's user. Are these two machines distinguishable? What experiment would you do to distinguish their outputs? | These systems are not distinguishable. The average density matrix is the same, and the probability distribution obtained by performing any measurement depends only on the average density matrix. For the first system, the density matrix is
$$\frac{1}{2} \left[\left(\begin{array}{cc}1&0\cr 0&0\end{array}\right)+ \left(\begin{array}{cc}0&0\cr 0&1\end{array}\right)\right].$$ For the second system, the density matrix is
$$\frac{1}{2\pi} \int_\theta \frac{1}{2}\left(\begin{array}{cc}1&e^{-i\theta}\cr e^{i \theta}&1\end{array}\right) d \theta.$$ It is easily checked that these are the same. | {
"source": [
"https://physics.stackexchange.com/questions/8049",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2359/"
]
} |
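The "easily checked" step above can also be done numerically: averaging the pure-state projectors over many random relative phases should wash out the off-diagonal terms and reproduce the 50/50 mixture. Here is a minimal NumPy sketch (the sample size and seed are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
samples = 100_000

rho_srs = np.diag([0.5, 0.5])                     # 50/50 mixture of ground and excited
rho_ni = np.zeros((2, 2), dtype=complex)
for theta in rng.uniform(0, 2 * np.pi, samples):
    psi = np.array([1.0, np.exp(1j * theta)]) / np.sqrt(2)
    rho_ni += np.outer(psi, psi.conj())           # |psi><psi| for this random phase
rho_ni /= samples

print(np.round(rho_srs, 3))
print(np.round(rho_ni, 3))   # off-diagonals average to ~0, up to ~1/sqrt(samples) noise

Both tend to one half times the identity matrix, so no measurement statistics can tell the two machines apart.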
8,062 | 1. Why is the wave function complex? I've collected some layman explanations but they are incomplete and unsatisfactory. However in the book by Merzbacher in the initial few pages he provides an explanation that I need some help with: that the de Broglie wavelength and the wavelength of an elastic wave do not show similar properties under a Galilean transformation. He basically says that both are equivalent under a gauge transform and also, separately by Lorentz transforms. This, accompanied with the observation that $\psi$ is not observable, so there is no "reason for it being real". Can someone give me an intuitive prelude by what is a gauge transform and why does it give the same result as a Lorentz tranformation in a non-relativistic setting? And eventually how in this "grand scheme" the complex nature of the wave function becomes evident.. in a way that a dummy like me can understand. 2. A wavefunction can be thought of as a scalar field (has a scalar value in every point ($r,t$) given by $\psi:\mathbb{R^3}\times \mathbb{R}\rightarrow \mathbb{C}$ and also as a ray in Hilbert space (a vector). How are these two perspectives the same (this is possibly something elementary that I am missing out, or getting confused by definitions and terminology, if that is the case I am desperate for help ;) 3. One way I have thought about the above question is that the wave function can be equivalently written in $\psi:\mathbb{R^3}\times \mathbb{R}\rightarrow \mathbb{R}^2 $ i.e, Since a wave function is complex, the Schroedinger equation could in principle be written equivalently as coupled differential equations in two real functions which staisfy the Cauchy-Riemann conditions. ie, if $$\psi(x,t) = u(x,t) + i v(x,t)$$ and $u_x=v_t$ ; $u_t = -v_x$ and we get $$\hbar \partial_t u = -\frac{\hbar^2}{2m} \partial_x^2v + V v$$ $$\hbar \partial_t v = \frac{\hbar^2}{2m} \partial_x^2u - V u$$
(..in 1-D) If this is correct what are the interpretations of the $u,v$.. and why isn't it useful. (I am assuming that physical problems always have an analytic $\psi(r,t)$). | More physically than a lot of the other answers here (a lot of which amount to "the formalism of quantum mechanics has complex numbers, so quantum mechanics should have complex numbers), you can account for the complex nature of the wave function by writing it as $\Psi (x) = |\Psi (x)|e^{i \phi (x)}$, where $i\phi$ is a complex phase factor. It turns out that this phase factor is not directly measurable, but has many measurable consequences, such as the double slit experiment and the Aharonov-Bohm effect . Why are complex numbers essential for explaining these things? Because you need a representation that both doesn't induce nonphysical time and space dependencies in the magnitude of $|\Psi (x)|^{2}$ (like multiplying by real phases would), AND that DOES allow for interference effects like those cited above. The most natural way of doing this is to multiply the wave amplitude by a complex phase. | {
"source": [
"https://physics.stackexchange.com/questions/8062",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2599/"
]
} |
8,074 | I made a naive calculation of the height of Earth's equatorial bulge and found that it should be about 10km. The true height is about 20km. My question is: why is there this discrepancy? The calculation I did was to imagine placing a ball down on the spinning Earth. Wherever I place it, it shouldn't move. The gravitational potential per unit mass of the ball is $hg$, with $h$ the height above the pole-to-center distance of the Earth (call that $R$) and $g$ gravitational acceleration. Gravity wants to pull the ball towards the poles, away from the bulge. It is balanced by the centrifugal force, which has a potential $-\omega^2R^2\sin^2\theta/2$ per unit mass, with $\omega$ Earth's angular velocity and $\theta$ the angle from the north pole. This comes taking what in an inertial frame would be the ball's kinetic energy and making it a potential in the accelerating frame. If the ball doesn't move, this potential must be constant, so $$U = hg - \frac{(\omega R \sin\theta)^2}{2} = \textrm{const}$$ we might as well let the constant be zero and write $$h = \frac{(\omega R \sin\theta)^2}{2g}$$ For the Earth, $$R = 6.4*10^6m$$ $$\omega = \frac{2\pi}{24\ \textrm{hours}}$$ $$g = 9.8\ m/s^2$$ This gives 10.8 km when $\theta = \pi/2$, so the equatorial bulge should be roughly that large. According to Wikipedia , the Earth is 42.72 km wider in diameter at the equator than pole-to-pole, meaning the bulge is about twice as large as I expected. (Wikipedia cites diameter; I estimated radius.) Where is the extra bulge coming from? My simple calculation uses $g$ and $R$ as constants, but neither varies more than a percent or so. It's true the Earth does not have uniform density, but it's not clear to me how this should affect the calculation, so long as the density distribution is still spherically-symmetric (or nearly so). (Wikipedia also includes an expression , without derivation, that agrees with mine.) | The error is that you assume that the density distribution is "nearly spherically symmetric". It's far enough from spherical symmetry if you want to calculate first-order subleading effects such as the equatorial bulge. If your goal is to compute the deviations of the sea level away from the spherical symmetry (to the first order), it is inconsistent to neglect equally large, first-order corrections to the spherical symmetry on the other side - the source of gravity. In other words, the term $hg$ in your potential is wrong. Just imagine that the Earth is an ellipsoid with an equatorial bulge, it's not spinning, and there's no water on the surface. What would be the potential on the surface or the potential at a fixed distance from the center of the ellipsoid? You have de facto assumed that in this case, it would be $-GM/R+h(\theta)g$ where $R$ is the fixed Earth's radius (of a spherical matter distribution) and $R+h(\theta)$ is the actual distance of the probe from the origin (center of Earth). However, by this Ansatz, you have only acknowledged the variable distance of the probe from a spherically symmetric source of gravity: you have still neglected the bulge's contribution to the non-sphericity of the gravitational field. If you include the non-spherically-symmetric correction to the gravitational field of the Earth, $hg$ will approximately change to $hg-hg/2=hg/2$, and correspondingly, the required bulge $\Delta h$ will have to be doubled to compensate for the rotational potential. 
A heuristic explanation of the factor of $1/2$ is that the true potential above an ellipsoid depends on "something in between" the distance from the center of mass and the distance from the surface. In other words, a "constant potential surface" around an ellipsoidal source of matter is "exactly in between" the actual surface of the ellipsoid and the spherical $R={\rm const}$ surface. I will try to add more accurate formulae for the gravitational field of the ellipsoid in an updated version of this answer.
Update: gravitational field of an ellipsoid
I have numerically verified that the gravitational field of the ellipsoid has exactly the halving effect I sketched above, using a Monte Carlo Mathematica code - to avoid double integrals which might be calculable analytically but I just found it annoying so far. I took millions of random points inside a prolate ellipsoid with "radii" $(r_x,r_y,r_z)=(0.9,0.9,1.0)$; note that the difference between the two radii is $0.1$. The average value of $1/r$, the inverse distance between the random point of the ellipsoid and a chosen point above the ellipsoid, is $0.05=0.1/2$ smaller if the chosen point is above the equator than if it is above a pole, assuming that the distance from the origin is the same for both chosen points.
Code:
{xt, yt, zt} = {1.1, 0, 0};
runs = 200000;
totalRinverse = 0;
total = 0;
For[i = 1, i < runs, i++,
x = RandomReal[]*2 - 1;
y = RandomReal[]*2 - 1;
z = RandomReal[]*2 - 1;
inside = x^2/0.81 + y^2/0.81 + z^2 < 1;
total = If[inside, total + 1, total];
totalRinverse =
totalRinverse +
If[inside, 1/Sqrt[(x - xt)^2 + (y - yt)^2 + (z - zt)^2], 0];
]
res1 = N[total/runs / (4 Pi/3/8)]
res2 = N[totalRinverse/runs / (4 Pi/3/8)]
res2/res1
Description
Use the Mathematica code above: its goal is to calculate a single purely numerical constant, because the proportionality of the non-sphericity of the gravitational field to the bulge, the mass, and Newton's constant is self-evident. The final number that is printed by the code is the average value of $1/r$. If {1.1, 0, 0} is chosen instead of {0, 0, 1.1} at the beginning, the program generates 0.89 instead of 0.94. That proves that the gravitational potential of the ellipsoid behaves as $-GM/R - hg/2$ at distance $R$ from the origin, where $h$ is the local height of the surface relative to the idealized spherical surface. In the code above, I chose the ellipsoid with radii (0.9, 0.9, 1), which is a prolate spheroid (long, stick-like), unlike the Earth which is close to an oblate spheroid (flat, disk-like). So don't be confused by some signs - they work out OK.
Bonus from Isaac
Mariano C. has pointed out the following solution by a rather well-known author: http://books.google.com/books?id=ySYULc7VEwsC&lpg=PP1&dq=principia%20mathematica&pg=PA424#v=onepage&q&f=false | {
"source": [
"https://physics.stackexchange.com/questions/8074",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/74/"
]
} |
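For readers without Mathematica, here is a rough NumPy port of the Monte Carlo above (a sketch only; the sample count and random seed are arbitrary). It draws uniform points in the prolate ellipsoid with radii (0.9, 0.9, 1.0) and compares the mean of $1/r$ seen from (1.1, 0, 0) and from (0, 0, 1.1); the difference should come out near $0.05$, matching the 0.89 vs 0.94 quoted above.

import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(2_000_000, 3))
inside = pts[:, 0]**2 / 0.81 + pts[:, 1]**2 / 0.81 + pts[:, 2]**2 < 1
pts = pts[inside]                                  # uniform points inside the ellipsoid

def mean_inverse_distance(observer):
    d = np.linalg.norm(pts - np.asarray(observer), axis=1)
    return np.mean(1.0 / d)

above_equator = mean_inverse_distance([1.1, 0.0, 0.0])
above_pole = mean_inverse_distance([0.0, 0.0, 1.1])
print(above_equator, above_pole, above_pole - above_equator)   # roughly 0.89, 0.94, 0.05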
8,227 | Why is it that when you look in the mirror left and right directions appear flipped, but not the up and down? | Here's a video of physicist Richard Feynman discussing this question. Imagine a blue dot and a red dot. They are in front of you, and the blue dot is on the right. Behind them is a mirror, and you can see their image in the mirror. The image of the blue dot is still on the right in the mirror. What's different is that in the mirror, there's also a reflection of you. From that reflection's point of view, the blue dot is on the left. What the mirror really does is flip the order of things in the direction perpendicular to its surface. Going on a line from behind you to in front of you, the order in real space is Your back Your front Dots Mirror The order in the image space is Mirror Dots Your front Your back Although left and right are not reversed, the blue dot, which in reality is lined up with your right eye, is lined up with your left eye in the image. The key is that you are roughly left/right symmetric. The eye the blue dot is lined up with is still your right eye, even in the image. Imagine instead that Two-Face was looking in the mirror. (This is a fictional character whose left and right side of his face look different. His image on Wikipedia looks like this:) If two-face looked in the mirror, he would instantly see that it was not himself looking back! If he had an identical twin and looked right at the identical twin, the "normal" sides of their face would be opposite each other. Two-face's good side is the right. When he looked at his twin, the twin's good side would be to the original two-face's left. Instead, the mirror Two-face's good side is also to the right. Here is an illustration: So two-face would not be confused by the dots. If the blue dot is lined up with Two-Face's good side, it is still lined up with his good side in the mirror. Here it is with the dots: Two-face would recognize that left and right haven't been flipped so much as forward and backward, creating a different version of himself that cannot be rotated around to fit on top the original. | {
"source": [
"https://physics.stackexchange.com/questions/8227",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2978/"
]
} |