8,373
Forgive me if this topic is too much in the realm of philosophy. John Baez has an interesting perspective on the relative importance of dimensionless constants, which he calls fundamental, like $\alpha$, versus dimensioned constants like $G$ or $c$ [ http://math.ucr.edu/home/baez/constants.html ]. What is the relative importance or significance of one class versus the other, and is this an area where physicists have real concerns or expend significant research effort?
First of all, the question you are asking is very important and you may master it completely. Dimensionful constants are those that have units - like $c, \hbar, G$, or even $k_{\rm Boltzmann}$ or $\epsilon_0$ in SI. The units - such as meter; kilogram; second; Ampere; kelvin - have been chosen partially arbitrarily. They're results of random cultural accidents in the history of mankind. A second was originally chosen as 1/86,400 of a solar day, one meter as 1/40,000,000 of the average meridian, one kilogram as the mass of 1/1,000 cubic meters (liter) of water or later the mass of a randomly chosen prototype, one Ampere so that $4\pi \epsilon_0 c^2$ is a simple power of 10 in SI units, one kelvin as 1/100 of the difference between the melting and boiling points of water.

Clearly, the circumference of the Earth, the solar day, a platinum prototype brick in a French castle, or phase transitions of water are not among the most "fundamental" features of the Universe. There are lots of other ways the units could be chosen. Someone could choose 1.75 meters - an average man's height - to be his unit of length (some weird people in history have even used their feet to measure distances) and he could still call it "one meter". It would be his meter. In those units, the numerical value of the speed of light would be different.

The products or ratios of powers of fundamental constants that are dimensionless are exactly those that don't have any units, by definition, which means that they are independent of all the random cultural choices of the units. So all civilizations in the Universe - despite the absence of any interactions between them in the past - will agree about the numerical value of the proton-electron mass ratio - which is about $6\pi^5=1836.15$ (the formula is just a teaser I noticed when I was 10!) - and about the fine-structure constant, $\alpha\sim 1/137.036$, and so on.

In the Standard Model of particle physics, there are about 19 such dimensionless parameters that "really" determine the character of physics; all other constants such as $\hbar,c,G,k_{\rm Boltzmann}, \epsilon_0$ depend on the choice of units, and the number of independent units (meter, kilogram, second, Ampere, kelvin) is actually exactly large enough that all those constants, $\hbar,c,G,k_{\rm Boltzmann},\epsilon_0$, may be set equal to one, which simplifies all fundamental equations in physics where these fundamental constants appear frequently. By changing the value of $c$, one only changes social conventions (what the units mean), not the laws of physics.

The units where all these constants are numerically equal to 1 are called the Planck units or natural units, and Max Planck understood already 100 years ago that this was the most natural choice. $c=1$ is set in any "mature" analysis that involves special relativity; $\hbar=1$ is used everywhere in "adult" quantum mechanics; $G=1$ or $8\pi G=1$ is sometimes used in the research of gravity; $k_{\rm Boltzmann}=1$ is used whenever thermal phenomena are studied microscopically, at a professional level; $4\pi\epsilon_0$ is just an annoying factor that may be set to one (and in Gaussian 19th century units, such things are actually set to one, with a different treatment of the $4\pi$ factor); instead of one mole in chemistry, physicists (researchers in a more fundamental discipline) simply count the molecules or atoms and know that a mole is just a package of $6.022\times 10^{23}$ atoms or molecules.
The 19 (or 20?) actual dimensionless parameters of the Standard Model may be classified as the three fine-structure constants $g_1,g_2,g_3$ of the $U(1)\times SU(2)\times SU(3)$ gauge group; the Higgs vacuum expectation value divided by the Planck mass (the only thing that brings in a mass scale, and this mass scale only distinguishes different theories once we also take gravity into account); and the Yukawa couplings with the Higgs that determine the quark and lepton masses and their mixing. One should also consider the strong CP-angle of QCD and a few others.

Once you choose a modified Standard Model that appreciates that the neutrinos are massive and oscillate, 19 is lifted to about 30. New physics of course inflates the number. SUSY described by soft SUSY breaking has about 105 parameters in the minimal model.

The original 19 parameters of the Standard Model may be expressed in terms of more "fundamental" parameters. For example, $\alpha$ of electromagnetism is not terribly fundamental in high-energy physics because electromagnetism and weak interactions get unified at higher energies, so it's more natural to calculate $\alpha$ from $g_1,g_2$ of the $U(1)\times SU(2)$ gauge group. Also, these couplings $g_1,g_2$ and $g_3$ run - they depend on the energy scale approximately logarithmically. The values such as $1/137$ for the fine-structure constant are the low-energy values, but the high-energy values are actually more fundamental because the fundamental laws of physics are those that describe very short-distance physics, while long-distance (low-energy) physics is derived from that.

I mentioned that the number of dimensionless parameters increases if you add new physics such as SUSY with soft breaking. However, more complete, unifying theories - such as grand unified theories and especially string theory - also imply various relations between the previously independent constants, so they reduce the number of independent dimensionless parameters of the Universe. Grand unified theories basically set $g_1=g_2=g_3$ (with the right factor of $\sqrt{3/5}$ added to $g_1$) at their characteristic "GUT" energy scale; they may also relate certain Yukawa couplings.

String theory is perfectionist in this job. In principle, all dimensionless continuous constants may be calculated from any stabilized string vacuum - so all continuous uncertainty may be removed by string theory; one may actually prove that this is the case. There is nothing to continuously adjust in string theory. However, string theory comes with a large discrete class of stabilized vacua - which is at most countable and possibly finite but large. Still, if there are $10^{500}$ stabilized semi-realistic stringy vacua, there are only 500 digits to adjust (and then you may predict everything with any accuracy, in principle) - while the Standard Model with its 19 continuous parameters has 19 times infinity of digits to adjust according to experiments.
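To make the natural-units discussion above concrete, here is a minimal Python sketch (not from the original answer; the constants are rounded CODATA values) that computes the Planck units from the SI values of $c$, $\hbar$, $G$ and $k_{\rm Boltzmann}$.

```python
import math

# Approximate SI values of the dimensionful constants (rounded CODATA figures)
c = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's constant, m^3/(kg*s^2)
kB = 1.380649e-23       # Boltzmann constant, J/K

# The unique combinations with dimensions of length, time, mass and temperature
planck_length = math.sqrt(hbar * G / c**3)             # ~1.6e-35 m
planck_time = math.sqrt(hbar * G / c**5)               # ~5.4e-44 s
planck_mass = math.sqrt(hbar * c / G)                  # ~2.2e-8 kg
planck_temperature = math.sqrt(hbar * c**5 / G) / kB   # ~1.4e32 K

print(f"Planck length:      {planck_length:.3e} m")
print(f"Planck time:        {planck_time:.3e} s")
print(f"Planck mass:        {planck_mass:.3e} kg")
print(f"Planck temperature: {planck_temperature:.3e} K")
```

Measured in these units, $c=\hbar=G=k_{\rm Boltzmann}=1$ by construction, which is exactly the simplification of the fundamental equations described in the answer.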
{ "source": [ "https://physics.stackexchange.com/questions/8373", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1924/" ] }
8,441
There's a fairly standard two- or three-semester curriculum for introductory quantum field theory, which covers topics such as:

- classical field theory background
- canonical quantization, path integrals
- the Dirac field
- quantum electrodynamics
- computing $S$-matrix elements in perturbation theory, decay rates, cross sections
- renormalization at one loop
- Yang-Mills theory
- spontaneous symmetry breaking
- the Standard Model

What is a good, complete and comprehensive book that covers topics such as these?
Lecture notes.

- David Tong's lecture notes. These are very basic and intuitive, and may be a good starting point for someone who has never acquainted themselves with QFT. My suggestion is to skim over these notes, and not to get hung up on the details. Once the general picture is more or less clear, the reader should move on to more advanced/precise texts.

- Timo Weigand's lecture notes. I find these notes to be more precise than those of Tong, so I like them more. These notes, like those of Tong, use as main source the book by Peskin and Schroeder, which I never quite liked. But Weigand, unlike Tong, has as secondary reference the book by Itzykson and Zuber, which I love. P&S aim at intuitiveness, while I&Z aim at precision; therefore, Tong may be easier/more accessible but Weigand is more correct/technical.

- Sidney Coleman's lecture notes. The name of the author should be enough to make it clear that these notes are a must-read. The approach is somewhat idiosyncratic, and the text is very conversational and contains several interesting historical notes about the development of the theory, in which the author had a considerable role. Great notes to read at least once, but don't expect to learn everything there is to know there; they are meant as an introduction. Important but advanced topics are not discussed.

- Jorge Crispim Romão's lecture notes. These notes are great if you are looking for a lecture-style (as opposed to textbook-style) discussion of some advanced topics. I really like these notes, because the exposition is modern, and they discuss many different topics without going into unnecessary details or becoming overly technical. The appendices are particularly useful IMHO.

- Timothy J. Hollowood's lecture notes. The "Renormalization Group" part is remarkably good IMHO.

Textbooks.

- Matthew D. Schwartz, Quantum Field Theory and the Standard Model. This might be one of the best introductory textbooks out there. It is not overly technical and it covers a wide range of different topics, always from a very intuitive and modern point of view. The concepts are well motivated when introduced, and their role is usually more or less clear. This book taught me some very useful techniques that I have been using ever since.

- Mark Srednicki, Quantum Field Theory. I really like the organisation and design of the book, which consists of around a hundred short and essentially self-contained chapters that introduce a single topic, discuss it in the necessary level of detail, and move on to the next topic. The discourse is linear (which is not always easy to achieve), in the sense that it flows naturally from topic to topic, from easy to difficult. The only drawback of this book is that IMHO some derivations are oversimplified, and the author fails to explicitly state the omission of some technical complications. Great book nevertheless. Beware: this book is not really an introduction; it should definitely not be the first book you read. I consider it more of a reference textbook where I can check single chapters when I need to refresh some concept.

- Itzykson C., Zuber J.B., Quantum Field Theory. One of my personal favourites. The book is very precise (on the level of rigour of physics), and it contains dozens of detailed and complicated derivations that most books tend to omit. I'm not sure this book is very good as an introduction; the first few chapters are accessible but the book quickly gains momentum. Beginners may find the book slightly too demanding on a first read due to the level of detail and generality it contains. Unfortunately, it is starting to have an old feel. Not outdated, but at some points the approach is slightly obsolete by today's standards.

- Weinberg S., The Quantum Theory of Fields. As with Coleman, and even more so, the mere name of the author should be a good enough reason to read this series of books. Weinberg, one of the founding fathers of quantum field theory, presents in these books his very own way of understanding the framework. His approach is very idiosyncratic but, IMHO, much more logical than that of the rest of the books. Weinberg's approach is very general and rigorous (on the level of physicists), and it left me with a very satisfactory opinion of quantum field theory: despite the obvious problems with this framework, Weinberg's presentation highlights the intrinsic beauty of the theory and the inevitability of most of its ingredients. Make sure to read it at least once.

- Zinn-Justin J., Quantum Field Theory and Critical Phenomena. This is a very long and thorough book, which contains material that cannot easily be found elsewhere. I haven't read all of it, but I loved some of its chapters. His definition and characterisation of functional integrals, and his analysis of renormalisation and divergences, are flawless. The philosophy of the book is great, and the level of detail and rigour is always adequate. Very good book altogether.

- DeWitt B.S., The Global Approach to Quantum Field Theory. The perfect book is yet to be written, but if something comes close it's DeWitt's book. It is the best book I've read so far. If you want precision and generality, you can't do better than this. The book is daunting and mathematically demanding (and the notation is... ehem... terrible?), but it is certainly worth the effort. I've mentioned this book many times already, and I'll continue to do so. In a perfect world, this would be the standard QFT textbook.

- Ticciati R., Quantum Field Theory for Mathematicians. In spite of its title, I'm not sure mathematicians will find this book particularly clear or useful. On the other hand, I - as a physicist - found some chapters of this book very useful myself. The book is rather precise in its statements, and the author is upfront about technical difficulties and the ill-definedness (is this a word?) of the relevant objects. I very much recommend giving it a read.

- Scharf G., Finite Quantum Electrodynamics. This book will teach you that there is another way to do QFT. One that is in-between physicists' QFT and mathematicians' QFT. It is rigorous and precise, but it addresses the problems physicists care about (i.e., Feynman diagrams). In essence, the book presents the so-called causal approach to QFT, which is the only way to make computations rigorous. Spoiler: there are no divergences anywhere. This is achieved by treating distributions with respect, instead of pretending that they are regular functions. The precise definition of superficial degree of divergence and momentum-space subtraction is particularly beautiful. The book left me delighted: QFT is not that bad after all.

- Zeidler E., Quantum Field Theory, Vol. 1, 2 and 3. Initially intended to be a six-volume set, although I believe the author only got to publish the first three pieces, each of which is more than a thousand pages long! Needless to say, with that many pages the book is (painfully) slow. It will gradually walk you through each and every aspect of QFT, but it takes the author twenty pages to explain what others would explain in two paragraphs. This is a double-edged sword: if your intention is to read the whole series, you will probably find it annoyingly verbose; if, on the other hand, your intention is to review a particular topic that you wish to learn for good, you will probably find the extreme level of detail helpful. To each their own I guess, but I cannot say I love this book; I prefer more concise treatments.

Other.

- Henneaux M., Teitelboim C., Quantization of Gauge Systems. Not a QFT book per se, but it contains a lot of material that is essential if one wants to formulate and understand QFT properly. The presentation is very general and detailed, and the statements are very precise and rigorous. A wonderful book without a doubt.

- N.N. Bogolubov, A.A. Logunov, A.I. Oksak, I.T. Todorov, General Principles of Quantum Field Theory. A standard reference for mathematically precise treatments. It omits many topics that are important to physicists, but the ones it does analyse, it treats in a perfectly rigorous and thorough manner. I believe mathematicians will like this book much more than physicists. For one thing, it will not teach you how (most) physicists think about QFT. A lovely book nevertheless; make sure to check out the index so that you will remember what is there in case you need it some time in the future.

- Folland G.B., Quantum Field Theory. Similar to the above, but much more approachable. The subtitle "A Tourist Guide for Mathematicians" is very descriptive. It will walk you through several important topics, but it won't in general get your hands dirty with the details.

- Salmhofer M., Renormalization. An Introduction. If you care about the formalisation of Feynman diagrams and perturbation theory, I cannot recommend this book enough (or, at least, its first few chapters; I cannot really speak for the last one). It is a lovely short book.

- Raymond F. Streater, Arthur S. Wightman, PCT, Spin and Statistics, and All That. A classic text. It is short and clean, and it contains many interesting remarks.

- Smirnov V., Analytic Tools for Feynman Integrals. A very complete collection of useful techniques that are essential to perturbative calculations, from analytic to numerical methods.
{ "source": [ "https://physics.stackexchange.com/questions/8441", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/592/" ] }
8,452
The equation describing the force due to gravity is $$F = G \frac{m_1 m_2}{r^2}.$$ Similarly the force due to the electrostatic force is $$F = k \frac{q_1 q_2}{r^2}.$$ Is there a similar equation that describes the force due to the strong nuclear force? What are the equivalent of masses/charges if there is? Is it still inverse square or something more complicated?
From the study of the spectrum of quarkonium (a bound system of a quark and an antiquark) and the comparison with positronium, one finds as the potential for the strong force $$V(r) = - \dfrac{4}{3} \dfrac{\alpha_s(r) \hbar c}{r} + kr$$ where the constant $k$ determines the field energy per unit length and is called the string tension. For short distances this resembles the Coulomb law, while for large distances the $kr$ term dominates (confinement). It is important to note that the coupling $\alpha_s$ also depends on the distance between the quarks. This formula is valid and in agreement with theoretical predictions only for the quarkonium system and its typical energies and distances. For example, for charmonium: $r \approx 0.4 \ {\rm fm}$. So it is not as universal as, e.g., the law of gravity in Newtonian mechanics.
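As a rough illustration of the two regimes described above, here is a small Python sketch (not from the original answer); the fixed value $\alpha_s \approx 0.3$ and string tension $k \approx 1\ \mathrm{GeV/fm}$ are assumed ballpark figures for charmonium, and the running of $\alpha_s$ with distance is ignored.

```python
hbar_c = 0.1973   # GeV*fm, the usual conversion constant
alpha_s = 0.3     # assumed fixed strong coupling (its running is neglected here)
k = 1.0           # assumed string tension in GeV/fm

def cornell_potential(r_fm):
    """Quark-antiquark potential V(r) in GeV, with r in fm."""
    return -(4.0 / 3.0) * alpha_s * hbar_c / r_fm + k * r_fm

for r in (0.1, 0.2, 0.4, 0.8, 1.5):
    print(f"r = {r:4.1f} fm  ->  V = {cornell_potential(r):+.3f} GeV")
```

At small $r$ the Coulomb-like term dominates, while beyond a few tenths of a femtometre the linear confining term takes over.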
{ "source": [ "https://physics.stackexchange.com/questions/8452", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2908/" ] }
8,453
D'Alembert's principle suggests that the work done by the internal forces for a virtual displacement of a mechanical system in harmony with the constraints is zero. This is obviously true for the constraint of a rigid body where all the particles maintain a constant distance from one another. It's also true for constraining force where the virtual displacement is normal to it. Can anyone think of a case where the virtual displacements are in harmony with the constraints of a mechanical system, yet the total work done by the internal forces is non-zero, making D'Alembert's principle false?
Given a system of $N$ point-particles with positions ${\bf r}_1, \ldots , {\bf r}_N$; with corresponding virtual displacements $\delta{\bf r}_1, \ldots , \delta{\bf r}_N$; with momenta ${\bf p}_1, \ldots , {\bf p}_N$; and with applied forces ${\bf F}_1^{(a)}, \ldots , {\bf F}_N^{(a)}$. Then D'Alembert's principle states that $$\tag{1} \sum_{j=1}^N ( {\bf F}_j^{(a)} - \dot{\bf p}_j ) \cdot \delta {\bf r}_j~=~0. $$ The total force $${\bf F}_j ~=~ {\bf F}_j^{(a)} +{\bf F}^{(ec)}_j+{\bf F}^{(ic)}_j + {\bf F}^{(i)}_j + {\bf F}_j^{(o)}$$ on the $j$'th particle can be divided into five types:

1. applied forces ${\bf F}_j^{(a)}$ (that we keep track of and that are not constraint forces),
2. an external constraint force ${\bf F}^{(ec)}_j$ from the environment,
3. an internal constraint force ${\bf F}^{(ic)}_j$ from the $N-1$ other particles,
4. an internal force ${\bf F}^{(i)}_j$ (that is not an applied or a constraint force of type 1 or 3, respectively) from the $N-1$ other particles,
5. other forces ${\bf F}_j^{(o)}$ not already included in types 1, 2, 3 and 4.

Because of Newton's 2nd law ${\bf F}_j= \dot{\bf p}_j$, D'Alembert's principle (1) is equivalent to$^1$ $$\tag{2} \sum_{j=1}^N ( {\bf F}^{(ec)}_j+{\bf F}^{(ic)}_j+{\bf F}^{(i)}_j+{\bf F}_j^{(o)}) \cdot \delta {\bf r}_j~=~0. $$ So OP's question can essentially be rephrased as: are there examples in classical mechanics where eq. (2) fails?

Eq. (2) could trivially fail if we have forces ${\bf F}_j^{(o)}$ of type 5, e.g. sliding friction, that we (for some reason) don't count as applied forces of type 1. However, OP asks specifically about internal forces.

For a rigid body, to exclude pairwise contributions of type 3, one needs the strong Newton's 3rd law, cf. this Phys.SE answer. So if these forces fail to be collinear, this could lead to a violation of eq. (2).

For internal forces of type 4, there is in general no reason that they should respect eq. (2). Example: Consider a system of two point-masses connected by an ideal spring. This system has no constraints, so there are no restrictions on the class of virtual displacements. It is easy to violate eq. (2) if we count the spring force as a type 4 force.

Reference: H. Goldstein, Classical Mechanics, Chapter 1.

--

$^1$ It is tempting to call eq. (2) the principle of virtual work, but strictly speaking, the principle of virtual work is just D'Alembert's principle (1) for a static system.
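A quick numerical sketch of the spring example from point 4 (not from the original answer; the spring constant, geometry and displacements are arbitrary illustrative values): for two unconstrained point masses joined by an ideal spring, a generic choice of virtual displacements makes the left-hand side of eq. (2) non-zero.

```python
import numpy as np

# Two point masses joined by an ideal spring; there are no constraints,
# so *any* pair of virtual displacements is admissible.
k = 1.0                            # spring constant (assumed)
L0 = 1.0                           # natural length of the spring (assumed)
r1 = np.array([0.0, 0.0, 0.0])
r2 = np.array([1.5, 0.0, 0.0])     # spring stretched beyond its natural length

d = r2 - r1
u = d / np.linalg.norm(d)
stretch = np.linalg.norm(d) - L0

F1 = k * stretch * u               # spring pulls mass 1 toward mass 2
F2 = -F1                           # Newton's third law

# A generic pair of virtual displacements (nothing restricts them):
dr1 = np.array([0.1, 0.0, 0.0])
dr2 = np.array([0.0, 0.0, 0.0])

virtual_work = F1 @ dr1 + F2 @ dr2
print(f"sum_j F_j . delta r_j = {virtual_work:+.3f}")
# A non-zero result: eq. (2) fails if the spring force is counted as a type 4 force.
```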
{ "source": [ "https://physics.stackexchange.com/questions/8453", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2146/" ] }
8,477
Photons do not have (rest) mass (that's why they can move at the speed of "light"). So my question is: how can the gravity of a classical $^1$ black hole stop light from escaping? -- $^1$ We ignore quantum mechanical effects, such as Hawking radiation.
Black holes affect the causal structure of spacetime in such a manner that all future light cones inside a black hole lie within its event horizon. Although photons are massless, they have energy and have to obey the geometry of the curved spacetime. Since every future direction lies within the event horizon, photons are trapped inside the black hole.
{ "source": [ "https://physics.stackexchange.com/questions/8477", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3057/" ] }
8,502
Most images you see of the solar system are 2D, and all planets orbit in the same plane. In a 3D view, do all the planets really orbit in similar planes? Is there a reason for this? I'd expect the orbits to be distributed all around the Sun, in 3D. Has a man-made object (a probe) ever left the Solar System?
Nic and Approximist's answers hit the main points, but it's worth adding an additional word on the reason the orbits lie roughly in the same plane: Conservation of angular momentum. The Solar System began as a large cloud of stuff, many times larger than its current size. It had some very slight initial angular momentum -- that is, it was, on average, rotating about a certain axis. (Why? Maybe just randomly! All of the constituents were flying around, and if you add up those random motions, there'll generically be some nonzero angular momentum.) Because angular momentum is conserved, as the cloud collapsed the rotation rate sped up (the usual example being the figure skater who pulls in her arms as she spins, and speeds up accordingly). Further collapse in the direction perpendicular to the plane of rotation doesn't change the angular momentum, but collapse in the other directions would change it. So the collapse turns the initial cloud, whatever its shape, into a pancake. The planets formed out of that pancake. By the way, you can see the signs of that initial angular momentum in other things too: not only are all of the planets orbiting in roughly the same plane, but so are most of their moons, and most of the planets' rotations about their axes as well.
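The spin-up that accompanies the collapse can be quantified with a two-line estimate; here is a small Python sketch (not from the original answer; the 100-fold collapse factor is an arbitrary illustrative number), treating the cloud crudely as a rigid body with $L = I\omega$ and $I \propto MR^2$.

```python
# Conservation of angular momentum for a collapsing, crudely rigid cloud
M = 1.0               # total mass (arbitrary units; it cancels out)
R_initial = 1.0       # initial radius (arbitrary units)
omega_initial = 1e-6  # tiny initial rotation rate (arbitrary illustrative value)

collapse_factor = 100.0
R_final = R_initial / collapse_factor

L = M * R_initial**2 * omega_initial      # angular momentum, conserved
omega_final = L / (M * R_final**2)

print(f"omega_final / omega_initial = {omega_final / omega_initial:.0f}")
# -> 10000: shrinking the radius 100-fold speeds the rotation up by 100^2.
```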
{ "source": [ "https://physics.stackexchange.com/questions/8502", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3064/" ] }
8,518
Noether's theorem states that, for every continuous symmetry of an action, there exists a conserved quantity, e.g. energy conservation for time invariance, charge conservation for $U(1)$. Is there any similar statement for discrete symmetries?
For continuous global symmetries, Noether's theorem gives you a locally conserved charge density (and an associated current), whose integral over all of space is conserved (i.e. time independent). For global discrete symmetries, you have to distinguish between the cases where the conserved charge is continuous or discrete. For infinite symmetries like lattice translations, the conserved quantity is continuous, albeit a periodic one. So in such a case momentum is conserved modulo vectors in the reciprocal lattice. The conservation is local just as in the case of continuous symmetries. In the case of a finite group of symmetries, the conserved quantity is itself discrete. You then don't have local conservation laws because the conserved quantity cannot vary continuously in space. Nevertheless, for such symmetries you still have a conserved charge which gives constraints (selection rules) on allowed processes. For example, for parity-invariant theories you can give each state of a particle a "parity charge" which is simply a sign, and the total charge has to be conserved in any process, otherwise the amplitude for it is zero.
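To make the lattice-translation case concrete, here is a tiny one-dimensional sketch (not from the original answer; the lattice spacing and the two momenta are arbitrary illustrative values): crystal momentum is conserved only modulo reciprocal-lattice vectors $2\pi/a$, so the total momentum of a process may change by exactly such a vector.

```python
import numpy as np

a = 1.0                   # lattice spacing (arbitrary units)
G = 2 * np.pi / a         # primitive reciprocal-lattice vector

def fold_to_first_bz(k):
    """Map a 1-D crystal momentum into the first Brillouin zone [-G/2, G/2)."""
    return (k + G / 2) % G - G / 2

# Two incoming excitations whose momenta add up to something outside the zone:
k1, k2 = 0.8 * np.pi, 0.7 * np.pi
k_total = k1 + k2                     # = 1.5*pi, outside [-pi, pi)
k_allowed = fold_to_first_bz(k_total)

print(f"k1 + k2        = {k_total:.4f}")
print(f"folded into BZ = {k_allowed:.4f}")
print(f"difference     = {k_total - k_allowed:.4f}  (one reciprocal-lattice vector, 2*pi/a)")
```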
{ "source": [ "https://physics.stackexchange.com/questions/8518", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/97/" ] }
8,522
What exactly is heat? Is it energy? Is it energy per unit volume? Is it energy per unit time, i.e. power? What is it?
I'll try to give an answer in purely classical thermodynamics.

Summary

Heat is a way of accounting for energy transfer between thermodynamic systems. Whatever energy is not transferred as work is transferred as heat. If you observe a thermodynamic process and calculate that system A lost $Q$ calories of heat, this means that if the environment around system A were replaced with $Q$ grams of water at $14\,^{\circ}\mathrm{C}$ and the process were repeated, the temperature of that water would rise to $15\,^{\circ}\mathrm{C}$.

Energy

Energy is a number associated with the state of a system. It can be calculated if you give the state variables - things like mass, temperature, chemical composition, pressure, and volume. (These state variables are not all independent, so you only need to give some combination of them.) Sometimes the energy can be accounted for very simply. For an ideal gas, the energy is simply proportional to the temperature, number of molecules, and number of dimensions. For a system with interesting chemistry, internal stresses and deformation, gravitational potential, etc., the energy may be more complicated. Essentially, we get to invent the formulas for energy that are most useful to us. There's a nice overview of energy in The Feynman Lectures, here. For a more theoretical point of view on where these energy formulas come from, see Lubos Motl's answer here.

Energy Conservation

As long as we make the right definitions of energy, it turns out that energy is conserved. Suppose we have an isolated system. If it is not in equilibrium, its state may change. Energy conservation means that at the end of the change, the new state will have the same energy. (For this reason, energy is often treated as a constraint. For example, an isolated system will maximize its entropy subject to the constraint that energy is conserved.)

This leaves the question of what an isolated system is. If we take another system (the environment) and keep it around the isolated system, we find no observable changes in the environment as the state of the isolated system changes. For example, changes in an isolated system cannot change the temperature, pressure, or volume of the environment. Practically, an isolated system should have no physical mechanisms for interacting with the rest of the universe. Matter and radiation cannot leave or enter, and there can be no heat conduction (I'm jumping the gun on that last one, of course, but take "heat conduction" as a rough term for now). A perfectly isolated system is an idealization only.

Next we observe systems A and B interacting. Before the interaction, A has 100 joules of energy. After interacting, A has 90 joules of energy, so it has lost 10 joules. Energy conservation says that if we measure the energy in system B before and after the interaction, we will always find that system B has gained 10 joules of energy. In general, system B will always gain exactly however much system A loses, so the total amount is constant. There are nuances and caveats to energy conservation. See this question, for example.

Work

Work is defined by $$\textrm{d}W = P\textrm{d}V$$ $P$ is pressure; $V$ is volume, and it is fairly easy to give operational definitions of both. Using this equation, we must ensure that $P$ is the pressure the environment exerts on the system. For example, if we took a balloon into outer space, it would begin expanding. However, it would do no work because the pressure on the balloon is zero.
However, if the balloon expands on Earth, it does work given by the product of its volume change and the atmospheric pressure. That example treats the entire balloon as the system. Instead, we might think of only the air inside the balloon as a system. Its environment is the rubber of the balloon. Then, as the balloon expands in outer space, the air inside does work against the pressure from the elastic balloon. I wrote more about work in this answer.

Adiabatic Processes

Work and energy, as described so far, are independent ideas. It turns out that in certain circumstances, they are intimately related. For some systems, we find that the decrease in energy of the system is exactly the same as the work it does. For example, if we took that balloon in space and watched it expand, the air in the balloon would wind up losing energy as it expanded. We'd know because we measure the temperature, pressure, and volume of the air before and after the expansion and calculate the energy change from a formula. Meanwhile, the air would have done work on the balloon. We can calculate this work by measuring the pressure the balloon exerts on the air and multiplying by the volume change (or integrating if the pressure isn't constant). Remarkably, we would find that these two numbers, the work and the energy change, always turn out to be exactly the same except for a minus sign. Such a process is called adiabatic.

In reality, adiabatic processes are approximations. They work best with systems that are almost isolated, but have a limited way of interacting with the environment, or else occur too quickly for interactions besides pressure-volume ones to be important. In our balloon, the expansion might fail to be adiabatic due to radiation or conduction between the balloon and the air. If the balloon were a perfect insulator and perfectly white, we'd expect the process to be adiabatic. Sound waves propagate essentially adiabatically, not because there are no mechanisms for one little mass of air to interact with nearby ones, but because those mechanisms (diffusion, convection, etc.) are too slow to operate on the time scale of the period of a sound wave (about a thousandth of a second).

This leads us to thinking of work in a new way. In adiabatic processes, work is the exchange of energy from one system to another. Work is still calculated from $P\textrm{d}V$, but once we calculate the work, we know the energy change.

Heat

Real processes are not adiabatic. Some are close, but others are not close at all. For example, if I put a pot of water on the stove and turn on the burner, the water's volume hardly changes at all, so the work done as the water heats is nearly zero, and what work is done by the water is positive, meaning the water should lose energy. The water actually gains a great deal of energy, though, which we can discover by observing the temperature change and using a formula for energy that involves temperature. Energy got into the pot, but not by work.

This means that work is not a sufficient concept for describing energy transfer. We invent a new, blanket term for energy transfer that is not done by work. That term is "heat". Heat is simply any energy transferred between two systems by means aside from work. The energy entering the boiling pot is entering by heat. This leads to the thermodynamic equation $$\textrm{d}E = -\textrm{d}W + \textrm{d}Q$$ $E$ is energy, $W$ work, and $Q$ heat. The minus sign is a convention.
It says that if a system does work, it loses energy, but if it receives heat, it gains energy.

Interpreting Heat

I used to be very confused about heat because it felt like something of a deus ex machina to say, "all the leftover energy must be heat". What does it mean to say something has "lost 30 calories through heat"? How can you look at it and tell? Pressure, temperature, volume are all defined in terms of very definite, concrete things, and work is defined in terms of pressure and volume. Heat seems too abstract by comparison.

One way to get a handle on heat, as well as review everything so far, is to look at the experiments of James Joule. Joule put a paddle wheel in a tub of water, connected the wheel to a weight so that the weight would drive the wheel around, and let the weight fall. [Wikipedia has an illustration of Joule's paddle-wheel apparatus.]

As the weight fell, it did work on the water; at any given moment, there was some pressure on the paddles, and they were sweeping out a volume proportional to their area and speed. Joule assumed that all the energy transferred to the water was transferred by work. The weights lost energy as they fell because their gravitational potential energy went down. Assuming energy is conserved, Joule could then find how much energy went into the water. He also measured the temperature of the water. This allowed him to find how the energy of water changes as its temperature changes.

Next suppose Joule started heating the water with a fire. This time the energy is transferred as heat, but if he raises the temperature of the water over exactly the same range as in the work experiment, then the heat transfer in this trial must be the same as the work done in the previous one. So we now have an idea of what heat does in terms of work. Joule found that it takes 4.2 joules of work to raise the temperature of one gram of water from $14\,^{\circ}\mathrm{C}$ to $15\,^{\circ}\mathrm{C}$. If you have more water than that, it takes more work proportionally. 4.2 joules is called one calorie.

At last we can give a physical interpretation to heat. Think of some generic thermodynamic process. Imagine it happening in a piston so that we can easily track the pressure and volume. We measure the energy change and the work during the process. Then we attribute any missing energy transfer to heat, and say "the system gave up 1000 joules (or 239 calories) of heat". This means that if we took the piston and surrounded it with 239 grams of water at $14\,^{\circ}\mathrm{C}$, then did exactly the same process, the water temperature would rise to $15\,^{\circ}\mathrm{C}$.

Misconceptions

What I discussed in this post is the first law of thermodynamics - energy conservation. Students frequently get confused about what heat is because they mix up its definition with the role it plays in the second law of thermodynamics, which I didn't discuss here. This section is intended to point out that some commonly-said things about heat are either loose use of language (which is okay as long as everyone understands what's being said), or correct use of heat, but not directly a discussion of what heat is.

Things do not have a certain amount of heat sitting inside them. Imagine a house with a front door and a back door. People can come and go through either door. If you're watching the house, you might say "the house lost 3 back-door people today". Of course, the people in the house are just people. The door only describes how they left.
Similarly, energy is just energy. "Work" and "heat" describe what mechanism it used to leave or enter the system. (Note that energy itself is not a thing like people, only a number calculated from the state, so the analogy only stretches so far.)

We frequently say that energy is "lost to heat". For example, if you hit the brakes on your car, all the kinetic energy seems to disappear. We notice that the brake pads, the rubber in the tires, and the road all get a little hotter, and we say "the kinetic energy of the car was turned into heat." This is imprecise. It's a colloquialism for saying, "the kinetic energy of the car was transferred as heat into the brake pads, rubber, and road, where it now exists as thermal energy."

Heat is not the same as temperature. Temperature is what you measure with a thermometer. When heat is transferred into a system, its temperature will increase, but its temperature can also increase because you do work on it. The relationship between heat and temperature involves a new state variable, entropy, and is described by the second law of thermodynamics. Statements such as "heat flows spontaneously from hot bodies to cold bodies" are describing this second law of thermodynamics, and are really statements about how to use heat along with certain state variables to decide whether or not a given process is spontaneous; they aren't directly statements about what heat is.

Heat is not "low quality energy", because it is not energy. Such statements are, again, discussion of the second law of thermodynamics.

Reference

This post is based on what I remember from the first couple of chapters of Enrico Fermi's Thermodynamics.
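As a footnote to the Joule discussion above, here is a back-of-the-envelope version of his bookkeeping in Python (not from the original answer; the 10 kg weight and 1 m drop are invented illustrative numbers): equate the gravitational potential energy lost by the falling weight to the energy gained by the water, and convert it to a temperature rise using the 4.2 J per gram per degree figure quoted in the answer.

```python
g = 9.81            # gravitational acceleration, m/s^2
m_weight = 10.0     # mass of the falling weight, kg (assumed)
h = 1.0             # height the weight falls, m (assumed)
m_water = 0.239     # mass of the water, kg (239 g, as in the piston example above)
c_water = 4200.0    # specific heat of water, J/(kg*K), i.e. 4.2 J per gram per degree

work_done = m_weight * g * h                 # energy delivered to the water by the paddle wheel
delta_T = work_done / (m_water * c_water)    # temperature rise, assuming no losses

print(f"Work done on the water: {work_done:.1f} J")
print(f"Temperature rise:       {delta_T:.3f} K")
```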
{ "source": [ "https://physics.stackexchange.com/questions/8522", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2146/" ] }
8,602
What is the difference between $|0\rangle $ and $0$ in the context of $$a_- |0\rangle =0~?$$
$|0\rangle$ is just a quantum state that happens to be labeled by the number 0. It's conventional to use that label to denote the ground state (or vacuum state), the one with the lowest energy. But the label you put on a quantum state is actually kind of arbitrary. You could choose a different convention in which you label the ground state with, say, 5, and although it would confuse a lot of people, you could still do physics perfectly well with it. The point is, $|0\rangle$ is just a particular quantum state. The fact that it's labeled with a 0 doesn't have to mean that anything about it is actually zero. In contrast, $0$ (not written as a ket) is actually zero . You could perhaps think of it as the quantum state of an object that doesn't exist (although I suspect that analogy will come back to bite me... just don't take it too literally). If you calculate any matrix element of some operator $A$ in the "state" $0$, you will get 0 as a result because you're basically multiplying by zero: $$\langle\psi| A (a_-|0\rangle) = 0$$ for any state $\langle\psi|$. In contrast, you can do this for the ground state without necessarily getting zero: $$\langle\psi| A |0\rangle = \text{can be anything}$$
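A quick numerical illustration of this distinction (not from the original answer; it uses a truncated harmonic-oscillator basis in plain NumPy rather than any particular quantum library): applying the lowering operator to the state labelled $|0\rangle$ gives the genuinely zero vector, while $|0\rangle$ itself is a perfectly ordinary normalized state.

```python
import numpy as np

N = 5  # truncate the oscillator Hilbert space to 5 levels

# Lowering operator in the number basis: a|n> = sqrt(n)|n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)

ground = np.zeros(N)
ground[0] = 1.0   # this is |0>, the state *labelled* 0 -- not the number 0

print("norm of |0>    :", np.linalg.norm(ground))             # 1.0 -> a genuine state
print("norm of a_-|0> :", np.linalg.norm(a @ ground))         # 0.0 -> the zero vector
print("norm of a_-|1> :", np.linalg.norm(a @ np.eye(N)[1]))   # 1.0
```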
{ "source": [ "https://physics.stackexchange.com/questions/8602", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3239/" ] }
8,610
I have learned in my physics classes about five different types of masses and I am confused about the differences between them. What's the difference between the five masses: inertial mass, gravitational mass, rest mass, invariant mass, relativistic mass?
Let us define the inertial mass, gravitational mass and rest mass of a particle.

Inertial mass: To every particle in nature we can associate a real number whose value gives the measure of the particle's inertia (the amount of resistance of the particle to being accelerated by a definite force applied to it). Using Newton's laws of motion, $$m_i = \frac{F}{a}$$

Gravitational mass: (This is defined using Newton's law of universal gravitation, i.e. the gravitational force between any two particles a definite distance apart is proportional to the product of the gravitational masses of the two particles.) To every particle in nature we can associate a real number whose value gives the measure of the response of the particle to the gravitational force. $$F = \frac{Gm_{G1}m_{G2}}{R^2}$$

All experiments carried out to date have shown that $m_G = m_i$. This is the reason why the acceleration due to gravity is independent of the inertial or gravitational mass of the particle: $$m_ia = \frac{Gm_{G1}m_{G2}}{R^2}$$ If $m_{G1} = m_i$ then $$a = \frac{Gm_{G2}}{R^2}$$ That is, the acceleration due to gravity of the particle is independent of its inertial or gravitational mass.

Rest mass: This is simply called the mass and is defined as the inertial mass of a particle as measured by an observer with respect to whom the particle is at rest. There is also an obsolete term, relativistic mass, which is the inertial mass as measured by an observer with respect to whom the particle is in motion. The relation between the rest mass and the relativistic mass is given as $$m = \frac{m_0}{\sqrt{1-v^2/c^2}}$$ where $v$ is the speed of the particle, $c$ is the speed of light, $m$ is the relativistic mass and $m_0$ is the rest mass.
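A small numerical sketch of the last formula (not from the original answer; the chosen speeds are arbitrary illustrative values): the relativistic mass grows without bound as $v \to c$ and reduces to the rest mass as $v \to 0$.

```python
import math

c = 2.99792458e8    # speed of light, m/s
m0 = 9.10938e-31    # rest mass of an electron, kg (approximate)

def relativistic_mass(m0, v):
    """m = m0 / sqrt(1 - v^2/c^2), valid for |v| < c."""
    return m0 / math.sqrt(1.0 - (v / c) ** 2)

for fraction in (0.0, 0.5, 0.9, 0.99):
    v = fraction * c
    print(f"v = {fraction:.2f} c  ->  m / m0 = {relativistic_mass(m0, v) / m0:.3f}")
```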
{ "source": [ "https://physics.stackexchange.com/questions/8610", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2578/" ] }
8,643
I am not a physiologist, but what little I know about human eyes always makes me marvel at their optical subtleties. A question always comes to mind. Are human eyes the best possible optical instrument evolved by natural selection? Is there any possible room for further major improvement as per the laws of physics, in principle? Can a camera made by human beings be superior to the human eye?
This question is sort of difficult to answer in an objective way, because it depends very strongly on your definition of "best." Natural selection favors traits which provide a reproductive advantage; no more, no less. Could our eyes be better by the standards of modern optical design, in terms of precision and features? Sure. I could easily design a camera with a larger aperture, better resolution, less aberration, a broader wavelength sensitivity, etc. There are even some precedents for this in nature. Some animals can see ultraviolet or infrared; some can even detect polarization (mostly birds I think?). I believe there are some fish that can swivel their eyes around through nearly a full circle in any direction. Cats and dogs have higher sensitivity in low light due to the reflective layer behind their retina, and squid have eyes with truly huge apertures. It is worth pointing out though that the human eye, despite what I've said above, is still a pretty good camera. It can re-focus pretty quickly, it's got a fantastic image processing computer attached to it that can do incredible amounts of pattern recognition, noise reduction, and image stabilization. While its wavefront aberration is not perfect, it is pretty good for a biological system. Our eyes have very low chromatic aberration, and are quite compact. Additionally, our eyes have a curved focal plane (our retina) which is a lens designer's dream -- it eliminates the problem of field curvature, which is an inherent aberration in any optical system that is nearly impossible to eliminate.
{ "source": [ "https://physics.stackexchange.com/questions/8643", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
8,663
In the book "Quantum Mechanics and Path Integrals" Feynman & Hibbs state that the probability $P(b,a)$ to go from point $x_a$ at time $t_a$ to the point $x_b$ at the time $t_b$ is $P(b,a) = \|K(b,a)\|^2$ of an amplitude $K(b,a)$ to go from $a$ to $b$ . This amplitude is the sum of contributions $\phi[x(t)]$ from each path. $$ K(b,a) = \sum_{\text{paths from $a$ to $b$}} \phi[x(t)]$$ The contributions of a path has a phase proportional to the action $S$ : $$ \phi[x(t)] = \text{const}\ e^{(i/\hbar)S[x(t)]}$$ Why must the contribution of a path be $\sim e^{(i/\hbar)S[x(t)]}$ ? Can this be somehow derived or explained? Why can't the contribution of a path be something else e.g. $\sim \frac{S}{\hbar}$ , $\sim \cos(S/\hbar)$ , $\log(S/\hbar)$ or $e^{- (S[x(t)]/\hbar)^2}$ ? Edit: I have to admit that in the first version of this question, I didn't exclude the possibility to derive the contribution of a path directly from Schrödinger's equation. So answers along this line are valid although not so interesting. I think when Feynman developed his formalism his goal was to find a way to quantize systems, which cannot be treated by Schrödinger's equation, because they cannot be described in terms of a Hamiltonian (e.g. the Wheeler-Feynman absorber theory). So I think a good answer would explain Feynman's Ansatz without referring to Schrödinger's equation, because I think Schrödinger's equation can only handle a specific subset of all the systems that can be treated by Feynman's more general principle.
There are already several good answers. Here I will only answer the very last question, i.e., if the Boltzmann factor in the path integral is $f(S(t_f,t_i))$, with action $$S(t_f,t_i)~=~\int_{t_i}^{t_f} dt \ L(t),\tag{1}$$ why is the function $f:\mathbb{R}\to\mathbb{C}$ an exponential function, and not something else? Well, since the Feynman "sum over histories" propagator should have the group property $$ K(x_3,t_3;x_1,t_1) ~=~ \int_{-\infty}^{\infty}\mathrm{d}x_2 \ K(x_3,t_3;x_2,t_2)\, K(x_2,t_2;x_1,t_1),\tag{2}$$ one must demand that $$ f(S(t_3,t_2))\,f(S(t_2,t_1)) ~=~ f(S(t_3,t_1)) ~=~ f(S(t_3,t_2)+S(t_2,t_1)).\tag{3}$$ In the last equality of eq. (3) we used the additivity of the action (1). Eq. (3) implies that $$f(0)~=~f(S(t_1,t_1)) ~=~ 1.\tag{4}$$ (The other possibility $f\equiv 0$ is physically unacceptable.) So the question boils down to: How many continuous functions $f:\mathbb{R}\to\mathbb{C}$ satisfy $$f(s)f(s^{\prime}) ~=~f(s+s^{\prime})\quad\text{and}\quad f(0) ~=~1~?\tag{5}$$ Answer: The exponential function! Proof (ignoring some mathematical technicalities): If $s$ is infinitesimally small, then one may Taylor expand $$f(s) ~=~ f(0) + f^{\prime}(0)s +{\cal O}(s^{2}) ~=~ 1+cs+{\cal O}(s^{2}) \tag{6}$$ with some constant $c:=f^{\prime}(0)$. Then one calculates $$ f(s) ~=~\lim_{n\to\infty}f\left(\frac{s}{n}\right)^n ~=~\lim_{n\to\infty}\left(1+\frac{cs}{n}+o\left(\frac{1}{n}\right)\right)^n ~=~e^{cs}, \tag{7}$$ i.e., the exponential function! $\Box$
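A quick numerical check of the last step, eq. (7) (not from the original answer; the particular values of $c$ and $s$ are arbitrary): for a purely imaginary constant $c$, playing the role of $i/\hbar$, the product $(1 + cs/n)^n$ indeed approaches $e^{cs}$ as $n$ grows.

```python
import numpy as np

c = 1j      # stands in for i/hbar (arbitrary illustrative value)
s = 2.7     # some value of the action (arbitrary)

exact = np.exp(c * s)
for n in (10, 100, 10_000, 1_000_000):
    approx = (1 + c * s / n) ** n
    print(f"n = {n:>9d}:  |(1 + cs/n)^n - e^(cs)| = {abs(approx - exact):.2e}")
```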
{ "source": [ "https://physics.stackexchange.com/questions/8663", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1648/" ] }
8,932
If you draw a big triangle on the Earth's 2D surface you will get an approximately spherical triangle, which belongs to a non-Euclidean geometry. But from a 3D perspective, for example viewing the same triangle from space, it could be described as Euclidean in 3D. So why do we talk about non-Euclidean geometry instead of just adding one dimension? Is the non-Euclidean approach easier? I don't see how! Sorry if the question is perhaps naive, but this has been a long-standing doubt for me, so I will be grateful for an answer.
You have to add more than one dimension, in general. Mathematicians have studied in great detail the question of how many extra dimensions you need in order to embed a curved manifold in a flat one. One key result is the Nash Embedding Theorem , which says that you can isometrically embed an $m$-dimensional Riemannian manifold in $n$-dimensional flat space, for some $n\le m(m+1)(3m+11)/2$. (Isometric embedding means embedding in a way that preserves lengths, etc., which is what seems to be relevant here.) That's 120 dimensions for 3-dimensional space! This theorem only applies to Riemannian manifolds, not Lorentzian ones -- that is, it applies to space, not spacetime with its annoying minus sign in the metric. If it did apply to spacetime, so that we could apply the result with $m=4$, you'd need 230 dimensions. As far as I know, there's not a comparably clean result for Lorentzian spacetime. There are bunches of references here . But it certainly can't be any easier to embed spacetime than to embed space! Anyway, I think this illustrates why people don't like to think of general relativity in terms of embeddings in higher-dimensional space. It's going to be way, way harder than the standard approach. In addition, many people have a philosophical preference not to populate a theory with unobservable entities. If you don't need those extra dimensions, why postulate them? One addendum, after thinking about Marek's comment that one might expect to get by generically with only one (or maybe a few) extra dimensions, rather than the large number in the Nash theorem. I mentioned in the comments that my intuition was different, although I wasn't sure. I just want to expand on that a bit. You can run into trouble when using two-dimensional intuition to make guesses about higher-dimensional manifolds. In two dimensions, the Riemann curvature tensor only has one component -- that is, the curvature is described by a single number at each point. I conjecture that that's the reason why it seems intuitive that you can embed 2 dimensions in 3: you just need to "bend" the surface one way at each point to account for the curvature. (Even so, it turns out that you can't embed 2 dimensions in 3, even in relatively simple cases.) But the number of components of the curvature tensor grows rapidly with dimension. In 3 dimensions, there are 6 components, and in 4 there are 20. It's wildly implausible, to me, that you could "usually" account for all of those extra degrees of freedom with one or two extra dimensions. (A bit of a digression, just because I think it's cool: Another example of how 2-D intuition can be a bad guide to higher dimensions. The problem of topologically classifying 2-D manifolds was solved ages ago. One might have guessed that the problem of classifying 3-D manifolds would be similar, but it turns out to be vastly harder. Last time I checked, it was thought that this problem had been solved, but there was some doubt about whether the solution was correct. And in 4-D or more, the problem is apparently known to be undecidable !) One more point: even if it's true that you can "usually" get by with fewer dimensions, I'm not sure how relevant that is. Any manifold is a possible solution to Einstein's equation (for some stress tensor), so if you try to recast the theory in terms of extra dimensions, you'll need enough extra dimensions to account for all possibilities, not just the simple ones.
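The dimension counting quoted above is easy to reproduce; here is a short Python sketch (not from the original answer; the Riemann-tensor component count $n^2(n^2-1)/12$ is the standard formula behind the numbers 1, 6 and 20 mentioned in the answer).

```python
def nash_bound(m):
    """Upper bound m(m+1)(3m+11)/2 on the flat embedding dimension in the Nash theorem."""
    return m * (m + 1) * (3 * m + 11) // 2

def riemann_components(n):
    """Number of independent components of the Riemann curvature tensor in n dimensions."""
    return n**2 * (n**2 - 1) // 12

for dim in (2, 3, 4):
    print(f"dim = {dim}:  Nash bound = {nash_bound(dim):3d},  "
          f"independent curvature components = {riemann_components(dim)}")
```

Running it recovers the figures in the answer: 120 embedding dimensions for 3-dimensional space, 230 for 4, and 1, 6 and 20 curvature components in 2, 3 and 4 dimensions respectively.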
{ "source": [ "https://physics.stackexchange.com/questions/8932", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1916/" ] }
9,049
Why doesn't the Moon fall onto the Earth? For that matter, why doesn't anything orbiting a larger body ever fall onto the larger body?
The moon does not fall to Earth because it is in an orbit. One of the most difficult things to learn about physics is the concept of force. Just because there is a force on something does not mean it will be moving in the direction of the force. Instead, the force influences the motion to be a bit more in the direction of the force than it was before. For example, if you roll a bowling ball straight down a lane, then run up beside it and kick it towards the gutter, you apply a force towards the gutter, but the ball doesn't go straight into the gutter. Instead it keeps going down the lane, but picks up a little bit of diagonal motion as well.

Imagine you're standing at the edge of a cliff 100m tall. If you drop a rock off, it will fall straight down because it had no velocity to begin with, so the only velocity it picks up is downward from the downward force. If you throw the rock out horizontally, it will still fall, but it will keep moving out horizontally as it does so, and falls at an angle. (The angle isn't constant - the shape is a curve called a parabola, but that's relatively unimportant here.) The force is straight down, but that force doesn't stop the rock from moving horizontally. If you throw the rock harder, it goes further, and falls at a shallower angle. The force on it from gravity is the same, but the original velocity was much bigger and so the deflection is less.

Now imagine throwing the rock so hard it travels one kilometer horizontally before it hits the ground. If you do that, something slightly new happens. The rock still falls, but it has to fall more than just 100m before it hits the ground. The reason is that the Earth is curved, and so as the rock traveled out that kilometer, the Earth was actually curving away underneath it. In one kilometer, it turns out the Earth curves away by about 10 centimeters - a small difference, but a real one. As you throw the rock even harder than that, the curving away of the Earth underneath becomes more significant. If you could throw the rock 10 kilometers, the Earth would now curve away by 10 meters, and for a 100 km throw the Earth curves away by an entire kilometer. Now the stone has to fall a very long way down compared to the 100m cliff it was dropped from.

Check out the following drawing. It was made by Isaac Newton, the first person to understand orbits. IMHO it is one of the greatest diagrams ever made. What it shows is that if you could throw the rock hard enough, the Earth would curve away from underneath the rock so much that the rock actually never gets any closer to the ground. It goes all the way around in a circle and might hit you in the back of the head! This is an orbit. It's what satellites and the moon are doing. We can't actually do it here close to the surface of the Earth due to wind resistance, but on the surface of the moon, where there's no atmosphere, you could indeed have a very low orbit. This is the mechanism by which things "stay up" in space.

Gravity gets weaker as you go further out. The Earth's gravity is much weaker at the moon than at a low-earth orbit satellite. Because gravity is so much weaker at the moon, the moon orbits much more slowly than the International Space Station, for example. The moon takes one month to go around. The ISS takes about an hour and a half. An interesting consequence is that if you go out just the right amount in between, about six Earth radii, you reach a point where gravity is weakened enough that an orbit around the Earth takes 24 hours.
There, you could have a "geosynchronous orbit", a satellite that orbits so that it stays above the same spot on Earth's equator as Earth spins. Although gravity gets weaker as you go further out, there is no cut-off distance. In theory, gravity extends forever. However, if you went towards the sun, eventually the sun's gravity would be stronger than the Earth's, and then you wouldn't fall back to Earth any more, even lacking the speed to orbit. That would happen if you went about .1% of the distance to the sun, or about 250,000 km, or 40 Earth radii. (This is actually less than the distance to the moon, but the moon doesn't fall into the Sun because it's orbiting the sun, just like the Earth itself is.) So the moon "falls" toward Earth due to gravity, but doesn't get any closer to Earth because its motion is an orbit, and the dynamics of the orbit are determined by the strength of gravity at that distance and by Newton's laws of motion. note: adapted from an answer I wrote to a similar question on quora
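The "about six Earth radii" figure for a 24-hour orbit can be checked with a one-line application of Newton's law of gravity; here is a short Python sketch (not from the original answer; standard values for Earth's gravitational parameter and radius are assumed, and the sidereal day is used as the rotation period).

```python
import math

GM_earth = 3.986004418e14   # gravitational parameter of Earth, m^3/s^2
R_earth = 6.371e6           # mean radius of Earth, m
T = 86164.1                 # sidereal day, s

# Circular orbit: GM/r^2 = (2*pi/T)^2 * r  =>  r = (GM * T^2 / (4*pi^2))^(1/3)
r_geo = (GM_earth * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

print(f"Geosynchronous orbital radius: {r_geo / 1e3:.0f} km")
print(f"That is {r_geo / R_earth:.1f} Earth radii from the centre of the Earth,")
print(f"i.e. an altitude of {(r_geo - R_earth) / 1e3:.0f} km above the surface.")
```

The result, roughly 42,000 km or 6.6 Earth radii, matches the "about six Earth radii" quoted above.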
{ "source": [ "https://physics.stackexchange.com/questions/9049", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2530/" ] }
9,098
I know that outside a nucleus, neutrons are unstable, with a half-life of about 15 minutes. But when they are together with protons inside the nucleus, they are stable. How does that happen? I got this from Wikipedia: When bound inside of a nucleus, the instability of a single neutron to beta decay is balanced against the instability that would be acquired by the nucleus as a whole if an additional proton were to participate in repulsive interactions with the other protons that are already present in the nucleus. As such, although free neutrons are unstable, bound neutrons are not necessarily so. The same reasoning explains why protons, which are stable in empty space, may transform into neutrons when bound inside of a nucleus. But I don't think I get what that really means. What happens inside the nucleus that makes neutrons stable? Is it the same thing that happens inside a neutron star's core? Because neutrons seem to be stable in there too.
Spontaneous processes such as neutron decay require that the final state is lower in energy than the initial state. In (stable) nuclei, this is not the case, because the energy you gain from the neutron decay is lower than the energy it costs you to have an additional proton in the nucleus. For neutron decay inside a nucleus to be energetically favorable, the energy gained by the decay must be larger than the energy cost of adding that proton. This generally happens in neutron-rich isotopes: an example is the $\beta^-$-decay of Cesium: $${}^{137}_{55}\mathrm{Cs} \rightarrow {}^{137}_{56}\mathrm{Ba} + e^- + \bar{\nu}_e$$ For a first impression of the energies involved, you can consult the semi-empirical Bethe-Weizsäcker formula, which lets you plug in the number of protons and neutrons and tells you the binding energy of the nucleus. By comparing the energies of two nuclei related via the $\beta^-$-decay you can tell whether or not this process should be possible.
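To make the last paragraph concrete, here is a small sketch of my own that plugs ${}^{137}$Cs and ${}^{137}$Ba into one common parametrization of the Bethe-Weizsäcker formula; the coefficients below are typical fitted values and differ slightly between textbooks, so only the sign and rough size of the result should be trusted:

```python
# Semi-empirical (Bethe-Weizsaecker) mass formula with one common coefficient set
# (all values in MeV; other references quote slightly different numbers).
def binding_energy(Z, A):
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18
    N = A - Z
    B = aV*A - aS*A**(2/3) - aC*Z*(Z-1)/A**(1/3) - aA*(N-Z)**2/A
    if Z % 2 == 0 and N % 2 == 0:      # even-even: extra pairing energy
        B += aP/A**0.5
    elif Z % 2 == 1 and N % 2 == 1:    # odd-odd: pairing penalty
        B -= aP/A**0.5
    return B                           # binding energy in MeV

B_Cs = binding_energy(55, 137)         # 137-Cs (parent)
B_Ba = binding_energy(56, 137)         # 137-Ba (daughter)

# Q-value of beta-minus decay with atomic masses:
# Q ~ [B(daughter) - B(parent)] + (m_n - m_H) c^2, the last term ~ 0.78 MeV
Q = (B_Ba - B_Cs) + 0.782
print(f"B(137Cs) ~ {B_Cs:.1f} MeV, B(137Ba) ~ {B_Ba:.1f} MeV, Q(beta-) ~ {Q:.1f} MeV")
# A positive Q means the decay is energetically allowed; the crude formula
# overshoots the measured value (~1.2 MeV) but gets the sign right.
```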
{ "source": [ "https://physics.stackexchange.com/questions/9098", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1927/" ] }
9,122
What is the difference between implicit, explicit, and total time dependence, e.g. $\frac{\partial \rho}{\partial t}$ and $\frac{d \rho}{dt}$? I know one is a partial derivative and the other is a total derivative. But physically I cannot distinguish between them. I suspect that my confusion really comes down to understanding the difference between implicit, explicit and total time dependence.
You are essentially asking about the material derivative when discussing a total derivative with respect to time. Let's say you are looking at the velocity of the air in your room. There is a different velocity everywhere, and it changes with time, so $$v = v(x,y,z,t)$$ When you take a derivative like $$\frac{\partial v}{\partial t}$$ you are saying "I will keep sampling the wind velocity at the exact same point in my room, and find how quickly that velocity changes." If, on the other hand, you take $$\frac{\textrm{d}v}{\textrm{d}t}$$ you are now saying, "keep following one particular little bit of air, and see how quickly its velocity changes (i.e. find its acceleration)." (note: Marek has made a nice clarification about the difference between these two uses of $t$ in the comments to this answer.) They are related by the chain rule $$\frac{\textrm{d}v}{\textrm{d}t} = \frac{\partial v}{\partial t} + \frac{\partial v}{\partial x}\frac{\textrm{d}x}{\textrm{d}t} + \frac{\partial v}{\partial y}\frac{\textrm{d}y}{\textrm{d}t} + \frac{\partial v}{\partial z}\frac{\textrm{d}z}{\textrm{d}t}$$ This says that if you look at one particular little air particle, its velocity is changing partially because the entire velocity field is changing. But even if the entire velocity field weren't changing, the particle's velocity would still change because it moves to a new spot, and the velocity is different at that spot, too. As another example, say there is an ant crawling over a hill. It has a height that is a function of two-dimensional position $$h = h(x,y)$$ If we look at $\partial h/\partial x$, we're looking at the slope in the x-direction. You find it by moving a little bit in the x-direction while keeping y the same, finding the change in h, and dividing by how far you moved. On the other hand, since we're tracking the ant, we might want to know how much its height changes when it moves a little bit in the x-direction. But the ant is traveling along its own convoluted path, and when it moves in the x-direction, it winds up changing its y-coordinate as well. The total change in the ant's height is the change in its height due to moving in the x-direction plus the change due to moving in the y-direction. The distance the ant moves in the y-direction in turn depends on the x-direction movement. So now we have $$\frac{\textrm{d}h}{\textrm{d}x} = \frac{\partial h}{\partial x} + \frac{\partial h}{\partial y}\frac{\textrm{d}y}{\textrm{d}x}$$ On the right hand side of that equation, the first term corresponds to the change in height due to moving in the x-direction. The second term is the change in height due to moving in the y-direction. The first part of that, $\partial h/\partial y$ is the change in height due to changing y, while the second part, $\textrm{d}y/\textrm{d}x$ describes how much y itself actually changes as you change x, and depends on the particulars of the ant's movement. Edit: I now see that you're specifically concerned with the quantum mechanics equation $$\frac{\textrm{d}}{\textrm{d}t}\langle A \rangle = -\frac{\imath}{\hbar}\langle[A,H]\rangle + \langle \partial A/\partial t \rangle$$ Here, $\langle \partial A/\partial t\rangle$ is the expectation value of the partial derivative of the operator $A$ with respect to time. For example, if $A$ is the Hamiltonian for a particle in a time-dependent electric field, that operator would contain time explicitly. We begin by formally differentiating the operator itself, then taking the expectation value.
On the other hand $\langle A \rangle$ is simply a real-valued function of time (if $A$ is Hermitian), so $\textrm{d} \langle A \rangle / \textrm{d} t$ is the usual derivative of a real function of a single variable.
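As a quick sanity check of the chain rule above (my own illustration, with an arbitrary made-up field and path rather than anything physical), one can let a computer algebra system confirm that differentiating along the path agrees with the partial-derivative expansion:

```python
import sympy as sp

t = sp.symbols('t')
x_sym = sp.symbols('x')
v_field = sp.sin(x_sym) * sp.exp(-t)   # a hypothetical field v(x, t)
x_path  = t**2                          # a hypothetical particle path x(t)

# Total (material) derivative: differentiate v(x(t), t) with respect to t
total = sp.diff(v_field.subs(x_sym, x_path), t)

# Chain rule: dv/dt = (partial_t v) + (partial_x v) * dx/dt, evaluated on the path
chain = (sp.diff(v_field, t) + sp.diff(v_field, x_sym)*sp.diff(x_path, t)).subs(x_sym, x_path)

print(sp.simplify(total - chain))       # prints 0: the two agree
```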
{ "source": [ "https://physics.stackexchange.com/questions/9122", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3239/" ] }
9,194
I understand the mathematics of commutation relations and anti-commutation relations, but what does it physically mean for an observable (self-adjoint operator) to commute with another observable (self-adjoint operator) in quantum mechanics? E.g. an operator $A$ with the Hamiltonian $H$?
Let us first restate the mathematical statement that two operators $\hat A$ and $\hat B$ commute with each other. It means that $$\hat A \hat B - \hat B \hat A = 0,$$ which you can rearrange to $$\hat A \hat B = \hat B \hat A.$$ If you recall that operators act on quantum mechanical states and give you a new state in return, then this means that with $\hat A$ and $\hat B$ commuting, the state you obtain from letting first $\hat A$ and then $\hat B$ act on some initial state is the same as if you let first $\hat B$ and then $\hat A$ act on that state: $$\hat A \hat B | \psi \rangle = \hat B \hat A | \psi \rangle.$$ This is not a trivial statement. Many operations, such as rotations around different axes, do not commute and hence the end-result depends on how you have ordered the operations. So, what are the important implications? Recall that when you perform a quantum mechanical measurement, you will always measure an eigenvalue of your operator, and after the measurement your state is left in the corresponding eigenstate. The eigenstates of the operator are precisely those states for which there is no uncertainty in the measurement: You will always measure the eigenvalue, with probability $1$. An example is the set of energy eigenstates. If you are in a state $|n\rangle$ with eigenenergy $E_n$, you know that $H|n\rangle = E_n |n \rangle$ and you will always measure this energy $E_n$. Now what if we want to measure two different observables, $\hat A$ and $\hat B$? If we first measure $\hat A$, we know that the system is left in an eigenstate of $\hat A$. This might alter the measurement outcome of $\hat B$, so, in general, the order of your measurements is important. Not so with commuting variables! It is shown in every textbook that if $\hat A$ and $\hat B$ commute, then you can come up with a set of basis states $| a_n b_n\rangle$ that are eigenstates of both $\hat A$ and $\hat B$. If that is the case, then any state can be written as a linear combination of the form $$| \Psi \rangle = \sum_n \alpha_n | a_n b_n \rangle$$ where $|a_n b_n\rangle$ has $\hat A$-eigenvalue $a_n$ and $\hat B$-eigenvalue $b_n$. Now if you measure $\hat A$, you will get result $a_n$ with probability $|\alpha_n|^2$ (assuming no degeneracy; if eigenvalues are degenerate, the argument still remains true but just gets a bit cumbersome to write down). What if we measure $\hat B$ first? Then we get result $b_n$ with probability $|\alpha_n|^2$ and the system is left in the corresponding eigenstate $|a_n b_n \rangle$. If we now measure $\hat A$, we will always get result $a_n$. The overall probability of getting result $a_n$, therefore, is again $|\alpha_n|^2$. So it didn't matter that we measured $\hat B$ first; it did not change the outcome of the measurement of $\hat A$. EDIT: Now let me expand a bit more. So far, we have talked about some operators $\hat A$ and $\hat B$. We now ask: What does it mean when some observable $\hat A$ commutes with the Hamiltonian $H$? First, we get all the results from above: There is a simultaneous eigenbasis of the energy-eigenstates and the eigenstates of $\hat A$. This can yield a tremendous simplification of the task of diagonalizing $H$. For example, the Hamiltonian of the hydrogen atom commutes with $\hat L^2$, the squared angular momentum operator, and with $\hat L_z$, the $z$-component of angular momentum. This tells you that you can classify the eigenstates by an angular- and magnetic quantum number $l$ and $m$, and you can diagonalize $H$ for each set of $l$ and $m$ independently.
There are more examples of this. Another consequence is that of time dependence. If your observable $\hat A$ has no explicit time dependency introduced in its definition, then if $\hat A$ commutes with $\hat H$, you immediately know that $\hat A$ is a constant of motion. This is due to the Ehrenfest Theorem $$\frac{d}{dt} \langle \hat A \rangle = \frac{-i}{\hbar} \langle [\hat A, \hat H] \rangle + \underbrace{\left\langle \frac{\partial \hat A}{\partial t} \right\rangle}_{=\;0\,\text{by assumption}}$$
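A tiny finite-dimensional illustration of the "simultaneous eigenbasis" statement (my own toy matrices, standing in for observables), plus its failure for a non-commuting pair:

```python
import numpy as np

# A toy "Hamiltonian" with a degenerate level, and a Hermitian matrix that
# commutes with it (its only off-diagonal entries live inside the degenerate block).
H = np.diag([1.0, 2.0, 2.0])
A = np.array([[7, 0, 0],
              [0, 4, 1],
              [0, 1, 4]], dtype=float)
print("[H, A] = 0:", np.allclose(H @ A - A @ H, 0))

# Diagonalize A; because [H, A] = 0 (and A is non-degenerate here),
# every eigenvector of A is automatically an eigenvector of H as well.
vals, vecs = np.linalg.eigh(A)
for v in vecs.T:
    Hv = H @ v
    lam = v @ Hv                       # <v|H|v>, the would-be H-eigenvalue
    print("also an H-eigenvector:", np.allclose(Hv, lam * v))

# Contrast: sigma_x and sigma_z do not commute and share no eigenvector.
sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
print("[sx, sz] = 0:", np.allclose(sx @ sz - sz @ sx, 0))
```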
{ "source": [ "https://physics.stackexchange.com/questions/9194", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1123/" ] }
9,302
I think I saw in a video that if dark matter wasn't repulsive to dark matter, it would have formed dense massive objects or even black holes which we should have detected. So, could dark matter be repulsive to dark matter? If so, what are the reasons? Could it be like the opposite pole of gravity that attracts ordinary matter and repulses dark matter?
Lubos Motl's answer is exactly right. Dark matter has "ordinary" gravitational properties: it attracts other matter, and it attracts itself (i.e., each dark matter particle attracts each other one, as you'd expect). But it's true that dark matter doesn't seem to have collapsed into very dense structures -- that is, things like stars and planets. Dark matter does cluster, collapsing gravitationally into clumps, but those clumps are much larger and more diffuse than the clumps of ordinary matter we're so familiar with. Why not? The answer seems to be that dark matter has few ways to dissipate energy. Imagine that you have a diffuse cloud of stuff that starts to collapse under its own weight. If there's no way for it to dissipate its energy, it can't form a stable, dense structure. All the particles will fall in towards the center, but then they'll have so much kinetic energy that they'll pop right back out again. In order to collapse to a dense structure, things need the ability to "cool." Ordinary atomic matter has various ways of dissipating energy and cooling, such as emitting radiation, which allow it to collapse and not rebound. As far as we can tell, dark matter is weakly interacting: it doesn't emit or absorb radiation, and collisions between dark matter particles are rare. Since it's hard for it to cool, it doesn't form these structures.
{ "source": [ "https://physics.stackexchange.com/questions/9302", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1927/" ] }
9,314
Why is the speed of light defined as $299792458$ $m/s$? Why did they choose that number and no other number? Or phrased differently: Why is a metre $1/299792458$ of the distance light travels in a sec in a vacuum?
The speed of light is 299 792 458 m/s because people used to define one meter as 1/40,000,000 of the Earth's meridian - so that the circumference of the Earth was 40,000 kilometers. Also, they used to define one second as 1/86,400 of a solar day so that the day may be divided into 24 hours, each containing 60 minutes of 60 seconds. In our Universe, it happens to be the case that light moves at such a speed that in 1 second, as defined above, it moves by approximately 299,792,458 meters, as defined above. In other words, during one solar day, light changes its position by $$ \delta x = 86,400 \times 299,792,458 / 40,000,000\,\,{\rm circumferences\,\,of\,\,the\,\,Earth}.$$ The number above is approximately 647,552. Try it. Instruct light to orbit along the surface of the Earth and you will find out that in between two noons, it will complete 647,552 orbits. Why is it exactly this number? Well, it is because of how the Earth was created and evolved. If it hadn't hit a big rock called Megapluto about 4,701,234,567.31415926 years ago, it would have been a few percent larger and it would be rotating with a frequency smaller by 1.734546346 percent, so 647,552 would be replaced by 648,243.25246 - but because we hit Megapluto, the ratio eventually became what I said. (There were about a million similarly important big events that I skip, too.) The Earth's size and speed of spinning were OK for a while but they're not really regular or accurate, so people ultimately switched to wavelengths and durations of some electromagnetic waves emitted by atoms. Spectroscopy remains the most accurate way in which we can measure time and distances. They chose the new meter and the new second as a multiple of the wavelength or periodicity of the photons emitted by various atoms - so that the new meter and the new second agreed with the old ones - those defined from the circumference of the Earth and from the solar day - within the available accuracy. For some time, people would use two independent electromagnetic waves to define 1 meter and 1 second. In those units, they could measure the speed of light and find out that it was 299,792,458 plus or minus 1.2 meters per second or so. (The accuracy was not that great for years, but the error of 1.2 meters per second was the final accuracy achieved in the early 1980s.) Because the speed of light is so fundamental - adult physicists use units in which $c=1$, anyway - physicists decided in the 1980s to redefine the units so that both 1 meter and 1 second use the same type of electromagnetic wave to be defined. 1 meter was defined as 1/299,792,458 of a light second which, once again, agreed with the previous definition based on two different electromagnetic waves within the accuracy. The advantage is that the speed of light is known exactly, by definition, today. Up to the inconvenient numerical factor of 299,792,458 - which is otherwise convenient to communicate with ordinary and not so ordinary people, especially those who have been trained to use meters and seconds based on the solar day and meridian - it is as robust as the $c=1$ units.
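The arithmetic in the displayed formula is easy to reproduce (a two-line check, nothing more):

```python
# Inputs are the historical definitions quoted above: c in m/s, the solar day in s,
# and the old "meridian" definition of the meter (Earth circumference = 40,000 km).
c, day, circumference = 299_792_458, 86_400, 40_000_000 * 1_000 // 1_000
print(c * day / 40_000_000)   # ~ 647551.9 "circumferences of the Earth" per solar day
```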
{ "source": [ "https://physics.stackexchange.com/questions/9314", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3290/" ] }
9,415
We all learn in grade school that electrons are negatively-charged particles that inhabit the space around the nucleus of an atom, that protons are positively-charged and are embedded within the nucleus along with neutrons, which have no charge. I have read a little about electron orbitals and some of the quantum mechanics behind why electrons only occupy certain energy levels. However... How does the electromagnetic force work in maintaining the positions of the electrons? Since positive and negative charges attract each other, why is it that the electrons don't collide with the protons in the nucleus? Are there ever instances where electrons and protons do collide, and, if so, what occurs?
In fact the electrons (at least those in s-shells) do spend some non-trivial time inside the nucleus. The reason they spend a lot of time outside the nucleus is essentially quantum mechanical. To use too simple an explanation, their momentum is restricted to a range consistent with being captured (not free to fly away), and as such there is a necessary uncertainty in their position. An example of physics arising because they spend some time in the nucleus is the so-called "electron capture" radioactive decay in which $$ e + p \to n + \nu $$ occurs within the nucleus. The reason this does not happen in most nuclei is also quantum mechanical and is related to energy levels and Fermi-exclusion. To expand on this picture a little bit, let's appeal to de Broglie and Bohr. Bohr's picture of the electron orbits being restricted to a set of finite energies $E_n \propto 1/n^2$ and frequencies can be given a reasonably natural explanation in terms of de Broglie's picture of all matter as being composed of waves of frequency $f = E/h$ by requiring that an integer number of waves fit into the circular orbit. This leads to a picture of the atom in which all the electrons occupy neat circular orbits far away from the nucleus, and provides one explanation of why the electrons don't just fall into the nucleus under the electrostatic attraction. But it's not the whole story for a number of reasons; for our purposes the most important one is that Bohr's model predicts a minimum angular momentum for the electrons of $\hbar$ when the experimental value is 0. Pushing on, we can solve the Schrödinger equation in three dimensions for Hydrogen-like atoms: $$ \left( i\hbar\frac{\partial}{\partial t} - \hat{H} \right) \Psi = 0 $$ for electrons in a $1/r$ Coulomb potential to determine the wavefunction $\Psi$. The wave function is related to the probability density $P(\vec{x})$ for finding an electron at a point $\vec{x}$ in space by $$ P(\vec{x}) = \left| \Psi(\vec{x}) \right|^2 = \Psi^{*}(\vec{x}) \Psi(\vec{x}) $$ where $^{*}$ means the complex conjugate. The solutions are usually written in the form $$ \Psi(\vec{x}) = Y^m_l(\theta,\phi)\, \rho^l\, e^{-\rho/2}\, L^{2l+1}_{n-l-1}(\rho) \times \text{normalizing factors}, \qquad \rho = \frac{2r}{n a_0} $$ Here the $Y$'s are the spherical harmonics and the $L$'s are the generalized Laguerre polynomials. But we don't care for the details. Suffice it to say that these solutions represent a probability density for the electrons that is smeared out over a wide area around the nucleus. Also of note, for $l=0$ states (also known as s orbitals) there is a non-zero probability density at the center, which is to say in the nucleus (this fact arises because these orbitals have zero angular momentum, which you might recall was not a feature of the Bohr atom).
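To put a rough number on how much of its time an s-electron spends inside the nucleus, here is an estimate of my own for ground-state hydrogen (a point-proton model with a nominal ~1.2 fm nuclear radius; real nuclei and many-electron atoms differ in detail):

```python
import numpy as np
from scipy.integrate import quad

a0 = 5.29177e-11        # Bohr radius, m
R  = 1.2e-15            # rough proton radius, m (an assumed nominal value)

def radial_density(r):
    psi2 = np.exp(-2*r/a0) / (np.pi * a0**3)   # |psi_1s(r)|^2 for hydrogen
    return 4*np.pi * r**2 * psi2               # probability per unit radius

P, _ = quad(radial_density, 0.0, R)
print(f"P(electron inside nucleus) ~ {P:.2e}")  # ~ 1.6e-14: tiny, but nonzero
# For R << a0 this is well approximated by (4/3) (R/a0)^3:
print(f"small-R approximation      ~ {4/3*(R/a0)**3:.2e}")
```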
{ "source": [ "https://physics.stackexchange.com/questions/9415", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3395/" ] }
9,419
How can the universe become infinite in spatial extent if it started as a singularity? Wouldn't it take infinite time to expand into an infinite universe?
If the Universe is spatially infinite, it always had to be spatially infinite, even though the distances were shortened by an arbitrary factor right after the Big Bang. In the case of a spatially infinite Universe, one has to be careful that the singularity doesn't necessarily mean a single point in space. It is a place - the whole Universe - where quantities such as the density of matter diverge. In general relativity, people use the so-called Penrose (causal) diagrams of spacetime in which the light rays always propagate along diagonal lines tilted by 45 degrees. If you draw the Penrose diagram for an old-fashioned Big Bang cosmology, the Big Bang itself is a horizontal line - suggesting that the Big Bang was a "whole space worth of points" and not just a point. This is true whether or not the space is spatially infinite. At the popular level - and slightly beyond - these issues are nicely explained in Brian Greene's new book, The Hidden Reality .
{ "source": [ "https://physics.stackexchange.com/questions/9419", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3396/" ] }
9,663
From my recent experience teaching high school students I've found that they are taught that the strong force between nucleons is mediated by virtual-pion exchange, whereas between quarks it's gluons. They are not, however, taught anything about colour or quark-confinement. At a more sophisticated level of physics, is it just that the maths works equally well for either type of boson, or is one (type of boson) in fact more correct than the other?
Dear qftme, I agree that your question deserves a more expansive answer. The answer, "pions" or "gluons", depends on the accuracy with which you want to describe the strong force. Historically, people didn't know about quarks and gluons in the 1930s when they began to study the forces in the nuclei for the first time. In 1935, Hideki Yukawa made the most important early contribution of Japanese science to physics when he proposed that there may be short-range forces otherwise analogous to long-range electromagnetism whose potential is $$V(r) = K\frac{e^{-\mu r}}{r} $$ The Fourier transform of this potential is simply $1/(p^2+\mu^2)$ which is natural - an inverted propagator of a massive particle. (The exponential was added relative to the Coulomb potential; and in the Fourier transform, it's equivalent to the addition of $\mu^2$ in the denominator.) The Yukawa particle (a spinless boson) was mediating a force between particles that was only significantly nonzero for short enough distances. The description agreed with the application to protons, neutrons, and the forces among them. So the mediator of the strong force was thought to be a pion and the model worked pretty well. (In the 1930s, people were also confusing muons and pions in the cosmic rays, using names that sound bizarre to the contemporary physicists' ears - such as a mesotron, a hybrid of pion and muon, but that's another story.) The pion model was viable even when the nuclear interactions were understood much more quantitatively in the 1960s. The pions are "pseudo-Goldstone bosons". They're spinless (nearly) massless bosons whose existence is guaranteed by the existence of a broken symmetry - in this case, it was the $SU(3)$ symmetry rotating the three flavors we currently know as flavors of the $u,d,s$ light quarks. The symmetry is approximate, which is why the pseudo-Goldstone bosons, the pions (and kaons), are not exactly massless. But they're still significantly lighter than the protons and neutrons. However, the theory with the fundamental pion fields is not renormalizable - it boils down to the Lagrangian's being highly nonlinear and complicated. It inevitably produces absurd predictions at short enough distances or high enough energies - distances that are shorter than the proton radius. A better theory was needed. Finally, it was found in Quantum Chromodynamics, which explains all protons, neutrons, and even pions and kaons (and hundreds of others) as bound states of quarks (and gluons and antiquarks). In that theory, all the hadrons are described as complicated composite particles and all the forces ultimately boil down to the QCD Lagrangian where the force is due to the gluons. So whenever you study the physics at high enough energy or resolution so that you see "inside" the protons and you see the quarks, you must obviously use gluons as the messengers. Pions as messengers are only good in approximate theories in which the energies are much smaller than the proton mass. This condition also pretty much means that the velocities of the hadrons have to be much smaller than the speed of light.
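A quick number to attach to Yukawa's potential (my own aside, not part of the answer above): the range $1/\mu$ implied by the pion mass comes out at about the size of a nucleon, which is why the pion picture works so well at exactly that scale.

```python
# Range of the Yukawa force set by the pion mass:  1/mu = hbar / (m_pi c).
hbar_c = 197.327      # MeV*fm
m_pi   = 139.570      # MeV (charged pion); the neutral pion is ~135.0 MeV
print(f"range ~ hbar/(m_pi c) ~ {hbar_c/m_pi:.2f} fm")   # ~1.4 fm, roughly a nucleon radius
```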
{ "source": [ "https://physics.stackexchange.com/questions/9663", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1696/" ] }
9,686
The action $$S=\int L \;\mathrm{d}t$$ is an important physical quantity. But can it be understood more intuitively? The Hamiltonian corresponds to the energy, whereas the action has dimension of energy × time, the same as angular momentum. I've heard the action being described as a measure of change, although I don't know how this description can be justified.
The action $S$ is not a well-known object for the laymen; however, when one seriously works as a physicist, it becomes as important and natural as the energy $H$. So the action is probably unintuitive for the inexperienced users - and there's no reason to hide it - but it is important for professional physicists, especially in particle and theoretical physics. The OP's statement that the Hamiltonian corresponds to energy is a vacuous tautology because the Hamiltonian is a technical synonym for energy. In the same way, one might say that the action intuitively corresponds to Wirkung (a German name) because it's the same thing, too. Because it now has two names, it becomes more natural :-) and the OP could also blame the energy for having "unnatural" units of action per unit time. In other words, the question assumes that energy (and its unit) is more fundamental and intuitive than the action (and its unit) - so it shouldn't be surprising that using his assumptions, the OP may also "deduce" the conclusion that the energy is more fundamental and intuitive than the action. ;-) But is the assumption = conclusion right? Well, energy is intuitive because it's conserved, and the action is intuitive because it's minimized - so there is no qualitative difference in their importance. Of course, the only difference is that non-physicists don't learn to use the action at all. The energy may be imagined as "potatoes", which everyone can do; the action is an abstract score on the history that is only useful once we start to derive differential equations out of it - which almost no layman can imagine. If the laymen's experience with a concept measures whether something is "intuitive", then the action simply is less intuitive and there is no reason to pretend otherwise. However, physicists learn that it's in some sense more fundamental than the energy. Well, the Hamiltonian is the key formula defining time evolution in the Hamiltonian picture while the action is the key formula to define the evolution in the nicer, covariant, "spacetime" picture, which is why HEP physicists use it all the time.

What the action is in general

Otherwise, the main raison d'etre for the action is the principle of least action, which is what everyone should learn if they want to know anything about the action itself. Historically, this principle - and the concept of action - generalized various rules for the light rays that minimize time to get somewhere, and so on. It makes no sense to learn about a quantity without learning about the defining "application" that makes it important in physics. Energy is defined so that it's conserved whenever the laws of Nature are time-translational symmetric; and action is defined as whatever is minimized by the history that the system ultimately takes to obey the same laws. The energy is a property of a system at a fixed moment of time - and because it's usually conserved, it has the same values at all moments. On the other hand, the action is not associated with the state of a physical object; it is associated with a history. There is one point I need to re-emphasize. For particular systems, there may exist particular "defining" formulae for the Hamiltonian or the action, such as $E=mv^2/2$ or $S = \int dt(mv^2/2-kx^2/2)$. However, they're not the most universal and valid definitions of the concepts. These formulae don't explain why they were chosen in this particular way, what they're good for, and how to generalize them in other systems.
And one shouldn't be surprised that one may derive the right equations of motion out of these formulae for $H$ or $S$ . Instead, the energy is universally defined in such a way that it is conserved as a result of the time-translational symmetry; and the action is defined in such a way that the condition $\delta S = 0$ (stationarity of the action) is equivalent to the equations of motion. These are the general conditions that define the concepts in general and that make them important; particular formulae for the energy or action are just particular applications of the general rules. In the text above, I was talking about classical i.e. non-quantum physics. In quantum physics, the action doesn't pick the only allowed history; instead, one calculates the probability amplitudes as sums over all histories weighted by $\exp(iS/\hbar)$ which may be easily seen to reduce to the classical predictions in the classical limit. A stationary action of a history means that the nearby histories have a similar phase and they constructively interfere with each other, making the classically allowed history more important than others.
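Since the principle of least action is what gives $S$ its meaning, here is a minimal numerical sketch of it (my own toy example, not from the answer: a unit-mass particle in a unit-strength harmonic potential with fixed endpoints, paths discretized on a time grid). The classical path should score a smaller action than any wiggled path with the same endpoints:

```python
import numpy as np

# Action S = integral of (m v^2/2 - k x^2/2) dt with m = k = 1, endpoints fixed
# at x(0) = 0 and x(1) = 1. The classical solution of x'' = -x is sin(t)/sin(1).
m = k = 1.0
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]

def action(x):
    v = np.gradient(x, dt)
    return np.sum(m*v**2/2 - k*x**2/2) * dt

x_cl = np.sin(t) / np.sin(1.0)
print("classical path:", action(x_cl))

rng = np.random.default_rng(0)
for _ in range(3):
    wiggle = np.sin(np.pi*t) * rng.normal(0.0, 0.2)   # vanishes at both endpoints
    print("wiggled path:  ", action(x_cl + wiggle))   # always larger than the classical value
```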
{ "source": [ "https://physics.stackexchange.com/questions/9686", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2726/" ] }
9,720
On a quantum scale the smallest unit is the Planck scale, which is a discrete measure. There are several questions that come to mind: Does that mean that particles can only live in a discrete grid-like structure, i.e. have to "magically" jump from one pocket to the next? But where are they in between? Does that even give rise to the old paradox that movement as such is impossible (e.g. Zeno's paradox)? Does the same hold true for time (i.e. that it is discrete) - with all the ensuing paradoxes? Mathematically, does it mean that you have to use difference equations instead of differential equations? (And sums instead of integrals?) From the point of view of the space metric, do you have to use a discrete metric (e.g. the Manhattan metric) instead of good old Pythagoras? Thank you for giving me some answers and/or references I can turn to. Update: I just saw this call for papers - it seems to be quite a topic after all: Is Reality Digital or Analog? FQXi Essay Contest, 2011. Call for papers (at Wayback Machine), All essays, Winners. One can find some pretty amazing papers over there.
The answer to all questions is No. In fact, even the right reaction to the first sentence - that the Planck scale is a "discrete measure" - is No. The Planck length is a particular value of distance which is as important as $2\pi$ times the distance or any other multiple. The fact that we can speak about the Planck scale doesn't mean that the distance becomes discrete in any way. We may also talk about the radius of the Earth which doesn't mean that all distances have to be its multiples. In quantum gravity, geometry with the usual rules doesn't work if the (proper) distances are thought of as being shorter than the Planck scale. But this invalidity of classical geometry doesn't mean that anything about the geometry has to become discrete (although it's a favorite meme promoted by popular books). There are lots of other effects that make the sharp, point-based geometry we know invalid - and indeed, we know that in the real world, the geometry collapses near the Planck scale because of other reasons than discreteness. Quantum mechanics got its name because according to its rules, some quantities such as energy of bound states or the angular momentum can only take "quantized" or discrete values (eigenvalues). But despite the name, that doesn't mean that all observables in quantum mechanics have to possess a discrete spectrum. Do positions or distances possess a discrete spectrum? The proposition that distances or durations become discrete near the Planck scale is a scientific hypothesis and it is one that may be - and, in fact, has been - experimentally falsified. For example, these discrete theories inevitably predict that the time needed for photons to get from very distant places of the Universe to the Earth will measurably depend on the photons' energy. The Fermi satellite has showed that the delay is zero within dozens of milliseconds http://motls.blogspot.com/2009/08/fermi-kills-all-lorentz-violating.html which proves that the violations of the Lorentz symmetry (special relativity) of the magnitude that one would inevitably get from the violations of the continuity of spacetime have to be much smaller than what a generic discrete theory predicts. In fact, the argument used by the Fermi satellite only employs the most straightforward way to impose upper bounds on the Lorentz violation. Using the so-called birefringence, http://arxiv.org/abs/1102.2784 one may improve the bounds by 14 orders of magnitude! This safely kills any imaginable theory that violates the Lorentz symmetry - or even continuity of the spacetime - at the Planck scale. In some sense, the birefringence method applied to gamma ray bursts allows one to "see" the continuity of spacetime at distances that are 14 orders of magnitude shorter than the Planck length. It doesn't mean that all physics at those "distances" works just like in large flat space. It doesn't. But it surely does mean that some physics - such as the existence of photons with arbitrarily short wavelengths - has to work just like it does at long distances. And it safely rules out all hypotheses that the spacetime may be built out of discrete, LEGO-like or any qualitatively similar building blocks.
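For concreteness, the Planck length and time discussed above are just particular combinations of $\hbar$, $G$ and $c$ - ordinary numbers, not a "pixel size" of space - and are easy to evaluate:

```python
import math

hbar, G, c = 1.054571817e-34, 6.67430e-11, 2.99792458e8   # SI values
l_P = math.sqrt(hbar * G / c**3)    # Planck length
t_P = l_P / c                       # Planck time
print(f"Planck length ~ {l_P:.2e} m, Planck time ~ {t_P:.2e} s")
# ~1.6e-35 m and ~5.4e-44 s
```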
{ "source": [ "https://physics.stackexchange.com/questions/9720", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/171/" ] }
9,731
I used to read the term "pure energy" in the context of matter-antimatter annihilation. Is the "pure energy" spoken of photons? Is it some form of heat? Some kind of particles with mass? Basically, what does "pure energy" in the context of matter-antimatter annihilation refer to?
If I ruled the world, I would ban the phrase "pure energy" in contexts like this. There's no such thing as pure energy! When particles and antiparticles annihilate, the resulting energy can take many different forms -- one of the basic principles of quantum physics is that any process that's not forbidden (say, because of violation of some sort of conservation law) has some probability of happening. So when a proton and an antiproton annihilate, they can produce photons, or pairs of other particles and antiparticles, such as a neutrino-antineutrino pair, or a positron-electron pair. Although all sorts of things are possible, by far the most common product of matter-antimatter annihilation is photons, especially if the collision occurs at low energy. One reason is that lower-mass particles are easier to create than high-mass particles, and nothing has less mass than a photon. (Other particles, particularly neutrinos, have so little mass that they are "effectively massless," but neutrinos are weakly interacting particles, which means that the probability of producing them is lower.)
{ "source": [ "https://physics.stackexchange.com/questions/9731", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1916/" ] }
9,814
I found a pair of polarizing "3D glasses" lying around, and tried to look at myself in the mirror while wearing them. To my utter confusion, when closing the left eye and only looking through the right eye, I could not see the right eye in the mirror. The light could not pass through the same polarized lens twice. (I could, however, see the closed left eye clearly.) I would expect the opposite to be true, as light going out the right lens with polarization X and coming back in with the same polarization X should pass through unaffected.
See the Wiki article on Polarized 3D glasses. Most likely, you have a pair of circularly polarized glasses. The mirror reverses the circular polarization. The article on Circular polarization does it better than I would be likely to achieve in less than an hour or two. Or Hyperphysics, or Google.
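To see the mechanism explicitly, here is a small Jones-calculus sketch of my own. It assumes a typical (but, for your particular pair, hypothetical) construction: each lens is a linear polarizer on the eye side followed by a quarter-wave plate on the world side, the two quarter-wave plates have fast axes at +45° and -45°, both linear polarizers are parallel, and a normal-incidence mirror is treated as the identity (up to an overall phase) in a fixed lab basis, with all the optical elements represented by symmetric Jones matrices so the return pass uses the same matrices. The qualitative conclusion - your own eye is dark through the same lens, while the other eye stays visible - does not depend on these details:

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def qwp(theta):                        # quarter-wave plate, fast axis at angle theta
    return rot(theta) @ np.diag([1, 1j]) @ rot(-theta)

P   = np.diag([1, 0])                  # linear polarizer along x (same in both lenses)
Q_R = qwp(+np.pi/4)                    # right lens wave plate
Q_L = qwp(-np.pi/4)                    # left lens wave plate (opposite handedness)

out = Q_R @ P @ np.array([1.0, 0.0])   # light leaving through the right lens
# ...it reflects off the mirror (identity up to phase), then re-enters a lens:
back_same  = P @ Q_R @ out             # back through the SAME (right) lens
back_other = P @ Q_L @ out             # back through the OTHER (left) lens

print("same lens  ->", np.round(np.abs(back_same), 3))   # [0, 0]: blocked (dark eye)
print("other lens ->", np.round(np.abs(back_other), 3))  # nonzero: passes (visible eye)
```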
{ "source": [ "https://physics.stackexchange.com/questions/9814", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3528/" ] }
10,052
I've read on NASA's page on neutron stars that one teaspoonful of such a star would weigh over 20 billion tonnes on Earth. If it was somehow possible to bring it to Earth, would it: Burn and disappear during Earth atmosphere entry? Assuming we have 20 billion tonnes of mass occupying the volume of a teaspoon here on Earth, would it fall through the ground under its own weight?
The reason that the density is so high is because the pressures are so immense. If we somehow teleported a teaspoonful of neutron star material to earth, it would very rapidly inflate because the pressures aren't high enough to crush it into its dense form. This would effectively be an enormous explosion. It is difficult to describe what it would inflate out into - the neutron star material can be imagined as an incredibly dense soup of neutrons with some protons and leptons in small numbers. The protons and leptons would make neutron-rich elements like deuterium, but most of the matter would consist of free neutrons. These free neutrons would undergo beta decay to produce antineutrinos, protons, and electrons, which would likely recombine to make very large amounts of hydrogen, some helium, and a few heavier atoms. In all of these cases, the atoms would be neutron-rich isotopes, though. The behavior would look most like a very rapidly expanding gas. It would explode with such force that it wouldn't even need to "fall through the ground" - it would obliterate the floor entirely.
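Just to put a very rough number of my own on "enormous explosion": counting only the beta-decay energy of the freed neutrons, and ignoring the (presumably far larger) energy released as the degenerate matter decompresses, one gets something like the following.

```python
# Order-of-magnitude estimate: beta-decay energy of ~20 billion tonnes of free neutrons.
m_sample   = 2e13                      # kg, "20 billion tonnes"
m_neutron  = 1.675e-27                 # kg
E_per_decay = 0.782e6 * 1.602e-19      # J, ~0.78 MeV released per neutron decay

N = m_sample / m_neutron
E = N * E_per_decay
print(f"{N:.1e} neutrons, ~{E:.1e} J from beta decay alone "
      f"(~{E/4.184e15:.0e} megatons of TNT equivalent)")
```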
{ "source": [ "https://physics.stackexchange.com/questions/10052", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3605/" ] }
10,078
The Hubble parameter $H(t)$ appears to be changing over time. The fine structure constant $\alpha$, like many others in QFT, is a running constant that varies with the energy at which it is measured. Therefore, it could be argued that all running constants have 'evolved' over time as the Universe has expanded and cooled. Both the local and global curvature of the Universe changes over time, implying that so too does the numerical value of $\pi$. All these things are however constants (well, let's say parameters since they are not really 'constant'.) In a discussion with astronomer Sir Fred Hoyle, Feynman said " what today, do we not consider to be part of physics, that may ultimately become part of physics? " He then goes on to say " ..it's interesting that in many other sciences there's a historical question, like in geology - the question how did the Earth evolve into the present condition? In biology - how did the various species evolve to get to be the way they are? But the one field that hasn't admitted any evolutionary question - is physics. " So have the laws of physics remained form-invariant over the lifetime of the Universe? Does the recent understanding of the aforementioned not-so-constant constants somehow filter into the actual form of the equations being used? Have advances in astronomical observations, enabling us to peer back in time as far as the CMB, given us any evidence to suggest that the laws of nature have evolved? If Feynman thinks that " It might turn out that they're not the same all the time and that there is a historical, evolutionary question. " then this is surely a question worth asking. NB/ To be clear: this is a question concerning purely physics, whether the equations therein change as the Universe ages, and whether there is any observational evidence for this. It is not intended as an opportunity for a philosophical discussion.
For many (most? all?) physicists, it's something like an axiom (or an article of faith, if you prefer) that the true laws don't change over time. If we find out that one of our laws does change, we start looking for a deeper law that subsumes the original and that can be taken to be universal in time and space. A good example is Coulomb's Law, or more generally the laws of electromagnetism. In a sense, you could say that Coulomb's Law changed form over time: in the early Universe, when the energy density was high enough that electroweak symmetry was unbroken, Coulomb's Law wasn't true in any meaningful or measurable sense. If you thought that Coulomb's Law today was a fundamental law of nature, then you'd say that that law changed form over time: it didn't use to be true, but now it is. But of course that's not the way we usually think of it. Instead, we say that Coulomb's Law was never a truly correct fundamental law of nature; it was always just a special case of a more general law, valid in certain circumstances. A more interesting example, along the same lines: Lots of theories of the early Universe involve the idea that the Universe in the past was in a "false vacuum" state, but then our patch of the Universe decayed to the "true vacuum" (or maybe just another false vacuum!). If you were around then, you'd definitely perceive that as a complete change in the laws of physics: the particles that existed, and the ways those particles interacted, were completely different before and after the decay. But we tend not to think of that as a change in the laws of physics, just as a change in the circumstances within which we apply the laws. The point is just that when you try to ask a question about whether the fundamental laws change over time, you have to be careful to distinguish between actual physics questions and merely semantic questions. Whether the Universe went through one of these false vacuum decays is (to me, anyway) a very interesting physics question. I care much less whether we describe such a decay as a change in the laws of physics.
{ "source": [ "https://physics.stackexchange.com/questions/10078", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1696/" ] }
10,230
Operators can be cyclically interchanged inside a trace: $${\rm Tr} (AB)~=~{\rm Tr} (BA).$$ This means the trace of a commutator of any two operators is zero: $${\rm Tr} ([A,B])~=~0.$$ But what about the commutator of the position and momentum operators for a quantum particle? On the one hand: $${\rm Tr}([x,p])~=~0,$$ while on the other hand: $$[x,p]~=~i\hbar.$$ How does this work out?
$x$ and $p$ do not have finite-dimensional representations. In particular, $xp$ and $px$ are not "trace-class". Loosely, this means that the traces of $xp$ and $px$ are both infinite, although it's best to take them both to be undefined. Again loosely, if you subtract $\infty-\infty$, you can certainly get $i\hbar$. But you shouldn't. Everything works out if you think of $p$ as a complex multiple of the derivative operator, for which $\frac{\partial}{\partial x}$ and $x$ act on the infinite dimensional space of polynomials in $x$.
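A concrete way to see this (my own illustration): build $x$ and $p$ in a finite number of harmonic-oscillator states. Every finite truncation has a traceless commutator, but the price is that the commutator fails to equal $i\hbar$ times the identity in the last diagonal entry, and that failure never goes away as the truncation grows.

```python
import numpy as np

# x and p restricted to the lowest N harmonic-oscillator states (hbar = 1).
N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)        # annihilation operator (truncated)
x = (a + a.T.conj()) / np.sqrt(2)
p = 1j * (a.T.conj() - a) / np.sqrt(2)

C = x @ p - p @ x                                  # the truncated commutator
print(np.round(np.diag(C).imag, 3))  # [1, 1, 1, 1, 1, -(N-1)]: not i times the identity
print(np.trace(C))                   # ~0, as it must be for any finite matrices
```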
{ "source": [ "https://physics.stackexchange.com/questions/10230", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/975/" ] }
10,362
I read that the non-commutativity of the quantum operators leads to the uncertainty principle. What I don't understand is how both things hang together. Is it that when you measure one thing first and then the other, you get a predictably different result than when measuring the other way round? I know what non-commutativity means (even the minus operator is non-commutative) and I think I understand the uncertainty principle (when you measure one thing the measurement of the other thing is kind of blurred - and vice versa) - but I don't get the connection. Perhaps you could give a very easy everyday example with non-commuting operators (like subtraction or division) and how this induces uncertainty and/or give an example with commuting operators (addition or multiplication) and show that there would be no uncertainty involved.
There is a fair amount of background mathematics to this question, so it will be a while before the punch line. In quantum mechanics, we aren't working with numbers to represent the state of a system. Instead we use vectors . For the purpose of a simple introduction, you can think of a vector as a list of several numbers. Therefore, a number itself is a vector if we let the list length be one. If the list length is two, then $(.6, .8)$ is an example vector. The operators aren't things like plus, minus, multiply, divide. Instead, they are functions; they take in one vector and put out another vector. Multiplication isn't an operator, but multiplication by two is. An operator acts on a vector. For example, if the operator "multiply by two" acts on the vector $(.6, .8)$, we get $(1.2, 1.6)$. Commutativity is a property of two operators considered together. We cannot say "operator $A$ is non-commutative", because we're not comparing it to anything. Instead, we can say "operator $A$ and operator $B$ do not commute". This means that the order you apply them matters. For example, let operator $A$ be "switch the two numbers in the list" and operator $B$ be "subtract the first one from the second". To see whether these operators commute, we take the general vector $(a,b)$ and apply the operators in different orders. As an example of notation, if we apply operator $A$ to $(a,b)$, we get $(b,a)$. This can be written $A(a,b) = (b,a)$. $$BA(a,b) = (b,a-b)$$ $$AB(a,b) = (b-a,a)$$ When we apply the operators in the different orders, we get a different result. Hence, they do not commute. The commutator of the operators is defined by $$\textrm{commutator}(A,B) = [A,B] = AB - BA$$ This is a new operator. Its output for a given input vector is defined by taking the input vector, acting on it with $B$, then acting on the result with $A$, then going back to the original vector and doing the same in opposite order, then subtracting the second result from the first. If we apply this composite operator (to wit: the commutator) to $(a,b)$, we get (by subtraction using the two earlier results) $$(AB - BA)(a,b) = (-a,b)$$ So the commutator of $A$ and $B$ is the operator that multiplies the first entry by minus one. An eigenvector of an operator is a vector that is unchanged when acted on by that operator, except that the vector may be multiplied by a constant. Everything is an eigenvector of the operator "multiply by two". The eigenvectors of the switch operator $A$ are $\alpha(1,1)$ and $\beta(1,-1)$, with $\alpha$ and $\beta$ any numbers. For $(1,1)$, switching the entries does nothing, so the vector is unchanged. For $(1,-1)$, switching the entries multiplies by negative one. On the other hand if we switch the entries in $(.6,.8)$ to get $(.8,.6)$, the new vector and the old one are not multiples of each other, so this is not an eigenvector. The number that the eigenvector is multiplied by when acted on by the operator is called its eigenvalue. The eigenvalue of $(1,-1)$ is $-1$, at least when we're talking about the switching operator. In quantum mechanics, there is uncertainty for a state that is not an eigenvector, and certainty for a state that is an eigenvector. The eigenvalue is the result of the physical measurement of the operator. For example, if the energy operator acts on a state (vector) with no uncertainty in the energy, we must find that that state is an eigenvector, and that its eigenvalue is the energy of the state. 
On the other hand, if we make an energy measurement when the system is not in an eigenvector state, we could get different possible results, and it is impossible to predict which one it will be. We will get an eigenvalue, but it's the eigenvalue of some other state, since our state isn't an eigenvector and doesn't even have an eigenvalue. Which eigenvalue we get is up to chance, although the probabilities can be calculated. The uncertainty principle states roughly that non-commuting operators cannot both have zero uncertainty at the same time because there cannot be a vector that is an eigenvector of both operators. (Actually, we will see in a moment that this is not precisely correct, but it gets the gist of it. Really, operators whose commutators have a zero-dimensional null space cannot have a simultaneous eigenvector.) The only eigenvector of the subtraction operator $B$ is $\gamma(0,1)$. Meanwhile, the only eigenvectors of the switch operator $A$ are $\alpha(1,1)$ and $\beta(1,-1)$. There are no vectors that are eigenvectors of both $A$ and $B$ at the same time (except the trivial $(0,0)$), so if $A$ and $B$ represented physical observables, we could not be certain of both $A$ and $B$ at the same time. ($A$ and $B$ are not actually physical observables in QM, I just chose them as simple examples.) We would like to see that this works in general - any time two operators do not commute (with certain restrictions), they do not have any simultaneous eigenvectors. We can prove it by contradiction. Suppose $(a,b)$ is an eigenvector of $A$ and $B$. Then $A(a,b) = \lambda_a(a,b)$, with $\lambda_a$ the eigenvalue. A similar equation holds for $B$. $$AB(a,b) = \lambda_a\lambda_b(a,b)$$ $$BA(a,b) = \lambda_b\lambda_a(a,b)$$ Because $\lambda_a$ and $\lambda_b$ are just numbers being multiplied, they commute, and the two values are the same. Thus $$(AB-BA)(a,b) = (0,0)$$ So the commutator of $A$ and $B$ gives zero when it acts on their simultaneous eigenvector. Many commutators can't give zero when they act on a non-zero vector, though. (This is what it means to have a zero-dimensional null space, mentioned earlier.) For example, our switch and subtract operators had a commutator that simply multiplied the first number by minus one. Such a commutator can't give zero when it acts on anything that isn't zero already, so our example $A$ and $B$ can't have a simultaneous eigenvector, so they can't be certain at the same time, so there is an "uncertainty principle" for them. If the commutator had been the zero operator, which turns everything into zero, then there's no problem. $(a,b)$ can be whatever it wants and still satisfy the above equation. If the commutator had been something that turns some vectors into the zero vector, those vectors would be candidates for zero-uncertainty states, but I can't think of any examples of this situation in real physics. In quantum mechanics, the most famous example of the uncertainty principle is for the position and momentum operators. Their commutator is the identity - the operator that does nothing to states. (Actually it's the identity times $i \hbar$.) This clearly can't turn anything into zero, so position and momentum cannot both be certain at the same time. However, since their commutator multiplies by $\hbar$, a very small number compared to everyday things, the commutator can be considered to be almost zero for large, energetic objects. Therefore position and momentum can both be very nearly certain for everyday things.
On the other hand, the angular momentum and energy operators commute, so it is possible for both of these to be certain. The most mathematically accessible non-commuting operators are the spin operators, represented by the Pauli spin matrices . These deal with vectors with only two entries. They are slightly more complicated than the $A$ and $B$ operators I described, but they do not require a complete course in the mathematics of quantum mechanics to explore. In fact, the uncertainty principle says more than I've written here - I left parts out for simplicity. The uncertainty of a state can be quantified via the standard deviation of the probability distribution for various eigenvalues. The full uncertainty principle is usually stated $$\Delta A \Delta B \geq \frac{1}{2}\mid \langle[A,B]\rangle \mid$$ where $\Delta A$ is the uncertainty in the result of a measurement in the observable associated with the operator $A$ and the brackets indicate finding an expectation value . If you would like some details on this, I wrote some notes a while ago that you can access here .
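If you want to play with these statements numerically, here is a short script of my own that encodes the switch and subtract operators above as matrices, and then checks the quantitative uncertainty relation for the Pauli matrices $\sigma_x$ and $\sigma_y$ on a randomly chosen state:

```python
import numpy as np

# The two toy operators from the text, acting on vectors (a, b):
A = np.array([[0, 1], [1, 0]])        # "switch the two entries"
B = np.array([[1, 0], [-1, 1]])       # "(a, b) -> (a, b - a)"
print(A @ B - B @ A)                  # [[-1, 0], [0, 1]]: multiplies the first entry by -1

# Uncertainty relation Delta(sx)*Delta(sy) >= |<[sx, sy]>|/2 for a random qubit state:
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

def expval(op):
    return (psi.conj() @ op @ psi).real

def sigma(op):                        # standard deviation of the observable
    return np.sqrt(expval(op @ op) - expval(op)**2)

lhs = sigma(sx) * sigma(sy)
rhs = 0.5 * abs(psi.conj() @ (sx @ sy - sy @ sx) @ psi)
print(lhs, ">=", rhs, ":", lhs >= rhs - 1e-12)
```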
{ "source": [ "https://physics.stackexchange.com/questions/10362", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/171/" ] }
10,464
Often, I'll be driving down the road on a summer day, and as I look ahead toward the horizon, I notice that the road looks like there's a puddle of water on it, or that it was somehow wet. Of course, as I get closer, the effect disappears. I know that it is some kind of atmospheric effect. What is it called, and how does it work?
The phenomenon is called a mirage (EDIT: I called it a Fata Morgana earlier, but a Fata Morgana is a special case of mirage that's a bit more complex). The responsible effect is the dependence of the refractive index of air on the density of air, which, in turn, depends on the temperature of the air (hot air being less dense than cold air). A non-constant density leads to refraction of light. If there's a continuous gradient in the density, light reaching you from a distant object follows a bent curve instead of coming straight at you. Your eye does not know, of course, that the incoming light was bent, so your eye/brain continues the incoming ray backwards in a straight line, which appears to originate at the road surface. This mirroring of the car (or other objects) then tricks you into thinking the road is wet, because a wet street would also lead to a reflection. In addition, the air wobbles (i.e. density fluctuations), causing the mirror image to wobble as well, which adds to the illusion of water.
{ "source": [ "https://physics.stackexchange.com/questions/10464", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3395/" ] }
10,470
When the sun is out after a rain, I can see what appears to be steam rising off a wooden bridge nearby. I'm pretty sure this is water turning into a gas. However, I thought water had to reach 100 degrees C to be able to turn into a gas. Is there an edge case, for small amounts of water perhaps, that allows it to evaporate?
Evaporation is a different process to boiling. The former is a surface effect that can happen at any temperature, while the latter is a bulk transformation that only happens when the conditions are correct. The bulk of the water does not need to turn into gas all at once: random thermal movement of the surface molecules gives some of them enough energy to escape from the surface into the air as vapour. The rate at which they leave the surface depends on a number of factors - for instance the temperature of both air and water, the humidity of the air, and the size of the surface exposed. When the bridge is 'steaming': the wood is marginally warmer than the air (due to the sunshine), the air is very humid (it has just been raining) and the water is spread out to expose a very large surface area. In fact, since the air is cooler and almost saturated with water, the molecules of water are almost immediately condensing into micro-droplets in the air - which is why you can see them. BTW - as water vapour is a gas, it is completely transparent. If you can see it then it is steam, which consists of tiny water droplets (basically water vapour that has condensed). Consider a kettle boiling - the white plume only occurs a short distance above the spout. Below that it is water vapour; above, it has cooled into steam. Steam disappears after a while, as it has evaporated once again.
{ "source": [ "https://physics.stackexchange.com/questions/10470", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3751/" ] }
10,690
Are there any analytical proofs for the 2nd law of thermodynamics ? Or is it based entirely on empirical evidence?
It's simple to "roughly prove" the second law in the context of statistical physics. The evolution $A\to B$ of macrostate $A$, containing $\exp(S_A)$ microstates, to macrostate $B$, containing $\exp(S_B)$ microstates, is easily shown - by the formula for the probability ("summing over final outcomes, averaging over initial states") - to be $\exp(S_B-S_A)$ times more probable than the inverse process (with velocities reversed). Because $S_B-S_A$ is supposed to be macroscopic, such as $10^{26}$ for a kilogram of matter, the probability in the wrong direction is the exponential of minus this large difference and is zero for all practical purposes. The more rigorous versions of this proof are always variations of the 1872 proof of the so-called H-theorem by Ludwig Boltzmann; see the Wikipedia article on the H-theorem. This proof may be adjusted to particular or general physical systems, both classical ones and quantum ones. Please ignore the invasive comments on the Wikipedia about Loschmidt's paradoxes and similar stuff which is based on a misunderstanding. The H-theorem is a proof that the thermodynamic arrow of time - the direction of time in which the entropy increases - is inevitably aligned with the logical arrow of time - the direction in which one is allowed to make assumptions (the past) in order to evolve or predict other phenomena (in the future). Every Universe of our type has to have a globally well-defined logical arrow of time: it has to know that the future evolves directly from the past (probabilistically, but with objectively calculable probabilities). So any universe has to distinguish the future and the past logically, it has to have a logical arrow of time, which is also imprinted on our asymmetric reasoning about the past and the future. Given these qualitative assumptions that are totally vital for the usage of logic in any setup that works with a time coordinate, the H-theorem shows that a particular quantity can't be decreasing, at least not by macroscopic amounts, for a closed system.
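For the record, here is the counting behind that ratio, spelled out as a sketch (with $N_A = e^{S_A}$ and $N_B = e^{S_B}$ the numbers of microstates in the two macrostates). Averaging over initial microstates and summing over final ones gives $$P(A\to B) = \frac{1}{N_A}\sum_{a\in A}\sum_{b\in B} p(a\to b), \qquad P(B\to A) = \frac{1}{N_B}\sum_{b\in B}\sum_{a\in A} p(b\to a),$$ and microscopic reversibility (the velocity-reversed process has the same probability) makes the two double sums equal, so $$\frac{P(A\to B)}{P(B\to A)} = \frac{N_B}{N_A} = e^{\,S_B-S_A}.$$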
{ "source": [ "https://physics.stackexchange.com/questions/10690", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1512/" ] }
10,800
The Hall effect can be used to determine the sign of the charge carriers, as a positive particle drifting along the wire and a negative particle drifting in the other direction get deflected the same way (as $F = q \vec{v}\times\vec{B} = (-q) (-\vec{v})\times\vec{B}$). But I don't understand how positive charge carriers are ever possible. As I understand it, a positive hole is nothing more than the absence of an electron. As all the electrons in the valence band are still negatively charged, why would this hole behave in a magnetic field as if it were positive? Also, a hole is created if an electron is excited into the conduction band. If there is always the same number of holes as electrons, how can any Hall effect ever occur? Thank you very much
There are two essential facts that make a hole a hole: Fact (1) The valence band is almost full of electrons (unlike the conduction band which is almost empty); Fact (2) The dispersion relation near the valence band maximum curves in the opposite direction to a normal electron or a conduction-band electron. Fact (2) is often omitted in simplistic explanations, but it's crucial, so I'll elaborate. STEP 1: Dispersion relation determines how electrons respond to forces (via the concept of effective mass) EXPLANATION: A dispersion relation is the relationship between wavevector (k-vector) and energy in a band, part of the band structure. Remember, in quantum mechanics, the electrons are waves, and energy is the wave frequency. A localized electron is a wavepacket, and the motion of an electron is given by the formula for the group velocity of a wave. An electric field affects an electron by gradually shifting all the wavevectors in the wavepacket, and the electron moves because its wave group velocity changes. Again, the way an electron responds to forces is entirely determined by its dispersion relation. A free electron has the dispersion relation $E=\frac{\hbar^2k^2}{2m}$, where m is the (real) electron mass. In the conduction band, the dispersion relation is $E=\frac{\hbar^2k^2}{2m^*}$ ($m^*$ is the "effective mass"), so the electron responds to forces as if it had the mass $m^*$. STEP 2: Electrons near the top of the valence band behave like they have negative mass. EXPLANATION: The dispersion relation near the top of the valence band is $E=\frac{\hbar^2k^2}{2m^*}$ with negative effective mass. So electrons near the top of the valence band behave like they have negative mass. When a force pulls the electrons to the right, these electrons actually move left!! I want to emphasize again that this is solely due to Fact (2) above, not Fact (1) . If you could somehow empty out the valence band and just put one electron near the valence band maximum (an unstable situation of course), this electron would really move the "wrong way" in response to forces. STEP 3: What is a hole, and why does it carry positive charge? EXPLANATION: Here we're finally invoking Fact (1) . A hole is a state without an electron in an otherwise-almost-full valence band. Since a full valence band doesn't do anything (can't carry current), we can calculate currents by starting with a full valence band and subtracting the motion of the electrons that would be in the hole state if it wasn't a hole. Subtracting the current from a negative charge moving is the same as adding the current from a positive charge moving on the same path. STEP 4: A hole near the top of the valence band move the same way as an electron near the top of the valence band would move. EXPLANATION: This is blindingly obvious from the definition of a hole. But many people deny it anyway, with the "parking lot example". In a parking lot, it is true, when a car moves right, an empty space moves left. But electrons are not in a parking lot. A better analogy is a bubble underwater in a river: The bubble moves the same direction as the water, not opposite. STEP 5: Put it all together. From Steps 2 and 4, a hole responds to electromagnetic forces in the exact opposite direction that a normal electron would. But wait, that's the same response as it would have if it were a normal particle with positive charge. Also, from Step 3, a hole in fact carries a positive charge. 
So to sum up, holes (A) carry a positive charge, and (B) respond to electric and magnetic fields as if they have a positive charge. That explains why we can completely treat them as real mobile positive charges in their response to both electric and magnetic fields. So it's no surprise that the Hall effect can show the signs of mobile positive charges.
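To see Fact (2) quantitatively, here is a small numerical sketch (my own, using an assumed one-dimensional tight-binding band $E(k)=-2t\cos(ka)$ as a stand-in dispersion, with $\hbar=t=a=1$): the effective mass $m^*=\hbar^2/(\mathrm d^2E/\mathrm dk^2)$ is positive near the band bottom and negative near the band top, which is exactly why carriers near the valence-band maximum respond like positive charges.

```python
import numpy as np

# Assumed 1D tight-binding dispersion E(k) = -2 t cos(k a); hbar = t = a = 1.
t, a = 1.0, 1.0
k = np.array([-0.95, -0.6, -0.2, 0.0, 0.2, 0.6, 0.95]) * np.pi / a

v_group = 2*t*a*np.sin(k*a)            # dE/dk: group velocity of the wavepacket
m_eff = 1.0 / (2*t*a**2*np.cos(k*a))   # hbar^2 / (d^2E/dk^2): effective mass

for ki, vi, mi in zip(k, v_group, m_eff):
    print(f"k = {ki:+5.2f}   v_g = {vi:+5.2f}   m* = {mi:+6.2f}")
# m* > 0 near the band bottom (k ~ 0), m* < 0 near the band top (k ~ +/- pi/a):
# an electron near the top of the band accelerates opposite to an applied force.
```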
{ "source": [ "https://physics.stackexchange.com/questions/10800", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3234/" ] }
10,827
From high school, I remember that Aluminium has 13 electrons and thus has an unpaired electron in the 3p shell. This should make Aluminium magnetic. However, the wiki page of Aluminium says it's non-magnetic in one place (with a citation needed tag though) and in another place says it's paramagnetic. Doing a google search shows up some contradictory results. So what is the truth? Note: The context of the question is this answer on scifi.SE about Magneto.
It really depends on what you mean by "magnetic," because there are different kinds of magnetic properties. Materials like iron are ferromagnetic , which means that once you align the individual magnetic dipoles in the material, they will tend to stay aligned even without an external magnetic field. Ferromagnetic materials are the ones that permanent magnets are made out of, and they are probably what most people think of when they imagine a magnetic material. There are only three elements (as far as I know) that are ferromagnetic: iron, cobalt, and nickel, although other elements can be combined to make ferromagnetic polyatomic crystals. Other materials that aren't ferromagnetic can (and typically do) have interesting magnetic properties, though - in other words, just because a material isn't a ferromagnet doesn't mean it doesn't interact magnetically at all. Paramagnetism is one such interaction. When you put a paramagnetic material in a magnetic field, its individual dipoles tend to align with the magnetic field, and thus with each other, thereby making the material magnetic. When this happens, the paramagnetic material is attracted to the magnetic field. The difference is that when you take the external magnetic field away, the individual dipoles in a paramagnetic material don't retain their orientation. Instead, thermal motion takes over and reorients them randomly. So a paramagnetic material only has a net magnetic moment while it is in an external magnetic field. If Magneto is able to control magnetic fields, then that would potentially allow him to control all sorts of magnetic materials - not just ferromagnets (iron etc.) but also all paramagnetic and perhaps diamagnetic materials, since he can create the external field necessary to magnetize those materials. In fact, all materials, even non-metals, are diamagnetic to some (small) extent. However, paramagnetism and especially diamagnetism are generally much weaker effects than ferromagnetism, so it stands to reason that Magneto would have a harder time controlling non-ferromagnetic materials. The closest thing to a scientific explanation for Magneto's abilities that I can come up with is that he's able to generate magnetic fields that are strong enough to have a significant effect on ferromagnetic and some of the more paramagnetic materials, but with diamagnetic materials, the magnetic fields he can produce are not strong enough to override other natural forces acting on those materials. Of course, I'm sure that wouldn't really hold up if you really look at the comics or the movie closely... but with comic books you probably don't want to ask too many questions ;-)
{ "source": [ "https://physics.stackexchange.com/questions/10827", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3417/" ] }
10,933
In history we are taught that the Catholic Church was wrong, because the Sun does not move around the Earth, instead the Earth moves around the Sun. But then in physics we learn that movement is relative, and it depends on the reference point that we choose. Wouldn't the Sun (and the whole universe) move around the Earth if I place my reference point on Earth? Was movement considered absolute in physics back then?
Imagine two donut-shaped spaceships meeting in deep space. Further, suppose that when a passenger in ship A looks out the window, they see ship B rotating clockwise. That means that when a passenger in B looks out the window, they see ship A rotating clockwise as well (hold up your two hands and try it!). From pure kinematics, we can't say "ship A is really rotating, and ship B is really stationary", nor the opposite. The two descriptions, one with A rotating and the other with B, are equivalent. (We could also say they are both rotating a partial amount.) All we know, from a pure kinematics point of view, is that the ships have some relative rotation. However, physics does not agree that the rotation of the ships is purely relative. Passengers on the ships will feel artificial gravity. Perhaps ship A feels lots of artificial gravity and ship B feels none. Then we can say definitively that ship A is the one that's really rotating. So motion in physics is not all relative. There is a set of reference frames, called inertial frames, that the universe somehow picks out as being special. Ships that have no angular velocity in these inertial frames feel no artificial gravity. These frames are all related to each other via the Poincaré group. In general relativity, the picture is a bit more complicated (and I will let other answerers discuss GR, since I don't know much), but the basic idea is that we have a symmetry in physical laws that lets us boost to reference frames moving at constant speed, but not to reference frames that are accelerating. This principle underlies the existence of inertia, because if accelerated frames had the same physics as normal frames, no force would be needed to accelerate things. For the Earth going around the sun and vice versa, yes, it is possible to describe the kinematics of the situation by saying that the Earth is stationary. However, when you do this, you're no longer working in an inertial frame. Newton's laws do not hold in a frame with the Earth stationary. This was dramatically demonstrated for Earth's rotation about its own axis by Foucault's pendulum, which showed an acceleration of the pendulum that is inexplicable unless we take into account the fictitious forces induced by Earth's rotation. Similarly, if we believed the Earth was stationary and the sun orbited it, we'd be at a loss to explain the Sun's motion, because it is extremely massive, but has no force on it large enough to make it orbit the Earth. At the same time, the Sun ought to be exerting a huge force on Earth, but Earth, being stationary, doesn't move - another violation of Newton's laws. So, the reason we say that the Earth goes around the sun is that when we do that, we can calculate its orbit using only Newton's laws. In fact, in an inertial frame, the sun moves slightly due to Earth's pull on it (and much more due to Jupiter's), so we really don't say the sun is stationary. We say that it moves much less than Earth. (This answer largely rehashes Lubos' above, but I was most of the way done when he posted, and our answers are different enough to complement each other, I think.)
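As a quick numerical illustration of why the felt artificial gravity singles out the ship that is "really" rotating (a sketch with made-up numbers, not from the original answer): kinematics only fixes the relative rate $\omega_A - \omega_B$, but the centripetal acceleration $\omega^2 R$ felt at each rim depends on each ship's own rate.

```python
# Made-up numbers for illustration: two ring ships of rim radius R with a fixed
# relative rotation rate w_rel = w_A - w_B. Different splits of the same relative
# rotation give different felt "artificial gravity" w^2 R on each rim.
R = 50.0        # rim radius in metres (assumed)
w_rel = 0.4     # relative angular velocity in rad/s (assumed)

for w_A in (0.4, 0.2, 0.0):
    w_B = w_A - w_rel
    print(f"w_A = {w_A:.1f} rad/s, w_B = {w_B:+.1f} rad/s  ->  "
          f"g_A = {w_A**2 * R:4.1f} m/s^2, g_B = {w_B**2 * R:4.1f} m/s^2")
# Every row has the same relative rotation, but the felt gravity differs --
# so rotation is not purely relative.
```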
{ "source": [ "https://physics.stackexchange.com/questions/10933", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3935/" ] }
10,956
I'm a little familiar with the physics and thermodynamics of the wind chill effect, but this question seems to come up from time to time: Why, given two temperature sensors or thermometers in the same environment, do both report the same temperature if one is exposed to wind when the other is shielded from it? People often ask because the temperature reported by, for example, their vehicle, doesn't seem to change as they drive at different speeds, etc. (Other than of course the changes from one actual temperature to another as they change geography.) My understanding is that inert devices aren't endothermic like we are, so the effects of wind chill don't apply. Can you explain this in layman's terms?
It’s really pretty simple: The thermometer measures temperature, wind chill measures heat loss for a body warmer than the air. Wind makes more unheated air available to conduct heat away from a hot body, but with a body at air temperature no heat is being conducted away from the thermometer. Say you asked a secondary question, “When I put a thermometer outside, how long must I wait until it has reached air temperature so that its reading is meaningful?” In that case, the windy conditions will decrease the time it takes to equilibrate.
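Here is a minimal numerical sketch of the point (my own toy numbers), using Newton's law of cooling $\mathrm dT/\mathrm dt = -h\,(T - T_{\rm air})$: wind raises the heat-transfer coefficient $h$, which changes how fast the thermometer equilibrates but not the temperature it finally reads.

```python
# Toy Newton's-law-of-cooling model (assumed numbers): dT/dt = -h (T - T_air).
T_air, T_start = 0.0, 20.0     # degrees C: air temperature, thermometer's initial reading
dt, steps = 1.0, 600           # one-second Euler steps, 10 minutes total

for h, label in [(0.005, "still air"), (0.05, "windy")]:
    T = T_start
    for _ in range(steps):
        T += -h * (T - T_air) * dt
    print(f"{label:9s}: reading after {steps*dt:.0f} s = {T:7.4f} C")
# Both readings converge to T_air; the windy thermometer just gets there faster.
# Wind chill is about the rate of heat loss from a body warmer than the air,
# not about a different final temperature.
```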
{ "source": [ "https://physics.stackexchange.com/questions/10956", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1638/" ] }
11,136
Ok so the universe is in constant expansion, that has been proven, right? And that means that it was smaller in the past. But what's the smallest size we can be sure the universe has ever had? I just want to know what's the oldest thing we are sure about.
Spencer's comment is right: we never "prove" anything in science. This may sound like a minor point, but it's worth being careful about. I might rephrase the question like this: What's the smallest size of the Universe for which we have substantial observational evidence in support of the standard big-bang picture? People can disagree about what constitutes substantial evidence, but I'll nominate the epoch of nucleosynthesis as an answer to this question. This is the time when deuterium, helium, and lithium nuclei were formed via fusion in the early Universe. The observed abundances of those elements match the predictions of the theory, which is evidence that the theory works all the way back to that time. The epoch of nucleosynthesis corresponds to a redshift of about $z=10^9$. The redshift (actually $1+z$) is just the factor by which the Universe has expanded in linear scale since the time in question, so nucleosynthesis occurred when the Universe was about a billion times smaller than it is today. The age of the Universe (according to the standard model) at that time was about one minute. Other people may nominate different epochs for the title of "earliest epoch we are reasonably sure about." Even a hardened skeptic shouldn't go any later than the time of formation of the microwave background ($z=1100$, $t=400,000$ years). In the other direction, even the most credulous person shouldn't go any earlier than the time of electroweak symmetry breaking ($z=10^{15}$, $t=10^{-12}$ s.) I vote for the nucleosynthesis epoch because I think it's the earliest period for which we have reliable astrophysical evidence. The nucleosynthesis evidence was controversial as recently as about 10 or 15 years ago, but I don't think it is anymore. One way to think about it is that the theory of big-bang nucleosynthesis depends on essentially one parameter, namely the baryon density. If you use the nucleosynthesis observations to "measure" that parameter, you get the same answer as is obtained by quite a variety of other techniques. The argument for an earlier epoch such as electroweak symmetry breaking is that we think we have a good understanding of the fundamental physical laws up to that energy scale. That's true, but we don't have direct observational tests of the cosmological application of those laws. I'd be very surprised if our standard theory turns out to be wrong on those scales, but we haven't tested it back to those times as directly as we've tested things back to nucleosynthesis.
{ "source": [ "https://physics.stackexchange.com/questions/11136", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3935/" ] }
11,154
One of the major impediments to the widespread adoption of electric cars is a shortage of lithium for the batteries. I read an article a while back that says that there is simply not enough lithium available on the entire planet to make enough batteries to replace every gasoline-powered car with one electric car. And that confuses the heck out of me. The Big Bang theory says that in the beginning, there was a whole bunch of hydrogen, and then lots of hydrogen started to clump together and form stars, and those stars produced lots of helium through fusion, and then after helium, all the rest of the elements. That's why hydrogen is the most common element in the universe by far, and helium is the second most common. Well, lithium is #3 on the periodic table. By extrapolation, there ought to be several times more lithium around than, say, iron or aluminum, which there is definitely enough of for us to build plenty of cars with. So why do we have a scarcity of lithium?
Actually, what you've read about the production of nuclei is not quite correct. There are several different processes by which atomic nuclei are produced: Big Bang nucleosynthesis is the fusion of hydrogen nuclei to form heavier elements in the early stages of the universe, as it cooled from the big bang. There are rather specific thermal requirements for this process to occur, so there was only a short time window in which heavier elements could form, meaning that the only fusion to actually happen in significant amounts was the conversion of hydrogen (and deuterium) to helium, and an extremely tiny amount of lithium. Stellar nucleosynthesis is the fusion of hydrogen and other nuclei in the cores of stars. This is something separate from big bang cosmology, since stars didn't form until millions of years into the universe's lifetime. Now, contrary to what you might have read, not all elements are formed in stellar nucleosynthesis. There are specific "chains" of nuclear reactions that occur, and only the elements that are produced by those reactions will exist in a star in appreciable quantities. Most stars produce their energy using either the proton-proton chain (in lighter stars) or the CNO cycle (in heavier stars), both of which consume hydrogen and form helium. Once most of the hydrogen has been consumed, the star's temperature will increase and it will start to fuse helium into carbon. When the helium runs out, it will fuse carbon into oxygen, then oxygen into silicon, then silicon into iron. (Of course the actual process is more complicated - see the Wikipedia articles for details.) Several other elements are produced or involved along the way, including neon, magnesium, phosphorous, and others, but lithium is not among them. In fact, stars have a tendency to consume lithium , rather than producing it, so stars actually tend to have only small amounts of lithium. Supernova nucleosynthesis is the fusion of atomic nuclei due to the high-pressure, high-energy conditions that arise when a large star explodes in a type II supernova. There are certain similarities between this and big bang nucleosynthesis, namely the high temperatures and pressures, but the main difference is that an exploding star will have "reserves" of heavy elements built up from a lifetime of nuclear fusion. So instead of just forming a lot of helium as occurred just after the big bang, a supernova will form a whole spectrum of heavy elements. In fact supernovae are the only natural source of elements heavier than iron, since it actually requires an input of energy to produce those elements as fusion products. I believe some amount of lithium would be formed in a supernova along with all the other elements, but since a large star would have used up its hydrogen and helium in the central region where most of the action takes place, lithium is probably not a particularly common reaction product.
{ "source": [ "https://physics.stackexchange.com/questions/11154", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4039/" ] }
11,194
Why do commercial wind generators usually have just 2-3 blades? Having more blades would allow you to increase power OR decrease the diameter. A decreased diameter would also reduce stress due to the different wind speeds at different heights... But despite that, commercial generators have few blades...
More blades give you more cost, but very little increase in efficiency. Three blades turns out to be the optimum. With four or more blades, costs are higher, with insufficient extra efficiency to compensate. It is also more expensive per unit of electricity generated if you go for more, but shorter, blades: with four shorter blades (rather than three longer ones) the blades sweep through a smaller volume of air (i.e. an amount of air with a lot less energy), since the swept area is proportional to the square of the radius, while the efficiency is only a few percent higher. You get higher mechanical reliability with three blades than with two: with two blades, the shadowing effect of tower & blade puts a lot of strain on the bearings. So although it costs more to make a three-bladed turbine, they tend to have a longer life, lower maintenance needs, and thus on balance reduce the unit cost of electricity generated, as the increased availability and reduced maintenance costs outweigh the extra cost of the third blade. For the nitty-gritty of wind-turbine aerodynamics, wikipedia isn't a bad place to start: http://en.wikipedia.org/w/index.php?title=Wind_turbine_aerodynamics&oldid=426555179
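To put rough numbers on the swept-area point (a back-of-the-envelope sketch; the radii, wind speed and power coefficients below are my own assumptions, not figures from the answer): the power available to the rotor is $P = \tfrac{1}{2}\rho A v^3 C_p$ with $A=\pi r^2$, so shortening the blades costs you quadratically in captured power, while adding a blade only nudges $C_p$ by a few percent.

```python
import math

# Back-of-the-envelope comparison (all input values are assumptions):
# available rotor power P = 0.5 * rho * A * v^3 * Cp, with swept area A = pi r^2.
rho, v = 1.225, 10.0                  # air density (kg/m^3) and wind speed (m/s)

def power_kw(radius_m, cp):
    area = math.pi * radius_m**2
    return 0.5 * rho * area * v**3 * cp / 1e3

p3 = power_kw(50.0, 0.45)             # three longer blades (assumed Cp = 0.45)
p4 = power_kw(40.0, 0.47)             # four shorter blades, slightly better Cp (assumed)
print(f"3 blades, r = 50 m: {p3:7.1f} kW")
print(f"4 blades, r = 40 m: {p4:7.1f} kW")
# The few-percent gain in Cp is swamped by the r^2 loss in swept area.
```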
{ "source": [ "https://physics.stackexchange.com/questions/11194", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/930/" ] }
11,321
I'm far from being a physics expert and figured this would be a good place to ask a beginner question that has been confusing me for some time. According to Galileo, two bodies of different masses, dropped from the same height, will touch the floor at the same time in the absence of air resistance. BUT Newton's second law states that $a = F/m$, with $a$ the acceleration of a particle, $m$ its mass and $F$ the sum of forces applied to it. I understand that acceleration represents a variation of velocity and velocity represents a variation of position. I don't comprehend why the mass, which is seemingly affecting the acceleration, does not affect the "time of impact". Can someone explain this to me? I feel pretty dumb right now :)
It is because the force at work here (gravity) also depends on the mass. Gravity acts on a body of mass $m$ with $$F = mg.$$ Plugging this into $$F=ma,$$ you get $$ma = mg$$ $$a = g,$$ and this is true for all bodies, no matter what the mass is. Since they are accelerated the same and start with the same initial conditions (at rest and dropped from a height $h$), they will hit the floor at the same time. This is a peculiar aspect of gravity, and underlying it is the equality of inertial mass and gravitational mass (here only the ratio must be the same for this to be true, but Einstein later showed that they're really the same, i.e. the ratio is 1).
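A quick numerical check of this (a sketch with assumed values): integrating $ma = mg$ for two very different masses dropped from the same height, the mass cancels and the impact times come out identical.

```python
# Simple Euler integration of m*a = m*g for two different masses (assumed values).
g, h0, dt = 9.81, 20.0, 1e-4       # gravity (m/s^2), drop height (m), time step (s)

def fall_time(mass_kg):
    height, v, t = h0, 0.0, 0.0
    while height > 0.0:
        a = (mass_kg * g) / mass_kg   # the mass cancels: a = g for any body
        v += a * dt
        height -= v * dt
        t += dt
    return t

print(fall_time(0.1), fall_time(1000.0))   # both ~2.02 s, independent of the mass
```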
{ "source": [ "https://physics.stackexchange.com/questions/11321", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4104/" ] }
11,398
Let's say we have $2$ particles facing each other, each traveling (almost) at the speed of light. Let's say I'm sitting on particle #$1$, so from my point of view particle #$2$'s speed is (almost) $c+c=2c$, double the speed of light? Please say why I am incorrect :) EDIT: The part about me sitting on the particle is just an example; the question is whether, from the point of view of particle #1, the second one moves at (almost) $c+c=2c$?
One of the results of special relativity is that a particle moving at the speed of light does not experience time, and thus is unable to make any measurements. In particular, it cannot measure the velocity of another particle passing it. So, strictly speaking, your question is undefined. Particle #1 does not have a "point of view," so to speak. (More precisely: it does not have a rest frame because there is no Lorentz transformation that puts particle #1 at rest, so it makes no sense to talk about the speed it would measure in its rest frame.) But suppose you had a different situation, where each particle was moving at $0.9999c$ instead, so that that issue I mentioned isn't a problem. Another result of special relativity is that the relative velocity between two particles is not just given by the difference between their two velocities. Instead, the formula (in one dimension) is $$v_\text{rel} = \frac{v_1 - v_2}{1 - \frac{v_1v_2}{c^2}}$$ If you plug in $v_1 = 0.9999c$ and $v_2 = -0.9999c$, you get $$v_\text{rel} = \frac{1.9998c}{1 + 0.9998} = 0.99999999c$$ which is still less than the speed of light.
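A quick numerical check of the composition formula (a small sketch in units where $c=1$), confirming that the result never exceeds the speed of light:

```python
# Relativistic velocity composition in units where c = 1.
def v_rel(v1, v2):
    return (v1 - v2) / (1.0 - v1 * v2)

print(v_rel(0.9999, -0.9999))   # ~0.99999999, still below 1
print(v_rel(0.5, -0.5))         # 0.8, not 1.0
print(v_rel(1.0, -1.0))         # exactly 1: light itself still moves at c
```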
{ "source": [ "https://physics.stackexchange.com/questions/11398", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3992/" ] }
11,449
Is mass converted into energy in exothermic chemical/nuclear reactions? My (A Level) knowledge of chemistry suggests that this isn't the case. In a simple burning reaction, e.g. $$\mathrm{C + O_2 \to CO_2}$$ energy is released by the $\mathrm{C-O}$ bonds forming; the atoms lose potential energy when they pull themselves towards each other, in the same way that a falling object converts gravitational potential energy to kinetic energy. There are the same number of protons, electrons etc. in both the reactants and products. I would have assumed that this reasoning extends to nuclear fission/fusion as well, but one physics textbook repeatedly references very small amounts of mass being converted into energy in nuclear reactions. So I just wanted to know if I was wrong about either of these types of reactions, and if so, what mass is lost exactly?
This is actually a more complex question than you might think, because the distinction between mass and energy kind of disappears once you start talking about small particles. So what is mass exactly? There are two common definitions: The quantity that determines an object's resistance to a change in motion, the $m$ in $\sum F = ma$ The quantity that determines an object's response to a gravitational field, the $m$ in $F_g = mg$ (or equivalently, in $F_g = GMm/r^2$) The thing is, energy actually satisfies both of these definitions. An object that has more energy - of any form - will be harder to accelerate, and will also respond more strongly to a given gravitational field. So technically , when computing the value of $m$ to plug into $\sum F = ma$ or $F_g = mg$ or any other formula that involves mass, you do need to take into account the chemical potential energy, thermal energy, gravitational binding energy, and many other forms of energy. In this sense it turns out that the "mass" we talk about in chemical and nuclear reactions is effectively just a word for the total energy of an object (well, divided by a constant factor: $m_\text{eff} = E/c^2$). In special relativity, elementary particle physics, and quantum field theory, mass has a completely different definition. That's not relevant here, though. If mass is just another word for energy, why do we even talk about it? Well, for one thing, people got used to using the word "mass" before anyone knew about all its subtleties ;-) But seriously: if you really look into all the different forms of energy that exist, you'll find that figuring out how much energy an object actually has can be very difficult. For instance, consider a chemical compound - $\mathrm{CO}_2$ for example. You can't just figure out the energy of a $\mathrm{CO}_2$ molecule by adding up the energies of one carbon atom and two oxygen atoms; you also have to take into account the energy required to make the chemical bond, any thermal energy stored in vibrational modes of the molecule or nuclear excitations of the atoms, and even slight adjustments to the molecular structure due to the surrounding environment. For most applications, though, you can safely ignore all those extra energy contributions because they're extremely small compared to the energies of the atoms. For example, the energy of the chemical bonds in carbon dioxide is one ten-billionth of the total energy of the molecule. Even if adding the energies of the atoms doesn't quite get you the exact energy of the molecule, it's often close enough. When we use the term "mass", it often signifies that we're working in a domain where those small energy corrections don't matter, so adding the masses of the parts gets close enough to the mass of the whole. Obviously, whether the "extra" energies matter or not depends on what sort of process you're dealing with, and specifically what energies are actually affected by the process. In chemical reactions, the only changes in energy that really take place are those due to breaking and forming of chemical bonds, which as I said are a miniscule contribution to the total energy of the particles involved. But on the other hand, consider a particle accelerator like the LHC, which collides protons with each other. In the process, the chromodynamic "bonds" between the quarks inside the protons are broken, and the quarks then recombine to form different particles. 
In a sense, this is like a chemical reaction in which the quarks play the role of the atoms, and the protons (and other particles) are the compounds, but in this case the energy involved in the bonds (by this I mean the kinetic energy of the gluons, not what is normally called the "binding energy") is fully half of the energy of the complete system (the protons) - in other words, about half of what we normally consider the "mass" of the proton actually comes from the interactions between the quarks, rather than the quarks themselves. So when the protons "react" with each other, you could definitely say that the mass (of the proton) was converted to energy, even though if you look closely, that "mass" wasn't really mass in the first place. Nuclear reactions are kind of in the middle between the two extremes of chemical reactions and elementary particle reactions. In an atomic nucleus, the binding energy contributes anywhere from 0.1% up to about 1% of the total energy of the nucleus. This is a lot less than with the color force in the proton, but it's still enough that it needs to be counted as a contribution to the mass of the nucleus. So that's why we say that mass is converted to energy in nuclear reactions: the "mass" that is being converted is really just binding energy, but there's enough of this energy that when you look at the nucleus as a particle, you need to factor in the binding energy to get the right mass. That's not the case with chemical reactions; we can just ignore the binding energy when calculating masses, so we say that chemical reactions do not convert mass to energy.
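The "one ten-billionth" and "0.1% up to about 1%" figures above are easy to check with rough numbers (a sketch; the bond-energy and mass values below are textbook approximations I am supplying, not data from the answer):

```python
# Rough order-of-magnitude check with approximate input values.
eV = 1.0
MeV = 1.0e6 * eV

co2_bond_energy = 2 * 8.0 * eV           # two C=O bonds at roughly 8 eV each
co2_rest_energy = 44.0 * 931.5 * MeV     # ~44 u at 931.5 MeV per atomic mass unit
print("chemical fraction:", co2_bond_energy / co2_rest_energy)   # ~4e-10

fe56_binding = 56 * 8.8 * MeV            # iron-56: ~8.8 MeV binding energy per nucleon
fe56_rest = 56 * 939.0 * MeV             # ~939 MeV per free nucleon
print("nuclear fraction: ", fe56_binding / fe56_rest)            # ~1e-2
```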
{ "source": [ "https://physics.stackexchange.com/questions/11449", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4140/" ] }
11,542
Why is the gravitational force always attractive? Is there another way to explain this without the curvature of space time? PS: If the simple answer to this question is that mass makes space-time curve in a concave fashion, I can rephrase the question as why does mass make space-time always curve concavely?
Gravity is mediated by a spin-2 particle; electromagnetism by a spin-1 particle. Even and odd spin differ in which sign of the product of charges gives attraction or repulsion: for even spin, $q_1 q_2 > 0$ is attractive and $q_1 q_2 < 0$ is repulsive; for odd spin, $q_1 q_2 < 0$ is attractive and $q_1 q_2 > 0$ is repulsive. In the case of gravity, mediated by spin-2 particles, the charge is mass, which is always positive. Thus $q_1 q_2$ is always greater than zero, and gravity is always attractive. For spin-0 force mediators, however, there is no restriction on the signs of the charges and you can very well have repulsive forces. A better rephrasing of the question is: "Why do particles of odd spin generate repulsive forces between like charges, while particles of even spin generate attractive forces between like charges?" The reference this is quoted from goes on to derive this result.
{ "source": [ "https://physics.stackexchange.com/questions/11542", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3714/" ] }
11,826
Why should two sub-atomic (or elementary) particles - say electrons - need to have identical static properties - identical mass, identical charge? Why can't they differ from each other by a very slight degree? Is there a theory which proves that? Imagine an alien the size of the Milky Way galaxy examining our solar system with a probe tens of solar-system diameters across and concluding that all the planets are identical.
One good piece of evidence that all particles of a given type are identical is the exchange interaction . The exchange symmetry (that one can exchange any two electrons and leave the Hamiltonian unchanged) results in the Pauli exclusion principle for fermions. It also is responsible for all sorts of particle statistics effects (particles following the Fermi-Dirac or Bose-Einstein distributions) depending on whether the particles are fermions or bosons. If the particles were even slightly non-identical, it would have large, observable effects on things like the allowed energies of the Helium atom.
{ "source": [ "https://physics.stackexchange.com/questions/11826", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3636/" ] }
11,905
I thought the Hamiltonian was always equal to the total energy of a system but have read that this isn't always true. Is there an example of this and does the Hamiltonian have a physical interpretation in such a case?
In an ideal, holonomic and monogenic system (the usual one in classical mechanics), the Hamiltonian equals the total energy when and only when both the constraints and the Lagrangian are time-independent and no generalized potential is present. So the condition for the Hamiltonian to equal the energy is quite stringent. Dan's example is one in which the Lagrangian depends on time. A more frequent example would be the Hamiltonian for a charged particle in an electromagnetic field, $$H=\frac{\left(\vec{P}-q\vec{A}\right)^2}{2m}+q\varphi.$$ The first part equals the kinetic energy ($\vec{P}$ is the canonical, not the mechanical, momentum), but the second part IS NOT necessarily the potential energy, as in general $\varphi$ can be changed arbitrarily by a gauge transformation.
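A quick symbolic check of that statement in one dimension (a sketch using sympy; the setup is mine): starting from the Lagrangian $L=\tfrac{1}{2}m\dot x^2+q\dot x A(x,t)-q\varphi(x,t)$, the Legendre transform reproduces exactly the quoted Hamiltonian, whose second term is not a gauge-invariant potential energy.

```python
import sympy as sp

m, q = sp.symbols('m q', positive=True)
x, t, v, p = sp.symbols('x t v p')
A = sp.Function('A')(x, t)        # vector potential (1D)
phi = sp.Function('phi')(x, t)    # scalar potential

L = sp.Rational(1, 2)*m*v**2 + q*v*A - q*phi   # Lagrangian with a generalized potential
p_def = sp.diff(L, v)                          # canonical momentum p = m v + q A
v_sol = sp.solve(sp.Eq(p, p_def), v)[0]        # invert: v = (p - q A)/m
H = sp.simplify((p*v - L).subs(v, v_sol))      # Legendre transform

print(sp.simplify(H - ((p - q*A)**2/(2*m) + q*phi)))   # 0: matches the quoted form
```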
{ "source": [ "https://physics.stackexchange.com/questions/11905", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4208/" ] }
11,940
From my understanding of light, you are always looking into the past based on how much time it takes the light to reach you from what you are observing. For example when you see a star burn out, if the star was 5 light years away then the star actually burnt out 5 years ago. So I am 27 years old, if I was 27 light years away from Earth and had a telescope strong enough to view Earth, could I theoretically view myself being born?
Yes, you can. And you do not even need to leave the Earth to do it. You are always viewing things in the past, just as you are always hearing things in the past. If you see someone do something, who is 30 meters away, you are seeing what happened $(30\;\mathrm{m})/(3\times10^8\;\mathrm{m}/\mathrm{s}) = 0.1\;\mu\mathrm{s}$ in the past. If you had a mirror on the moon (about 238K miles away), you could see about 2.5 seconds into earth's past. If that mirror was on Pluto, you could see about 13.4 hours into Earth's past. If you are relying on hearing, you hear an event at 30 m away about 0.1 s after it occurs. That is why runners often watch the starting pistol at an event, because they can see a more recent picture of the past than they can hear. To more directly answer the intent of your question: Yes, if you could magically be transported 27 lightyears away, or had a mirror strategically placed 13.5 lightyears away, you could see yourself being born.
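The delays quoted above are just distance divided by $c$; here is the arithmetic as a small script (the distances are round average values I am assuming, so the numbers differ slightly from the ones in the answer):

```python
# Light travel delay = distance / c, for a few assumed (average) distances.
c = 2.998e8   # m/s

cases = {
    "person 30 m away":          30.0,
    "Moon (one way)":            3.84e8,
    "Pluto (one way, ~39.5 AU)": 39.5 * 1.496e11,
    "27 light-years (one way)":  27 * 9.461e15,
}
for name, d in cases.items():
    seconds = d / c
    print(f"{name:27s} {seconds:12.3e} s  (= {seconds/3600.0:12.4f} h)")
# A mirror doubles the one-way delay: ~2.6 s via the Moon, roughly 11 h via Pluto
# at its average distance, and 27 years via a mirror 13.5 light-years away.
```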
{ "source": [ "https://physics.stackexchange.com/questions/11940", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/947/" ] }
11,949
We have a structure that is sloped at 20 degrees to the horizontal and weighs 160 N. We then have a force (wind) acting along the horizontal plane which has been calculated to be about 8000 N. The problem we are having is that we are unable to find the amount of ballast that needs to be added at the bottom of the structure to stop it from being blown away. The structure will not be bolted to the ground. At the moment we do not have an exact value for the coefficient of friction, but the structure will be placed on a flat concrete surface. We have tried to find a solution by breaking the forces into vertical and horizontal components, but we were unsure if we were heading in the right direction. Below is an image which describes the setting: Could someone please advise us as to which mathematical model we should use? Thanks in advance
{ "source": [ "https://physics.stackexchange.com/questions/11949", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4293/" ] }
12,122
I am following the first volume of the course of theoretical physics by Landau. So, whatever I say below mainly talks regarding the first 2 chapters of Landau and the approach of deriving Newton's laws from Lagrangian principle supposing Hamilton's principle of extremum action. Please keep this view in mind while reading and answering my queries and kindly neglect the systems to which Action Principle is not applicable: If we use homogeneity of space in Euler-Lagrange equations, we obtain a remarkable result i.e. the conservation of momentum for a closed system. Now, this result, using the form of Lagrange for a closed system of particles, transforms into $ \Sigma F = 0 $ . Now, how from this can we conclude that the internal forces that particles exert come in equal and opposite pairs? Is it because for 2 particles this comes out as $ F_{1} + F_{2} = 0 $ and we take the forces exerted by particles on one other to be independent of other particles (i.e. Superposition Principle) as an experimental fact? I doubt it as whole of Newtonian Mechanics is derivable from Lagrangian Mechanics and supposed Symmetries. So, according to me, a fact like Newton's Third Law should be derivable from it without using an additional experimental fact. I have an idea to prove it rigorously. Consider two particles $i$ and $j$. Let the force on $i$ by $j$ be $F_{ij}$ and on $j$ by $i$ be $k_{ij}F_{ij}$. Now the condition becomes $\Sigma (1+k_{ij})F_{ij}=0$ where the terms to be included and rejected in summation understood. As this must be true for any value of $F_{ij}$, we get $k_{ij}=-1$. I don't know if this argument or refinement of such an argument holds or not. I can see many questions arising in this argument and it's not very convincing to me. I would like to hear from you people as to if it is an experimental result used or not? If not, then is the method given above right or wrong? If wrong, how can we prove it? Addendum My method of proof uses the fact of superposition of forces itself, so it is flawed. I have assumed that the coefficients $k_{ij}$ are constants and don't change in the influence of all other particles which is exactly what superposition principle says. As the superposition of Forces can be derived by superposition of potential energies at a point in space and potential energy being more fundamental in Lagrangian Mechanics, I restate my question as follows: Is the principle of superposition of potential energies by different sources at a point in space derivable from the inside of Lagrangian Mechanics or is it an experimental fact used in Lagrangian Mechanics? I, now, doubt this to be derivable as the fundamental assumption about potential energy is only that it is a function of coordinates of particles and this function may or may not respect superposition.
The derivation in Landau and Lifschitz makes some additional implicit assumptions. They assume that all forces come from pair-interactions, and that the pair forces are rotationally invariant. With these two assumptions, the potential function in the Lagrangian is $$V(x_1,\ldots,x_n) = \sum_{\langle i,j\rangle} V(|x_i - x_j|)$$ And then it is easy to prove Newton's third law, because the derivative of the distance function is equal and opposite for each pair of particles. This type of derivation is reasonable from a physical point of view for macroscopic objects, but it is not mathematically ok, because it omits important examples. No rotational invariance, no third law: Dropping the assumption of rotational invariance, but keeping the assumption of pair-wise interaction, one gets the following counterexample in 2 dimensions, with two particles (A,B) with position vectors $(A_x,A_y)$ $(B_x,B_y)$ respectively: $$V(A_x,A_y,B_x,B_y) = f(A_x-B_x) + f(A_y - B_y) $$ where $f$ is any function other than $f(x)=x^2$. This pair potential leads to equal and opposite forces, but not collinear ones. Linear momentum and energy are conserved, but angular momentum is not, except when both particles are on the lines $y=\pm x$ relative to each other. The potential is un-physical of course, in the absence of a medium like a lattice that breaks rotational invariance. Many-body direct interactions, no reflection symmetry, no third law: There is another class of counterexamples which is much more interesting, because they do not break angular momentum or center of mass conservation laws, and so they are physically possible interactions in vacuum, but they do break Newton's third law. This is the chiral three-body interaction. Consider 3 particles A,B,C in two dimensions whose potential function is proportional to the signed area of the triangle formed by the points A,B,C: $$V(A,B,C) = B_x C_y - A_x C_y -B_x A_y - C_x B_y + C_x A_y + A_x B_y$$ If all 3 particles are collinear, the forces for this 3-body potential are perpendicular to the common line they lie on. The derivative of the area is largest when the points move away from the common line. So you obviously cannot write the force as any sum of pairwise interactions along the line of separation, equal and opposite or not. The forces and torques still add up to zero, since this potential is translationally and rotationally invariant. Many-body direct interaction, spatial reflection symmetry, crappy third law: If the force on k particles is reflection invariant, it never gets out of the subspace spanned by their mutual separations. This is because if they lie in a lower dimensional subspace, the system is invariant with respect to reflections perpendicular to that subspace, so the forces must be as well. This means that you can always cook up equal and opposite forces between the particles that add up to the total force, and pretend that these forces are physically meaningful. This allows you to salvage Newton's third law, in a way. But it gives nonsense forces. To see that this is nonsense, consider the three-particle triangle area potential from before, but this time take the absolute value. The result is reflection invariant, but contains a discontinuity in the derivative when the particles become collinear. Near collinearity, the perpendicular forces have a finite limit. But in order to write these finite forces as a sum of equal and opposite contributions from the three particles, you need the forces between the particles to diverge at collinearity.
Three-body interactions are natural: There is natural physics that gives such a three-body interaction. You can imagine the three bodies are connected by rigid frictionless struts that are free to expand and contract like collapsible antennas, and a very high-quality massless soap bubble is stretched between the struts. The soap bubble prefers to have less area according to its nonzero surface tension. If the dynamics of the soap bubble and the struts are fast compared to the particles, you can integrate out the soap bubble degrees of freedom and you will get just such a three-body interaction. Then the reason the bodies snap together near collinearity with a finite transverse force is clear--- the soap bubble wants to collapse to zero area, so it pulls them in. It is then obvious that there is no sense in which they have any diverging pairwise forces, or any pairwise forces at all. Other cases where you get three-body interactions directly are when you have a non-linear field between the three objects, and the field dynamics are fast. Consider a cubically self-interacting massive scalar field (with cubic coupling $\lambda$) sourced by classical stationary delta-function sources of strength $g$. The leading non-linear contribution to the classical potential is a tree-level, classical, three-body interaction of the form $$V(x,y,z) \propto g^3 \lambda \int \,\mathrm d^3k_1\mathrm d^3k_2 { e^{i(k_1\cdot (x-z) + k_2\cdot(y-z))} \over (k_1^2 + m^2) (k_2^2 + m^2)((k_1+k_2)^2 + m^2)}$$ which heuristically goes something like ${e^{-mr_{123}}r_{123}\over r_{12}r_{23}r_{13}}$ where the r's are the side lengths of the triangle and $r_{123}$ is the perimeter (this is just a scaling estimate). For nucleons, many-body potentials are significant. The forces from the crappy third law are not integrable: If you still insist on a Newton's third law description of three-body interactions like the soap bubble particles, and you give a pairwise force for each pair of particles which adds up to the full many-body interaction, these pairwise forces cannot be thought of as coming from a potential function. They are not integrable. The example of the soap-bubble force makes it clear--- if A,B,C are nearly collinear with B between A and C, closer to A, you can slide B away from A towards C very very close to collinearity, and bring it back less close to collinear. The A-B force is along the line of separation, and it diverges at collinearity, so the integral of the force along this loop cannot be zero. The force is still conservative of course, it comes from a three-body potential after all. This means that the two-body A-B force plus the two-body B-C force is integrable. It's just that the A-C two-body force is not. So the separation is completely silly. Absence of multi-body interactions for macroscopic objects in empty space: The interactions of macroscopic objects are through contact forces, which are necessarily pairwise since all other contacts are far away, and electromagnetic and gravitational fields, which are very close to linear at these scales. The electromagnetic and gravitational forces end up being linearly additive between pairs, and the result is a potential of the form Landau and Lifschitz consider--- pairwise interactions which are individually rotationally invariant. But for close-packed atoms in a crystal, there is no reason to ignore 3-body potentials. It is certainly true that in the nucleus three-body and four-body potentials are necessary, but in both cases you are dealing with quantum systems.
So I don't think the third law is particularly fundamental. As a philosophical thing, that nothing can act without being acted upon, it's as valid as any other general principle. But as a mathematical statement of the nature of interactions between particles, it is completely dated. The fundamental things are the conservation of linear momentum, angular momentum, and center of mass, which are independent laws, derived from translation invariance, rotational invariance, and Galilean invariance respectively. The pair-wise forces acting along the separation direction are just an accident.
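For the chiral three-body example above, a short numerical sketch (my own check, not part of the original answer) makes the point concrete: differentiating the signed-area potential gives forces that sum to zero and exert zero net torque, yet for nearly collinear particles each force is essentially perpendicular to the common line, so no decomposition into pair forces along the separations is possible.

```python
import numpy as np

# Forces from the signed-area three-body potential quoted above, obtained by
# central finite differences.
def V(r):                                  # r: positions of A, B, C, shape (3, 2)
    (Ax, Ay), (Bx, By), (Cx, Cy) = r
    return Bx*Cy - Ax*Cy - Bx*Ay - Cx*By + Cx*Ay + Ax*By

def forces(r, h=1e-6):
    F = np.zeros_like(r)
    for i in range(3):
        for j in range(2):
            d = np.zeros_like(r); d[i, j] = h
            F[i, j] = -(V(r + d) - V(r - d)) / (2*h)
    return F

r = np.array([[0.0, 0.0], [1.0, 0.001], [3.0, 0.0]])   # nearly collinear along x
F = forces(r)
print("forces on A, B, C:\n", F)           # each force is essentially along +/- y
print("net force:", F.sum(axis=0))         # ~ (0, 0)
print("net torque:", sum(ri[0]*Fi[1] - ri[1]*Fi[0] for ri, Fi in zip(r, F)))  # ~ 0
```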
{ "source": [ "https://physics.stackexchange.com/questions/12122", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
12,169
By the "No Hair Theorem", three quantities "define" a black hole; Mass, Angular Momentum, and Charge. The first is easy enough to determine, look at the radius of the event horizon and you can use the Schwarzschild formula to compute the mass. Angular Momentum can be found using the cool little ergosphere Penrose "discovered". However, I don't know how to determine the charge of the black hole. How can an electromagnetic field escape the event horizon of a Reissner-Nordström black hole? Is there any experiment we could theoretically do to a black hole to determine its charge?
A charged black hole does produce an electric field. In fact, at great distances (much larger than the horizon), the field strength is $Q/(4\pi\epsilon_0 r^2)$, just like any other point charge. So measuring the charge is easy. As for how the electric field gets out of the horizon, the best answer is that it doesn't: it was never in the horizon to begin with! A charged black hole formed out of charged matter. Before the black hole formed, the matter that would eventually form it had its own electric field lines. Even after the material collapses to form a black hole, the field lines are still there, a relic of the material that formed the black hole. A long time ago, back when the American Journal of Physics had a question-and-answer section, someone posed the question of how the electric field gets out of a charged black hole. Matt McIrvin and I wrote an answer , which appeared in the journal. It pretty much says the same thing as the above, but a bit more formally and carefully. Actually, I just noticed a mistake in what Matt and I wrote. We say that the Green function has support only on the past light cone. That's actually not true in curved spacetime: the Green function has support in the interior of the light cone as well. But fortunately that doesn't affect the main point, which is that there's no support outside the light cone.
{ "source": [ "https://physics.stackexchange.com/questions/12169", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3018/" ] }
12,435
What's the cleanest/quickest way to go between Einstein's postulates [ 1 ] of Relativity : Physical laws are the same in all inertial reference frames. Constant speed of light : "... light is always propagated in empty space with a definite speed $c$ which is independent of the state of motion of the emitting body." to Minkowski's idea [ 2 ] that space and time are united into a 4D spacetime with the indefinite metric $ds^2 = \vec{dx^2} - c^2 dt^2$ . Related to the question of what is the best derivation of the correspondence are: Is the correspondence 1:1? (Does the correspondence go both ways?) and are there any hidden/extra assumptions? Edit Marek's answer is really good (I suggest you read it and its references now!), but not quite what I was thinking of. I'm looking for an answer (or a reference) that shows the correspondence using only/mainly simple algebra and geometry. An argument that a smart high school graduate would be able to understand.
I will first describe the naive correspondence that is assumed in the usual literature and then I will say why it's wrong (addressing your last question about hidden assumptions) :) The postulate of relativity would be completely empty if the inertial frames weren't somehow specified. So here there is already hidden an implicit assumption that we are talking only about rotations and translations (which imply that the universe is isotropic and homogeneous), boosts and combinations of these. From classical physics we know there are two possible groups that could accommodate these symmetries: the Galilean group and the Poincaré group (there is a catch here I mentioned; I'll describe it at the end of the post). Constancy of the speed of light then implies that the group of automorphisms must be the Poincaré group and consequently, the geometry must be Minkowskian. [ Sidenote: how to obtain geometry from a group? You look at the stabilizer (isotropy) subgroup of a point and form the coset space; what you're left with is a homogeneous space that is acted upon by the original group. Examples: $E(2)$ (symmetries of the Euclidean plane) has the group of (improper) rotations $O(2)$ as the stabilizer of a point, and $E(2) / O(2)$ gives ${\mathbb R}^2$. Similarly $O(1,3) \ltimes {\mathbb R}^4 / O(1,3)$ gives us Minkowski space.] The converse direction is trivial because it's easy to check that Minkowski space satisfies both of Einstein's postulates. Now to address the catch: there are actually not two but eight kinematical groups that describe isotropic and uniform universes and are also consistent with quantum mechanics. They were classified by Bacry and Lévy-Leblond. The relations among them are described in Dyson's Missed opportunities (p. 9). E.g., there is a group that has absolute space (instead of the absolute time that we have in classical physics) but this is ruled out by the postulate of constant speed of light. In fact, only two groups remain after Einstein's postulates have been taken into account: besides the Poincaré group, we have the group of symmetries of the de Sitter space (and in terms of the above geometric program it is $O(1,4) / O(1,3)$). Actually, one could also drop the above-mentioned restriction to groups that make sense in quantum mechanics and then we could also have an anti de Sitter space ($O(2,3) / O(1,3)$). In fact, this shouldn't be surprising, as general relativity is a natural generalization of special relativity, so that Einstein's postulates are actually weak enough that they describe maximally symmetric Lorentzian manifolds (which probably wasn't what Einstein intended originally).
{ "source": [ "https://physics.stackexchange.com/questions/12435", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/429/" ] }
12,559
Noether's Theorem is used to relate the invariance of the action under certain continuous transformations to conserved currents. A common example is that translations in spacetime correspond to the conservation of four-momentum. In the case of angular momentum, the tensor (in special relativity) has 3 independent components for the classical angular momentum, but 3 more independent components that, as far as I know, represent Lorentz boosts. So, what conservation law corresponds to invariance under Lorentz boosts?
Warning: this is a long and boring derivation. If you are interested only in the result skip to the very last sentence. Noether's theorem can be formulated in many ways. For the purposes of your question we can comfortably use the special relativistic Lagrangian formulation of a scalar field. So, suppose we are given an action $$S[\phi] = \int {\mathcal L}(\phi(x), \partial_{\mu} \phi(x), \dots) {\rm d}^4x.$$ Now suppose the action is invariant under some infinitesimal transformation $m: x^{\mu} \mapsto x^{\mu} + \delta x^{\mu} = x^{\mu} + \epsilon a^{\mu}$ (we won't consider any explicit transformation of the fields themselves). Then we get a conserved current $$J^{\mu} = {\partial {\mathcal L} \over \partial \phi_{,\mu}} \phi^{,\nu} a_{\nu} - {\mathcal L} a^{\mu} = \left ({\partial {\mathcal L} \over \partial \phi_{,\mu}} \phi^{,\nu} - {\mathcal L} g^{\mu \nu} \right) a_{\nu} .$$ We obtain a conserved charge from it by letting $Q \equiv \int J^0 {\rm d}^3x$ since from $\partial_{\mu}J^{\mu} =0$ we have that $$ {\partial Q \over \partial t} = \int {\rm Div}{\mathbf J}\, {\rm d}^3 x = 0$$ which holds any time the currents decay sufficiently quickly. If the transformation is given by translation $m_{\nu} \leftrightarrow \delta x^{\mu} = \epsilon \delta^{\mu}_{\nu}$ we get four conserved currents $$J^{\mu \nu} = {\partial {\mathcal L} \over \partial \phi_{\mu}} \phi^{\nu} - {\mathcal L} g^{\mu \nu} .$$ This object is more commonly known as stress energy tensor $T^{\mu \nu}$ and the associated conserved currents are known as momenta $p^{\nu}$. Also, in general the conserved current is simply given by $J^{\mu} = T^{\mu \nu} a_{\nu}$. For a Lorentz transformation we have $$m_{\sigma \tau} \leftrightarrow \delta x^{\mu} = \epsilon \left(g^{\mu \sigma} x^{\tau} - g^{\mu \tau} x^{\sigma} \right)$$ (notice that this is antisymmetric and so there are just 6 independent parameters of the transformation) and so the conserved currents are the angular momentum currents $$M^{\sigma \tau \mu} = x^{\tau}T^{\mu \sigma} - x^{\sigma}T^{\mu \tau}.$$ Finally, we obtain the conserved angular momentum as $$M^{\sigma \tau} = \int \left(x^{\tau}T^{0 \sigma} - x^{\sigma}T^{0 \tau} \right) {\rm d}^3 x . $$ Note that for particles we can proceed a little further since their associated momenta and angular momenta are not given by an integral. Therefore we have simply that $p^{\mu} = T^{\mu 0}$ and $M^{\mu \nu} = x^{\mu} p^{\nu} - x^{\nu} p^{\mu}$. The rotation part of this (written in the form of the usual pseudovector) is $${\mathbf L}_i = {1 \over 2}\epsilon_{ijk} M^{jk} = ({\mathbf x} \times {\mathbf p})_i$$ while for the boost part we get $$M^{0 i} = \left(t {\mathbf p} - {\mathbf x} E \right)^i $$ which is nothing else than the center of mass at $t=0$ (we are free to choose $t$ since the quantity is conserved) multiplied by $\gamma$ since we have the relations $E = \gamma m$, ${\mathbf p} = \gamma m {\mathbf v}$. Note the similarity to the ${\mathbf E}$, $\mathbf B$ decomposition of the electromagnetic field tensor $F^{\mu \nu}$.
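As a quick sanity check of the boost charge (a worked line I am adding, in units where $c=1$): for a single free particle with constant velocity $\mathbf{v}$, the energy $E$ and momentum $\mathbf{p}$ are constant and $\mathbf{x}(t)=\mathbf{x}_0+\mathbf{v}t$, so $$\frac{\mathrm d}{\mathrm dt}\left(t\,\mathbf{p}-\mathbf{x}E\right) = \mathbf{p}-\mathbf{v}E = \mathbf{p}-\frac{\mathbf{p}}{E}E = 0,$$ i.e. the conserved quantity associated with boosts is $-\mathbf{x}_0 E$, the initial position weighted by the energy - the statement that the centre of energy of an isolated system moves uniformly.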
{ "source": [ "https://physics.stackexchange.com/questions/12559", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4551/" ] }
12,599
Take a metal disc and cut a small, circular hole in the center. When you heat the whole thing, will the hole's diameter increase or decrease, and why? What will happen to the diameter of the disc?
Instead of a circular hole, let's think of a square hole. You can get a square hole two ways: you can cut it out of a complete sheet, or you can get one by cutting a sheet into 9 little squares and throwing away the center one. Since the 8 outer squares all get bigger when you heat them, the inner square (the hole) also has to get bigger. The same thing happens with a round hole. This is confusing to people because the primary experience they have with stuff getting larger when heated is by cooking. If you leave a hole in the middle of a cookie and cook it, yes, the cookie gets bigger and the hole gets smaller. But the reason for this is that the cookie isn't so solid. It's more like a liquid; it's deforming. And as Ilmari Karonen points out, the cookie sheet isn't expanding much so there are frictional forces at work.
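As a rough numerical illustration (my own sketch, with an assumed expansion coefficient, temperature rise, and dimensions, not numbers from the answer): on heating, every length in the plate, including the hole's diameter, scales by the same factor $1+\alpha\,\Delta T$.

```python
alpha = 23e-6                  # assumed linear expansion coefficient (aluminium), 1/K
dT = 100.0                     # assumed temperature rise, K
d_hole, d_disc = 10.0, 100.0   # assumed initial hole and disc diameters, mm

scale = 1 + alpha * dT
print(f"hole: {d_hole:.3f} mm -> {d_hole * scale:.3f} mm")   # the hole grows
print(f"disc: {d_disc:.3f} mm -> {d_disc * scale:.3f} mm")   # by the same factor as the disc
```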
{ "source": [ "https://physics.stackexchange.com/questions/12599", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4621/" ] }
12,611
From what I remember in my undergraduate quantum mechanics class, we treated scattering of non-relativistic particles from a static potential like this: Solve the time-independent Schrödinger equation to find the energy eigenstates. There will be a continuous spectrum of energy eigenvalues. In the region to the left of the potential, identify a piece of the wavefunction that looks like $Ae^{i(kx - \omega t)}$ as the incoming wave. Ensure that to the right of the potential, there is no piece of the wavefunction that looks like $Be^{-i(kx + \omega t)}$, because we only want to have a wave coming in from the left. Identify a piece of the wavefunction to the left of the potential that looks like $R e^{-i(kx + \omega t)}$ as a reflected wave. Identify a piece of the wavefunction to the right of the potential that looks like $T e^{i(kx - \omega t)}$ as a transmitted wave. Show that $|R|^2 + |T|^2 = |A|^2$. Interpret $\frac{|R|^2}{|A|^2}$ as the probability for reflection and $\frac{|T|^2}{|A|^2}$ as the probability for transmission. This entire process doesn't seem to have anything to do with a real scattering event - where a real particle is scattered by a scattering potential - since we do all our analysis on stationary waves. Why should such a naive procedure produce reasonable results for something like Rutherford's foil experiment, in which alpha particles are in motion as they collide with nuclei, and in which the wavefunction of the alpha particle is typically localized in a (moving) volume much smaller than the scattering region?
This is fundamentally no more difficult than understanding how quantum mechanics describes particle motion using plane waves. If you have a delocalized wavefunction $\exp(ipx)$ it describes a particle moving to the right with velocity p/m. But such a particle is already everywhere at once, and only superpositions of such states are actually moving in time. Consider $$\int \psi_k(p) e^{ipx - iE(p) t} dp$$ where $\psi_k(p)$ is a sharp bump at $p=k$, not a delta-function, but narrow. The superposition using this bump gives a wide spatial waveform centered at x=0 at t=0. At large negative times, the fast phase oscillation kills the bump at x=0, but it creates a new bump at those x's where the phase is stationary, that is where $${\partial\over\partial p}( p x - E(p)t ) = 0$$ or, since the superposition is sharp near k, where $$ x = E'(k)t$$ which means that the bump is moving with a steady speed as determined by Hamilton's laws. The total probability is conserved, so that the integral of psi squared on the bump is conserved. The actual time-dependent scattering event is a superposition of stationary states in the same way. Each stationary state describes a completely coherent process, where a particle in a perfect sinusoidal wave hits the target, and scatters outward, but because it is an energy eigenstate, the scattering is completely delocalized in time. If you want a collision which is localized, you need to superpose, and the superposition produces a natural scattering event, where a wave-packet comes in, reflects and transmits, and goes out again. If the incoming wavepacket has an energy which is relatively sharply defined, all the properties of the scattering process can be extracted from the corresponding energy eigenstate. Given the solutions to the stationary eigenstate problem $\psi_p(x)$ for each incoming momentum $p$, so that at large negative x, $\psi_p(x) = \exp(ipx) + A \exp(-ipx)$ and $\psi_p(x) = B\exp(ipx)$ at large positive x, superpose these waves in the same way as for a free particle $$\int dp \psi_k(p) \psi_p(x) e^{-iE(p)t}$$ At large negative times, the phase is stationary only for the incoming part, not for the outgoing or reflected part. This is because each of the three parts describes a free-particle motion, so if you understand where a free particle with that momentum would classically be at that time, this is where the wavepacket is nonzero. So at negative times, the wavepacket is centered at $$ x = E'(k)t$$ For large positive t, there are two places where the phase is stationary--- those x where $$ x = - E'(k) t$$ $$ x = E_2'(k) t$$ Where $E_2'(k)$ is the change in phase of the transmitted k-wave in time (it can be different than the energy if the potential has an asymptotically different value at $+\infty$ than at $-\infty$). These two stationary phase regions are where the reflected and transmitted packets are located. The coefficients of the reflected and transmitted packets are A and B. If A and B were of unit magnitude, the superposition would conserve probability. So the actual transmission and reflection probability for a wavepacket is the square of the magnitude of A and of B, as expected.
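The stationary-phase argument above is easy to check numerically. Here is a small sketch (my own, with $\hbar=m=1$ and an assumed bump centre $k_0$ and width $\sigma_k$): superposing free plane waves weighted by a narrow momentum bump produces a packet whose peak moves at the group velocity $E'(k_0)=k_0$, exactly where the phase is stationary.

```python
import numpy as np

# Sketch (hbar = m = 1): build psi(x,t) = sum_p g(p) exp(i p x - i E(p) t)
# with a narrow momentum bump g(p) around k0 and track the packet's peak.
k0, sigma_k = 2.0, 0.1                        # assumed bump centre and width
p = np.linspace(k0 - 1.0, k0 + 1.0, 500)
g = np.exp(-(p - k0)**2 / (2 * sigma_k**2))   # the "sharp bump" psi_k(p)
x = np.linspace(-20.0, 120.0, 2000)

def packet(t):
    phase = np.exp(1j * (np.outer(x, p) - 0.5 * p**2 * t))  # E(p) = p^2/2
    return phase @ g

for t in [0.0, 20.0, 40.0]:
    peak = x[np.argmax(np.abs(packet(t)))]
    print(f"t = {t:5.1f}   packet peak at x = {peak:6.2f}   E'(k0)*t = {k0 * t:6.2f}")
```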
{ "source": [ "https://physics.stackexchange.com/questions/12611", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/74/" ] }
12,664
In almost all proofs I've seen of the Lorentz transformations one starts on the assumption that the required transformations are linear. I'm wondering if there is a way to prove the linearity: Prove that any spacetime transformation $\left(y^0,y^1,y^2,y^3\right)\leftrightarrow \left(x^0,x^1,x^2,x^3\right)$ that preserves intervals, that is, such that $$\left(dy^0\right)^2-\left(dy^1\right)^2-\left(dy^2\right)^2-\left(dy^3\right)^2=\left(dx^0\right)^2-\left(dx^1\right)^2-\left(dx^2\right)^2-\left(dx^3\right)^2$$ is linear (assuming that the origins of both coordinates coincide). That is, show that $\frac{\partial y^i}{\partial x^j}=L_j^i$ is constant throughout spacetime (that is, show that $\frac{\partial L_j^i}{\partial x^k}=0$). Thus far all I've been able to prove is that $g_{ij}L_p^iL_q^j=g_{pq}$ (where $g_{ij}$ is the metric tensor of special relativity) and that $\frac{\partial L_j^i}{\partial x^k}=\frac{\partial L_k^i}{\partial x^j}$. Any further ideas?
In hindsight, here is a short proof. The metric $g_{\mu\nu}$ is the flat constant metric $\eta_{\mu\nu}$ in both coordinate systems. Therefore, the corresponding (uniquely defined) Levi-Civita Christoffel symbols $$ \Gamma^{\lambda}_{\mu\nu}~=~0$$ are zero in both coordinate systems. It is well-known that the Christoffel symbol does not transform as a tensor under a local coordinate transformation $x^{\mu} \to y^{\rho}=y^{\rho}(x)$, but rather with an inhomogeneous term, which is built from the second derivative of the coordinate transformation, $$\frac{\partial y^{\tau}}{\partial x^{\lambda}} \Gamma^{(x)\lambda}_{\mu\nu} ~=~\frac{\partial y^{\rho}}{\partial x^{\mu}}\, \frac{\partial y^{\sigma}}{\partial x^{\nu}}\, \Gamma^{(y)\tau}_{\rho\sigma}+ \frac{\partial^2 y^{\tau}}{\partial x^{\mu} \partial x^{\nu}}. $$ Hence all the second derivatives are zero, $$ \frac{\partial^2 y^{\tau}}{\partial x^{\mu} \partial x^{\nu}}~=~0, $$ i.e. the transformation $x^{\mu} \to y^{\rho}=y^{\rho}(x)$ is affine.
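As a small illustration of why the constancy of the metric is so restrictive, here is a sympy sketch (my own, using a hypothetical quadratic map with a free parameter $a$, not anything from the question or answer): pulling the Minkowski metric back through a nonlinear coordinate change in 1+1 dimensions does not return the constant metric unless the nonlinear term is switched off, which is the content of the vanishing second derivatives above.

```python
import sympy as sp

# Pull the 2d Minkowski metric back through a candidate map y(x) and see
# whether it stays constant.  (Illustration only; the map is hypothetical.)
t, x, a = sp.symbols('t x a', real=True)

eta = sp.diag(1, -1)                 # metric components in the y-coordinates
y = sp.Matrix([t + a * x**2, x])     # nonlinear candidate transformation
J = y.jacobian([t, x])               # Jacobian dy/dx

g_x = sp.simplify(J.T * eta * J)     # metric components in the x-coordinates
print(g_x)   # equals diag(1, -1) only when a = 0, i.e. when the map is affine
```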
{ "source": [ "https://physics.stackexchange.com/questions/12664", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3064/" ] }
13,060
In log-plots a quantity is plotted on a logarithmic scale. This got me thinking about what the logarithm of a unit actually is. Suppose I have something with length $L = 1 \:\mathrm{km}$. $\log L = \log \mathrm{km}$ It seems that the unit of $\log L$ is $\log \mathrm{km}$, but I can also say $L = 1000 \mathrm{\:m}$ and now: $\log L = 3 + \log \mathrm{m}$ This doesn't appear to have any units at all. This suggests that $\log \mathrm{km}$ and $\log \mathrm{m}$ are actually dimensionless numbers. But wait, I can do this with any unit! Does it actually make sense to talk about the logarithm of a unit, or some other function for that matter?
This is a fun question. I have a hard time getting a good grip on the transformation that is $ln$ so I'll write things in terms of exponents. $$\mathrm{value} = \ln(10\ \mathrm{ km})$$ $$e^{\mathrm{value}} = 10\ \mathrm{ km}$$ The number $e$ is, of course, unit-less. If I raise a number to a power, what are the permissible units of the power? If I write $x^2$, I have an intuitive assumption that $2$ has no units, because it is just a count used to express $x \times x = x^2$. Thus, I have convinced myself of Carl's answer, and I would require a logarithm to have a reference to make sense. For example: $$e^{\mathrm{value}} = \frac{ 10\ \mathrm{ km} }{1\ \mathrm{ km}}$$ The previous alternative of $e$ raised to a power equaling a physical quantity with real units seems like the perfect example of something that is nonsensical. log plots I have another question that stemmed from your question and I will try to answer it here. I specifically remember taking the derivative of log-log and linear-log plots in engineering classes. We had some justification for that, but it would appear to be nonsensical on the surface, so let's dive in. Here is an example of a log-log plot. I'll show the graph and then offer an equation of the line that is being represented. Image source: Wikipedia I'll start writing things from the basic $y=mx+b$ form, then change things as necessary. Since I'm using an arbitrary constant, I'll fudge it whenever necessary. $$\log(p) = a \log(m) + b = a ( \log(m) + b' ) = a \log( b'' m ) = \log( b''^a m^a ) = \log\bigg( \frac{p_0}{m_0^a} m^a \bigg) $$ $$p = p_0 \left( \frac{m}{m_0} \right)^a$$ Like magic, a recognizable form comes through. Observing a linear relation in a log-log plot really means you're observing a power fit, not a linear fit. A student may still ask "but what are a and b", which is a bit more difficult. Firstly, I did no manipulation of $a$, so you can take the meaning directly from the final form, which is to say it's an exponent and thus unitless. For b: $$b = a b' = a \log(b'') = a \log\bigg( \frac{p_0^{1/a}}{m_0} \bigg) = \log\left( \frac{p_0}{m_0^a} \right) $$ This shows that $b$ is also unitless, but it also gives interpretation to $p_0$, which is the reference y-value at some reference x-value ($m_0$). I'll move on to linear-log plot, or a semi-log scale. Image source: J. Exp. Med. 103 , 653 (1956). I'll denote $f$ for "surviving fraction" and $d$ for dose. The equation for a regression that appears linear on the above plot will be the following. $$\log(f) = a d + b$$ $$f = e^{a d + b} = e^b e^{a d} = f_0 e^{a d}$$ It's important to note here that $b$ had dubious units all along, just like in the log-log case, but it doesn't really matter because a more useful form comes out of the mathematics naturally. The value $f_0$ would be the baseline value (100% in this case) at $d=0$. Summary: assuming a linear relation in log plots really makes the assumption that the actual relation follows some nonlinear form, and the units will work out once you do the mathematics, but the interpretation of the values may be nontrivial.
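A concrete way to see the point about units in log plots is to fit a power law the way the answer describes. In the sketch below (my own example, with made-up reference values $m_0$, $p_0$ and exponent $a$), only the dimensionless ratios $m/m_0$ and $p/p_0$ ever enter a logarithm, and the slope of the log-log fit recovers the exponent.

```python
import numpy as np

rng = np.random.default_rng(0)
m0, p0, a = 1.0, 3.0, 1.5                     # assumed reference values and exponent
m = np.logspace(-1, 2, 50)                    # "mass" data in some unit
p = p0 * (m / m0)**a * np.exp(0.02 * rng.standard_normal(m.size))  # noisy power law

# linear fit in log-log space: log(p/p0) = a*log(m/m0) + const
slope, intercept = np.polyfit(np.log(m / m0), np.log(p / p0), 1)
print(f"fitted exponent a = {slope:.3f}   intercept = {intercept:+.3f}")
```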
{ "source": [ "https://physics.stackexchange.com/questions/13060", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4770/" ] }
13,157
There was a reason why I constantly failed physics at school and university, and that reason was, apart from the fact I was immensely lazy, that I mentally refused to "believe" more advanced stuff until I understand the fundamentals (which I, eventually, never did). As such, one of the most fundamental things in physics that I still don't understand (a year after dropping out from the university) is the concept of field . No one cared to explain what a field actually is , they just used to throw in a bunch of formulas and everyone was content. The school textbook definition for a field (electromagnetic in this particular case, but they were similar), as I remember it, goes like: An electromagnetic field is a special kind of substance by which charged moving particles or physical bodies with a magnetic moment interact. A special kind of substance , are they for real? This sounds like the authors themselves didn't quite understand what a field is so they decided to throw in a bunch of buzzwords to make it sounds right. I'm fine with the second half but a special kind of substance really bugs me, so I'd like to focus on that. Is a field material ? Apparently, it isn't. It doesn't consist of particles like my laptop or even the light. If it isn't material, is it real or is it just a concept that helps to explain our observations ? While this is prone to speculations, I think we can agree that in scope of this discussion particles actually do exist and laws of physics don't (the latter are nothing but human ideas so I suspect Universe doesn't "know" a thing about them, at least if we're talking raw matter and don't take it on metalevel where human knowledge, being a part of the Universe, makes the Universe contain laws of physics). Any laws are only a product of human thinking while the stars are likely to exist without us homo sapiens messing around. Or am I wrong here too? I hope you already see why I hate physics. Is a field not material but still real ? Can something "not touchable" by definition be considered part of our Universe by physicists? I used to imagine that a "snapshot" of our Universe in time would contain information about each particle and its position, and this would've been enough to "de seralize " it but I guess my programmer metaphors are largely off the track. (Oh, and I know that the uncertainty principle makes such (de)serialization impossible — I only mean that I thought the Universe can be "defined" as the set of all material objects in it). Is such assumption false? At this point, if fields indeed are not material but are part of the Universe, I don't really see how they are different from the whole Hindu pantheon except for perhaps a more geeky flavor. When I talked about this with the teacher who helped me to prepare for the exams (which I did pass, by the way, it was before I dropped out), she said to me that, if I wanted hardcore definitions, a field is a function that returns a value for a point in space. Now this finally makes a hell lot of sense to me but I still don't understand how mathematical functions can be a part of the Universe and shape the reality.
I'm going to go with a programmer metaphor for you. The mathematics (including "A field is a function that returns a value for a point in space" ) are the interface: they define for you exactly what you can expect from this object. The "what is it, really, when you get right down to it" is the implementation. Formally you don't care how it is implemented. In the case of fields they are not matter (and I consider "substance" an unfortunate word to use in a definition, even though I am hard pressed to offer a better one) but they are part of the universe and they are part of physics. What they are is the aggregate effect of the exchange of virtual particles governed by a quantum field theory (in the case of E&M) or the effect of the curvature of space-time (in the case of gravity, and stay tuned to learn how this can be made to get along with quantum mechanics at the very small scale...). Alas I can't define how these things work unless you simply accept that fields do what the interface says and then study hard for a few years. Now, it is very easy to get hung up on this "Is it real or not" thing, and most people do for at least a while, but please just put it aside. When you peer really hard into the depth of the theory, it turns out that it is hard to say for sure that stuff is "stuff". It is tempting to suggest that having a non-zero value of mass defines "stuffness", but then how do you deal with the photo-electric effect (which makes a pretty good argument that light comes in packets that have enough "stuffness" to bounce electrons around)? All the properties that you associate with stuff are actually explainable in terms of electro-magnetic fields and mass (which in GR is described by a component of a tensor field!). And round and round we go.
{ "source": [ "https://physics.stackexchange.com/questions/13157", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4809/" ] }
13,184
Obviously this is impossible in relativity; however, if we ignore relativity and use only Newtonian mechanics, is this possible? How (or why not)?
The answer is yes in some uninteresting senses: Take two gravitationally attracting point particles and set them at rest. They will attract each other and their velocity will go to $\infty$ in finite time. Note this doesn't contradict conservation of energy since the gravitational potential energy is proportional to $-1/r$. This isn't so interesting since it's just telling you that things under gravity collide. But it's technically important in dealing with the problem of gravitationally attracting bodies. Now a more interesting question: Is there a situation where the speed of a particle goes to infinity without it just being a collision of two bodies? Surprisingly, the answer to this question is yes, even in a very natural setting. The great example was given by Xia in 1992 (Z. Xia, “The Existence of Noncollision Singularities in Newtonian Systems,” Annals Math. 135, 411-468, 1992). His example is five bodies interacting gravitationally. With the right initial conditions one of the bodies can be made to oscillate faster and faster, with the frequency and amplitude going to infinity in a finite amount of time. Added: Here is an image. The four masses $M$ are paired into two binary systems which rotate in opposite directions. The little mass $m$ oscillates up and down ever faster. Its behavior becomes singular in finite time.
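For the first, "uninteresting" example, the divergence is easy to quantify. A quick sketch (my own, in units where $G(m_1+m_2)=1$ and with an assumed initial separation $r_0=1$): energy conservation for the relative coordinate gives $\tfrac12 v^2 = G(m_1+m_2)\left(1/r - 1/r_0\right)$, so the relative speed grows without bound as $r\to 0$, while the collision happens after the finite free-fall time $t_c = \tfrac{\pi}{2}\sqrt{r_0^3/(2G(m_1+m_2))}$.

```python
import numpy as np

GM, r0 = 1.0, 1.0                            # units with G*(m1 + m2) = 1, released from rest at r0
t_c = 0.5 * np.pi * np.sqrt(r0**3 / (2 * GM))
print(f"collision time t_c = {t_c:.4f}  (finite)")

for r in [1e-1, 1e-3, 1e-6]:
    v = np.sqrt(2 * GM * (1 / r - 1 / r0))   # from energy conservation
    print(f"r = {r:8.0e}   relative speed = {v:12.1f}")
```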
{ "source": [ "https://physics.stackexchange.com/questions/13184", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/175/" ] }
13,480
A typical problem in quantum mechanics is to calculate the spectrum that corresponds to a given potential. Is there a one to one correspondence between the potential and its spectrum? If the answer to the previous question is yes, then given the spectrum, is there a systematic way to calculate the corresponding potential?
In general, the answer is no. This type of inverse problem is sometimes referred to as: "Can one hear the shape of a drum" . An extensive exposition by Beals and Greiner ( Anal. Appl. 7 , 131 (2009) ; eprint ) discusses various problems of this type. Despite the fact that one can get a lot of geometrical and topological information from the spectrum or even its asymptotic behavior, this information is not complete even for systems as simple as quantum mechanics along a finite interval. For additional details, see Apeiron 9 no. 3, 20 (2002) , or also Phys. Rev. A 40 , 6185 (1989) , Phys. Rev. A 82 , 022121 (2010) , or Phys. Rev. A 55 , 2580 (1997) . For a more experimental view, you can actually have particle-in-a-box problems with differently-shaped boxes in two dimensions that have the same spectra; this follows directly from the Gordon-Webb isospectral drums ( Am. Sci. 84 no. 1, 46 (1996) ; jstor ), and it was implemented by the Manoharan lab in Stanford ( Science 319 , 782 (2008) ; arXiv:0803.2328 ), to striking effect:
{ "source": [ "https://physics.stackexchange.com/questions/13480", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4521/" ] }
13,557
Nowadays it seems to be popular among physics educators to present Newton's first law as a definition of inertial frames and/or a statement that such frames exist. This is clearly a modern overlay. Here is Newton's original statement of the law (Motte's translation): Law I. Every body perseveres in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed thereon. The text then continues: Projectiles persevere in their motions, so far as they are not retarded by the resistance of the air, or impelled downwards by the force of gravity. A top, whose parts by their cohesion are perpetually drawn aside from rectilinear motion, does not cease its rotation, otherwise than as it is retarded by the air. The greater bodies of the planets and comets, meeting with less resistance in more free spaces, preserve their motions both progressive and circular for a much longer time. And then the second law is stated. There is clearly nothing about frames of reference here. In fact, the discussion is so qualitative and nonmathematical that many modern physics teachers would probably mark it wrong on an exam. I have a small collection of old physics textbooks, and one of the more historically influential ones is Elements of Physics by Millikan and Gale, 1927. (Millikan wrote a long series of physics textbooks with various titles.) Millikan and Gale give a statement of the first law that reads like an extremely close paraphrase of the Mott translation. There is no mention of frames of reference, inertial or otherwise. A respected and influential modern textbook, aimed at a much higher level than Millikan's book, is Kleppner and Kolenkow's 1973 Introduction to Mechanics. K&K has this: ...it is always possible to find a coordinate system with respect to which isolated bodies move uniformly. [...] Newton's first law of motion is the assertion that inertial systems exist. Newton's first law is part definition and part experimental fact. Isolated bodies move uniformly in inertial systems by virtue of the definition of an inertial system. In contrast, that inertial systems exist is a statement about the physical world. Newton's first law raises a number of questions, such as what we mean by an 'isolated body,' [...] There is a paper on this historical/educational topic: Galili and Tseitlin, "Newton's First Law: Text, Translations, Interpretations and Physics Education," Science & Education Volume 12, Number 1, 45-73, DOI: 10.1023/A:1022632600805 . I had access to it at one time, and it seemed very relevant. Unfortunately it's paywalled now. The abstract, which is not paywalled, says, Normally, NFL is interpreted as a special case: a trivial deduction from Newton's Second Law. Some advanced textbooks replace NFL by a modernized claim, which abandons its original meaning. Question 1 : Does anyone know more about when textbooks begain to claim that the first law was a statement of the definition and/or existence of inertial frames? There seem to be several possible interpretations of the first law: A. Newton consciously wrote the laws of motion in the style of an axiomatic system, possibly emulating Euclid. However, this is only a matter of style. The first law is clearly a trivial deduction from the second law. Newton presented it as a separate law merely to emphasize that he was working in the framework of Galileo, not medieval scholasticism. B. 
Newton's presentation of the first and second laws is logically defective, but Newton wasn't able to do any better because he lacked the notion of inertial and noninertial frames of reference. Modern textbook authors can tell Newton, "there, fixed that for you." C. It is impossible to give a logically rigorous statement of the physics being described by the first and second laws, since gravity is a long-range force, and, as pointed out by K&K, this raises problems in defining the meaning of an isolated body. The best we can do is that in a given cosmological model, such as the Newtonian picture of an infinite and homogeneous universe full of stars, we can find some frame, such as the frame of the "fixed stars," that we want to call inertial. Other frames moving inertially relative to it are also inertial. But this is contingent on the cosmological model. That frame could later turn out to be noninertial, if, e.g., we learn that our galaxy is free-falling in an external gravitational field created by other masses. Question 2 : Is A supported by the best historical scholarship? For extra points, would anyone like to tell me that I'm an idiot for believing in A and C, or defend some other interpretation on logical or pedagogical grounds? [EDIT] My current guess is this. I think Ernst Mach's 1919 The Science Of Mechanics gradually began to influence presentations of the first law. Influential textbooks such as Millikan's only slightly postdated Mach's book, and were aimed at an audience that would have been unable to absorb Mach's arguments. Later, texts such as Kleppner, which were aimed at a more elite audience, began to incorporate Mach's criticism and reformulation of Newton. Over time, texts such as Halliday, which were aimed at less elite audiences, began to mimic treatments such as Kleppner's.
I did not do more than read Newton, and a few commentators, so my insight on this is probably meager. But I am sure that you are right that the inertial frame interpretation of the first law is only a modern ex-post-facto justification for making it separate from the second law. Newton certainly never used the first law to define an inertial frame; he just assumed you had one in mind, since inertial frames were not the focus of his investigation. I think that the statements of the laws of motion are unfortunately following Aristotle more than Euclid. Since physics is no longer regarded as philosophy, we value independence of axioms over clarity of philosophical expounding, and this makes the first law redundant. But if you are stating a philosophical position--- that things maintain their state of motion unless acted upon--- Newton's first law is a neat summary of the foundation of the world-system. Note that Newton does not state it as "a body in linear motion continues moving linearly". He includes rotational motion too, even though this is a different idea. I think he conflates the two to fix in mind the philosophical position that uniform motion is the natural state of all objects. In Aristotle, the natural state of massive stuff like "earth" is to be down at the center of the universe, and of light stuff like "fire" to be up in the heavens, leading to gravity and levity. Newton is replacing this notion with a different notion of natural state. Then the second law talks about deviations from the natural state, and is a separate philosophical idea (although not a separate axiom in the mathematical sense). The influence of Aristotle has (thankfully) declined through the centuries, making Newton's laws a little anachronistic. I think that we don't have to be so slavish to Newton nowadays. Newton was aware of the importance of linear momentum and angular momentum conservation. One other way of understanding his first law is to make the conservation laws primary. This point of view is both closer to Newton's thinking (it is what makes his "natural states" natural) and a better fit with modern understanding. So it might be nice to restate the first law as "linear momentum and angular momentum are conserved". All this is based on personal speculation, not on sound historical research, so take it with a grain of salt.
{ "source": [ "https://physics.stackexchange.com/questions/13557", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
13,624
Why are Lagrangians with higher derivatives (or with infinitely many of them) sometimes said to be 'non-local'? And in what sense are they 'non-local'?
The higher the number of derivatives the more initial data you have to provide. If you have some Lagrangian that contains an infinite number of derivatives (or derivatives appearing non-polynomially, such as one over derivative) then you have to provide an infinite amount of initial data which amounts to non-local info, in the sense explained below. If you think in terms of Taylor expansions around your initial value, then you have to provide the full function (and thus non-local information) if you have an infinite number of derivatives. This is to be contrasted with cases where you provide only the field and its first derivative as initial values (and thus rather local information). Personally, I would not call any higher-derivative Lagrangian "non-local", but only those theories where the number of derivatives is formally infinite in the Lagrangian. In any discretization scheme you literally see the non-locality induced by higher derivatives: to define the first derivative you need to know the function on two adjacent lattice points, to define the second derivative on three and to define the n-th derivative on n+1 lattice points. Thus, the more derivatives the more non-locality. If you have an infinite number of derivatives you need to know the function on an infinite set of lattice points.
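The lattice picture in the answer can be made concrete in a few lines of code. The sketch below (my own illustration, using the standard forward-difference formula with an assumed step $h$) shows that estimating the $n$-th derivative of a function requires sampling it at $n+1$ neighbouring points, so a Lagrangian containing arbitrarily many derivatives needs data from an arbitrarily wide neighbourhood.

```python
import numpy as np
from math import comb

def nth_derivative(f, x0, n, h=1e-3):
    # forward difference: f^(n)(x0) ~ h^-n * sum_k (-1)^(n-k) C(n,k) f(x0 + k h)
    ks = np.arange(n + 1)                                   # n+1 lattice points
    coeffs = np.array([(-1.0)**(n - k) * comb(n, k) for k in ks])
    return coeffs @ f(x0 + ks * h) / h**n

for n in [1, 2, 4]:
    est = nth_derivative(np.sin, 0.3, n)
    print(f"n = {n}: needs {n + 1} points, estimate = {est:+.4f}")
# exact values for comparison: cos(0.3) = +0.9553, -sin(0.3) = -0.2955, sin(0.3) = +0.2955
```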
{ "source": [ "https://physics.stackexchange.com/questions/13624", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1458/" ] }
13,654
I know what anti-matter is and how when it collides with matter both are annihilated. However, what about anti-photons? Are there such things as anti-photons? I initially thought the idea preposterous. However I am curious because, if anti-photons don't exist, then anti-matter could theoretically transfer its energy to normal matter - through the mechanism of light. Is it right?
Well, they do and don't. Depends on your point of view. Here's the story. Quantum field theory requires for consistency reasons that every charged particle has its antiparticle. It also tells you what properties the antiparticle will have: it will have the same characteristics from the point of view of space-time (i.e. the Poincaré group), which means equal mass and spin. And it will have all charges of opposite sign to those of the matter particle. If the particle is not charged then QFT doesn't impose any other constraint, and so you don't need antiparticles for photons (since they are not charged). But you can still consider the same operation of keeping mass and spin and swapping charges, and since this does nothing to the photon, you can decide to identify it with an antiphoton. Your call.
{ "source": [ "https://physics.stackexchange.com/questions/13654", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2843/" ] }
13,728
It is well-known even among the lay public (thanks to popular books) that string theory first arose in the field of strong interactions, where certain scattering amplitudes had properties that could be explained by assuming there were strings lurking around. Unfortunately, that's about as far as my knowledge reaches. Can you explain in detail what kind of objects show those peculiar stringy properties, and what precisely those properties are? How did the old "string theory" explain those properties? Are those explanations still relevant, or have they been superseded by the modern perspective gained from QCD (which hadn't yet been around in Veneziano's time)?
In the late 1960s, the strongly interacting particles were a jungle. Protons, neutrons, pions, kaons, lambda hyperons, other hyperons, additional resonances, and so on. It seemed like dozens of elementary particles that strongly interacted. There was no order. People thought that quantum field theory had to die. However, they noticed regularities such as Regge trajectories. The minimal mass of a particle of spin $J$ went like $$ M^2 = aJ + b $$ i.e. the squared mass is a linear function of the spin. This relationship was confirmed phenomenologically for a couple of the particles. In the $M^2$-$J$ plane, you had these straight lines, the Regge trajectories. Building on this and related insights, Veneziano "guessed" a nice formula for the scattering amplitudes of the $\pi+\pi \to \pi+\rho$ process, or something like that. It had four mesons and one of them was different. His first amplitude was the Euler beta function $$ M = \frac{\Gamma(u)\Gamma(v)}{\Gamma(u+v)}$$ where $\Gamma$ is the generalized factorial and $u,v$ are linear functions of the Mandelstam variables $s,t$ with fixed coefficients again. This amplitude agrees with the Regge trajectories because $\Gamma(x)$ has poles at all non-positive integers. These poles in the amplitude correspond to the exchange of particles in the $s,t$ channels. One may show that if we expand the amplitude in terms of its residues, the exchanged particles' maximum spin is indeed a linear function of the squared mass, just like in the Regge trajectory. So why are there infinitely many particles that may be exchanged? Susskind, Nielsen, Yoneya, and maybe others realized that there has to be "one particle" of a sort that may have any internal excitations - like the Hydrogen atom. Except that the simple spacing of the levels looked much easier than the Hydrogen atom - it was like harmonic oscillators. Infinitely many of them were still needed. They ultimately realized that if we postulate that the mesons are (open) strings, we reproduce the whole Veneziano formula because of an integral that may be used to define it. One of the immediate properties that the "string concept" demystified was the "duality" in the language of the 1960s - currently called the "world sheet duality". The amplitude $M$ above is $u,v$-symmetric. But it can be expanded in terms of poles for various values of $u$; or various values of $v$. So it may be calculated as a sum of exchanges purely in the $s$-channel; or purely in the $t$-channel. You don't need to sum up diagrams with the $s$-channel or with the $t$-channel: one of them is enough! This simple principle, one that Veneziano actually correctly guessed to be a guiding principle for his search of the meson amplitude, is easily explained by string theory. The diagram in which 2 open strings merge into 1 open string and then split may be interpreted as a thickened $s$-channel graph; or a thick $t$-channel graph. There's no qualitative difference between them, so they correspond to a single stringy integral for the amplitude. This is more general - one stringy diagram usually reduces to the sum of many field-theoretical Feynman diagrams in various limits. String theory automatically resums them. Around 1970, many things worked for the strong interactions in the stringy language. Others didn't. String theory turned out to be too good - in particular, it was "too soft" at high energies (the amplitudes decrease exponentially with energies). QCD and quarks emerged.
Around the mid-1970s, 't Hooft wrote his famous paper on large $N$ gauge theory - in which some strings emerge, too. Only in 1997 were these hints made explicit by Maldacena, who showed that string theory was the right description of a gauge theory (or many of them) at the QCD scale, after all: the relevant target space must however be higher-dimensional and be an anti de Sitter space. In AdS/CFT, many of the original strategies - e.g. the assumption that mesons are open strings of a sort - get revived and become quantitatively accurate. It just works. Of course, meanwhile, around the mid-1970s, it was also realized that string theory was primarily a quantum theory of gravity because the spin-2 massless modes inevitably exist and inevitably interact via general relativity at long distances. In the early and mid 1980s, it was realized that string theory included the right excitations and interactions to describe all particle species and all forces we know in Nature, and nothing could have been undone about this insight later. Today, we know that the original motivation of string theory wasn't really wrong: it was just trying to use non-minimal compactifications of string theory. Simpler vacua of string theory explain gravity in a quantum language long before they explain the strong interactions.
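The pole structure that made Veneziano's guess work can be checked directly. The sketch below (my own numerical check, with an arbitrary test value of $v$) verifies that $B(u,v)=\Gamma(u)\Gamma(v)/\Gamma(u+v)$ has poles at $u=0,-1,-2,\dots$ and that the residue at $u=-n$ is the degree-$n$ polynomial $\frac{(-1)^n}{n!}(v-1)(v-2)\cdots(v-n)$, so the states exchanged at the $n$-th level carry spins up to $J=n$ -- the linear Regge growth described in the answer.

```python
import numpy as np
from math import factorial
from scipy.special import gamma

def B(u, v):
    return gamma(u) * gamma(v) / gamma(u + v)

def residue_numeric(n, v, eps=1e-7):
    return eps * B(-n + eps, v)          # (u + n) * B(u, v) evaluated just next to the pole

def residue_formula(n, v):
    poly = np.prod([v - j for j in range(1, n + 1)])   # (v-1)(v-2)...(v-n), degree n in v
    return (-1)**n / factorial(n) * poly

v_test = 2.7                             # arbitrary test value
for n in range(4):
    print(f"u = -{n}:  numeric {residue_numeric(n, v_test):+.5f}"
          f"   formula {residue_formula(n, v_test):+.5f}")
```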
{ "source": [ "https://physics.stackexchange.com/questions/13728", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/329/" ] }
13,757
I read on Wikipedia how the numerical value of Avogadro's number can be found by doing an experiment, provided you have the numerical value of Faraday's constant; but it seems to me that Faraday's constant could not be known before Avogadro's number was as it's the electric charge per mole. (How could we know the charge of a single electron just by knowing the charge of a mole of electrons, without knowing the ratio of the number of particles in both?) I just want to know the method physically used, and the reasoning and calculations done by the first person who found the number $6.0221417930\times10^{23}$ (or however accurate it was first discovered to be). Note: I see on the Wikipedia page for Avogadro constant that the numerical value was first obtained by "Johann Josef Loschmidt who, in 1865, estimated the average diameter of the molecules in air by a method that is equivalent to calculating the number of particles in a given volume of gas;" but I can't access any of the original sources that are cited. Can somebody explain it to me, or else give an accessible link so I can read about what exactly Loschmidt did?
The first estimate of Avogadro's number was made by a monk named Chrysostomus Magnenus in 1646. He burned a grain of incense in an abandoned church and assumed that there was one 'atom' of incense in his nose as soon as he could faintly smell it; he then compared the volume of the cavity of his nose with the volume of the church. In modern language, the result of his experiment was $N_A \ge 10^{22}$ ... quite amazing given the primitive setup. Please remember that the year is 1646; the 'atoms' refer to Democritus's ancient theory of indivisible units, not to atoms in our modern sense. I have this information from a physical chemistry lecture by Martin Quack at the ETH Zurich. Here are further references (see notes to page 4, in German): http://edoc.bbaw.de/volltexte/2007/477/pdf/23uFBK9ncwM.pdf The first modern estimate was made by Loschmidt in 1865. He compared the mean free path of molecules in the gas phase with the volume they occupy in the liquid phase. He obtained the mean free path by measuring the viscosity of the gas and assumed that the liquid consists of densely packed spheres. He obtained $N_A \approx 4.7 \times 10^{23}$, compared to the modern value $N_A = 6.022 \times 10^{23}$.
{ "source": [ "https://physics.stackexchange.com/questions/13757", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4715/" ] }
13,851
I was wondering if a photon is divisible. If you look at a photon as a particle, then you may be able to split it (in theory). Is it possible and how do you split it?
The photon cannot be split as one can split a nucleus. As it has zero mass it cannot decay. But it can interact with another particle, lose part of its energy, and thus change wavelength. It can be transmuted. Have a look at the Compton scattering entry in Wikipedia. Edit: Intrigued by the other answers, I searched and found that within special crystals "splits" can happen, if one defines a split as two photons coming out whose energies add up to the original energy of the photon. So in a collective crystal-photon interaction such a probability exists.
{ "source": [ "https://physics.stackexchange.com/questions/13851", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4867/" ] }
13,861
The great russian physicist Lev Landau developed a famous entry exam to test his students. This "Theoretical Minimum" contained everything he considered elementary for a young theoretical physicist. Despite its name, it was notoriously hard and comprehensive, and in Landau's time, only 43 students passed it. I wonder if anyone can provide the list of topics, or even a copy of the exam? (I'm sure I'd have no chance to pass, but I'd like to see it out of a sense of sportmanship ;-). Also, I think it would make quite a good curriculum of theoretical physics (at least pre 1960).)
The list of topics can be found here (in Russian, of course). Nowadays students are examined by collaborators of Landau Institute for Theoretical Physics . Each exam, as it was before, consists of problems solving. For every exam there is one or several examiners with whom you are supposed to contact with to inform that you're willing to pass this particular exam (they will make an appointment). Everyone can pass any exam in any order. Today Landau's theoretical minimum (not all 11 exams, but at least 6 of them) is included in the program for students of Department of General and Applied Physics ( Moscow Institute of Physics and Technology ). The program for each exam, as you can see from the link above, corresponds to the contents of volumes in the Course of Theoretical Physics by L&L (usually you have to master almost all paragraphs in the volume to pass the exam). Mathematics I . Integration, ordinary differential equations, vector algebra and tensor analysis. Mechanics . Mechanics, Vol. 1 , except §§ 27, 29, 30, 37, 51 (1988 russian edition) Field theory The Classical Theory of Fields, Vol. 2 , except §§ 50, 54-57, 59-61, 68, 70, 74, 77, 97, 98, 102, 106, 108, 109, 115-119 (1973 russian edition) Mathematics II . The theory of functions of a complex variable, residues, solving equations by means of contour integrals (Laplace's method), the computation of the asymptotics of integrals, special functions (Legendre, Bessel, elliptic, hypergeometric, gamma function) Quantum Mechanics . Quantum Mechanics: Non-Relativistic Theory, Vol. 3 , except §§ 29, 49, 51, 57, 77, 80, 84, 85, 87, 88, 90, 101, 104, 105, 106-110, 114, 138, 152 (1989 russian edition) Quantum electrodynamics . Relativistic Quantum Theory, Vol. 4 , except §§ 9, 14-16, 31, 35, 38-41, 46-48, 51, 52, 55, 57, 66-70, 82, 84, 85, 87, 89 - 91, 95-97, 100, 101, 106-109, 112, 115-144 (1980 russian edition) Statistical Physics I . Statistical Physics, Vol. 5 , except §§ 22, 30, 50, 60, 68, 70, 72, 79, 80, 84, 95, 99, 100, 125-127, 134-141, 150-153 , 155-160 (1976 russian edition) Mechanics of continua . Fluid Mechanics, Vol. 6 , except §§ 11, 13, 14, 21, 23, 25-28, 30-32, 34-48, 53-59, 63, 67-78, 80, 83, 86-88, 90 , 91, 94-141 (1986 russian edition); Theory of Elasticity, Vol. 7 , except §§ 8, 9, 11-21, 25, 27-30, 32-47 (1987 russian edition) Electrodynamics of Continuous Media . Electrodynamics of Continuous Media, Vol. 8 , except §§ 1-5, 9, 15, 16, 18, ​​25, 28, 34, 35, 42-44, 56, 57, 61-64, 69, 74, 79-81 , 84, 91-112, 123, 126 (1982 russian edition) Statistical Physics II . Statistical Physics, Part 2. Vol. 9 , only §§ 1-5, 7-18, 22-27, 29, 36-40, 43-48, 50, 55-61, 63-65, 69 (1978 russian edition) Physical Kinetics . Physical Kinetics. Vol. 10 , only §§ 1-8, 11, 12, 14, 21, 22, 24, 27-30, 32-34, 41-44, 66-69, 75, 78-82, 86, 101. Some real problems (Quantum Mechanics exam): The electron enters a straight pipe of circular cross section (radius $r$). The tube is bent at a radius $R \gg r$ by the angle $\alpha$ and then is aligned back again. Find the probability that the electron will jump out. A hemisphere lies on an infinite two-dimensional plane. The electron falls on the hemisphere, determine the scattering cross section in the Born approximation. The electron "sits" in the ground state in the cone-shaped "bag" under the influence of gravity. The lower end of the plastic bag is cut with scissors. Find the time for the electron to fall out (in the semi-classical approximation).
{ "source": [ "https://physics.stackexchange.com/questions/13861", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/825/" ] }
13,870
I have read before in one of Seiberg's articles something like, that gauge symmetry is not a symmetry but a redundancy in our description, by introducing fake degrees of freedom to facilitate calculations. Regarding this I have a few questions: Why is it called a symmetry if it is not a symmetry? what about Noether theorem in this case? and the gauge groups U(1)...etc? Does that mean, in principle, that one can gauge any theory (just by introducing the proper fake degrees of freedom)? Are there analogs or other examples to this idea, of introducing fake degrees of freedom to facilitate the calculations or to build interactions, in classical physics? Is it like introducing the fictitious force if one insists on using Newton's 2nd law in a noninertial frame of reference?
In order: Because the term "gauge symmetry" pre-dates QFT. It was coined by Weyl, in an attempt to extend general relativity. In setting up GR, one could start with the idea that one cannot compare tangent vectors at different spacetime points without specifying a parallel transport/connection; Weyl tried to extend this to include size, thus the name "gauge". In modern parlance, he created a classical field theory of a $\mathbb{R}$-gauge theory. Because $\mathbb{R}$ is locally the same as $U(1)$ this gave the correct classical equations of motion for electrodynamics (i.e. Maxwell's equations). As we will go into below, at the classical level, there is no difference between gauge symmetry and "real" symmetries. Yes. In fact, a frequently used trick is to introduce such a symmetry to deal with constraints. Especially in subjects like condensed matter theory, where nothing is so special as to be believed to be fundamental, one often introduces more degrees of freedom and then "glue" them together with gauge fields. In particular, in the strong-coupling/Hubbard model theory of high-$T_c$ superconductors, one way to deal with the constraint that there be no more than one electron per site (no matter the spin) is to introduce spinons (fermions) and holons (bosons) and a non-Abelian gauge field, such that really the low energy dynamics is confined --- thus reproducing the physical electron; but one can then go and look for deconfined phases and ask whether those are helpful. This is a whole other review paper in and of itself. (Google terms: "patrick lee gauge theory high tc".) You need to distinguish between forces and fields/degrees of freedom. Forces are, at best, an illusion anyway. Degrees of freedom really matter however. In quantum mechanics, one can be very precise about the difference. Two states $\left|a\right\rangle$ and $\left|b\right\rangle$ are "symmetric" if there is a unitary operator $U$ s.t. $$U\left|a\right\rangle = \left|b\right\rangle$$ and $$\left\langle a|A|a\right\rangle =\left\langle b|A|b\right\rangle $$ where $A$ is any physical observable. "Gauge" symmetries are those where we decide to label the same state $\left|\psi\right\rangle$ as both $a$ and $b$. In classical mechanics, both are represented the same way as symmetries (discrete or otherwise) of a symplectic manifold. Thus in classical mechanics these are not separate, because both real and gauge symmetries lead to the same equations of motion; put another way, in a path-integral formalism you only notice the difference with "large" transformations, and locally the action is the same. A good example of this is the Gibbs paradox of working out the entropy of mixing identical particles -- one has to introduce by hand a factor of $N!$ to avoid overcounting --- this is because at the quantum level, swapping two particles is a gauge symmetry. This symmetry makes no difference to the local structure (in differential geometry speak) so one cannot observe it classically. A general thing -- when people say "gauge theory" they often mean a much more restricted version of what this whole discussion has been about. For the most part, they mean a theory where the configuration variable includes a connection on some manifold. These are a vastly restricted version, but covers the kind that people tend to work with, and that's where terms like "local symmetry" tend to come from. 
Speaking as a condensed matter physicist, I tend to think of those as theories of closed loops (because the holonomy around a loop is "gauge invariant") or, if fermions are involved, open loops. Various phases are then condensations of these loops, etc. (For references, look at "string-net condensation" on Google.) Finally, the discussion would be amiss without some words about "breaking" gauge symmetry. As with real symmetry breaking, this is a polite but useful fiction, and really refers to the fact that the ground state is not the naive vacuum. The key is the ordering of limits --- if one (correctly) takes the large-system limit last (both IR and UV) then no breaking of any symmetry can occur. However, it is useful to put in by hand the fact that different ground states related by a real symmetry are separated into different superselection sectors, and so to work with a reduced Hilbert space of only one of them; for gauge symmetries one can again do the same (carefully), commuting superselection with gauge fixing.
{ "source": [ "https://physics.stackexchange.com/questions/13870", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4521/" ] }
13,980
I am reading up on the Schrödinger equation and I quote: Because the potential is symmetric under $x\to-x$, we expect that there will be solutions of definite parity. Could someone kindly explain why this is true? And perhaps also what it means physically?
Good question! First you need to know that parity refers to the behavior of a physical system, or one of the mathematical functions that describe such a system, under reflection. There are two "kinds" of parity: if $f(x) = f(-x)$, we say the function $f$ has even parity; if $f(x) = -f(-x)$, we say the function $f$ has odd parity. Of course, for most functions, neither of those conditions is true, and in that case we would say the function $f$ has indefinite parity. Now, have a look at the time-independent Schrödinger equation in 1D: $$-\frac{\hbar^2}{2m}\frac{\mathrm{d}^2}{\mathrm{d}x^2}\psi(x) + V(x)\psi(x) = E\psi(x)$$ and notice what happens when you reflect $x\to -x$: $$-\frac{\hbar^2}{2m}\frac{\mathrm{d}^2}{\mathrm{d}x^2}\psi(-x) + V(-x)\psi(-x) = E\psi(-x)$$ If you have a symmetric (even) potential, $V(x) = V(-x)$, this is exactly the same as the original equation except that we've transformed $\psi(x) \to \psi(-x)$. Since the two functions $\psi(x)$ and $\psi(-x)$ satisfy the same equation, you should get the same solutions for them, except for an overall multiplicative constant; in other words, $$\psi(x) = a\psi(-x)$$ Normalizing $\psi$ requires that $|a| = 1$, and applying the reflection twice gives $\psi(x) = a^2\psi(x)$, so $a^2 = 1$; that leaves two possibilities: $a = +1$ (even parity) and $a = -1$ (odd parity). As for what this means physically, it tells you that whenever you have a symmetric potential, you should be able to find a basis of eigenstates which have definite even or odd parity (though I haven't proved that here,* only made it seem reasonable). In practice, you get linear combinations of eigenstates with different parities, so the actual state may not actually be symmetric (or antisymmetric) around the origin, but it does at least tell you that if your potential is symmetric, you could construct a symmetric (or antisymmetric) state. That's not guaranteed otherwise. You'd probably have to get input from someone else as to what exactly definite-parity states are used for, though, since that's out of my area of expertise (unless you care about parity of elementary particles, which is rather weirder). *There is a parity operator $P$ that reverses the orientation of the space: $Pf(x) = f(-x)$. Functions of definite parity are eigenfunctions of this operator. I believe you can demonstrate the existence of a definite-parity eigenbasis by showing that $[H,P] = 0$.
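The footnote's claim (that $[H,P]=0$ gives a definite-parity eigenbasis) is easy to test numerically. Here is a short sketch (my own, with $\hbar=m=1$ and the harmonic oscillator as an assumed example of a symmetric potential): diagonalising a finite-difference Hamiltonian on a symmetric grid gives eigenfunctions whose overlap with their own mirror image is $\pm 1$, i.e. they come out even, odd, even, odd, ...

```python
import numpy as np

N, L = 1001, 10.0                      # symmetric grid on [-L, L]
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
V = 0.5 * x**2                         # assumed symmetric potential: V(x) = V(-x)

# H = p^2/2 + V with a second-order finite-difference kinetic term
main = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)
for n in range(4):
    u = psi[:, n]
    parity = u @ u[::-1]               # +1 for even, -1 for odd (u is normalised)
    print(f"E_{n} = {E[n]:.4f}   parity = {parity:+.3f}")
# expected: energies ~ 0.5, 1.5, 2.5, 3.5 and parities +1, -1, +1, -1
```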
{ "source": [ "https://physics.stackexchange.com/questions/13980", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5046/" ] }
14,056
Einstein's Cross has been attributed to gravitational lensing . However, most examples of gravitational lensing are crescents known as Einstein's rings . I can easily understand the rings and crescents, but I struggle to comprehend the explanation that gravitational lensing accounts for Einstein's cross. I found this explanation , but it was not satisfactory. (Image source) (Image source)
The middle galaxy in Einstein's cross has an elliptical mass distribution which is wider in the direction of the short leg of the cross (originally, this said long leg), with a center of mass where you see the galaxy. The object is slightly to the right of the center of the ellipse, in the direction of the long leg of the cross (the original answer had the direction reversed). This type of lensing is acheivable in such a configuration, when the lensing object is relatively close to us, so that the rays pass the central region, where the quadrupole moment asymmetry of the gravitational field is apparent. Lensing map Given a light source, call the line between us and the source the z axis, and parametrize outgoing light rays by the x-y coordinates of their intesection with an x-y plane one unit distance away from the source in our direction. This is a good parametrization for the tiny angles one is dealing with. The light rays are parametrized by a two dimensional vector v. These light rays then go through a lensing region, and come out going in another direction. Call their intersection point with the x-y plane going through our position v'. The lensing problem is completely determined once you know v' as a function of v. We can only see those rays that come to us, that is, those rays where v'(v) is zero. The number and type of images are entirely determined by the number and type of zeros of this vector field. This is why it is difficult to reconstruct the mass distribution from strong-lensing--- many different vector fields can have the same zeros. The only thing we can observe is the number of zeros, and the Jacobian of the vector field v' at the zero. The Jacobian tells you the linear map between the source and the observed image, the shear, magnification, inversion. The lensing map is always asymptotically linear, v'(v)= v for large v, because far away rays are not lensed, and the scale of v is adjusted to make this constant 1. Generic strong lensing In a generic strong lensing problem, the vector field v has only simple zeros. The Jacobian is a diagonalizable matrix with nonzero eigenvalues. This means that each image is perfectly well defined, not arced or smeared. The image is arced only in the infinitely improbable case that you have a singular Jacobian. But we see gravitational arcs all the time! The reason for this is that for the special case of a spherically symmetric source, the Jacobian is always singular. The source, the center of symmetry, and us make a plane, and this plane includes the z-axis, and necessarily includes the direction of the image. The Jacobian at a zero of v' always has a zero eigenvalue in the direction perpendicular to this plane. This means that the spherically-symmetric far-field of any compact source will produce arcs, or smears. When the lensing object is very far, the rays that get to us are far away from the source, and we see far-field arcs and smears. When the lensing galaxy is close, the lensing field has no special symmetry, and we see points with no smearing. So despite the intuition from point sources and everyday lenses, Einstein's cross is the generic case for lensing, the arcs and smears are special cases. You can see this by holding a pen-light next to a funhouse-mirror. Generically, at any distance, you will see the pen light reflected at multiple images, but only near special points do you get smearing or arcing. Topological considerations There is a simple topological theorem about this vector field v'. 
If you make a large circle in the v plane, and go around it counterclockwise, the value v'(v) along this circle makes a counterclockwise loop once around. This is the winding number of the loop. You can easily prove the following properties of the winding number: Every loop has a winding number if you divide a loop in two, the winding number of the two parts add up to the winding number of the loop. the winding number of a small circle is always 0, unless the vector field is zero at inside the circle. Together they tell you what type of zeros can occur in a vector field based on its behavior at infinity. The winding number of the vector field in a small circle around a zero is called its index. The index is always +1 or -1 generically, because any other index happens only when these types of index zeros collide, so it is infinitely improbable. I will call the +1 zeros "sources" although they can be sources sinks or rotation/spiral points. The -1 zeros are called "saddles". The images at saddles are reflected. The images at sources are not. These observations prove the zero theorem: the number of sources plus the number of saddles is equal to the winding number of a very large circle. This means that there are always an odd number of images in a generic vector field, and always one more source than saddle. A quick search reveals that this theorem is known as the "odd number theorem" in the strong lensing community. The odd number paradox This theorem is very strange, because it is exactly the opposite of what you always see! The generic images, like Einstein's cross, almost always have an even number of images. The only time you see an odd number of images is when you see exactly one image. What's the deal? The reason can be understood by going to one dimension less, and considering the one-dimensional vector field x'(x). In two dimensions, the light-ray map is defined by a zeros of a real valued function. These zeros also obey the odd-number theorem--- the asymptotic value of x'(x) is negative for negative x and positive for positive x, so there are an odd number of zero crossings. But if you place a point-source between you and the object, you generically see exactly two images! The ray above that is deflected down, and they ray below that is deflected up. You don't ever see an odd number. How does the theorem fail? The reason is that the point source has tremendously large deflections when you get close, so that the vector field is discontinuous there. Light rays that pass very close above the point are deflected very far down, and light rays that pass very close below are deflected far up. the discontinuity has a +1 index, and it fixes the theorem. If you smooth out the point source into a concentrated mass distribution, the vector field becomes continuous again, but one of the images is forced to be right behind the continuous mass distribution, with extremely small magnification. So the Einstein cross has five images: there are four visible images, and one invisible images right behind the foreground galaxy. This requires no fine tuning--- the fifth image occurs where the mass distribution is most concentrated, which is also where the galaxy is. Even if the galaxy were somehow transparent, the fifth image would be extremely dim, because it is where the gradient of the v field is biggest, and the smaller this gradien, the bigger the magnification. Einstein's cross After analyzing the general case, it is straightforward to work out qualitatively what is happening in Einstein's cross. 
There is a central mass, as in all astrophysical lenses, so there is an invisible central singularity/image with index +1. The remaining images must consist of 2 sources and 2 saddles. The most likely configuration is that the two sources are the left and right points on the long leg of the cross, and the two saddles are the top and bottom points (in my original answer, I had the orientation backwards. To justify the orientation choice, see the quantitative analysis below). You can fill in the qualitative structure of the v'(v) vector field by drawing its flow lines. The image below is the result. It is only a qualitative picture, but you get to see which way the light is deflecting (I changed the image to reflect the correct physics): (qualitative flow-line sketch: http://i55.tinypic.com/de0n0l.png) The flow lines start at the two sources, and get deflected around the two saddles, with some lines going off to infinity and some lines going into the central singularity/sink. There is a special box going around source-saddle-source-saddle which cuts the plane in two; inside the box, all the source flows end on the central singularity/image, and outside, all the source flows end at infinity. The flow shows that the apparent fourfold symmetry is not there at all. The two sources are completely different from the two saddles. The direction of light deflection is downward towards the long axis of the cross, and inward toward the center. This is the expected deflection from a mass distribution which is elliptical, oriented along the long direction of the galaxy. Model (The stuff in this section was wrong. The correct stuff is below.) General Astrophysical Lensing The general problem is easy to solve, and gives more insight into what you can extract from strong lensing observations. The first thing to note is that the deflection of a particle moving at the speed of light past a point mass in Newton's theory, when the deflection is small, is given by the integral of the force over a straight line, divided by the nearly constant speed c, and this straightforward integral gives a deflection which is: $$ \Delta\theta = -{R_s\over b}$$ where $R_s = {2GM\over c^2}$ is the Schwarzschild radius, $b$ is the impact parameter (the distance of closest approach), and everything is determined by dimensional analysis except the prefactor, which is as I gave it. The General Relativity deflection is twice this, because the space-space metric components contribute an equal amount, as is easiest to see in Schwarzschild coordinates in the large radius region, and this is a famous prediction of GR. When the deflections are small, and they are always small fractions of a degree in the actual images, the total deflection is additive over the point masses that make up the lensing mass. Further, the path of the light ray from the distant light source is only near the lensing mass for a very small fraction of the total transit, and this lensing region is much smaller than the distance to us, or the distance between the light source and the lensing mass. These two observations mean that you can squash all the material in the lensing mass into a single x-y plane, and get the same deflection, up to corrections which go as the ratio of the radius of a galaxy to the distance from us/source to the galaxy, both of which are safely infinitesimal. The radius of a galaxy and its dark-matter cloud is a million light years, while the light source and us are a billion light years distant.
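As a quick numerical illustration of the deflection formula above (a minimal Python sketch; the choice of the Sun as the deflector and a grazing ray is my own example, not part of the original answer):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
b = 6.957e8          # m: solar radius, used as the impact parameter of a grazing ray

R_s = 2 * G * M_sun / c**2      # Schwarzschild radius, about 2.95 km
newton = R_s / b                # Newtonian deflection magnitude, R_s / b
gr = 2 * newton                 # the GR deflection is twice the Newtonian value

print(f"GR deflection of a grazing ray: {math.degrees(gr) * 3600:.2f} arcsec")
# about 1.75 arcsec
```

This reproduces the classic 1.75-arcsecond light-bending result, and it confirms that the deflections relevant to galaxy lensing are tiny fractions of a degree, which is what justifies adding the point-mass deflections linearly and squashing the lens into a single plane.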
You convert $\Delta\theta$ to the $x-y$ plane coordinates I am using by multiplying by a unit distance. This gives the amount and direction of deflection from a given point mass. The total deflection of a light ray at impact parameter $b$ is given by the sum over all point masses in the galaxy and its associated dark matter of this vector contribution, which is four times the mass (twice the Schwarzschild radius) divided by the distance, pointing directly toward the mass. This sum is $\Delta v$. What is important to note is that this sum is equal to the solution to a completely different problem, namely the 2-d gravitational field of (four times) the squashed planar mass. In 2d, gravity goes like $1/r$. The planar gravitational field of the planar mass distribution gives $\Delta v$, and it is most important to note that this means that $\Delta v$ is the gradient of the 2d gravitational potential: $$ \Delta v = - \nabla \phi$$ where $$\phi(x) = \int \rho(u) \ln(|x-u|) d^2u$$ where the two dimensional density $\rho(u)$ is the integral of the three dimensional density in the $z$ direction (times $4G/c^2$). This is important, because you can easily determine $\phi$ from the mass distribution by well known methods for solving Poisson's equation in 2d, and there are many exact solutions. The impact parameter $b$ is equal to $vR_1$, the original direction the light ray goes times the distance from the light source to the lensing object, and the position this light ray reaches when it gets to us is: $$ v'(v) = v(R_1 + R_2) + \Delta v(vR_1) R_2$$ Choosing a new normalization for $v$ so that $vR_1$ is the new $v$, and choosing a normalization for $v'$ so that $v'(v)$ is $v$ at large distances: $$ v'(v) = v - {R_1\over R_1+R_2} \nabla \phi(v) $$ This is important, because it means that the whole thing is a gradient, the gradient of: $$ v'(v) = - \nabla(\phi'(v))$$ $$\phi'(v) = {R_1\over R_1+R_2} \phi(v) - {v^2\over 2} $$ The resulting potential also has a 2d interpretation--- it is the gravitational potential of the planar squashed mass distribution in a Newton-Hooke background, where objects are pushed outward by a force proportional to their distance. The 2-d gravity potential is easy to calculate, often in closed form, and to find the lensing profile, you just look for the maxima, minima, and saddles of the 2-d potential plus a quadratically falling potential. This solves the problem for all practical astrophysical situations. I found it remarkable that the deflection field is integrable, but perhaps there is a simpler way of understanding this. Point mass The 2d potential of a point mass is $$ \phi(v) = \ln(|v|)$$ and for an object directly behind it, you get $$ \phi'(v) = A \ln(|v|) - {|v|^2\over 2}$$ This gives a central singularity (or, if you spread out the mass in the center, a dim image right on top of the mass) plus a perfect ring where $r=\sqrt{A}$. This is the Einstein ring image. Moving the light source off center just shifts the relative position of the two potential centers. The new potential is: $$ \phi'(v) = {A\over 2} \ln(x^2 + y^2) - {(x-a)^2 + y^2\over 2}$$ Setting the x and y derivatives of the potential to zero, you find two critical points (not counting the singular behavior at x=y=0). For a small displacement a, both points have a nearly singular Jacobian, so they give very large magnifications, and smears or arcs.
The two images occur at $$ y=0, \qquad x= {a\over 2} \pm \sqrt{A + \left({a\over 2}\right)^2}$$ So the image on the side the object is displaced toward is moved further out; at large values of a, the second image is right on top of the lensing mass, and at small values of a, the two images are moved in the direction of the displacement by half the amount of the displacement. Quadrupole mass distribution Consider two masses of size $\tfrac{1}{2}$ at positions $\pm a$. This gives a potential which is a superposition of the two masses: $$ \phi(x,y) = {1\over 4} \ln((x-a)^2 + y^2 ) + {1\over 4} \ln((x+a)^2 + y^2) = {1\over 2} \ln(r^2) + a^2 { x^2 - y^2\over 2r^2}$$ The part in addition to the ordinary $M\ln(r)$ potential of a point source is a quadrupole. Lensing in a quadrupole has a simple algebraic solution. Differentiating, and subtracting the linear part gives $$ {Ax\over r^2} ( 1 + {a^2\over r^4}(6y^2 - 2x^2) - {r^2\over A} ) =0$$ $$ {Ay\over r^2} ( 1 + {a^2\over r^4}(2y^2 - 6x^2) - {r^2\over A} ) =0$$ The x=0, y=0 point is at the singular position. The real critical points are at the other simultaneous solutions: $$ x=0, y= \pm\sqrt{A}\sqrt{{1\over 2} \pm \sqrt{{1\over 4} + {2a^2\over A}}}$$ $$ y=0, x= \pm\sqrt{A}\sqrt{{1\over 2} \pm \sqrt{{1\over 4} - {2a^2\over A}}}$$ Of these eight points, two are imaginary (taking the minus sign inside the square root for y), and two are outside the domain of validity of the solution (taking the minus sign inside the square root for x--- the point is at $\sqrt{2}a$, which is right by the point masses making the quadrupole). This leaves four points. But they are all local maxima; none of these are saddles. The saddles are found by solving the nontrivial equations in parentheses for x and y. Taking the difference of the two equations reveals that $x=\pm y$, which gives the four saddle solutions: $$ \pm x=\pm y = \sqrt{A}$$ There are eight images for a near-center source lensed by a quadrupole mass. For small values of a, the two images along the line of the two masses are pulled nearer by a fractional change which is $a^2\over A$, the two images perpendicular to the line of the two masses are pulled apart by a fractional change of $a^2\over A$, while the four images on the diagonals are at the location of the point-source ring. For me, this was surprising, but it is obvious in hindsight. The quadrupole field and the Newton-Hooke field both point along the lines $y=\pm x$ on the diagonals, and their sum goes from pointing inward near the origin to pointing outward far away, so it must have a zero. The zeros are topological and stable to small deformations, so if you believe that the field of the galaxy is spherical plus quadrupole, the Einstein cross light source has to be far enough off-center to change the topology of the critical points. Quadrupole mass distribution/off-center source To analyze moving off center qualitatively, it helps to understand how saddles and sources respond to movement. If you move the light source, you move the Newton-Hooke center. The result is that the points that were previously sources and saddles now have a nonzero vector value. When the position of a source slowly gets a nonzero vector value, that means that the source is moving in the direction opposite this value. If a saddle gets a nonzero value, the saddle is moving in the direction of this value reflected in the attracting axis of the saddle.
This means that if you start with a very asymmetric quadrupole, and you slide the source along the long axis of the source-saddle-source-saddle-source-saddle-source-saddle ellipse towards one of the sources at the end of the long axis, the short-axis sources and the short-axis saddles approach each other. They annihilate when they touch, and they touch at a finite displacement, since the result must smoothly approach the spherically symmetric solution. Right after the sources and saddles annihilate, you get a cross, but it doesn't look too much like Einstein's cross--- the surviving two saddles and two sources are more asymmetric, and the narrow arm is much narrower than the wide arm. Line source For the lensing from a line source, you write the 2-d potential for a line oriented along the y-axis (it's the same as a plane source in 3d, a point source in 1d, or a d-hyperplane source in d+1 dimensions--- a constant field pointing toward the object on either side): $$ \phi(x) = B|x|$$ And subtract off the Newton-Hooke source part, with a center at $x=a$. $$\phi'(x) = B|x| - {1\over 2} ((x-a)^2 + y^2)$$ The critical points lie on the x-axis (the line through the source perpendicular to the string) by symmetry, and they are very simple to find: $$ y=0, x= B+a$$ $$ y=0, x= -B+a$$ These are the two images from a long dark-matter filament, or any other linear extended source. Cosmic strings give the same sort of lensing, but the string-model of cosmic strings gives ultra-relativistic sources which produce a conical deficit angle, and are technically not covered by the formalism here. But the result is the same--- doubled images. If you spread out the line source so that it is a uniform density between two lines parallel to the y-axis (this would come from squashing a square beam of uniform mass density into a plane), the lensing outside the two lines is unaffected, by the 2-d Gauss's law. The interior is no longer singular, and you get a third image, as usual, at x=y=0. Elongated density plus point source The next model I will consider is a string plus a point. This is to model an elongated mass density with a concentration of mass in the center. The far-field is quadrupolar, and this was analyzed previously, but now I am interested in the case where the mass density is comparable in length to the lensed image, or even longer. Spreading out the string into a strip does nothing to the lensing outside the strip, and spreading out the point to a sphere also does nothing to the lensing outside the sphere, so this is a good model of many astrophysical situations, where there is an elongated dark matter cloud, perhaps a filament, with a galaxy concentrated somewhere in the middle of the filament. The 2-d potential, plus on-center Newton-Hooke, is $$ \phi'(x) = {A\over 2} \ln(x^2 + y^2) + B|x| - {x^2 + y^2\over 2}$$ The solution to the critical point equations gives images at $$ y=0, x= {B\over 2} + \sqrt{A + \left({B\over 2}\right)^2}$$ $$ y=0, x= -{B\over 2} - \sqrt{A + \left({B\over 2}\right)^2}$$ where one of the two solutions of each quadratic equation is unphysical. This lensing is obvious--- it's the same as for the string, because the light source is right behind the center mass. Looking along the string itself, there are two more critical points: the x-direction field becomes zero (it is singular for an infinitely narrow string, but ignore that), and the gradient of the potential is in the y direction, by symmetry; for y near zero it is inward pointing, and for large y it is outward pointing, so there is a critical point.
The string potential has a minimum on the string, so in the x-direction you have a minimum, but the Newton-Hooke potential is taking over from the point-source potential at the critical point, so in the y-direction these two points are potential maxima. These two critical points are at: $$ x=0, y=\pm\sqrt{A}$$ And this is very robust to thickening the string and the point into strips/spheres, or blobs, so long as the shape is about the same. This is a generic source-saddle-source-saddle cross. In the string case, the two saddles become infinitely dim, because the Jacobian blows up, but in the physical case where the thickness of the string is comparable to the lensing region, the Jacobian is of the same order for the sources and the saddles. Moving the light source off center towards positive x, perpendicular to the string orientation, pushes the left source inwards, the right source outward, and the two saddles back and out. This is exactly the Einstein cross configuration. Point/strip --- Best Fit Consider a strip of dark matter which is as wide or wider than the lensing configuration, with a point galaxy in the middle. This gives the lensing potential: $$\phi'(x,y) = {A\over 2} \ln(x^2 + y^2 ) + {B\over 2} x^2 - {(x-a)^2 + y^2\over 2}$$ valid inside the strip. Outside the strip, instead of quadratic growth, the potential grows linearly, like for the string. The strip is more useful, because it is simultaneously the simplest elongated model to solve for an off-center object, and also the most accurate fit to Einstein's cross. The parameter a tells you how far to the right of center the light source is. The equations for the critical points are: $$ x({A\over r^2} - (1-B) ) + a = 0$$ $$ y({A\over r^2} - 1) =0$$ There are two solutions when y=0, at $$ x= {a\over 2(1-B)} \pm \sqrt{ {(a/2)^2\over (1-B)^2} +{A\over 1-B}}$$ These are the two sources, on the x axis, as in the string-point problem. There are two additional solutions when ${A\over r^2} - 1=0$, and these are at $$ x= {-a\over B} , y = \pm \sqrt{A-{a^2\over B^2}}$$ And these are the usual saddles of line-string lensing. For a small a, the two saddles move slightly off the symmetry line (to $x=-a/B$), and the long arm of the cross moves to the right. This is a perfect fit to Einstein's cross. To see how good a fit it is, look at the following plot of the lensing produced by $$ \phi' = \ln(x^2 + y^2) - {0.9(x-0.04)^2 + y^2\over 2}$$ The black circle is the center of symmetry of the point/strip, the cross next to it is the true position of the quasar, and the four crosses are the locations of the critical points, while the contour-line density at the saddles/sources tells you the inverse brightness. This matches the data perfectly. Summary The quadrupole lensing has a hard time reproducing Einstein's cross exactly, although it can get cross-like patterns. The reason is the eight images for an on-center light source. This means that to get a cross, two saddle-source pairs have to annihilate. Once they do, the remaining saddles and sources are not in such a nice cross; they tend to be too close together, not spread out nicely like the image. The quadrupole crosses are already approaching the asymptotic spherical limit, where the saddles and sources become the degenerate spherical arcs. The brightness of the saddles and sources is not approximately the same, and the brightness of the far image on the long leg of the cross is not approximately the same as the brightness of the near image--- it is not a good model.
This means that we should consider dark matter around the galaxy extending in an elongated ellipse, with the elongation along the short leg of the cross. The light source is slightly to the right of center. This reproduces Einstein's cross exactly. This is almost surely the orientation of the dark-mass distribution in the galaxy, but the details of the distribution are not revealed just from the critical points, which is all strong lensing provides.
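To make the point/strip model concrete, here is a minimal numerical sketch (Python with numpy/scipy; the particular parameter values A, B, a are illustrative choices of mine, not fitted to the real system). It locates the critical points of $\phi'$ by running a root finder on its gradient from a grid of starting points, and reproduces the cross: two images on the x-axis straddling the center, two off-axis images near $x=-a/B$, $y\approx\pm\sqrt{A}$, plus the dim fifth image hiding at the central singularity.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative parameters (my choice): point-mass strength A, strip strength B,
# and source displacement a to the right of center.  Valid inside the strip.
A, B, a = 1.0, 0.3, 0.04

def grad(p):
    """Gradient of phi'(x,y) = (A/2) ln(x^2+y^2) + (B/2) x^2 - ((x-a)^2 + y^2)/2."""
    x, y = p
    r2 = x * x + y * y
    return [A * x / r2 + B * x - (x - a),
            A * y / r2 - y]

roots = []
for x0 in np.linspace(-2, 2, 9):
    for y0 in np.linspace(-2, 2, 9):
        if x0 == 0 and y0 == 0:
            continue                      # avoid starting on the singularity
        sol, info, ier, _ = fsolve(grad, [x0, y0], full_output=True)
        if ier == 1 and np.hypot(*sol) > 1e-3:
            if not any(np.allclose(sol, r, atol=1e-6) for r in roots):
                roots.append(sol)

for r in sorted(roots, key=lambda r: np.arctan2(r[1], r[0])):
    print(f"image at x = {r[0]:+.3f}, y = {r[1]:+.3f}")
# Expect two roots on the x-axis (the long leg) and two near x = -a/B,
# y = +/- sqrt(A - a^2/B^2) (the short leg): a cross.
```

Nothing in the sketch assumes the closed-form answers; it only differentiates the potential, so it is also a convenient way to experiment with other mass models (quadrupole, string, thickened strip) by swapping out `grad`.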
{ "source": [ "https://physics.stackexchange.com/questions/14056", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2843/" ] }
14,064
I'm trying to find out if black holes could be created by focusing enough light into a small enough volume. So far I have found (any or all may be incorrect): Maxwell's equations are linear, dictating no interaction of radiation. The Kerr effect and self-focusing has been observed in mediums, but not vacuums. Masses bending light have been observed per general relativity. Photons are said to have no rest mass, just energy and momentum (???). General relativity seems to provide for energy to energy interaction. This leads to a more specific question: Does electromagnetic radiation or energy curve space like mass curves space?
The answer to your first question is yes . Building on Demetrios Christodoulou's seminal work showing that black holes can form "generically" from focusing of gravitational waves starting from an initial space-time that is arbitrarily close to flat, Pin Yu has recently shown that one can also dynamically (and generically, in the sense that the formation is stable under small perturbations) form a black hole starting with only electromagnetic waves . Of course, the interaction between electromagnetism and gravity means that as soon as you set the thing in motion, you will pick up gravitational radiation. And also, since a precise covariant notion of local gravitational energy is not available, the idea that the space-time starts out with only electromagnetic waves is a specific, frame-dependent mathematical definition; one should keep that in mind before trying to draw too much physical significance out of the casual statement of the theorem. For your specific second question, the answer is also yes . Einstein's equation specifies that $$ G_{\mu\nu} = T_{\mu\nu}$$ (in units where $8\pi G = c = 1$). The left hand side, the Einstein tensor, is purely geometrical, and reflects the curvature of space-time. The right hand side comes from the energy-momentum contributions from the matter fields. The standard way of coupling electromagnetic waves to general relativity (Einstein-Maxwell theory) gives that the right hand side is zero only when the electromagnetic field vanishes. So the content of Einstein-Maxwell theory is precisely that electromagnetic radiation curves space-time.
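For concreteness (this is the standard textbook form of the Maxwell stress-energy tensor, not something specific to the papers cited above, and I am suppressing the unit-dependent prefactor), the right hand side in Einstein-Maxwell theory is $$ T_{\mu\nu} = F_{\mu\alpha}F_{\nu}{}^{\alpha} - \tfrac{1}{4}\,g_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}, $$ whose time-time component in a local orthonormal frame is the familiar energy density $\tfrac{1}{2}(E^2+B^2)$. That density vanishes only when the field itself vanishes, which is the sense in which the right hand side is zero only when the electromagnetic field is zero.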
{ "source": [ "https://physics.stackexchange.com/questions/14064", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5066/" ] }
14,074
This question recently appeared on Slashdot : Slashdot posts a fair number of physics stories. Many of us, myself included, don't have the background to understand them. So I'd like to ask the Slashdot math/physics community to construct a curriculum that gets me, an average college grad with two semesters of chemistry, one of calculus, and maybe 2-3 applied statistics courses, all the way to understanding the mathematics of general relativity. What would I need to learn, in what order, and what texts should I use? Before I get killed here, I know this isn't a weekend project, but it seems like it could be fun to do in my spare time for the next ... decade. It seems like something that would be a good addition to this site: I think it's specific enough to be answerable but still generally useful. The textbook aspect is covered pretty well by Book recommendations , but beyond that: What college-level subjects in physics and math are prerequisites to studying general relativity in mathematical detail?
First, general relativity is typically taught at a 4th-year undergraduate level, or sometimes even a graduate level; obviously this presumes a good undergraduate training in mathematics and physics. Personally, I'm of the opinion that one should go and learn other physics before tackling general relativity: a solid background in classical mechanics with exposure to Hamiltonians, Lagrangians, and action principles at least. A course in electromagnetism (at the level of Griffiths), I think, is also a good thing to have. Mathematically, I think the pre-reqs are a bit higher, and since the question asks about mathematical detail, I'll focus on that. I learnt relativity from a very differential-geometry-centric viewpoint (I was taught by a mathematician) and I found that my understanding of differential geometry was very helpful for understanding the physics. I've never been a fan of Hartle's book, which I think is greatly lacking on the mathematical details but is good for physical intuition. However, having worked in relativity for some time now, I think it's better to teach from a more mathematical point of view so you can easily pick up the higher level concepts. Additionally, I think you really need to understand what is going on mathematically to understand why we must construct things the way we do. I'm going to have to disagree with nibot here and say that you'll need more than just linear algebra and college calculus. Calculus you must have seen at least up to vector calculus and be familiar with it. Linear algebra is something you should understand very well, considering that we are dealing with vectors. A good course in more abstract algebra dealing with vector spaces, inner products/orthogonality, and that sort of thing is a must. To my knowledge this is normally taught in a second year linear algebra course and is typically kept out of first year courses. Obviously a course in differential equations is required, and probably a course in partial differential equations is required as well. I don't think a course in analysis is required; however, since the question is more about the mathematical aspect, I'd say having a course in analysis up to topological spaces is a huge plus. That way, if you're curious about the more mathematical nature of manifolds, you could pick up a book like Lee and be off to the races. If you want to study anything at a higher level, say Wald, then a course in analysis including topological spaces is a must. You could get away without it, but I think it's better to have at the end of the day. I'd also say a good course in classical differential geometry (2 and 3 dimensional things) is a good pre-req to build a geometrical idea of what is going on, albeit the methods used in those types of courses do not generalise. Of course, there is also the whole bit about mathematical maturity. It's a funny thing that is impossible to quantify. I, despite having the right mathematical background, did not understand immediately the whole idea of introducing a tangent space on each point of a manifold and how $\{\partial_{i}\}$ form a basis for this vector space. It took me a bit longer to figure this out. You can always skip all this and get away with just the physicist's classical index gymnastics (tensors are things that transform this certain way), however I think if you want to be a serious student of relativity you should learn the more mathematical point of view. EDIT: On the suggestion of jdm, a course in classical field theory is good as well.
There is a nice little Dover book appropriately titled Classical Field Theory that gets to general relativity right at the end. However, I never took a course on it, and I don't think many universities offer one anyway, unfortunately. It is also a good introduction if you want to go on to learn quantum field theory.
{ "source": [ "https://physics.stackexchange.com/questions/14074", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/124/" ] }
14,082
I get the physical significance of vector addition & subtraction. But I don't understand what do dot & cross products mean? More specifically, Why is it that dot product of vectors $\vec{A}$ and $\vec{B}$ is defined as $AB\cos\theta$ ? Why is it that cross product of vectors $\vec{A}$ and $\vec{B}$ is defined as $AB\sin\theta$ , times a unit vector determined from the right-hand rule? To me, both these formulae seem to be arbitrarily defined (although, I know that it definitely wouldn't be the case). If the cross product could be defined arbitrarily, why can't we define division of vectors? What's wrong with that? Why can't vectors be divided?
I get the physical significance of vector addition & subtraction. But I don't understand what do dot & cross products mean? Perhaps you would find the geometric interpretations of the dot and cross products more intuitive: The dot product of A and B is the length of the projection of A onto B multiplied by the length of B (or the other way around--it's commutative). The magnitude of the cross product is the area of the parallelogram with two sides A and B . The orientation of the cross product is orthogonal to the plane containing this parallelogram. Why can't vectors be divided? How would you define the inverse of a vector such that $\mathbf{v} \times \mathbf{v}^{-1} = \mathbf{1}$? What would be the "identity vector" $\mathbf{1}$? In fact, the answer is sometimes you can . In particular, in two dimensions, you can make a correspondence between vectors and complex numbers, where the real and imaginary parts of the complex number give the (x,y) coordinates of the vector. Division is well-defined for the complex numbers. The cross-product only exists in 3D. Division is defined in some higher-dimensional spaces too (such as the quaternions ), but only if you give up commutativity and/or associativity. Here's an illustration of the geometric meanings of dot and cross product, from the Wikipedia articles for the dot product and the cross product.
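For readers who like to check these geometric statements numerically, here is a minimal sketch (Python/numpy; the particular vectors are arbitrary choices):

```python
import numpy as np

A = np.array([2.0, 1.0, 0.0])
B = np.array([0.5, 3.0, 1.0])

# Dot product = (signed length of the projection of A onto B) x (length of B)
B_hat = B / np.linalg.norm(B)
proj_A_on_B = np.dot(A, B_hat)
print(np.isclose(np.dot(A, B), proj_A_on_B * np.linalg.norm(B)))   # True

# Cross product: |A x B| is the area of the parallelogram spanned by A and B,
# and A x B is orthogonal to both A and B (right-hand rule fixes the sign).
C = np.cross(A, B)
cos_theta = np.dot(A, B) / (np.linalg.norm(A) * np.linalg.norm(B))
area = np.linalg.norm(A) * np.linalg.norm(B) * np.sqrt(1.0 - cos_theta**2)  # |A||B| sin(theta)
print(np.isclose(np.linalg.norm(C), area))                          # True
print(np.isclose(np.dot(C, A), 0.0), np.isclose(np.dot(C, B), 0.0)) # True True
```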
{ "source": [ "https://physics.stackexchange.com/questions/14082", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/447/" ] }
14,116
Let $\hat{x} = x$ and $\hat{p} = -i \hbar \frac {\partial} {\partial x}$ be the position and momentum operators, respectively, and $|\psi_p\rangle$ be the eigenfunction of $\hat{p}$ and therefore $$\hat{p} |\psi_p\rangle = p |\psi_p\rangle,$$ where $p$ is the eigenvalue of $\hat{p}$. Then, we have $$ [\hat{x},\hat{p}] = \hat{x} \hat{p} - \hat{p} \hat{x} = i \hbar.$$ From the above equation, denoting by $\langle\cdot\rangle$ an expectation value, we get, on the one hand $$\langle i\hbar\rangle = \langle\psi_p| i \hbar | \psi_p\rangle = i \hbar \langle \psi_p | \psi_p \rangle = i \hbar$$ and, on the other $$\langle [\hat{x},\hat{p}] \rangle = \langle\psi_p| (\hat{x}\hat{p} - \hat{p}\hat{x}) |\psi_p\rangle = \langle\psi_p|\hat{x} |\psi_p\rangle p - p\langle\psi_p|\hat{x} |\psi_p\rangle = 0$$ This suggests that $i \hbar = 0$. What went wrong?
Both the p and x operators, as operators, do not have eigenvectors in the strict sense. They have distributional eigenvectors which are only defined in a bigger space of functions than the space of square-normalizable wavefunctions, and which should be thought of as only meaningful when smeared a little bit by a smooth test function. The normalization $\langle \psi_p | \psi_p \rangle$ is infinite, because the momentum eigenstate (a plane wave) is extended over all space. Similarly, the normalization of the delta-function wavefunction, the x-operator eigenvector, is infinite, because the square of a delta function has infinite integral. You could state your paradox using $|x\rangle$ states too: $$i\hbar \langle x|x\rangle = \langle x| (\hat{x}\hat{p} - \hat{p}\hat{x})|x\rangle = x \langle x|\hat{p}|x\rangle - \langle x|\hat{p}|x\rangle x = 0$$ Because $|x\rangle$ is only defined when it is smeared a little, you need to use a separate variable for the two occurrences of $x$. So write the full matrix out for this case: $$ i\hbar \langle x|y\rangle = x\langle x|\hat{p}|y\rangle - \langle x|\hat{p}|y\rangle y = (x-y)\langle x|\hat{p}|y\rangle$$ And now x and y are separate variables which can be smeared independently, as required. The p operator's matrix elements are the derivative of a delta function: $$ \langle x|\hat{p}|y\rangle = -i\hbar \delta'(x-y)$$ So what you get (up to the factor $-i\hbar$) is $$ (x-y)\delta'(x-y)$$ and you are taking $x=y$ naively by setting the first factor to zero without noticing that the delta-function factor is horribly singular, and the result is therefore ill defined without more careful evaluation. If you multiply by smooth test functions for x and y, to smear the answer out a little bit: $$ \int f(x) g(y) (x-y) \delta'(x-y)\, dx\, dy= -\int f(x)g(x)\, dx = -\int f(x) g(y) \delta(x-y)\, dx\, dy $$ where the first identity comes from integrating by parts in x, and setting to zero all terms that vanish under the evaluation of the delta function. The result is that $$ (x-y)\delta'(x-y) = -\delta(x-y)$$ so that $(x-y)\langle x|\hat{p}|y\rangle = -i\hbar\,(x-y)\delta'(x-y) = i\hbar\,\delta(x-y)$: the result is not zero, and it is in fact consistent with the commutation relation. This delta-function identity appears, with explanation, in the first mathematical chapter of Dirac's "The Principles of Quantum Mechanics". It is unfortunate that formal manipulations with distributions lead to paradoxes so easily. For a related but different paradox, consider the trace of $\hat{x}\hat{p}-\hat{p}\hat{x}$.
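As a quick numerical sanity check of the smearing identity above (a minimal Python sketch; the Gaussian test functions and the width eps are arbitrary choices of mine):

```python
import numpy as np

# Check (x - y) delta'(x - y) = -delta(x - y) by replacing delta with a narrow
# Gaussian, integrating against smooth test functions f(x) g(y), and comparing
# with -int f(x) g(x) dx.
eps = 0.05
x = np.linspace(-6.0, 6.0, 1601)
y = np.linspace(-6.0, 6.0, 1601)
dx = x[1] - x[0]
dy = y[1] - y[0]

f = np.exp(-x**2)                 # test function f(x)
g = np.exp(-(y - 0.5)**2)         # test function g(y)

u = x[:, None] - y[None, :]
# derivative (with respect to u) of the smeared delta  exp(-u^2/2 eps^2)/(eps sqrt(2 pi))
delta_prime = -u / eps**3 / np.sqrt(2.0 * np.pi) * np.exp(-u**2 / (2.0 * eps**2))

lhs = np.sum(f[:, None] * g[None, :] * u * delta_prime) * dx * dy
rhs = -np.sum(f * np.exp(-(x - 0.5)**2)) * dx

print(lhs, rhs)   # the two numbers agree to within about a percent
```

The small mismatch is the expected smearing and grid error; it shrinks as eps and the grid spacing are reduced together.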
{ "source": [ "https://physics.stackexchange.com/questions/14116", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5070/" ] }
14,140
When you spray gas from a compressed spray can, the gas gets very cold, even though the can is at room temperature. I think that when it goes from high pressure to a lower one, it gets cold, right? But what is the reason behind that, literally?
The temperature of the gas that is sprayed goes down because it adiabatically expands. This is simply because there is no heat transferred to or from the gas as it is sprayed, for the process is too fast. (See this Wikipedia article for more details on adiabatic processes.) The mathematical explanation goes as follows: let the volume of the gas in the container be $V_i$, and its temperature $T_i$. After the gas is sprayed it occupies volume $V_f$ and has temperature $T_f$. In an adiabatic process $TV^{\,\gamma-1}=\text{constant}$ ($\gamma$ is a number bigger than one), and so $$ T_iV_i^{\,\gamma-1}=T_fV_f^{\,\gamma-1}, $$ or $$ T_f=T_i\left(\frac{V_i}{V_f}\right)^{\gamma-1}. $$ Since $\gamma>1$ and, clearly, $V_f>V_i$ (the volume available to the gas after it's sprayed is much bigger than the one in the container), we get that $T_f<T_i$, i.e. the gas cools down when it's sprayed. By the way, adiabatic expansion is the reason why you are able to blow both hot and cold air from your mouth. When you want to blow hot air you open your mouth wide, but when you want to blow cold air you tighten your lips and force the air through a small hole. That way the air goes from a small volume to the big volume around you, and cools down according to the equations above.
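As a rough numerical illustration of the formula above (a minimal Python sketch; the adiabatic index $\gamma \approx 1.4$ for a diatomic-like gas and the factor-of-ten expansion are illustrative assumptions of mine, and a real spray can is further complicated by evaporation of the liquid propellant):

```python
gamma = 1.4        # assumed adiabatic index (diatomic-like gas)
T_i = 293.0        # initial temperature in kelvin (room temperature)
expansion = 10.0   # assumed volume ratio V_f / V_i

T_f = T_i * (1.0 / expansion) ** (gamma - 1.0)
print(f"T_f = {T_f:.0f} K, an idealized drop of about {T_i - T_f:.0f} K")
# roughly 117 K, i.e. a drop of about 176 K for a perfectly adiabatic tenfold expansion
```

The real temperature drop you feel is smaller, because the expansion is not perfectly adiabatic and the escaping gas quickly mixes with warm room air, but the sign and the mechanism are exactly the ones in the formula.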
{ "source": [ "https://physics.stackexchange.com/questions/14140", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5088/" ] }
14,165
My professor told me recently that Area is a vector. A Google search gave me the following definition for a vector: Noun: A quantity having direction as well as magnitude, esp. as determining the position of one point in space relative to another. My question is - what is the direction of area? I can relate to the fact that velocity is a vector. The velocity of a moving motorbike for example, has a definite direction as well as a definite magnitude assuming that the bike is moving in a straight line & not accelerating. My friend gave me this explanation for the direction of Area vector. Consider a rectangular plane in space. He argued that the orientation of the plane in space can only be described by considering area as a vector & not a scalar. I still wasn't convinced. Suppose the plane was placed such that its faces were perpendicular to the directions, North & South for example. Now the orientation of the plane is the same irrespective whether the so called vector points to north or to the south. Further what is the direction of a sphere's area? Does considering area as a vector have any real significance? Please explain. Thanks in advance.
This might be more of a math question. This is a peculiar thing about three-dimensional space. Note that in three dimensions, an area such as a plane is a two dimensional subspace. On a sheet of paper you only need two numbers to unambiguously denote a point. Now imagine standing on the sheet of paper: the direction your head points to will always be a way to know how this plane is oriented in space. This is called the "normal" vector to this plane; it is at a right angle to the plane. If you now choose the convention to have the length of this normal vector equal to the area of this surface, you get a complete description of the two dimensional plane: its orientation in three dimensional space (the direction of the vector) and how big this plane is (the length of this vector). Mathematically, you can express this by the "cross product" $$\vec c=\vec a\times\vec b$$ whose magnitude is defined as $|c| = |a||b|\sin\theta$, which is equal to the area of the parallelogram that those two vectors (which really define a plane) span. To steal this picture from wikipedia's article on the cross product: As I said in the beginning, this is a very special thing about three dimensions; in higher dimensions it doesn't work as neatly, for various reasons. If you want to learn more about this topic, a keyword would be "exterior algebra". Update: As for the physical significance of this concept, prominent examples are vector fields flowing through surfaces. Take a circular wire. This circle can be oriented in various ways in 3D. If you have an external magnetic field, you might know that this can induce an electric current, proportional to the rate of change of the amount flowing through the circle (think of this as how much the arrows perforate the area). If the magnetic field vectors are parallel to the circle (and thus orthogonal to its normal vector) they do not "perforate" the area at all, so the flow through this area is zero. On the other hand, if the field vectors are orthogonal to the plane (i.e. parallel to the normal), they maximally "perforate" this area and the flow is maximal. If you change the orientation between those two states, you can get an electrical current.
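Here is a small numerical illustration of the "perforation" picture (Python/numpy; the particular loop dimensions and field values are arbitrary choices of mine): the area vector of a flat parallelogram loop is the cross product of its two edge vectors, and the flux of a uniform field through the loop is the dot product of the field with that area vector, maximal when the field is along the normal and zero when it lies in the plane of the loop.

```python
import numpy as np

# Two edge vectors spanning a flat parallelogram loop (example values in metres).
a = np.array([0.2, 0.0, 0.0])
b = np.array([0.0, 0.1, 0.0])
S = np.cross(a, b)            # area vector: magnitude = area (0.02 m^2), direction = normal

B_along_normal = np.array([0.0, 0.0, 0.5])   # field parallel to the normal (tesla)
B_in_the_plane = np.array([0.5, 0.0, 0.0])   # field lying in the plane of the loop

print(np.dot(B_along_normal, S))   # 0.01  -> maximal flux: B times the area
print(np.dot(B_in_the_plane, S))   # 0.0   -> no flux: the arrows do not perforate the loop
```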
{ "source": [ "https://physics.stackexchange.com/questions/14165", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4711/" ] }
14,235
In my previous question I asked Please explain C14 half-life . The OP mentioned that I was thinking of linear decay and that C14 decays exponentially. As I understand it, C14 is always in a state of decay. If we know the exact rate of decay, then shouldn't it be linear? How do we know that C14 decays exponentially rather than linearly, and have there been any studies to verify this?
It's also worth noting that there is nothing special about atoms. If you have any system where in every period of time an event has a certain chance of happening, which only depends on internal effects of the object, with no memory of or communication with others - you will get the same decay curve. It's purely a matter of the statistics. If you have a handful of coins and every minute toss them all and remove all the heads into a separate pile - the number of coins remaining in the hand will decay with a half-life of 1 minute. What is special about carbon14 - and why it is useful for archaeology - is that new carbon14 is being made all the time in the atmosphere, and while you are alive you take in this new carbon, so the decays don't have any effect until you die. It's like tossing the coins, but while you are alive adding new random coins after each toss - but then when you die have somebody else start to remove the heads. If you assume you died with an equal number of heads and tails, you can work out how many tosses have happened since you died - and so how long ago the sample died.
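The coin analogy is easy to simulate (a minimal Python sketch; the starting number of coins is an arbitrary choice of mine): each minute every surviving coin is tossed and the heads are removed, and the surviving population tracks the exponential $N_0\,2^{-t}$.

```python
import random

N0 = 100_000
coins = N0
for minute in range(1, 11):
    # toss every remaining coin; remove the ones that come up heads
    coins = sum(1 for _ in range(coins) if random.random() < 0.5)
    expected = N0 * 0.5 ** minute
    print(f"after {minute:2d} min: {coins:6d} remain (exponential prediction {expected:9.1f})")
```

Nothing in the simulation knows about atoms; the one-minute half-life and the exponential curve come purely from the fixed per-toss survival probability, which is the whole point of the analogy.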
{ "source": [ "https://physics.stackexchange.com/questions/14235", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5103/" ] }
14,639
I am trying to understand the saddle point approximation and apply it to a problem I have but the treatments I have seen online are all very mathematical and are not giving me a good qualitative description of the method and why it's used and for what it's used. So my question is, how is the saddle point approximation used in physics? What is it approximating? When can it be used?
In the simplest form the saddle point method is used to approximate integrals of the form $$I \equiv \int_{-\infty}^{\infty} dx\,e^{-f(x)}.$$ The idea is that the negative exponential function is so rapidly decreasing — $\;e^{-10}$ is roughly $10^4$ times smaller than $e^{-1}$ — that we only need to look at the contribution from where $f(x)$ is at its minimum. Let's say $f(x)$ is at its minimum at $x_0$. Then we could approximate $f(x)$ by the first terms of its Taylor expansion. $$f(x) \approx f(x_0) + \frac{1}{2}(x- x_0)^2 f''(x_0) +\cdots.$$ There is no linear term because $x_0$ is a minimum. This may be a terrible approximation to $f(x)$ when $x$ is far from $x_0$, but if $f(x)$ is significantly bigger than its minimal value in this region then it doesn't really matter, since the contribution to the integral will be negligible either way. Anyway, plugging this into our integral, $$I \approx \int_{-\infty}^{+\infty} dx\, e^{-f(x_0) - \frac{1}{2}(x-x_0)^2 f''(x_0)}= e^{-f(x_0)}\int_{-\infty}^{\infty} dx\, e^{-\frac{1}{2}(x-x_0)^2 f''(x_0)}.$$ The gaussian integral can be evaluated to give you $$I = e^{-f(x_0)}\sqrt{\frac{2\pi}{f''(x_0)}}.$$ So where does this come up in physics? Probably the first example is Stirling's approximation. In statistical mechanics we are always counting configurations of things, so we get all sorts of expressions involving $N!$ where $N$ is some tremendously huge number like $10^{23}$. Doing analytical manipulation with factorials is no fun, so it would be nice if there was some more tractable expression. Well, we can use the fact that: $$N! =\int_0^\infty dx\, e^{-x}x^N = \int_0^\infty dx \exp(-x +N\ln x).$$ So now you can apply the saddle point approximation with $f(x) = x -N\ln x$. You can work out the result yourself. You should also convince yourself that in this case the approximation really does become better and better as $N\rightarrow \infty$. (Also you have to change the lower bound of the integral from $0$ to $-\infty$.) There are lots of other examples, but I don't know your background so it's hard to say what will be a useful reference. The WKB approximation can be thought of as a saddle point approximation. A common example is in partition function / path integral calculations, where we want to calculate $$\mathcal{Z} = \int d\phi_i \exp(-\beta F[\phi_i]),$$ where the $\phi_i$ are some local variables and $F[\cdot]$ is the free energy functional. We do the same as before but now with multiple variables. Again we can find the set $\{\phi_i^{(0)}\}$ that minimizes $F$ and then expand $$F[\phi_i] = F[\phi_i^{(0)}] +\frac{1}{2}\sum_{ij}(\phi_i -\phi_i^{(0)})(\phi_j -\phi_j^{(0)})\frac{\partial^2F}{\partial\phi_i\partial\phi_j}.$$ This gives you the ground state contribution, times a Gaussian (free) theory which you can handle by the usual means. Following the earlier remarks we expect this to be good in the limit $\beta\rightarrow \infty$, although your mileage may vary.
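As a numerical check of the Stirling example above (a minimal Python sketch; working out the saddle explicitly: $f(x) = x - N\ln x$ has its minimum at $x_0 = N$ with $f''(x_0) = 1/N$, so the saddle point formula gives $N! \approx N^N e^{-N}\sqrt{2\pi N}$):

```python
import math

for N in (5, 20, 100):
    exact_log = math.lgamma(N + 1)                                      # ln N!  (exact)
    saddle_log = N * math.log(N) - N + 0.5 * math.log(2 * math.pi * N)  # ln of N^N e^-N sqrt(2 pi N)
    rel_err = 1.0 - math.exp(saddle_log - exact_log)
    print(f"N = {N:3d}: relative error of the saddle point estimate = {rel_err:.2%}")
# roughly 1.7% at N = 5, 0.4% at N = 20, 0.08% at N = 100:
# the approximation improves as N grows, as claimed.
```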
{ "source": [ "https://physics.stackexchange.com/questions/14639", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1939/" ] }
14,700
My friend and I have been wracking our heads with this one for the past 3 hours... We have 2 point masses, $m$ and $M$ in a perfect world separated by radius r. Starting from rest, they both begin to accelerate towards each other. So we have the gravitational force between them as: $$F_g ~=~ G\frac{Mm}{r^2}$$ How do we find out at what time they will collide? What we're having trouble with is this function being a function of $r$, but I have suspected it as actually a function of $t$ due to the units of $G$ being $N(m/kg)^2$. I've tried taking a number of integrals, which haven't really yielded anything useful. Any guidance? No, this is not an actual homework problem, we're just 2 math/physics/computer people who are very bored at work :)
You should be able to use energy conservation to write down the velocities of the bodies as a function of time. $$ \textrm{Energy conservation (KE = PE): } \frac{p^2}{2}\left( \frac{1}{m} + \frac{1}{M} \right) = GMm\left(\frac{1}{r} - \frac{1}{r_0}\right) $$ And $$ \frac{dr}{dt} = -(v + V) = -p\left( \frac{1}{m} + \frac{1}{M} \right) $$ Momentum conservation ensures that the magnitude of the momenta of both masses is the same. Does this help? Substituting into the second equation from the first you should be able to solve for: $$ \int_0^T dt = -\int_{r_0}^0 dr \sqrt{\frac{rr_0}{2G(M+m)(r_0-r)}} = \frac{\pi}{2\sqrt{2}}\frac{r_0^{3/2}}{\sqrt{G(M+m)}} $$
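Plugging numbers into the final expression (a minimal Python sketch; the choice of two 1 kg masses released from rest 1 m apart is my own example, just to get a feel for the scale), and also checking the closed form against a direct numerical evaluation of the integral:

```python
import math
from scipy.integrate import quad

G = 6.674e-11              # m^3 kg^-1 s^-2
M, m, r0 = 1.0, 1.0, 1.0   # two 1 kg point masses starting 1 m apart, at rest

# closed form: T = (pi / (2 sqrt(2))) * r0^(3/2) / sqrt(G (M + m))
T_closed = math.pi / (2.0 * math.sqrt(2.0)) * r0**1.5 / math.sqrt(G * (M + m))

# direct numerical evaluation of the time integral (the endpoint singularity at r = r0 is integrable)
integrand = lambda r: math.sqrt(r * r0 / (2.0 * G * (M + m) * (r0 - r)))
T_numeric, _ = quad(integrand, 0.0, r0)

print(T_closed, T_numeric)   # both about 9.6e4 seconds, i.e. roughly 27 hours
```

So two kilogram-scale masses a metre apart in otherwise empty space take on the order of a day to fall together, which gives a feel for how weak gravity is at laboratory scales.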
{ "source": [ "https://physics.stackexchange.com/questions/14700", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4459/" ] }
14,932
It is commonly asserted that no consistent, interacting quantum field theory can be constructed with fields that have spin greater than 2 (possibly with some allusion to renormalization). I've also seen (see Bailin and Love, Supersymmetry) that we cannot have helicity greater than 1, absenting gravity. I am yet to see an explanation as to why this is the case; so can anyone help?
Higher spin particles have to be coupled to conserved currents, and there are no conserved currents of high spin in quantum field theories. The only conserved currents are vector currents associated with internal symmetries, the stress-energy tensor current, the angular momentum tensor current, and the spin-3/2 supercurrent, for a supersymmetric theory. This restriction on the currents constrains the spins to 0, 1/2 (which do not need to be coupled to currents), spin 1 (which must be coupled to the vector currents), spin 3/2 (which must be coupled to a supercurrent) and spin 2 (which must be coupled to the stress-energy tensor). The argument is heuristic, and I do not think it rises to the level of a mathematical proof, but it is plausible enough to be a good guide. Preliminaries: All possible symmetries of the S-matrix You should accept the following result of O'Raifeartaigh, Coleman and Mandula--- the continuous symmetries of the particle S-matrix, assuming a mass-gap and Lorentz invariance, are a Lie group of internal symmetries, plus the Lorentz group. This theorem is true, given its assumptions, but these assumptions leave out a lot of interesting physics: Coleman-Mandula assume that the symmetry is a symmetry of the S-matrix, meaning that it acts nontrivially on some particle state. This seems innocuous, until you realize that you can have a symmetry which doesn't touch particle states, but only acts nontrivially on objects like strings and membranes. Such symmetries would only be relevant for the scattering of infinitely extended, infinite energy objects, so they don't show up in the S-matrix. The transformations would become trivial whenever these sheets close in on themselves to make a localized particle. If you look at Coleman and Mandula's argument (a simple version, which gives the flavor, is presented in Argyres' supersymmetry notes; there is an excellent complete presentation in Weinberg's quantum field theory book, and the original article is accessible and clear), it almost begs for the objects which are charged under the higher symmetry to be spatially extended. When you have extended fundamental objects, it is not clear that you are doing field theory anymore. If the extended objects are solitons in a renormalizable field theory, you can zoom in on ultra-short distance scattering, and consider the ultra-violet fixed point theory as the field theory you are studying, and this is sufficient to understand most examples. But the extended-object exception is the most important one, and must always be kept in the back of the mind. Coleman and Mandula assume a mass gap. The standard extension of this theorem to the massless case just extends the maximal symmetry from the Poincare group to the conformal group, to allow the space-time part to be bigger. But Coleman and Mandula use analyticity properties which I am not sure can be used in a conformal theory, with all the branch-cuts which are not controlled by mass-gaps. The result is extremely plausible, but I am not sure if it is still rigorously true. This is an exercise in Weinberg, which unfortunately I haven't done. Coleman and Mandula ignore supersymmetries. This is fixed by Haag–Lopuszanski–Sohnius, who use the Coleman-Mandula theorem to argue that the maximal symmetry structure of a quantum field theory is a superconformal group plus internal symmetries, and that the supersymmetry must close on the stress-energy tensor.
What the Coleman-Mandula theorem means in practice is that whenever you have a conserved current in a quantum field theory, and this current acts nontrivially on particles, then it must not carry any space-time indices other than the vector index, with the only exceptions being the geometric currents: a spinor supersymmetry current, $J^{\alpha\mu}$, the (Belinfante symmetric) stress-energy tensor $T^{\mu\nu}$, the (Belinfante) angular momentum tensor $S^{\mu\nu\lambda} = x^{\mu} T^{\nu\lambda} - x^\nu T^{\mu\lambda}$, and sometimes the dilation current $D^\mu = x^\mu T^\alpha_\alpha$ and conformal and superconformal currents too. The spin of the conserved currents is found by representation theory--- antisymmetric indices are spin 1, whether there is one index or two, so the spin of the internal symmetry currents is 1, and that of the stress energy tensor is 2. The other geometric tensors derived from the stress energy tensor are also restricted to spin less than 2, with the supercurrent having spin 3/2. What is a QFT? Here this is a practical question--- for this discussion, a quantum field theory is a finite collection of local fields, each corresponding to a representation of the Poincare group, with a local interaction Lagrangian which couples them together. Further, it is assumed that there is an ultra-violet regime (a pseudo-limit) where all the masses are irrelevant, and where all the couplings are still relatively small, so that perturbative particle exchange is ok. I say pseudo-limit, because this isn't a real ultra-violet fixed point, which might not exist, and it does not require renormalizability, only unitarity in the regime where the theory is still perturbative. Every particle must interact with something to be part of the theory. If you have a noninteracting sector, you throw it away as unobservable. The theory does not have to be renormalizable, but it must be unitary, so that the amplitudes must unitarize perturbatively. The couplings are assumed to be weak at some short distance scale, so that you don't make a big mess at short distances, but you can still analyze particle emission order by order. The Froissart bound for a mass-gap theory states that the total cross-section cannot grow faster than the square of the logarithm of the energy. This means that any faster-than-constant growth in the scattering amplitude must be cancelled by something. Propagators for any spin The propagators for massive/massless particles of any spin follow from group theory considerations. These propagators have the schematic form $${s^J\over s-m^2}$$ The all-important s scaling, with its J-dependence, can be extracted from the physically obvious angular dependence of the scattering amplitude. If you exchange a spin-J particle with a short propagation distance (so that the mass is unimportant) between two long plane waves (so that their angular momentum is zero), you expect the scattering amplitude to go like $\cos(\theta)^J$, just because rotations act on the helicity of the exchanged particle with this factor. For example, when you exchange an electron between an electron and a positron, forming two photons, and the internal electron has an average momentum k and a helicity +, then if you rotate the contribution to the scattering amplitude from this exchange around the k-axis by an angle $\theta$ counterclockwise, you should get a phase of $\theta/2$ in the outgoing photon phases. In terms of Mandelstam variables, the angular amplitude goes like $(1-t)^J$, since t is the cosine of the scattering variable, up to some scaling in s.
For large t, this grows as t^J, but "t" is the "s" of a crossed channel (up to a little bit of shifting), and so crossing t and s, you expect the growth to go with the power of the angular dependence. The denominator is fixed at $J=0$, and this law is determined by Regge theory. So that for $J=0,1/2$, the propagators shrink at large momentum, for $J=1$, the scattering amplitudes are constant in some directions, and for $J>1$ they grow. This schematic structure is of course complicated by the actual helicity states you attach on the ends of the propagator, but the schematic form is what you use in Weinberg's argument. Spin 0, 1/2 are OK That spin 0 and 1/2 are ok with no special treatment, and this argument shows you why: the propagator for spin 0 is $$ 1\over k^2 + m^2$$ Which falls off in k-space at large k. This means that when you scatter by exchanging scalars, your tree diagrams are shrinking, so that they don't require new states to make the theory unitary. Spinors have a propagator $$ 1\over \gamma\cdot k + m $$ This also falls off at large k, but only linearly. The exchange of spinors does not make things worse, because spinor loops tend to cancel the linear divergence by symmetry in k-space, leaving log divergences which are symptomatic of a renormalizable theory. So spinors and scalars can interact without revealing substructure, because their propagators do not require new things for unitarization. This is reflected in the fact that they can make renormalizable theories all by themselves. Spin 1 Introducing spin 1, you get a propagator that doesn't fall off. The massive propagator for spin 1 is $$ { g_{\mu\nu} - {k_\mu k_\nu\over m^2} \over k^2 + m^2 }$$ The numerator projects the helicity to be perpendicular to k, and the second term is problematic. There are directions in k-space where the propagator does not fall off at all! This means that when you scatter by spin-1 exchange, these directions can lead to a blow-up in the scattering amplitude at high energies which has to be cancelled somehow. If you cancel the divergence with higher spin, you get a divergence there, and you need to cancel that, and then higher spin, and so on, and you get infinitely many particle types. So the assumption is that you must get rid of this divergence intrinsically. The way to do this is to assume that the $k_\mu k_\nu$ term is always hitting a conserved current. Then it's contribution vanishes. This is what happens in massive electrodynamics. In this situation, the massive propagator is still ok for renormalizability, as noted by Schwinger and Feynman, and explained by Stueckelberg. The $k_\mu k_\nu$ is always hitting a $J^\mu$, and in x-space, it is proportional to the divergence of the current, which is zero because the current is conserved even with a massive photon (because the photon isn't charged). The same argument works to kill the k-k part of the propagator in Yang-Mills fields, but it is much more complicated, because the Yang-Mills field itself is charged, so the local conservation law is usually expressed in a different way, etc,etc. The heuristic lesson is that spin-1 is only ok if you have a conservation law which cancels the non-shrinking part of the numerator. This requires Yang-Mills theory, and the result is also compatible with renormalizability. If you have a spin-1 particle which is not a Yang-Mills field, you will need to reveal new structure to unitarize its longitudinal component, whose propagator is not properly shrinking at high energies. 
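To spell out the current-conservation mechanism in one line (schematically, suppressing the structure of the external states): the exchange amplitude between two currents is $$ \mathcal{M} \sim J_1^{\mu}(k)\,\frac{g_{\mu\nu} - k_\mu k_\nu/m^2}{k^2+m^2}\,J_2^{\nu}(k), $$ and current conservation $\partial_\mu J^\mu = 0$ becomes $k_\mu J^\mu(k)=0$ in momentum space, so the dangerous $k_\mu k_\nu/m^2$ piece drops out, leaving $J_1\cdot J_2/(k^2+m^2)$, with no unphysical growth coming from the propagator itself.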
Spin 3/2 In this case, you have a Rarita-Schwinger field, and the propagator is going to grow like $\sqrt{s}$ at large energies, just from the Mandelstam argument presented before. The propagator growth leads to unphysical growth in scattering exchanging this particle, unless the spin-3/2 field is coupled to a conserved current. The conserved current is the supersymmetry current, by the Haag–Lopuszanski–Sohnius theorem, because it is a spinor of conserved currents. This means that the spin-3/2 particle should interact with a spin-3/2 conserved supercurrent in order to be consistent, and the number of gravitinos is (less than or equal to) the number of supercharges. The gravitinos are always introduced in a supermultiplet with the graviton, but I don't know if it is definitely impossible to introduce them with a spin-1 partner, and couple them to the supercurrent anyway. These spin-3/2/spin-1 multiplets will probably not be renormalizable barring some supersymmetry miracle. I haven't worked it out, but it might be possible. Spin 2 In this case, you have a perturbative graviton-like field $h_{\mu\nu}$, and the propagator contains terms growing linearly with s. In order to cancel the growth in the numerator, you need the tensor particle to be coupled to a conserved current to kill the parts with too-rapid growth, and produce a theory which does not require new particles for unitarity. The conserved quantity must be a tensor $T_{\mu\nu}$. Now one can appeal to the Coleman-Mandula theorem and conclude that the conserved tensor current must be the stress energy tensor, and this gives general relativity, since the stress tensor includes the stress of the h field too. There is a second tensor conserved quantity, the angular momentum tensor $S_{\mu\nu\sigma}$, which is also spin-2 (it might look like it's spin 3, but it's antisymmetric on two of its indices). You can try to couple a spin-2 field to the angular momentum tensor. To see if this works requires a detailed analysis, which I haven't done, but I would guess that the result will just be a non-dynamical torsion coupled to the local spin, as required by the Einstein-Cartan theory. Witten mentions yet another possibility for spin 2 in chapter 1 of Green, Schwarz and Witten, but I don't remember what it is, and I don't know whether it is viable. Summary I believe that these arguments are due to Weinberg, but I personally only read the sketchy summary of them in the first chapters of Green, Schwarz and Witten. They do not seem to me to have the status of a theorem, because the argument is particle by particle, it requires independent exchange in a given regime, and it discounts the possibility that unitarity can be restored by some family of particles. Of course, in string theory, there are fields of arbitrarily high spin, and unitarity is restored by propagating all of them together. For field theories with bound states which lie on Regge trajectories, you can have arbitrarily high spins too, so long as you consider all the trajectory contributions together, to restore unitarity (this was one of the original motivations for Regge theory--- unitarizing higher spin theories). For example, in QCD, we have nuclei of high ground-state spin. So there are stable S-matrix states of high spin, but they come in families with other excited states of the same nuclei.
The conclusion here is that if you have higher spin particles, you can be pretty sure that you will have new particles of even higher spin at higher energies, and this chain of particles will not stop until you reveal new structure at some point. So the tensor mesons observed in the strong interaction mean that you should expect an infinite family of strongly interacting particles, petering out only when the quantum field substructure is revealed. Some comments James said: "It seems higher spin fields must be massless so that they have a gauge symmetry and thus a current to couple to" and "A massless spin-2 particle can only be a graviton." These statements are as true as the arguments above are convincing. From the cancellation required for the propagator to become sensible, higher spin fields are fundamentally massless at short distances. The spin-1 fields become massive by the Higgs mechanism, the spin 3/2 gravitinos become massive through spontaneous SUSY breaking, and this gets rid of Goldstone bosons/Goldstinos. But all this stuff is, at best, only at the "mildly plausible" level of argument--- the argument is over propagator unitarization with each propagator separately having no cancellations. It's actually remarkable that it works as a guideline, and that there aren't a slew of supersymmetric exceptions of higher spin theories with supersymmetry enforcing propagator cancellations and unitarization. Maybe there are, and they just haven't been discovered yet. Maybe there's a better way to state the argument which shows that unitarity can't be restored by using positive spectral-weight particles. Big Rift in the 1960s James asks: Why wasn't this pointed out earlier in the history of string theory? The history of physics cannot be well understood without appreciating the unbelievable antagonism between the Chew/Mandelstam/Gribov S-matrix camp, and the Weinberg/Glashow/Polyakov field theory camp. The two sides hated each other, did not hire each other, and did not read each other, at least not in the west. The only people that straddled both camps were older folks and Russians--- Gell-Mann more than Landau (who believed the Landau pole implied S-matrix), Gribov and Migdal more than anyone else in the west other than Gell-Mann and Wilson. Wilson did his PhD in S-matrix theory, for example, as did David Gross (under Chew). In the 1970s, S-matrix theory just plain died. All practitioners jumped ship rapidly in 1974, with the triple-whammy of Wilsonian field theory, the discovery of the charm quark, and asymptotic freedom. These results killed S-matrix theory for thirty years. Those that jumped ship include all the original string theorists who stayed employed: notably Veneziano, who was convinced that gauge theory was right when 't Hooft showed that large-N gauge fields give the string topological expansion, and Susskind, who didn't mention Regge theory after the early 1970s. Everybody stopped studying string theory except Scherk and Schwarz, and Schwarz was protected by Gell-Mann, or else he would never have been tenured and funded. This sorry history means that not a single S-matrix theory course is taught in the curriculum today, nobody studies it except a few theorists of advanced age hidden away in particle accelerators, and the main S-matrix theory, string theory, is not properly explained and remains completely enigmatic even to most physicists.
There were some good reasons for this--- some S-matrix people said silly things about the consistency of quantum field theory--- but to be fair, quantum field theory people said equally silly things about S-matrix theory. Weinberg came up with these heuristic arguments in the 1960s, which convinced him that S-matrix theory was a dead end, or rather, to show that it was a tautological synonym for quantum field theory. Weinberg was motivated by models of pion-nucleon interactions, which was a hot S-matrix topic in the early 1960s. The solution to the problem is the chiral symmetry breaking models of the pion condensate, and these are effective field theories. Building on this result, Weinberg became convinced that the only real solution to the S-matrix was a field theory of some particles with spin. He still says this every once in a while, but it is dead wrong . The most charitable interpretation is that every S-matrix has a field theory limit, where all but a finite number of particles decouple, but this is not true either (consider little string theory). String theory exists, and there are non-field theoretic S-matrices, namely all the ones in string theory, including little string theory in (5+1)d, which is non-gravitational. Lorentz indices James comments: regarding spin, I tried doing the group theoretic approach to an antisymmetric tensor but got a little lost - doesn't an antisymmetric 2-form (for example) contain two spin-1 fields? The group theory for an antisymmetric tensor is simple: it consists of an "E" and "B" field which can be turned into the pure chiral representations E+iB, E-iB. This was also called a "six-vector" sometimes, meaning E,B making an antisymmetric four-tensor. You can do this using dotted and undotted indices more easily, if you realize that the representation theory of SU(2) is best done in indices--- see the "warm up" problem in this answer: Mathematically, what is color charge?
{ "source": [ "https://physics.stackexchange.com/questions/14932", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5201/" ] }
14,939
Gödel's incompleteness theorem prevents a universal axiomatic system for math. Is there any reason to believe that it also prevents a theory of everything for physics? Edit: I haven't before seen a formulation of Gödel that included time. The formulation I've seen is that any axiomatic systems capable of doing arithmetic can express statements that will be either 1) impossible to prove true or false or 2) possible to prove both true and false. This leads to the question: Are theories of (nearly) everything, axiomatic systems capable of doing arithmetic? (Given they are able to describe a digital computer, I think it's safe to say they are.) If so, it follows that such a theory will be able to describe something that the theory will be either unable to analyse or will result in an ambiguous result. (Might this be what forces things like the Heisenberg uncertainty principle?)
The answer is no, because although a "Theory of Everything" means a computational method of describing any situation, it does not allow you to predict the eventual outcome of the evolution an infinite time into the future, but only to plod along, predicting the outcome little by little as you go on. Gödel's theorem is a statement that it is impossible to predict the infinite time behavior of a computer program. Theorem: Given any precise way of producing statements about mathematics, that is, given any computer program which spits out statements about mathematics, this computer program either produces falsehoods, or else does not produce every true statement. Proof: Given the program "THEOREMS" which outputs theorems (it could be doing deductions in Peano Arithmetic, for example), write the computer program SPITE to do this:
1. SPITE prints its own code into a variable R.
2. SPITE runs THEOREMS, and scans the output looking for the theorem "R does not halt".
3. If it finds this theorem, it halts.
If you think about it, the moment THEOREMS says that "R does not halt", it is really proving that "SPITE does not halt", and then SPITE halts, making THEOREMS into a liar. So if "THEOREMS" only outputs true theorems, SPITE does not halt, and THEOREMS does not prove it. There is no way around it, and it is really trivial. (A toy code sketch of this construction is given a little further below.) The reason it has a reputation for being complicated is due to the following properties of the logic literature:
1. Logicians are studying formal systems, so they tend to be overly formal when they write. This bogs down the logic literature in needless obscurity, and holds back the development of mathematics. There is very little that can be done about this, except exhorting them to try to clarify their literature, as physicists strive to do.
2. Logicians made a decision in the 1950s to not allow computer science language in the description of algorithms within the field of logic. They did this purposefully, so as to separate the nascent discipline of CS from logic, and to keep the unwashed hordes of computer programmers out of the logic literature.
Anyway, what I presented is the entire proof of Gödel's theorem, using a modern translation of Gödel's original 1931 method. For a quick review of other results, and for more details, see this MathOverflow answer: https://mathoverflow.net/a/72151/36526 . As you can see, Gödel's theorem is a limitation on understanding the eventual behavior of a computer program, in the limit of infinite running time. Physicists do not expect to figure out the eventual behavior of arbitrary systems. What they want to do is give a computer program which will follow the evolution of any given system to finite time. A ToE is like the instruction set of the universe's computer. It doesn't tell you what the output is, only what the rules are. A ToE would be useless for predicting the future, or rather, it is no more useful for prediction than Newtonian mechanics, statistics, and some occasional quantum mechanics for the day-to-day world. But it is extremely important philosophically, because when you find it, you have understood the basic rules, and there are no more surprises down beneath. Incorporating Comments There were comments which I will incorporate into this answer. It seems that comments are only supposed to be temporary, and some of these observations I think are useful. Hilbert's program was an attempt to establish that set theoretic mathematics is consistent using only finitary means.
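Before continuing with Hilbert's program, here is the toy code sketch of the SPITE construction promised above. This is purely illustrative (my own sketch, not part of the original answer): theorems() is a hypothetical stub standing in for a real theorem enumerator, so nothing interesting happens when you run it; the point is only to show the shape of the self-referential argument.

```python
# Conceptual sketch of SPITE; not a working proof system.
import inspect

def theorems():
    """Hypothetical enumerator of the theorems of some formal system.
    A real version would do deductions in, say, Peano Arithmetic."""
    yield from ()  # stub: this toy version proves nothing

def spite():
    # Step 1: SPITE obtains its own code (the quining / self-reference step).
    own_code = inspect.getsource(spite)
    target = f"the program {own_code!r} does not halt"
    # Step 2: run THEOREMS and scan its output for the claim that SPITE does not halt...
    for statement in theorems():
        # Step 3: ...and halt the moment that claim appears,
        # which would make the theorem enumerator a liar.
        if statement == target:
            return

spite()  # with the stub enumerator, the scan finds nothing and this returns
```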
There is an interpretation of Gödel's theorem that goes like this:
1. Gödel showed that no system can prove its own consistency.
2. Set theory proves the consistency of Peano Arithmetic.
3. Therefore Gödel kills Hilbert's program of proving the consistency of set theory using arithmetic.
This interpretation is false, and does not reflect Hilbert's point of view, in my opinion. Hilbert left the definition of "finitary" open. I think this was because he wasn't sure exactly what should be admitted as finitary, although I think he was pretty sure of what should not be admitted as finitary: No real numbers, no analysis, no arbitrary subsets of $\Bbb Z$. Only axioms and statements expressible in the language of Peano Arithmetic. No structure which you cannot realize explicitly and constructively, like an integer. So no uncountable ordinals, for example. Unlike his followers, he did not say that "finitary" means "provable in Peano Arithmetic", or "provable in primitive recursive Arithmetic", because I don't think he believed this was strong enough. Hilbert had experience with transfinite induction, and its power, and I think that he, unlike others who followed him in his program, was ready to accept that transfinite induction proves more theorems than just ordinary Peano induction. What he was not willing to accept was axioms based on a metaphysics of set existence. Things like the Powerset axiom and the Axiom of choice. These two axioms produce systems which not only violate intuition, but are further not obviously grounded in experience, so that the axioms cannot be verified by intuition. Those that followed Hilbert interpreted finitary as "provable in Peano Arithmetic" or a weaker fragment, like PRA. Given this interpretation, Gödel's theorem kills Hilbert's program. But this interpretation is crazy, given what we know now. Hilbert wrote a book on the foundations of mathematics after Gödel's theorem, and I wish it were translated into English, because I don't read German. I am guessing that he says in there what I am about to say here. What Finitary Means The definition of finitary is completely obvious today, after 1936. A finitary statement is a true statement about computable objects, things that can be represented on a computer. This is equivalent to saying that a finitary statement is a proposition about integers which can be expressed (not necessarily proved) in the language of Peano Arithmetic. This includes integers, finite graphs, text strings, symbolic manipulations, basically, anything that Mathematica handles, and it includes ordinals too. You can represent the ordinals up to $\epsilon_0$, for example, using a text string encoding of their Cantor normal form. The ordinals which can be fully represented by a computer are limited by the Church-Kleene ordinal, which I will call $\Omega$. This ordinal is relatively small in traditional set theory, because it is a countable ordinal, which is easily exceeded by $\omega_1$ (the first uncountable ordinal), $\omega_\Omega$ (the Church-Kleene-th uncountable ordinal), and the ordinal of a huge cardinal. But it is important to understand that all the computational representations of ordinals are always less than this. So when you are doing finitary mathematics, meaning that you are talking about objects you can represent on a machine, you should be restricting yourself to ordinals less than Church-Kleene. The following argues that this is no restriction at all, since the Church-Kleene ordinal can establish the consistency of any system.
Ordinal Religion Gödel's theorem is best interpreted as follows: Given any (consistent, omega-consistent) axiomatic system, you can make it stronger by adding the axiom "consis(S)". There are several ways of making the system stronger, and some of them are not simply related to this extension, but consider this one. Given any system and a computable ordinal, you can iterate the process of strengthening up to a the ordinal. So there is a map from ordinals to consistency strength. This implies the following: Natural theories are linearly ordered by consistency strength. Natural theories are well-founded (there is no infinite descending chain of theories $A_k$ such that $A_k$ proves the consistency of $A_{k+1}$ for all k). Natural theories approach the Church Kleene ordinal in strength, but never reach it. It is natural to assume the following: Given a sequence of ordinals which approaches the Church-Kleene ordinal, the theories corresponding to this ordinal will prove every theorem of Arithmetic, including the consistency of arbitrarily strong consistent theories. Further, the consistency proofs are often carried out in constructive logic just as well, so really: Every theorem that can be proven, in the limit of Church-Kleene ordinal, gets a constructive proof. This is not a contradiction with Gödel's theorem, because generating an ordinal sequence which approaches $\Omega$ cannot be done algorithmically, it cannot be done on a computer. Further, any finite location is not really philosophically much closer to Church-Kleene than where you started, because there is always infinitely more structure left undescribed. So $\Omega$ knows all and proves all, but you can never fully comprehend it. You can only get closer by a series of approximations which you can never precisely specify, and which are always somehow infinitely inadequate. You can believe that this is not true, that there are statements that remain undecidable no matter how close you get to Church-Kleene, and I don't know how to convince you otherwise, other than by pointing to longstanding conjectures that could have been absolutely independent, but fell to sufficiently powerful methods. To believe that a sufficiently strong formal system resolves all questions of arithmetic is an article of faith, explicitly articulated by Paul Cohen in Set Theory and the Continuum Hypothesis . I believe it, but I cannot prove it. Ordinal Analysis So given any theory, like ZF, one expects that there is a computable ordinal which can prove its consistency. How close have we come to doing this? We know how to prove the consistency of Peano Arithmetic--- this can be done in PA, in PRA, or in Heyting Arithmetic (constructive Peano Arithmetic), using only the axiom Every countdown from $\epsilon_0$ terminates. This means that the proof theoretic ordinal of Peano Arithmetic is $\epsilon_0$. That tells you that Peano arithmetic is consistent, because it is manifestly obvious that $\epsilon_0$ is an ordinal, so all its countdowns terminate. There are constructive set theories whose proof-theoretic ordinal is similarly well understood, see here: "Ordinal analysis: Theories with larger proof theoretic ordinals" . To go further requires an advance in our systems of ordinal notation, but there is no limitation of principle to establishing the consistency of set theories as strong as ZF by computable ordinals which can be comprehended. Doing so would complete Hilbert's program--- it would removes any need for an ontology of infinite sets in doing mathematics. 
You can disbelieve in the set of all real numbers, and still accept the consistency of ZF, or of inaccessible cardinals (using a bigger ordinal), and so on up the chain of theories. Other interpretations Not everyone agrees with the sentiments above. Some people view the undecidable propositions like those provided by Gödel's theorem as somehow having a random truth value, which is not determined by anything at all, so that they are absolutely undecidable. This makes mathematics fundamentally random at its foundation. This point of view is often advocated by Chaitin. In this point of view, undecidability is a fundamental limitation to what we can know about mathematics, and so bears a resemblence to a popular misinterpretation of Heisenberg's uncertainty principle, which considers it a limitation on what we can know about a particle's simultaneous position and momentum (as if these were hidden variables). I believe that Gödel's theorem bears absolutely no resemblence to this misinterpretation of Heisenberg's uncertainty principle. The preferred interpretation of Gödel's theorem is that every sentence of Peano Arithmetic is still true or false, not random, and it should be provable in a strong enough reflection of Peano Arithmetic. Gödel's theorem is no obstacle to us knowing the answer to every question of mathematics eventually. Hilbert's program is alive and well, because it seems that countable ordinals less than $\Omega$ resolve every mathematical question. This means that if some statement is unresolvable in ZFC, it can be settled by adding a suitable chain of axioms of the form "ZFC is consistent", "ZFC+consis(ZFC) is consistent" and so on, transfinitely iterated up to a countable computable ordinal, or similarly starting with PA, or PRA, or Heyting arithmetic (perhaps by iterating up the theory ladder using a different step-size, like adding transfinite induction to the limit of all provably well-ordered ordinals in the theory). Gödel's theorem does not establish undecidability, only undecidability relative to a fixed axiomatization, and this procedure produces a new axiom which should be added to strengthen the system. This is an essential ingredient in ordinal analysis, and ordinal analysis is just Hilbert's program as it is called today. Generally, everyone gets this wrong except the handful of remaining people in the German school of ordinal analysis. But this is one of those things that can be fixed by shouting loud enough. Torkel Franzén There are books about Gödel's theorem which are more nuanced, but which I think still get it not quite right. Greg P says, regarding Torkel Franzén: I thought that Franzen's book avoided the whole 'Goedel's theorem was the death of the Hilbert program' thing. In any case he was not so simplistic and from reading it one would only say that the program was 'transformed' in the sense that people won't limit themselves to finitary reasoning. As far as the stuff you are talking about, John Stillwell's book "Roads to Infinity" is better. But Franzen's book is good for issues such as BCS's question (does Godel's theorem resemble the uncertainty principle). Finitary means computational, and a consistency proof just needs an ordinal of sufficient complexity. Greg P responded: The issue is then what 'finitary' is. I guess I assumed it excluded things like transfinite induction. But it looks like you call that finitary. What is an example of non-finitary reasoning then? 
When the ordinal is not computable, that is, when it is bigger than the Church-Kleene ordinal, the reasoning is infinitary. If you use the set of all reals, or the powerset of $\Bbb Z$ as a set with discrete elements, that's infinitary. Ordinals which can be represented on a computer are finitary, and this is the point of view that I believe Hilbert pushes in the Grundlagen, but it's not translated.
{ "source": [ "https://physics.stackexchange.com/questions/14939", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/754/" ] }
14,968
I was quite surprised to read this all over the news today: Elusive, nearly massive subatomic particles called neutrinos appear to travel just faster than light, a team of physicists in Europe reports. If so, the observation would wreck Einstein's theory of special relativity, which demands that nothing can travel faster than light. — source Apparently a CERN/Gran Sasso team measured a faster-than-light speed for neutrinos. Is this even remotely possible? If so, would it be a real violation of Lorentz invariance or an " almost, but not quite " effect? The paper is on arXiv ; a webcast is/was planned here . News conference video here
You have a few longer answers which were already updated, but here is a concise statement of the situation in mid-2014: An independent measurement by the ICARUS collaboration , also using neutrinos traveling from CERN to Gran Sasso but using independent detector and timing hardware, found detection times "compatible with the simultaneous arrival of all events with equal speed, the one of light." In an edited press release (and probably in the peer-reviewed literature as well), all four of the neutrino experiments at Gran Sasso report results consistent with relativity. The mumblings that begin a few months after the initial report, that a loose cable caused a timing chain error , have been accepted by the experimenters. Frédéric Grosshans links to a nice discussion by Matt Strassler which includes this image: You can clearly see that the timing offset was introduced in mid-2008 and not corrected until the end of 2011. It's important to remember the scale of the problem here. In vacuum, the speed of light is one foot per nanosecond. In copper/poly coaxial cable it's slower, about six inches per nanosecond, and in optical fiber it's comparable. A bad cable connector can take a beautiful digital logic signal and reflect part of it back to the emitter, in a time-dependent way, turning the received signal into an analog mess with a complicated shape. And a cable can go bad if somebody hits it the wrong way with their butt while they are working in the electronics room. (I actually had something similar happen to me on an experiment: I had an analog signal splitter "upstairs" that sent a signal echo back to my detectors "downstairs", and a runty little echoed pulse came back upstairs after about a microsecond and got processed like another event. I wound up spending several thousand dollars on signal terminators to swallow the echo downstairs. It was an unusual configuration and needed unusual termination hardware and I must have answered the question "but couldn't you just" a hundred times.) Gran Sasso is an underground facility for low-background experiments — the detectors can't see GPS satellites directly, because there's a mountain in the way, and their access to the surface is via a tunnel whose main purpose is to carry traffic for a major Italian motorway. I'm quite impressed that they had ~100 ns timing resolution between the two laboratories; the "discovery" came about because they were trying to do ten times better than that. As an experimentalist I don't begrudge the OPERA guys their error at all. I'm sure they spent an entire year shitting pineapples because they couldn't identify the problem. When they finally did release their result, they had the courage to report it at face value. The community was properly incredulous and the wide interest prompted a large number of other checks they could make. Independent measurements were performed. An explanation was found. Science at its best.
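A quick sanity check of the rules of thumb quoted above (my own numbers, with the cable speed left as an assumed velocity-factor parameter):

```python
# Light travel per nanosecond, in feet, in vacuum and in cable.
c = 299_792_458.0   # m/s
foot = 0.3048       # m

print(c * 1e-9 / foot)                        # ~0.98 ft per ns in vacuum
for velocity_factor in (0.5, 0.66):           # assumed range for coax / fiber
    print(velocity_factor * c * 1e-9 / foot)  # ~0.49 to ~0.65 ft per ns
```

So "a foot per nanosecond" in vacuum and "about six inches per nanosecond" in cable are both in the right ballpark, which is why a metre-scale problem in a timing chain shows up at the few-nanosecond level.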
{ "source": [ "https://physics.stackexchange.com/questions/14968", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/66/" ] }
14,973
Earlier today, I saw this link on Facebook about neutrinos going faster than the speed of light, and of course, re-posted. Since then, a couple of my friends have gotten into a discussion about what this means (mostly about time-travel), but I don't really know what this really implies. This made me wonder... What are the biggest and most immediate implications of this potential discovery? Related: Superluminal neutrinos
Before I answer, a couple of caveats: As Adam said, the universe isn't going to start behaving any differently because we discovered something. Right now it seems much more likely (even by admission of the experimenters) that it's just a mistake somewhere in the analysis, not an actual case of superluminal motion. Anyway: if the discovery turns out to be real, the effect on theoretical physics will be huge, basically because it has the potential to invalidate special relativity (or, more carefully, to show that special relativity is incomplete). That would have a "ripple effect" through the last century of progress in theoretical physics: almost every branch of theoretical physics for the past 70+ years uses relativity in one way or another, and many of the predictions that have emerged from those theories would have to be reexamined. (There are many other predictions based on relativity that we have directly tested, and those will continue to be perfectly valid regardless of what happens.) To be specific, one of the key predictions that emerges out of the special theory of relativity is that "ordinary" (real-mass) particles cannot reach or exceed the speed of light. This is not just an arbitrary rule like a speed limit on a highway, either. Relativity is fundamentally based on a mathematical model of how objects move, the Lorentz group. Basically, when you go from sitting still to moving, your viewpoint on the universe changes in a way specified by a Lorentz transformation, or "boost," which basically entails mixing time and space a little bit (time dilation and length contraction, if you're familiar with them). We have verified to high precision that this is actually true, i.e. that the observed consequences of changing your velocity do match what the Lorentz boost predicts. However, there is no Lorentz boost that takes an object from moving slower than light to moving faster than light. If we were to discover a particle moving faster than light, we have a type of motion that can't be described by a Lorentz boost, which means we have to start looking for something else (other than relativity) to describe it. Now, having said that, there are a few (more) caveats. First, even if the detection is real, we have to ask ourselves whether we've really found a real-mass particle. The alternative is that we might have a particle with an imaginary mass, a true tachyon, which is consistent with relativity. Tachyons are theoretically inconvenient, though (well, that's putting it mildly). The main objection is that if we can interact with tachyons, we could use them to send messages back in time: if a tachyon travels between point A and point B, it's not well-defined whether it started from point A and went to point B or it started from B and went to point A. The two situations can be transformed into each other by a Lorentz boost, which means that depending on how you're moving, you could see one or the other. (That's not the case for normal motion.) This idea has been investigated in the past, but I'm not sure whether anything useful came of it, and I have my doubts that this is the case, anyway. If we haven't found a tachyon, then perhaps we just have to accept that relativity is incomplete. This is called "Lorentz violation" in the lingo. People have done some research on Lorentz-violating theories, but it's always been sort of a fringe topic; the main intention has been to show that it leads to inconsistencies, thereby "proving" that the universe has to be Lorentz-invariant.
If we have discovered superluminal motion, though, people will start looking much more closely at those theories, which means there's going to be a lot of work for theoretical physicists in the years to come.
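A small numerical aside on the earlier statement that no Lorentz boost takes you from below $c$ to above it (my addition): composing boosts via the relativistic velocity-addition formula only ever creeps toward $c$, never past it.

```python
# Relativistic velocity addition, in units where c = 1.
def add_velocities(u, v):
    """w = (u + v) / (1 + u*v): composition of two collinear boosts."""
    return (u + v) / (1.0 + u * v)

w = 0.0
for _ in range(20):
    w = add_velocities(w, 0.9)   # apply a 0.9c boost, twenty times over
print(w)                         # approaches 1 (= c) but never reaches or exceeds it
```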
{ "source": [ "https://physics.stackexchange.com/questions/14973", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3454/" ] }
15,002
Can someone suggest a textbook that treats general relativity from a rigorous mathematical perspective? Ideally, such a book would Prove all theorems used. Use modern "mathematical notation" as opposed to "physics notation", especially with respect to linear algebra and differential geometry. Have examples that illustrate both computational and theoretical aspects. Have a range of exercises with varying degrees of difficulty, with answers. An ideal text would read a lot more like a math book than a physics book and would demand few prerequisites in physics. Bottom line is that I would like a book that provides an axiomatic development of general relativity clearly and with mathematical precision works out the details of the theory. Addendum (1): I did not intend to start a war over notation. As I said in one of the comments below, I think indicial notation together with the summation convention is very useful. The coordinate-free approach has its uses as well and I see no reason why the two can't peacefully coexist. What I meant by "mathematics notation" vs. "physics notation" is the following: Consider, as an example, one of the leading texts on smooth manifolds, John Lee's Introduction to Smooth Manifolds. I am very accustomed to this notation and it very similar to the notation used by Tu's Introduction to Manifolds, for instance, and other popular texts on differential geometry. On the other hand, take Frankel's Geometry of Physics. Now, this is a nice book but it is very difficult for me to follow it because 1) Lack of proofs and 2)the notation does not agree with other math texts that I'm accustomed to. Of course, there are commonalities but enough is different that I find it really annoying to try to translate between the two... Addendum (2): For the benefit of future readers, In addition to suggestions below, I have found another text that also closely-aligns with the criteria I stated above. It is, Spacetime: Foundations of General Relativity and Differential Geometry by Marcus Kriele. The author begins by discussing affine geometry, analysis on manifolds, multilinear algebra and other underpinnings and leads into general relativity at roughly the midpoint of the text. The notation is also fairly consistent with the books on differential geometry I mentioned above.
The Physics work in this field is rigorous enough. Hawking and Ellis is a standard reference, and it is perfectly fine in terms of rigor. Digression on notation If you have a tensor contraction of some sort of moderate complexity, for example: $$ K_{rq} = F_{ij}^{kj} G_{prs}^i H^{sp}_{kq}$$ and you try to express it in an index-free notation, usually that means that you make some parenthesized expression which makes $$ K = G(F,H)$$ Or maybe $$ K = F(G,H) $$ Or something else. It is very easy to prove (rigorously) that there is no parentheses notation which reproduces tensor index contractions, because parentheses are parsed by a stack-language (context free grammar in Chomsky's classification) while indices cannot be parsed this way, because they include general graphs. The parentheses generate parse trees, and you always have exponentially many maximal trees inside any graph, so there is exponential redundancy in the notation. This means that any attempt at an index free notation which uses parentheses, like mathematicians do, is bound to fail miserably: it will have exponentially many different expressions for the same tensor expression. In the mathematics literature, you often see tensor spaces defined in terms of maps, with many "natural isomorphisms" between different classes of maps. This reflects the awful match between functional notation and index notation. Diagrammatic Formalisms fix Exponential Growth Because the parenthesized notation fails for tensors, and index contraction matches objects in pairs, there are many useful diagrammatic formalisms for tensorial objects. Diagrams represent contractions in a way that does not require a name for each index, because the diagram lines match up sockets to plugs with a line, without using a name. For the Lorentz group and general relativity, Penrose introduced a diagrammatic index notation which is very useful. For the high spin representations of SU(2), and their Clebsch-Gordon and Wigner 6-j symbols, Penrose type diagrams are absolutely essential. Much of the recent literature on quantum groups and Jones polynomial, for example, is entirely dependent on Penrose notation for SU(2) indices, and sometimes SU(3). Feynman diagrams are the most famous diagrammatic formalism, and these are also useful because the contraction structure of indices/propagators in a quantum field theory expression leads to exponential growth and non-obvious symmetries. Feynman diagrams took over from Schwinger style algebraic expressions because the algebraic expressions have the same exponential redundancy compared to the diagrams. Within the field of theoretical biology, the same problem of exponential notation blow-up occurs. Protein interaction diagrams are exponentially redundant in Petri-net notation, or in terms of algebraic expressions. The diagrammatic notations introduced there solve the problem completely, and give a good match between the diagrammatic expression and the protein function in a model. Within the field of semantics within philosophy (if there is anything left of it), the ideas of Frege also lead to an exponential growth of the same type. Frege considered a sentence as a composition of subject and predicate, and considered the predicate a function from the subject to meaning. The function is defined by attaching the predicate to the subject. So that "John is running" is thought of as the function "Is running"("John"). 
Then an adverb is a function from predicates to predicates, so "John is running quickly" means ("quickly"("Is running"))("John"), where "quickly" acts on "is running" to make a new predicate, and this is applied to "John". But now, what about adverb modifiers, like "very", as in "John is running very quickly"? You can represent these as functions from adverbs to adverbs, or as functions from predicates to predicates, depending on how you parenthesize: (("very"("quickly"))("Is running"))("John") vs. ("very"(("quickly")("Is running")))("John") Which of these two parenthesizations is correct defines two schools of semantic philosophy. There is endless debate on the proper Fregian representation of different parts of speech. The resolution, as always, is to identify the proper diagrammatic form, which removes the exponential ambiguity of parenthesized functional representation. The fact that philosophers have not done this in 100 years of this type of debate on Fregian semantics shows that the field is not healthy.
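Coming back to the tensor contraction at the top of this answer: a small sketch (my own illustration, not part of the original) of how an index expression like $K_{rq} = F_{ij}^{kj} G_{prs}^i H^{sp}_{kq}$ maps one-to-one onto an explicit contraction. Index placement (upper versus lower) is ignored here, as if the metric were trivial; the point is only that the index letters fix the contraction pattern with no parenthesization needed.

```python
import numpy as np

d = 3  # dimension of each index; arbitrary for the illustration
rng = np.random.default_rng(0)
F = rng.normal(size=(d, d, d, d))   # axes stand in for F_{i j}^{k j}
G = rng.normal(size=(d, d, d, d))   # axes stand in for G_{p r s}^{i}
H = rng.normal(size=(d, d, d, d))   # axes stand in for H^{s p}_{k q}

# The einsum string is just the index expression spelled out: repeated
# letters are summed over, and the free letters r, q label the result.
K = np.einsum('ijkj,prsi,spkq->rq', F, G, H)
print(K.shape)   # (3, 3): the two free indices r and q
```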
{ "source": [ "https://physics.stackexchange.com/questions/15002", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5364/" ] }
15,197
The following is taken from a practice GRE question: Two experimental techniques determine the mass of an object to be $11\pm 1\, \mathrm{kg}$ and $10\pm 2\, \mathrm{kg}$. These two measurements can be combined to give a weighted average. What is the uncertainty of the weighted average? What's the correct procedure to find the uncertainty of the average? I know what the correct answer is (because of the answer key), but I do not know how to obtain this answer.
I agree with @Ron Maimon that these ETS questions are problematic. But this is (I think) the reasoning they go with. Unlike @Mike's assumption, you should not take the plain average but, as stated in the question, the weighted average. A weighted average assigns to each measurement $x_i$ a weight $w_i$ and the average is then $$\frac{\sum_iw_ix_i}{\sum_i w_i}$$ Now the question is what weights should one take? A reasonable ansatz is to weigh the measurements with better precision more than the ones with lower precision. There are a million ways to do this, but out of those one could give the following weights: $$w_i = \frac{1}{(\Delta x_i)^2},$$ which corresponds to the inverse of the variance. So plugging this in, we'll have $$c = \frac{1\cdot a+\frac{1}{4}\cdot b}{1+\frac{1}{4}}= \frac{4a+b}{5}$$ Thus, $$\Delta c = \sqrt{\left(\frac{\partial c}{\partial a}\Delta a\right)^2+\left(\frac{\partial c}{\partial b}\Delta b\right)^2}$$ $$\Delta c = \sqrt{\left(\frac{4}{5}\cdot 1\right)^2+\left(\frac{1}{5}\cdot 2\right)^2}=\sqrt{\frac{16}{25}+\frac{4}{25}}=\sqrt{\frac{20}{25}}=\sqrt{\frac{4}{5}}=\frac{2}{\sqrt5}$$ which is the answer given in the answer key. Why $w_i=1/\sigma_i^2$ The truth is that this choice is not completely arbitrary. It gives the value of the mean that maximizes the likelihood (the maximum likelihood estimator): $$P(\{x_i\})=\prod_i f(x_i|\mu,\sigma_i)=\prod_i\frac{1}{\sqrt{2\pi\sigma_i^2}}\exp\left(-\frac{1}{2}\frac{\left(x_i-\mu\right)^2}{\sigma_i^2}\right)$$ This expression is maximized when the exponent is maximal, i.e. the first derivative with respect to $\mu$ must vanish: $$\frac{\partial}{\partial\mu}\sum_i\left(-\frac{1}{2}\frac{\left(x_i-\mu\right)^2}{\sigma_i^2}\right) = \sum_i\frac{\left(x_i-\mu\right)}{\sigma_i^2} = 0 $$ Thus, $$\mu = \frac{\sum_i x_i/\sigma_i^2}{\sum_i 1/\sigma_i^2} = \frac{\sum_iw_ix_i}{\sum_i w_i}$$ with $w_i = 1/\sigma_i^2$.
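A quick numerical check of the formulas above (my addition); the identity $\Delta c = 1/\sqrt{\sum_i 1/\sigma_i^2}$ used below is algebraically the same as the propagation-of-errors expression worked out above.

```python
import math

x = [11.0, 10.0]       # measurements, kg
sigma = [1.0, 2.0]     # their uncertainties, kg

w = [1.0 / s**2 for s in sigma]                        # w_i = 1 / sigma_i^2
mean = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)   # weighted average
unc = 1.0 / math.sqrt(sum(w))                          # its uncertainty

print(mean)                    # 10.8 kg
print(unc, 2 / math.sqrt(5))   # both 0.8944... kg, i.e. 2/sqrt(5)
```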
{ "source": [ "https://physics.stackexchange.com/questions/15197", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3397/" ] }
15,279
From what I understand, the frequency of light coming from a source moving towards an observer increases. From $ E=h\nu $ , this implies an increase in the energy of each photon. What really is confusing, is where does that extra energy come from? Similarly, where is the energy lost during the opposite Doppler effect (redshift)? Why doesn't this violate the conservation of energy?
Conservation of energy doesn't apply to this situation because the energy you measure when at rest with respect to the source and the energy you measure when moving with respect to the source are in different reference frames. Energy is not conserved between different reference frames, in the sense that if you measure an amount of energy in one reference frame, and you measure the corresponding amount of energy in a different reference frame, the conservation law tells you nothing about whether those two measured values should be the same or different. If you're going to use conservation of energy, you have to make all your measurements without changing velocity. In fact, it's kind of misleading to say that energy increases or decreases due to a Doppler shift, because that would imply that there is some physical process changing the energy of the photon. That's really not the case here, it's simply that energy is a quantity for which the value you measure depends on how you measure it. For more information, have a look at Is kinetic energy a relative quantity? Will it make inconsistent equations when applying it to the conservation of energy equations? .
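To put a number on this frame dependence (my addition, using the standard longitudinal relativistic Doppler factor): the same photon simply has a different energy when described from a frame in which the source approaches.

```python
import math

def doppler_energy_ratio(beta):
    """E_observed / E_emitted for a source approaching head-on at speed beta*c."""
    return math.sqrt((1 + beta) / (1 - beta))

for beta in (0.1, 0.5, 0.9):
    print(beta, doppler_energy_ratio(beta))   # ~1.11, ~1.73, ~4.36: blueshifted energies
```

No extra energy appears anywhere; the two numbers are answers to two different questions, asked in two different frames.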
{ "source": [ "https://physics.stackexchange.com/questions/15279", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/567/" ] }
15,282
Recently I was watching a video on quantum computing where the narrators describe that quantum entanglement information travels faster than light! Is it really possible for anything to move faster than light? Or are the narrators just wrong?
Collapsing an entangled pair occurs instantaneously but can never be used to transmit information faster than light. If you have an entangled pair of particles, A and B, making a measurement on some entangled property of A will give you a random result and B will have the complementary result. The key point is that you have no control over the state of A, and once you make a measurement you lose entanglement. You can infer the state of B anywhere in the universe by noting that it must be complementary to A. The no-cloning theorem stops you from employing any sneaky tricks like making a bunch of copies of B and checking if they all have the same state or a mix of states, which would otherwise allow you to send information faster than light by choosing to collapse the entangled state or not. On a personal note, it irks me when works of sci-fi invoke quantum entanglement for superluminal communication (incorrectly) and then ignore the potential consequences of implied causality violation...
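A toy simulation of the same-basis case described above (my own sketch; it is a classical stand-in, so it says nothing about measurements in different bases, where the genuinely quantum correlations live): whatever is done on the A side, the B side's local statistics stay 50/50, which is why the perfect anticorrelation cannot carry a message.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Same-basis measurements on a maximally entangled, anticorrelated pair:
# each trial yields (+1, -1) or (-1, +1) with equal probability.
a_results = rng.choice([+1, -1], size=n)
b_results = -a_results

print(np.mean(a_results * b_results))   # -1.0: perfect anticorrelation
print(np.mean(b_results == +1))         # ~0.5: B's marginal carries no signal
```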
{ "source": [ "https://physics.stackexchange.com/questions/15282", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4887/" ] }
15,443
This question is an outgrowth of What is the difference between electric potential, potential difference (PD), voltage and electromotive force (EMF)? , where @sb1 mentioned Faraday's law. However, Faraday's law as part of Maxwell's equations cannot account for the voltage measured between the rim and the axis of a Faraday generator because $\frac {\partial B} {\partial t} = 0$. It would've been a different story if the derivative were $\frac {dB} {dt} $ but it isn't. A palliative solution to this problem is given by invoking the Lorentz force. However, Lorentz force cannot be derived from Maxwell's equation while it must be if we are to consider Maxwell's equations truly describing electromagnetic phenomena. As is known, according to the scientific method, one only experimental fact is needed to be at odds with a theory for the whole theory to collapse. How do you reconcile the scientific method with the above problem?
I'm not sure if this addresses what you're actually asking (if not I'll convert it to a comment), but Maxwell's equations only describe the dynamics of the EM field itself. The Lorentz force law is something separate , which describes the field's effect on charged particles. I've never heard any serious physicist claim that you can, or should be able to, derive the force law from Maxwell's equations. Classical electrodynamics takes both Maxwell's equations and the Lorentz force law as "postulates."
{ "source": [ "https://physics.stackexchange.com/questions/15443", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5070/" ] }
15,684
What are the mechanics of time dilation and length contraction? Going beyond the mathematical equations involving light and the "speed limit of the universe", what is observed is merely a phenomenon and not a true explanation of why time dilates or length contracts. It has been proven to work out, but do we know why? Is it something that happens at a subatomic level?
It's not a mechanism so much as a misconception of the nature of space (and its relationship to time): at low velocities, everything looks linear and Euclidean so we assume it is, but in reality it is not (as can be determined by appropriate experiments). It's kind of like asking by what mechanism you can reach something to your west by traveling east: if you conceptualize the earth as flat, the ability to end up to the west by traveling east isn't going to make much sense. Once you realize the earth is a sphere, you realize that there isn't exactly a west-is-east mechanism per se; it's really that the wrong concepts were being used (though they were a good approximation locally).
{ "source": [ "https://physics.stackexchange.com/questions/15684", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5667/" ] }
15,738
I was surprised to read that we don't know how to analyze turbulent fluids. On page 3-9 of The Feynman Lectures on Physics (Volume One) , Feynman writes: Finally, there is a physical problem that is common to many fields, that is very old, and that has not been solved. [..] Nobody in physics has really been able to analyze it mathematically satisfactorily in spite of its importance to the sister sciences. It is the analysis of circulating or turbulent fluids . If we watch the evolution of a star, there comes a point where we can deduce that it is going to start convection, and thereafter we can no longer deduce what should happen. A few million years later the star explodes, but we cannot figure out the reason. We cannot analyze the weather. We do not know the patterns of motions that there should be inside the earth [which cause earthquakes]. The simplest form on the problem is to take a pipe that is very long and push water through it at high speed. We ask: to push a given amount of water through that pipe, how much pressure is needed? No one can analyze it from first principles and the properties of water. If the water flows very slowly, or if we use a thick goo like honey, then we can do it nicely. you will find that in your textbook. What we really cannot do is deal with actual, wet water running through a pipe. That is the central problem which we ought to solve some day, and we have not. I'm no physicist, but I imagine he's saying that we have differential equations describing turbulent fluids, but no one has ever been able to explicitly solve them or sufficiently determine their properties. However, Feynman's words were written over 50 years ago now. Has there been any progress in analyzing turbulent fluids since then?
The progress in turbulence has come in fits and spurts, and it is very active in the last few years, due to the influence of AdS/CFT. I think it will be solved soon, but this opinion was shared by many in previous generations, and may be much too optimistic. Navier-Stokes equations The basic equations of motion for turbulent flows have been known since the 19th century. The fluid velocity obeys the incompressible Navier-Stokes equations: $$ \dot v_i + v^j \nabla_j v^i + \partial_i P = \nu \partial_j \partial_j v_i $$ and $$ \partial_j v_j = 0 $$ Where repeated indices are summed, and the units of mass normalize the fluid density to be 1. Each of the terms are easy to understand: the nonlinear term gives the advection, it says that the force on the fluid acts to accelerate the fluid as you move along with the fluid, not at one fixed x position. The pressure P term is just a constraint force that enforces incompressibility, and it is determined by taking the divergence of the equation, and enforcing that $\partial_i v_i = 0$ . This determines the Laplacian of the pressure $$ \partial_i v^j \partial_j v_i + \partial_i \partial_i P = 0$$ The friction force says that in the addition to moving along with itself and bending to keep the density constant, the velocity diffuses with a diffusion constant $\nu$ . In the limit $\nu=0$ , you get the Euler equations, which describe hydrodynamics in the absence of friction. In any appropriate boundary conditions, like periodic box, or vanishing velocities at infinity, the pressure equation determines the pressure from the velocity. The equations can be solved on a grid, and the future is determined from the past. Clay problem has nothing to do with turbulence The problem of showing that the limit as the grid goes to zero is everywhere sensible and smooth is far from trivial. It is one of the Clay institute million dollar prize problems. The reason this is nontrivial has nothing to do with turbulence, but with the much easier Reynolds scaling. There is a scale invariance in the solution space, as described on Terrance Tao's blog. The classical Reynolds scaling says that if you have any incompressible fluid flow, and you make it twice as small, twice as fast, you get a second flow which is also ok. You can imagine a fluid flow which generates a smaller faster copy of itself, and so on down, and eventually produces a singular spot where the flow is infinitely fast and infinitely small--- a singularity. This type of singularity has a vanishingly small energy in 3d, because the volume shrinks faster than the velocity energy density blows up. This is both good and bad--- it's bad for mathematicians, because it means that you can't use a simple energy bound to forbid this type of divergence. It is good for physics, because it means that these types of blowups, even if they occur, are completely irrelevant little specks that don't affect the big-picture motion, where the turbulence is happening. If they occur, they only affect little measure zero spots at tiny distances, and they would be resolved by new physics, a stronger hyperviscosity, which would make them decay to something smooth before they blow up. They do not lead to a loss of predictability outside of a microscopic region, because there is a galilean symmetry which decouples large-scale flows from small scale flows. A big flow doesn't care about a spot divergence, it just advects the divergence along. 
This isn't rigorous mathematics, but it is obvious in the physical sense, and should not make anyone studying turbulence lose sleep over existence/uniqueness. When you replace the velocity diffusion with a faster damping, called "hyperviscosity", you can prove existence and uniqueness. But the problem of turbulence is unaffected by the hyperviscosity, or even by the ordinary viscosity. It is all happening in the Euler regime--- well before the viscosity kicks in. This is another reason to be sure that the Clay problem is irrelevant. If I were writing the Clay problem, I would not have asked for existence/uniqueness. I would have asked for a statistical distribution on differential velocity fields which is an attracting steady state for long-wavelength stirred NS flow. This is a much more difficult, and much more important problem, because it is the problem of turbulence. Further, if such a distribution exists, and if it is attracting enough, it might demonstrate that the NS equations have a smooth solution away from a measure zero set of initial condition. The attracting fixed point will certainly have exponential decay of the energy in the viscous regime, and if everything gets close to this, everything stays smooth. Why Turbulence? Horace Lamb, a well known 19th century mathematical physicist, as an old man quipped that when he gets to heaven, he would ask God two questions: "Why relativity? and why turbulence?" He then said he is optimistic about getting a good answer to the first question. I think he should have been optimistic about the second too. The reason for turbulence is already clear in the ultraviolet catastrophe of classical statistical mechanics. Whenever you have a classical field, the equipartitioning of energy means that all the energy is concentrated in the shortest wavelength modes, for the simple reason that there are just a boatload more short-wavelength modes than long wavelength modes. This means that it is impossible to reach equilibrium of classical particles and classical fields, the fields suck all the energy down to the shortest distances scales. But in most situations, there are motions which can't easily transfer energy to short distances directly. The reason is that these motions are protected by conservation laws. For example, if you have a sound wave, it looks locally like a translation of the crystal, which means that it can't dump energy into short modes immediately, but it takes a while. For sound, there is a gradual attenuation which vanishes at long wavelengths, but the attenuation is real. There is an energy flow from the long-wavelength to the shortest wavelength modes in one step. But in other field theories, the energy flow is more local in $k$ -space. The analog of sound-wave friction in Navier-Stokes is the attenuation of a velocity due to viscosity. This is a diffusion process, and scales as $\sqrt{r}$ where $r$ is the scale of velocity variation. If you have a term which mixes up modes nonlinearly which scales better at long distances, which takes less time to move energy to smaller modes than the one-step diffusive dissipation process, it will dominate at long distances. Further, if this is an energy-conserving polynomial nonlinear term, the mixing will generally be between nearby scales. The reason is the additivity of wave-vectors under multiplication. A quadratic term with a derivative (as in the Navier-Stokes equation) will produce new wavenumbers in the range of the sum of the wavenumbers of the original motion. 
So there must be a local flow of energy into shorter wavelengths, just from ultraviolet-catastrophe mode-counting, and this flow of energy must be sort-of local (local in log-space) because of the wavenumber additivity constraint. The phenomenon of turbulence occurs in the regime where this energy flow, called the (downward) cascade, dominates the dynamics, and the friction term is negligible. Kolmogorov theory The first big breakthrough in the study of turbulence came with Kolmogorov, Heisenberg, Obukhov, and Onsager in the wartime years. The wartime breakdown in scientific communications means that these results were probably independent. The theory that emerged is generally called K41 (for Kolmogorov 1941), and it is the zeroth-order description of turbulence. In order to describe the cascade, Kolmogorov assumed that there is a constant energy flux downward, called $\epsilon$, that it terminates at the regime where viscosity kicks in, and that there are many decades of local-in-$k$-space flow between the pumping region where you drive the fluid and the viscous region where you drain the energy. The result is that the spectrum has a statistical distribution of energy in each mode. Kolmogorov gave a dimensional argument for this distribution which roughly fit the measurement accuracy at the time. From the scaling law, all the correlation functions of the velocity could be extracted, and there was an exact relation: the Kolmogorov-Obukhov $-5/3$ law. These relations were believed to solve the problem for a decade. 2D turbulence In 2D, a remarkable phenomenon was predicted by Kraichnan--- the inverse cascade. The generic ultraviolet argument assumes that the motion is ergodic on the energy surface, and this requires that there are no additional conservation laws. But in 2d, the flow conserves the integral of the square of the vorticity, called the enstrophy. The enstrophy $U$ is $$U = \int |\nabla \times v|^2 $$ and this has two more derivatives than the energy, so it grows faster with $k$. If you make a statistical Boltzmann distribution for $v$ at constant energy and constant enstrophy, the high $k$ modes are strongly suppressed because they have a huge enstrophy. This means that you can't generate high $k$ modes starting from small $k$ modes. Instead, you find more freedom at small $k$ modes! The energy cascade goes up generically, instead of down, because at longer wavelengths, you can spread the energy over more motions with the same initial enstrophy, because the enstrophy constraint vanishes. This is the inverse cascade, and it was predicted theoretically by Kraichnan in 1968. The inverse cascade is remarkable, because it violates the ultraviolet catastrophe intuitions. It has been amply verified by simulations and by experiment in approximate 2d flows. It provides an explanation for the emergence of large-scale structure in the atmosphere, like hurricanes, which are amplified by the surrounding turbulent flows, rather than decaying. It is the most significant advance in turbulence theory since K41. Modern theory I will try to review the recent literature, but I am not familiar with much of it, and it is a very deep field, with many disagreements between various camps. There are also very many wrong results, unfortunately. A big impetus for modern work comes from the analysis of turbulent flows in new systems analogous to fluids.
The phenomenon of turbulence should occur in any nonlinear equation, and the cascade picture should be valid whenever the interactions are reasonably approximated by polynomials which are local in log-$k$ space. One place where this is studied heavily is in cosmology, in models of preheating. The field which is doing the turbulence here is a scalar inflaton (or fields coupled to the inflaton) which transfers energy in a cascade to eventually produce standard model particles. Another place where this is studied is in quark gluon plasmas. These fluids have a flow regime which is related to a gravitational dual by AdS/CFT. The gravitational analog of the turbulent flows has a classical gravitational counterpart in the laws of the membrane paradigm of black holes. Yaron Oz is one of the people working on this. One of the most astonishing results of the past few years is the derivation by Oz of the exact laws of turbulent scaling from conservation principles alone, without a full-blown cascade assumption. This is http://arxiv.org/abs/0909.3404 and http://arxiv.org/abs/0909.3574 . Kraichnan model Kraichnan gave an interesting model for the advection of passive scalar fields by a turbulent flow. The model is a dust particle carried by the fluid. This is important, because the advected particle makes a Levy flight, not a Brownian motion. This has been verified experimentally, but it is also important because it gives a qualitative explanation for intermittency. Levy flights tend to cluster in regions before moving on by a big jump. The velocity advects itself much as it advects a dust particle, so if the dust is doing a Levy flight, it is reasonable that the velocity is doing that too. This means that you expect velocity perturbations to concentrate in regions of isolated turbulence, and that this concentration should follow a well-defined power law, according to the scalar advection. These ideas are related to the Mandelbrot model of multifractals. Mandelbrot gave this model to understand how it is that turbulent flows could have a velocity gradient which is concentrated in certain geometric regions. The model is qualitative, but the picture corrects the K41 exponents, which assume that the velocity is cascading homogeneously over all space. Martin-Siggia-Rose formalisms The major advance in the renormalization approach to turbulence came in the 1970s, with the development of the Martin-Siggia-Rose formalism. This gave a way of formally describing the statistics of a classical equation using a Lagrange multiplier field which goes along for the ride in the renormalization analysis. Forster, Nelson, and Stephen gave a classic analysis of the inverse-cascade problem in 3d, the problem of the long-wavelength profile of a fluid stirred at short distances. While this problem is not directly related to turbulence, it does have some connection in that the statistical steady-state distribution requires taking into account interactions between neighboring modes, which do lead to a cascade. The FNS fixed points include Kolmogorov-like spectra with some stirring forces, but there is no condition for the stirring forces to be at a renormalization group fixed point. Their analysis, however, remains the high point of the MSR formalism as applied to turbulence. This subject has been dormant for almost thirty years. What remains to be done The major unsolved problem is predicting the intermittency exponents--- the deviations from Kolmogorov scaling in the correlation functions of fully developed turbulence.
These exponents are now known experimentally to two decimal places, I believe, and their universality has been verified extensively, so the concept of a homogeneous statistical cascade makes sense. Deriving these exponents requires a new principle by which one can extract the statistical distribution of a nonlinearly interacting field from the equations of motion. There are formal solutions which get you nowhere, because they start far away from renormalization fixed points; nevertheless, every approach is illuminating in some way or another.

This is a terrible review, from memory, but it's better than nothing. Apologies to the neglected majority.
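As an aside on the dimensional argument mentioned under "Kolmogorov theory" above, here is the standard textbook sketch; the locality assumption and the dimensionless constant $C$ are the usual inputs, not anything specific to this answer. If the inertial-range energy spectrum $E(k)$ can depend only on the mean energy flux $\epsilon$ (dimensions $L^2/T^3$) and the wavenumber $k$ (dimensions $1/L$), and $E(k)\,dk$ is an energy per unit mass so that $[E(k)] = L^3/T^2$, then the only combination with the right dimensions is

$$ E(k) = C\, \epsilon^{2/3}\, k^{-5/3}, $$

which is the $-5/3$ law quoted above.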
{ "source": [ "https://physics.stackexchange.com/questions/15738", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/175/" ] }
15,747
There is iron in our blood, which is magnetic. Roughly how strong would a magnet have to be to induce a noticeable attraction? It would be nice to know this for several distances. Also, do electromagnets that strong exist?
Humans are weakly diamagnetic. Rather than being attracted by a magnetic field, we tend to expel the field lines and are very weakly repelled by regions of strong field. Look at the work of the High Field Magnet Laboratory http://www.ru.nl/HFML/ , in particular http://www.ru.nl/hfml/research/levitation/diamagnetic/ , where they demonstrate levitation of a living frog. It took about 16 T to levitate the frog.
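As a rough sanity check of that 16 T figure, one can treat the frog as mostly water; the susceptibility, density, and the ~10 cm length scale over which the field varies below are assumed values for illustration, not taken from the linked pages:

```python
import math

# Diamagnetic levitation balances gravity against the magnetic force per unit
# volume on a weakly magnetic material:  (|chi| / mu0) * B * dB/dz = rho * g
mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
chi = 9.0e-6               # |volume susceptibility| of water (SI, dimensionless)
rho = 1000.0               # density of water, kg/m^3
g = 9.81                   # m/s^2

# Required field-gradient product B * dB/dz, in T^2/m
B_dBdz = mu0 * rho * g / chi
print(f"required B*dB/dz ~ {B_dBdz:.0f} T^2/m")

# If B^2 falls off over a length scale L inside the magnet bore (assumed ~0.1 m),
# then B*dB/dz ~ B^2 / (2*L), so the field needed is roughly:
L = 0.1
B = math.sqrt(2 * L * B_dBdz)
print(f"field needed ~ {B:.0f} T")   # ~16-17 T, consistent with the frog experiment
```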
{ "source": [ "https://physics.stackexchange.com/questions/15747", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2843/" ] }
15,899
I'll be generous and say it might be reasonable to assume that nature would tend to minimize, or maybe even maximize, the integral over time of $T-V$. Okay, fine. You write down the action functional, require that it be a minimum (or maximum), and arrive at the Euler-Lagrange equations. Great. But now you want these Euler-Lagrange equations to not just be derivable from the Principle of Least Action, but you want them to be equivalent to the Principle of Least Action. After thinking about it for a while, you realize that this implies that the Principle of Least Action isn't really the Principle of Least Action at all: it's the "Principle of Stationary Action". Maybe this is just me, but as generous as I may be, I will not grant you that it is "natural" to assume that nature tends to choose the path that is a stationary point of the action functional. Not to mention, it isn't even obvious that there is such a path, or if there is one, that it is unique. But the problems don't stop there. Even if you grant the "Principle of Stationary Action" as fundamentally and universally true, you realize that not all the equations of motion that you would like to have are derivable from this if you restrict yourself to a Lagrangian of the form $T-V$. As far as I can tell, from here it's a matter of playing around until you get a Lagrangian that produces the equations of motion you want. From my (perhaps naive) point of view, there is nothing at all particularly natural (although I will admit, it is quite useful) about the formulation of classical mechanics this way. Of course, this wouldn't be such a big deal if these classical ideas stayed with classical physics, but these ideas are absolutely fundamental to how we think about things as modern as quantum field theory. Could someone please convince me that there is something natural about the choice of the Lagrangian formulation of classical mechanics (I don't mean in comparison with the Hamiltonian formulation; I mean, period), and in fact, that it is so natural that we would not even dare abandon these ideas?
Could someone please convince me that there is something natural about the choice of the Lagrangian formulation... If I ask a high school physics student, "I am swinging a ball on a string around my head in a circle. The string is cut. Which way does the ball go?", they will probably tell me that the ball goes straight out - along the direction the string was pointing when it was cut. This is not right; the ball actually goes along a tangent to the circle, not a radius. But the beginning student will probably think this is not natural. How do they lose this instinct? Probably not by one super-awesome explanation. Instead, it's by analyzing more problems, seeing the principles applied in new situations, learning to apply those principles themselves, and gradually, over the course of months or years, building what an undergraduate student considers to be common intuition. So my guess is no, no one can convince you that the Lagrangian formulation is natural. You will be convinced of that as you continue to study more physics, and if you expect to be convinced of it all at once, you are going to be disappointed. It is enough for now that you understand what you've been taught, and it's good that you're thinking about it. But I doubt anyone can quickly change your mind. You'll have to change it for yourself over time. That being said, I think the most intuitive way to approach action principles is through the principle of least (i.e. stationary) time in optics. Try Feynman's QED , which gives a good reason to believe that the principle of stationary time is quite natural. You can go further mathematically by learning the path integral formulation of nonrelativistic quantum mechanics and seeing how it leads to high probability for paths of stationary action. More importantly, just use Lagrangian mechanics as much as possible, and not just finding equations of motion for twenty different systems. Use it to do interesting things. Learn how to see the relationship between symmetries and conservation laws in the Lagrangian approach. Learn about relativity. Learn how to derive electromagnetism from an action principle - first by studying the Lagrangian for a particle in an electromagnetic field, then by studying the electromagnetic field itself as described by a Lagrange density. Try to explain it to someone - their questions will sharpen your understanding. Check out Leonard Susskind's lectures on YouTube (series 1 and 3 especially). They are the most intuitive source I know for this material. Read some of the many questions here in the Lagrangian or Noether tags. See if you can figure out their answers, then read the answers people have provided to compare. If you thought that the Lagrangian approach was wrong, then you might want someone to convince you otherwise. But if you just don't feel comfortable with it yet, you'd be robbing yourself of a great pleasure by not taking the time to learn its intricacies. Finally, your question is very similar to this one , so check out the answers there as well.
{ "source": [ "https://physics.stackexchange.com/questions/15899", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3397/" ] }
15,981
I've read many times, including here on this very site that the commonly known explanation of flight is wrong, and that airplanes can fly because the shape of their wings deflects air down. This makes sense, but as far as I can tell it doesn't explain upside down flight or symmetric wings. The images I've seen show an inclined wing, which forces the air to go downwards. But how can planes fly upside down then?
Upside-down or right side up, flight works the same way. As you stated, the wing deflects air downward. When inverted, the pilot simply controls the pitch of the aircraft to keep the nose up, thus giving the wings sufficient angle of attack to deflect air downwards. Most airplanes are designed with some positive angle of attack "built in," meaning that there is some angle between the wings and the fuselage so that the wings have a small positive angle of attack while the fuselage is level. This is why the floor isn't tilted tailwards when you're in an airliner in level flight. So when upside down the nose has to be held a bit higher than usual, and the other flight systems (including the pilot!) must be designed to handle it, but there is nothing really special about upside-down flight.
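To put a rough number on "the nose has to be held a bit higher", here is a toy estimate based on classical thin-airfoil theory, $C_L \approx 2\pi(\alpha - \alpha_{L=0})$; the cruise lift coefficient, the zero-lift angle, and the trick of modelling the inverted wing by flipping the sign of $\alpha_{L=0}$ are all illustrative assumptions, not something stated in the answer above:

```python
import math

def required_aoa(CL_needed, alpha_L0_deg):
    """Angle of attack (deg) needed for a given lift coefficient,
    using thin-airfoil theory: CL = 2*pi*(alpha - alpha_L0), angles in radians."""
    alpha_L0 = math.radians(alpha_L0_deg)
    alpha = CL_needed / (2 * math.pi) + alpha_L0
    return math.degrees(alpha)

CL_needed = 0.5        # typical-ish cruise lift coefficient (assumed)
alpha_L0 = -2.0        # zero-lift angle of a mildly cambered airfoil, deg (assumed)

upright = required_aoa(CL_needed, alpha_L0)
# Flying inverted, the camber works against you: the zero-lift angle
# effectively flips sign relative to the lift direction you need.
inverted = required_aoa(CL_needed, -alpha_L0)

print(f"upright:  ~{upright:.1f} deg angle of attack")
print(f"inverted: ~{inverted:.1f} deg angle of attack (nose held higher)")
```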
{ "source": [ "https://physics.stackexchange.com/questions/15981", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5788/" ] }
15,990
In the Wikipedia article Classical Field Theory (Gravitation) , it says After Newtonian gravitation was found to be inconsistent with special relativity, . . . I don't see how Newtonian gravitation itself is inconsistent with special relativity. After all, Newton's Universal Law of Gravitation is completely analogous to Coulomb's Law, so it would seem that, if there were an analogous "gravitational magnetic field", one could formulate a theory of gravitation in exact analogy with Maxwell's Theory of Electromagnetism, and of course, this would automatically be consistent with Special Relativity. What about this approach to gravitation does not work? The only problem I could see with this is the lack of evidence for a "gravitational magnetic field". That being said, gravity is "weak" as it is, and my guess is that it would be extremely difficult to set up an experiment in which a massive body were moving fast enough for this "magnetic field effect" to be observable. EDIT: As has been pointed out to me in the answers, Newtonian gravitation itself is inconsistent with SR. Similarly, however, so is Coulomb's Law, yet electromagnetism is still consistent with SR. Thus, how do we know that it is not the case that Newton's Law is the special "static" case of a more general gravitomagnetic theory exactly analogous to Maxwell's Theory: Let $\mathbf{G}$ be the gravitation field, let $\rho$ be the mass density, and $\mathbf{J}$ be the mass current density, let $\gamma _0$ be defined so that $\frac{1}{4\pi \gamma _0}=G$, the Universal Gravitation Constant, let $\nu _0$ be defined so that $\frac{1}{\sqrt{\gamma _0\nu _0}}=c$, and suppose there exists a field $\mathbf{M}$, the gravitomagnetic field, so that the following equations hold:

$$ \vec{\nabla}\cdot \mathbf{G}=-\frac{\rho}{\gamma _0} $$

$$ \vec{\nabla}\cdot \mathbf{M}=0 $$

$$ \vec{\nabla}\times \mathbf{G}=-\frac{\partial \mathbf{M}}{\partial t} $$

$$ \vec{\nabla}\times \mathbf{M}=\nu _0\mathbf{J}+\gamma _0\nu _0\frac{\partial \mathbf{G}}{\partial t} $$

where these fields would produce on a mass $m$ moving with velocity $\mathbf{v}$ in our inertial frame the Lorentz force $\mathbf{F}=m\left( \mathbf{G}+\mathbf{v}\times \mathbf{M}\right)$. This theory would automatically be consistent with SR and would reduce to Newton's Law of Gravitation for the case of gravitostatics: $\frac{\partial \mathbf{G}}{\partial t}=\frac{\partial \mathbf{M}}{\partial t}=\mathbf{J}=0$. (To be clear, you can't just set the time derivatives equal to $\mathbf{0}$ as it seems I have done. Upon doing so, you obtain the corresponding static theory, which is technically incorrect (as you can easily see because it won't be relativistically invariant), but is nevertheless often a useful approximation.) My question can thus be phrased as: without appealing directly to GR, what is wrong with this theory?
Newtonian gravitation is just the statement that the gravitational force between two objects obeys an inverse-square distance law, is proportional to the masses and is directed along the line that joins them. As such, it implies that the interaction between the objects is transmitted instantaneously, and it must be inconsistent with special relativity (SR). If, say, the Sun suddenly started moving away from the Earth at a speed very close to the speed of light, SR tells you that the Earth must still move as if the Sun were in its old position until about 8 minutes after it started moving. In contrast, Newtonian gravitation would predict an instantaneous deviation of Earth from its old orbit. What you have discovered in your reasoning is that indeed, Coulomb's Law is NOT relativistically invariant either. But Maxwell's electromagnetism is not Coulomb's Law. As a matter of fact, Coulomb's Law is deduced from Maxwell's equations as a particular case. The assumptions are those of electrostatics, namely that the magnetic field is zero and that the electric field is constant in time. These assumptions lead to the Coulomb field but they are NOT consistent with SR in the sense that they cannot be valid in every reference frame, since if the electric field is constant in a reference frame, then there exists another frame in which it will be varying and the magnetic field will be different from zero. For more you can start reading this . Maxwell's electromagnetism IS consistent with SR since the full Maxwell's equations apply in all reference frames, no matter whether the particle is moving or not. General Relativity is the analogue, for gravity, of Maxwell's electromagnetism and, as it has already been said, it leads to equations for the gravitational field (the metric) analogous to those of Maxwell. Thus, it is not strange that something that resembles gravitational magnetism should appear.
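As a quick check of the "about 8 minutes" figure quoted above (the only input is the mean Earth-Sun distance; the rest is arithmetic):

```python
AU = 1.495978707e11   # mean Earth-Sun distance, m
c = 2.99792458e8      # speed of light, m/s

delay = AU / c
print(f"light travel time Sun -> Earth: {delay:.0f} s  (~{delay/60:.1f} min)")
# ~499 s, about 8.3 minutes: the soonest any change at the Sun could
# influence the Earth in a theory consistent with special relativity.
```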
{ "source": [ "https://physics.stackexchange.com/questions/15990", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3397/" ] }
16,018
Quite a few of the questions given on this site mention a photon in vacuum having a rest frame such as it having a zero mass in its rest frame. I find this contradictory since photons must travel at the speed of light in all frames according to special relativity. Does a photon in vacuum have a rest frame?
Short answer: no. Explanation: Many introductory textbooks talk about "rest mass" and "relativistic mass" and say that the "rest mass" is the mass measured in the particle's rest frame. That's not wrong, you can do physics in that point of view, but that is not how people talk about and define mass anymore. In the modern view each particle has one and only one mass, defined by the square of its energy-momentum four-vector (which, being a Lorentz invariant, you can calculate in any inertial frame): $$ m^2 \equiv p^2 = (E, \vec{p})^2 = E^2 - \vec{p}^2 $$ For a photon this value is zero. In any frame, and that allows people to reasonably say that the photon has zero mass without needing to define a rest frame for it.
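A small numerical illustration of the invariant $(mc^2)^2 = E^2 - (\vec{p}c)^2$ with the factors of $c$ restored in SI units; the photon energy below is an arbitrary round number chosen for the example:

```python
import math

c = 2.99792458e8  # speed of light, m/s

def invariant_mass(E, p):
    """Rest mass in kg from the Lorentz invariant (mc^2)^2 = E^2 - (pc)^2."""
    m2c4 = E**2 - (p * c)**2
    return math.sqrt(max(m2c4, 0.0)) / c**2

# A photon: E = |p| c, so the invariant vanishes in every frame.
E_photon = 3.0e-19                 # J, roughly a visible-light photon (assumed)
p_photon = E_photon / c
print(invariant_mass(E_photon, p_photon))   # ~0 (up to rounding): massless

# For contrast, an electron moving at 0.6 c:
m_e = 9.1093837e-31                # kg
gamma = 1 / math.sqrt(1 - 0.6**2)
E_e = gamma * m_e * c**2
p_e = gamma * m_e * 0.6 * c
print(invariant_mass(E_e, p_e))    # recovers ~9.11e-31 kg, in any frame
```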
{ "source": [ "https://physics.stackexchange.com/questions/16018", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4485/" ] }
16,048
So atoms are formed from protons and neutrons, which are formed from quarks. But where do these quarks come from? What makes them?
I cannot resist this Mother Goose quote:

What are little boys made of?
What are little boys made of?
Frogs and snails,
And puppy-dogs' tails;
That's what little boys are made of.

What are little girls made of?
What are little girls made of?
Sugar and spice,
And all that's nice;
That's what little girls are made of.

You state: So atoms are formed from protons and neutrons, which are formed from quarks. and ask: But where do these quarks come from? What makes them? How do we know atoms are formed from protons and neutrons? We have deep inelastic scatterings which showed that the atoms have a hard core, so they are not uniformly distributed matter. Then we have the periodic table of elements, which organizes itself well by counting protons and neutrons. How do we know that protons and neutrons are formed from quarks? We have the results from painstaking experiments that showed us once more that deep inelastic scattering reveals a hard core inside the protons and neutrons. The study of the interaction products organized the particles and resonances into what is now called the Standard Model , a grouping in families that has a one-to-one correspondence with the hypothesis that the hadrons (protons, neutrons, resonances) are composed out of quarks. But not only. They also have gluons which hold the quarks together due to the strong interaction, and the gluons have been seen experimentally, again with scattering experiments. This is where we are now. The LHC is scattering protons on protons, i.e. quarks on quarks, at much higher energies than ever before, and we are waiting for results. The theoretical interpretation called the Standard Model, so successful at lower energies, presupposes that the quarks are elementary. Due to the gluon exchanges it is hard to see how a hard core might appear in quark-quark scattering to take the onion one level lower, i.e. tell us that the quarks have a core. Even in neutrino-quark scattering the gluons will interfere, if the SM theory is correct at high energies. At the moment there is no experimental indication that the quarks are not elementary. Nature though has surprised us before, and might do it again, once high-energy lepton-quark scattering experiments are designed and carried out in the future. Feynman, I think, had said: "to see what a watch is made of one does not throw one watch on another watch and count the gears flying off. One takes a screwdriver". Leptons with their weak interactions are the equivalent of the screwdriver.
{ "source": [ "https://physics.stackexchange.com/questions/16048", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1266/" ] }
16,114
After closing my refrigerator's door I noticed that it's much harder to reopen it immediately, as if there's an underpressure. Only after a minute or so it opens normally. How can this be explained?
When you open the door, the cold air flows out and is replaced by air at room temperature. When you close the door, this air gets cooled through contact with the stuff inside the fridge and therefore the pressure decreases; according to the ideal gas law, by some 10%. That's not very much, but the large area of the door translates this small pressure difference into a sizeable force. Note that the effect is even stronger with deep freezers, where the difference in temperature (and hence pressure) is even bigger. Edit: As Anna says, there must be air leaking in if the door opens normally after some time. This air will be cooled as well, but the pressure will increase as more air is sucked in, until an equilibrium is reached: cold air at atmospheric pressure.
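A rough worked example of the numbers involved; the temperatures and door area below are assumed values, and the calculation pretends all of the warm air is trapped and cooled, so it gives an upper bound (real fridges exchange only part of the air and the gasket leaks):

```python
P_atm = 101_325.0   # Pa
T_room = 295.0      # K (~22 C), assumed
T_fridge = 278.0    # K (~5 C), assumed
area = 0.3          # m^2 of door area feeling the pressure difference, assumed

# Cooling the trapped air at fixed volume: P2/P1 = T2/T1 (ideal gas law)
P_inside = P_atm * T_fridge / T_room
dP = P_atm - P_inside
force = dP * area

print(f"pressure drop: {dP:.0f} Pa  ({100*dP/P_atm:.1f}% of atmospheric)")
print(f"force holding the door: ~{force:.0f} N")  # upper bound; leaks reduce this
```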
{ "source": [ "https://physics.stackexchange.com/questions/16114", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5838/" ] }
16,160
From a mathematical point of view it seems clear what the difference is between momentum $mv$ and kinetic energy $\frac{1}{2} m v^2$. Now my problem is the following: suppose you want to explain to someone, without mentioning the formulas, what momentum is and what kinetic energy is. How do you do that such that the difference between those two quantities becomes clear? From physics I think one can list essentially the following differences:

- momentum has a direction, kinetic energy does not
- momentum is conserved, kinetic energy is not (but energy is)
- momentum depends linearly on velocity, kinetic energy depends quadratically on velocity

I think it is relatively easy to explain the first two points using everyday language, without referring to formulas. However, is there a good way to illustrate the third point?
As a qualitative understanding, here's an example: If you shoot a bullet, the rifle recoils with the same momentum as the bullet, but the bullet has a lot more kinetic energy. Aren't you glad your shoulder is being hit by the rifle stock, and not by the bullet?
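To put rough numbers on that picture (the bullet and rifle masses and the muzzle velocity below are typical-sounding values chosen for illustration, not taken from the answer):

```python
m_bullet, v_bullet = 0.010, 800.0     # kg, m/s (assumed)
m_rifle = 4.0                         # kg (assumed)

# Momentum conservation: the rifle recoils with equal and opposite momentum.
p = m_bullet * v_bullet               # 8 kg*m/s for both
v_rifle = p / m_rifle                 # 2 m/s recoil speed

KE_bullet = 0.5 * m_bullet * v_bullet**2   # 3200 J
KE_rifle = 0.5 * m_rifle * v_rifle**2      # 8 J

print(f"momentum: bullet {p:.0f} kg*m/s, rifle {p:.0f} kg*m/s (same)")
print(f"kinetic energy: bullet {KE_bullet:.0f} J, rifle {KE_rifle:.0f} J (400x less)")
```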
{ "source": [ "https://physics.stackexchange.com/questions/16160", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2740/" ] }
16,390
Say I'm flying from Sydney, to Los Angeles (S2LA), back to Sydney (LA2S). During S2LA, travelling with the rotation of the earth, would the flight time be longer than LA2S on account of Los Angeles turning/moving away from our position? Or, in the opposite direction, would the flight to Sydney be faster since the Earth turns underneath us and moves Sydney closer? === Please ignore jet stream effects and all other variables; this is a control case in an ideal environment. By "dramatically" I suppose I mean a delay of 1 hour or more.
During the flight, you need to get up to use the restroom. There's one 10 rows in front of you, and another 10 rows behind you. Does it take longer to walk to the one that's moving away from you at 600 mph than to the one that's moving towards you at 600 mph? No, because you're moving at 600 mph right along with it -- in the ground-based frame of reference. In the frame of reference of the airplane, everything is stationary. Similarly, the airplane is already moving along with the surface of the Earth before it takes off. The rotation of the Earth has no direct significant effect on flight times in either direction. That's to a first-order approximation. As others have already said, since the Earth's surface is (very nearly) spherical and is rotating rather than moving linearly, Coriolis effects can be significant. But prevailing winds (which themselves are caused by Coriolis and other effects) are more significant than any direct Coriolis effect on the airplane.
{ "source": [ "https://physics.stackexchange.com/questions/16390", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2456/" ] }
16,814
Could someone experienced in the field tell me what the minimal math knowledge one must obtain in order to grasp the introductory Quantum Mechanics book/course? I do have math knowledge but I must say, currently, kind of a poor one. I did a basic introductory course in Calculus, Linear algebra and Probability Theory. Perhaps you could suggest some books I have to go through before I can start with QM?
It depends on the book you've chosen to read. But usually a basic knowledge of Calculus, Linear Algebra, Differential Equations and Probability Theory is enough. For example, if you start with Griffiths' Introduction to Quantum Mechanics , the author kindly provides you with a review of Linear Algebra in the Appendix as well as some basic tips on probability theory in the beginning of the first Chapter. In order to solve the Schrödinger equation (which is a (partial) differential equation) you, of course, need to know the basics of Differential Equations. Also, some special functions (like Legendre polynomials, Spherical Harmonics, etc.) will pop up in due course. But, again, in an introductory book such as Griffiths', these things are explained in detail, so there should be no problems for you if you're a careful reader. This book is one of the best to start with.
{ "source": [ "https://physics.stackexchange.com/questions/16814", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/6085/" ] }
17,076
I've always been taught that the spring constant $k$ is a constant — that is, for a given spring, $k$ will always be the same, regardless of what you do to the spring. My friend's physics professor gave a practice problem in which a spring of length $L$ was cut into four parts of length $L/4$. He claimed that the spring constant in each of the new springs cut from the old spring ($k_\text{new}$) was therefore equal to $k_\text{orig}/4$. Is this true? Every person I've asked seems to think that this is false, and that $k$ will be the same even if you cut the spring into parts. Is there a good explanation of whether $k$ will be the same after cutting the spring or not? It seems like if it's an inherent property of the spring it shouldn't change, so if it does, why?
Well, the sentence "It seems like if it's an inherent property of the spring it shouldn't change, so if it does, why?" clearly isn't a valid argument to calculate the $k$ of the smaller springs. They're different springs than their large parent, so they may have different values of an "inherent property": if a pizza is divided into 4 smaller pieces, the inherent property "mass" of the smaller pizzas is also different from the mass of the large one. ;-) You may have meant that it is an "intensive" property (like a density or temperature) which wouldn't change after the cutting of a big spring, but you have offered no evidence that it's "intensive" in this sense. No surprise, this statement is incorrect, as I'm going to show. One may calculate the right answer in many ways. For example, we may consider the energy of the spring. It is equal to $k_{\rm big}x_{\rm big}^2/2$ where $x_{\rm big}$ is the deviation (distance) from the equilibrium position. We may also imagine that the big spring is a collection of 4 equal smaller springs attached to each other. In this picture, each of the 4 springs has the deviation $x_{\rm small} = x_{\rm big}/4$ and the energy of each spring is

$$ E_{\rm small} = \frac{1}{2} k_{\rm small} x_{\rm small}^2 = \frac{1}{2} k_{\rm small} \frac{x_{\rm big}^2}{16} $$

Because we have 4 such small springs, the total energy is

$$ E_{\rm 4 \,small} = \frac{1}{2} k_{\rm small} \frac{x_{\rm big}^2}{4} $$

That must be equal to the potential energy of the single big spring because it's the same object

$$ = E_{\rm big} = \frac{1}{2} k_{\rm big} x_{\rm big}^2 $$

which implies, after you divide the same factors on both sides,

$$ k_{\rm big} = \frac{k_{\rm small}}{4} $$

So the spring constant of the smaller springs is actually 4 times larger than the spring constant of the big spring. You could get the same result via forces, too. The large spring has some forces $F=k_{\rm big}x_{\rm big}$ on both ends. When you divide it into four small springs, there are still the same forces $\pm F$ on each boundary of the smaller springs. They must be equal to $F=k_{\rm small} x_{\rm small}$ because the same formula holds for the smaller springs as well. Because $x_{\rm small} = x_{\rm big}/4$, you see that $k_{\rm small} = 4k_{\rm big}$. It's harder to change the length of the shorter spring because it's short to start with, so you need a 4 times larger force, which is why the spring constant of the small spring is 4 times higher.
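A quick numerical check of this bookkeeping, treating the quarter-length pieces as four springs in series (the specific numbers are arbitrary, chosen only to exercise the formulas):

```python
def series_k(ks):
    """Effective spring constant of springs connected in series: 1/k = sum(1/k_i)."""
    return 1.0 / sum(1.0 / k for k in ks)

k_big = 10.0                      # N/m, arbitrary
k_small = 4 * k_big               # each quarter-length piece is 4x stiffer

# Reassembling the four stiff pieces in series must give back the original spring.
print(series_k([k_small] * 4))    # 10.0 == k_big

# Energy check: for the same total stretch x, the stored energy must match.
x = 0.02                          # m, arbitrary
E_big = 0.5 * k_big * x**2
E_pieces = 4 * (0.5 * k_small * (x / 4)**2)
print(E_big, E_pieces)            # equal
```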
{ "source": [ "https://physics.stackexchange.com/questions/17076", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/302/" ] }