source_id | question | response | metadata
---|---|---|---|
110,669 | Is gravitational time dilation caused by gravity, or is it an effect of the inertial force caused by gravity? Is gravitational time dilation fundamentally different from time dilation due to acceleration, are they the same but examples of different configurations? Could you recreate the same kind of time dilation without gravity using centrifugal force? | No, gravitational time dilation is no different to other forms of time dilation. They all stem from the invariance of the line element . If we choose some coordinates, $x^i$ , then the line element is given by: $$ ds^2 = g_{ab}dx^adx^b \tag{1} $$ where the matrix $g_{ab}$ is called the metric tensor . In both GR and SR the line element is an invariant, that is all observers in all coordinate systems will calculate the same value for $ds$ . Suppose I'm using some set of coordinates $(t, x, y, z)$ to calculate your line element using equation (1). We'll stick to SR for now, where $g$ is just the Minkowski metric , so I get (I'm pulling the usual trick of setting $c = 1$ ): $$ ds^2 = -dt^2 + dx^2 + dy^2 + dz^2 $$ Now suppose you're doing the same calculation in your rest frame coordinates $(t', x', y', z')$ . By definition, in your rest frame $dx' = dy' = dz' = 0$ , so you would calculate: $$ ds^2 = -dt'^2 \tag{2} $$ Since we must both agree on the value of $ds^2$ we can equate the right hand sides of equations (1) and (2) to get: $$ -dt^2 + dx^2 + dy^2 + dz^2 = -dt'^2 $$ If any of $dx$ , $dy$ or $dz$ are non-zero, i.e. if you're moving in any way in my coordinate system this means that: $$ dt \ne dt' $$ and therefore our measurements of elapsed time will not match. This is why we get time dilation. In introductory works on SR you'll see time dilation calculated using various arrangements of light beams and mirrors, but this is the fundamental reason it occurs. I've used the example of SR above because the metric tensor is diagonal and all the elements are $-1$ or $1$ , so it's easy to write out the expression for $ds^2$ . In GR the metric may not be diagonal (it's often possible to choose coordinates where it is) and the values of the elements in the metric will typically be functions of position. However the working is exactly the same. We'd end up concluding that $dt \ne dt'$ in exactly the same way. Since you specifically asked about time dilation and centrifugal force, let's do the calculation explicitly. Suppose you're whirling about a pivot with velocity $v$ at a radius $r$ and I'm watching you from the pivot. I'm going to measure your position using polar coordinates $(t, r, \theta,\phi)$ , and in polar coordinates the line interval is given by (I'm leaving $c$ in the equation this time): $$ ds^2 = -c^2dt^2 + dr^2 + r^2(d\theta^2 + \sin^2\theta d\phi^2) $$ Note that this is just the flat space, i.e. Minkowski metric, in polar coordinates. We're using the flat space metric because there are no masses around to curve spacetime (we'll assume you and I have been on a diet :-). We can choose our axes so you are rotating in the plane $\theta = \pi/2$ , and you're moving at constant radius so both $dr$ and $d\theta$ are zero. 
The metric simplifies to: $$ ds^2 = -c^2dt^2 + r^2d\phi^2 $$ We can simplify this further because in my frame you're moving at velocity $v$ so $d\phi$ is given by: $$ d\phi= \frac{v}{r} dt $$ and therefore: $$ ds^2 = -c^2dt^2 + v^2dt^2 = (v^2 - c^2)dt^2 $$ In your frame you're at rest, so $ds^2 = -c^2dt'^2$ , and equating this to my value for $ds^2$ gives: $$ -c^2dt'^2 = (v^2 - c^2)dt^2 $$ or: $$ dt'^2 = (1 - \frac{v^2}{c^2})dt^2 $$ or: $$ dt' = dt \sqrt{1 - \tfrac{v^2}{c^2}} = \frac{dt}{\gamma} $$ which you should immediately recognise as the usual expression for time dilation in SR. Note that the centripetal force/acceleration does not appear in this expression. The time dilation is just due to our relative velocities and not to your acceleration towards the pivot. Finally, since I did say there was no difference between gravitational and other forms of time dilation I should justify this by proving that the special relativity calculation above works in the same way for combined gravitational and speed related time dilation. Specifically we'll calculate the time dilation for an object in orbit around a black hole. This turns out to be straightforward, showing how powerful this technique is. All we need to know is that the metric for a black hole is : $$ ds^2 = -\left(1-\frac{2GM}{c^2r}\right)c^2dt^2 + \frac{dr^2}{1-\frac{2GM}{c^2r}}+r^2d\theta^2 + r^2\sin^2\theta d\phi^2 $$ We proceed as before setting $dr = d\theta = 0$ and $\theta = \pi/2$ to get: $$ ds^2 = -\left(1-\frac{2GM}{c^2r}\right)c^2dt^2 + r^2 d\phi^2 $$ The orbital velocity is: $$ v = \sqrt{\frac{GM}{r}} $$ and as before we can rewrite $d\phi$ as: $$ d\phi = \frac{v}{r}dt = \frac{\sqrt{GM/r}}{r} dt $$ and substituting this in our metric gives: $$ ds^2 = -\left(1-\frac{2GM}{c^2r}\right)c^2dt^2 + \frac{GM}{r}dt^2 $$ As before, in the rest frame of the orbiting body we have $ds^2 = -c^2dt'^2$ , and equating this to the above value for $ds^2$ gives: $$ -c^2dt'^2 = -\left(1-\frac{2GM}{c^2r}\right)c^2dt^2 + \frac{GM}{r}dt^2 $$ which simplifies to: $$ dt' = \sqrt{1-\frac{3GM}{c^2r}}dt = \sqrt{1-\frac{3r_s}{2r}}dt $$ where $r_s$ is the Schwarzschild radius: $r_s = 2GM/c^2$ . And, reassuringly, this is exactly the result Wikipedia gives for the time dilation of an object in a circular orbit . This is the point I want you to take away. Once you understand the basic principle that the line element is an invariant you can use this to calculate the time dilation for any object, whether in a gravitational field or not, and whether moving or not. In fact, as I've just demonstrated, understanding this basic principle opens the door to understanding general relativity as well as special relativity. That's how important it is! | {
"source": [
"https://physics.stackexchange.com/questions/110669",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/35968/"
]
} |
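A quick numerical check of the combined orbital time-dilation formula derived in the answer above. This is a minimal Python sketch; the constants, the Earth mass, and the GPS-like orbital radius are illustrative choices, not part of the original answer. It also compares the exact Schwarzschild result with the naive product of a "gravitational" factor and an SR velocity factor, which agree to leading order.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M = 5.972e24       # Earth's mass, kg (illustrative central body)

def orbital_dilation(r):
    """dt'/dt for a circular orbit of Schwarzschild radial coordinate r."""
    return math.sqrt(1 - 3 * G * M / (c**2 * r))

def split_factors(r):
    """Approximate factorisation into a static 'gravitational' term and an SR velocity term."""
    v = math.sqrt(G * M / r)                      # circular-orbit speed used in the answer
    grav = math.sqrt(1 - 2 * G * M / (c**2 * r))  # static observer at radius r
    kin = math.sqrt(1 - v**2 / c**2)              # SR time dilation for that speed
    return grav * kin

r_gps = 26_571_000.0  # roughly a GPS orbital radius in metres (illustrative)
print(orbital_dilation(r_gps))   # ~0.99999999975
print(split_factors(r_gps))      # agrees to ~1e-20: the factorisation is an excellent approximation here
```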
110,703 | Total noob here. I realize that photons do not have a mass. However, they must somehow occupy space, as I've read that light waves can collide with one another. Do photons occupy space? and if so, does that mean there is a theoretically maximum brightness in which no additional amount of photons could be present in the same volume? | However, they must somehow occupy space, as I've read that light waves can collide with one another. That's not true. Yes, light waves can "collide" and interact with each other (rarely), but that itself doesn't imply that they need to occupy space. It's not even entirely clear what it means for a subatomic particle to occupy space. A particle like a photon is a disturbance in a quantum field, and is "spread out" across space in a sense; it doesn't have a definite size in the same sense that a macroscopic material object does. But you'll probably agree that, if it's possible to make any sensible definition of "occupying space" for a subatomic particle, it should involve preventing other things from also occupying that same space. Photons don't do that. They're bosons , and as a consequence of that they are not subject to the Pauli exclusion principle , so if you have a photon occupying some space (whatever that may mean), you can in theory pack an unlimited number of additional photons into the same space. | {
"source": [
"https://physics.stackexchange.com/questions/110703",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9502/"
]
} |
110,715 | The standard explanation for the cosmological redshift is that photons emitted from far away galaxies have their wavelengths lengthened as they travel through the expanding Universe. But perhaps the photons do not lose energy as they travel but rather the atoms in our detectors are more energetic in comparision with the atoms that emitted those photons a long time in the past leading to an apparent redshift effect? Addition (having had a comment exchange with @rob, see below) : My hypothesis is that the Planck mass $M_{pl} \propto a(t)$ where $a(t)$ is the Universal scale factor. Addition 2 Of course if the Planck mass $M_{pl}$ is changing then $G=1/M^2_{pl}$ is changing so that we no longer have standard GR! I've asked this question before, see Cosmological redshift interpretation , but this time I'm including a little bit of theory to back up my hypothesis. For simplicity let us assume a flat radial FRW metric: $$ds^2=-dt^2 + a^2(t)\ dr^2$$ Consider the null geodesic path of a light beam with $ds=0$ so that we have: $$dt = a(t)\ dr\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$$ Now at the present time $t_0$ we define the scale factor $a(t_0)=1$ so that we have: $$dt_0 = dr\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2)$$ Substituting equation (2) into equation (1) we have: $$dt = a(t)\ dt_0$$ In order for the interval of time $dt$ to stay constant as the scale factor $a(t)$ increases we must have the corresponding interval of present time $dt_0$ varying inversely with the scale factor: $$dt_0 \propto \frac{1}{a(t)}$$ Thus as cosmological time $t$ increases, and the Universe expands, equal intervals of cosmological time $dt$ correspond to smaller and smaller intervals of present time $dt_0$. Now the energy of a system is proportional to the frequency of its oscillation which in turn is inversely proportional to its oscillation period: $$E(t) \propto \frac{1}{dt}$$ The corresponding energy of the system in terms of the present epoch $t_0$ is given by $$E(t_0) \propto \frac{1}{dt_0}$$ $$E(t_0) \propto a(t)$$ Thus an atom at time $t$ is a factor $a(t)$ times more energetic than the same atom at time $t_0$. As the energy scale is ultimately set by the Planck mass then the Planck mass must be increasing as the Universe expands: $M_{pl} \propto a(t)$. This effect alone would account for the gravitational redshift of distant galaxies without the assumption that photons travelling from those galaxies lose energy due to wavelength expansion. Addition: I believe this hypothesis leads to a linear cosmological expansion $a(t)\propto t$ (see comments below). | | {
"source": [
"https://physics.stackexchange.com/questions/110715",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/22307/"
]
} |
110,739 | I've heard that, in classical and quantum mechanics, the law of conservation of information holds. I always wonder where my deleted files and folders have gone on my computer. It must be somewhere I think. Can anyone in principle recover it even if I have overwritten my hard drive? | Short Answer The information is contained in the heat given off by erasing the information. Landauer's Principle states that erasing information in a computation, being a thermodynamically irreversible process, must give off heat proportional to the amount of information erased in order to satisfy the second law of thermodynamics. The emitted information is hopelessly scrambled, though, and recovering the original information is impossible in practice. Scrambling of information is what increasing entropy really means in plain English. Charles H. Bennett and Rolf Landauer developed the theory of thermodynamics of computation. The main results are presented in The thermodynamics of computation—a review . Background Erasure of information and the associated irreversibility are macroscopic/thermodynamic phenomena. At the microscopic level everything is reversible and all information is always preserved, at least according to the currently accepted physical theories, though this has been questioned by notable people such as Penrose and I think also by Prigogine. Reversibility of basic physical laws follows from Liouville's theorem for classical mechanics and unitarity of the time evolution operator for quantum mechanics. Reversibility implies the conservation of information since time reversal can then reconstruct any seemingly lost information in a reversible system.
The apparent conflict between macroscopic irreversibility and microscopic reversibility is known as Loschmidt's paradox , though it is not actually a paradox. In my understanding, sensitivity to initial conditions, the butterfly effect, reconciles macroscopic irreversibility with microscopic reversibility. Suppose time reverses while you are scrambling an egg. The egg should then just unscramble like in a film running backwards. However, the slightest perturbation, say by hitting a single molecule with a photon, will start a chain reaction as that molecule will collide with different molecules than it otherwise would have. Those will in turn have different interactions than they otherwise would have, and so on. The trajectory of the perturbed system will diverge exponentially from the original time-reversed trajectory. At the macroscopic level the unscrambling will initially continue, but a region of rescrambling will start to grow from where the photon struck and swallow the whole system, leaving a completely scrambled egg. This shows that time-reversed states of non-equilibrium systems are statistically very special: their trajectories are extremely unstable and impossible to prepare in practice. The slightest perturbation of a time-reversed non-equilibrium system causes the second law of thermodynamics to kick back in. The above thought experiment also illustrates the Boltzmann brain paradox in that it makes it seem that a partially scrambled egg is more likely to arise from the spontaneous unscrambling of a completely scrambled egg than by breaking an intact one, since if trajectories leading to an intact egg in the future are extremely unstable, then by reversibility, so must be trajectories originating from one in the past. Therefore the vast majority of possible past histories leading to a partially scrambled state must do so via spontaneous unscrambling. This problem is not yet satisfactorily resolved, particularly its cosmological implications, as can be seen by searching arXiv and Google Scholar. Nothing in this depends on any non-classical effects. | {
"source": [
"https://physics.stackexchange.com/questions/110739",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/24452/"
]
} |
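To put a number on the "heat proportional to the amount of information erased" statement in the answer above, here is a short sketch of the Landauer limit $k_B T \ln 2$ per bit. The temperature and the one-terabyte figure are illustrative assumptions; real drives dissipate many orders of magnitude more than this lower bound.

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # room temperature, K (illustrative)

e_bit = k_B * T * math.log(2)   # minimum heat released per erased bit (Landauer limit)
bits = 8 * 1e12                 # one terabyte expressed in bits

print(e_bit)                    # ~2.9e-21 J per bit
print(e_bit * bits)             # ~2.3e-8 J to erase a full terabyte at the theoretical minimum
```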
110,763 | I want to learn some QFT in curved spacetime . What papers/books/reviews can you suggest to learn this area? Are there any good books or other reference material which can help in learning about QFT in curved spacetime?
There is no restriction on the material, whether physical or mathematical. | Quantum field theory (QFT) in curved spacetime is nowadays a mature set of theories, quite technically advanced from the mathematical point of view. There are several books and reviews one may profitably read depending on one's own interests. I deal with this research area from a quite mathematical viewpoint, so my suggestions could reflect my attitude (or be biased in favor of it). First of all, Birrell and Davies' book is the first attempt to present a complete account of the subject. However, the approach is quite old, both in its ideas and in the mathematical technology presented, so you could have a look at some chapters without sticking to it. Parker and Toms' recent textbook is on the same level as the classic by Birrell and Davies in scope, but more up to date. Another interesting book is Fulling's ("Aspects of QFT in curved spacetime"). That book is more advanced and rigorous than BD's textbook from the theoretical viewpoint, but it deals with a considerably smaller variety of topics. The Physics Report by Kay and Wald on QFT in the presence of bifurcate Killing horizons is a further relevant step towards the modern (especially mathematical) formulation, as it profitably takes advantage of the algebraic formulation and presents the first rigorous definition of a Hadamard quasifree state. An account of the interplay of Euclidean and Lorentzian QFT in curved spacetime, exploiting zeta-function and heat kernel technologies, with many applications, can be found in a book I wrote with other authors ("Analytic Aspects of Quantum Fields", 2003). A more advanced approach to Lorentzian QFT in curved spacetime can be found in Wald's book on black hole thermodynamics and QFT in curved spacetime.
Therein, the microlocal analysis technology is (briefly) mentioned for the first time. As the last reference I would like to suggest the PhD thesis of
T. Hack http://arxiv.org/abs/arXiv:1008.1776 (I was one of the advisors together with K. Fredenhagen and R. Wald). Here, cosmological applications are discussed. ADDENDUM . I forgot to mention the very nice lecture notes by my colleague Chris Fewster! http://www.science.unitn.it/~moretti/Fewsternotes.pdf ADDENDUM2 . There is now a quick introductory technical paper, by myself and I.Khavkine, on the algebraic formulation of QFT on curved spacetime: http://arxiv.org/abs/1412.5945 which in fact will be a chapter of a book by Springer. | {
"source": [
"https://physics.stackexchange.com/questions/110763",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/34669/"
]
} |
110,870 | In chapter 80, The Primary , Neal Stephenson has one of his character describe how the drives of a computer carried through a door (by the police, who where raiding the facility) will have been erased. Basically, there's an enormous electromagnet around the door frame: Cantrell is now drawing an elaborate diagram, and has even slowed down, almost to a stop, the better to draw it. It begins with a tall rectangle. Set within that is a parallelogram, the same size, but skewed a little bit downwards, and with a little circle drawn in the middle of one edge. Randy realizes he’s looking at a perspective view of a door-frame with its door hanging slightly ajar, the little circle being its knob. STEEL FRAME , Cantrell writes, hollow metal channels . Quick meandering scribbles suggest the matrix of wall surrounding it, and the floor underneath. Where the uprights of the doorframe are planted in the floor, Cantrell draws small, carefully foreshortened circles. Holes in the floor . Then he encircles the doorframe in a continuous hoop, beginning at one of those circles and climbing up one side of the doorframe, across the top, down the other side, through the other hole in the floor, and then horizontally beneath the door, then up through the first hole again, completing the loop. He draws one or two careful iterations of this and then numerous sloppy ones until the whole thing is surrounded in a vague, elongated tornado. Many turns of fine wire . Finally he draws two leads away from this huge door-sized coil and connects them to a sandwich of alternating long and short horizontal lines, which Randy recognizes as the symbol for a battery. The diagram is completed with a huge arrow drawn vigorously through the center of the doorway, like an airborne battering ram, labeled B which means a magnetic field. Ordo computer room door . "Wow," Randy says. Cantrell has drawn a classic elementary-school electromagnet, the kind of thing young Randy made by winding a wire around a nail and hooking it up to a lantern battery. Except that this one is wound around the outside of a doorframe and, Randy guesses, hidden inside the walls and beneath the floor so that no one would know it was there unless they tore the building apart. Magnetic fields are the styli of the modern world, they are what writes bits onto disks, or wipes them away. The read/write heads of Tombstone’s hard drive are exactly the same thing, but a lot smaller. If they are fine-pointed draftsman’s pens, then what Cantrell’s drawn here is a firehose spraying India ink. It probably would have no effect on a disk drive that was a few meters away from it, but anything that was actually carried through that doorway would be wiped clean. Between the pulse-gun fired into the building from outside (destroying every chip within range) and this doorframe hack (losing every bit on every disk) the Ordo raid must have been purely a scrap-hauling run for whoever organized it – Andrew Loeb or (according to the Secret Admirers) Attorney General Comstock’s sinister Fed forces who were using Andy as a cat’s paw. The only thing that would have made it through that doorway intact would have been information stored on CD-ROM or other nonmagnetic media, and Tombstone had none of that. Would this actually work? | There's really two parts to your question, which I'm going to answer separately. First of all, the explicit question: Could a huge electromagnetic coil erase a hard drive carried through it? Sure it could. 
That's pretty much what a degausser does, and these things are routinely used to erase magnetic media, including hard drives. OK, with that out of the way, let's move on to the implied question: Could you actually do that, without the person carrying the drive noticing? Based on some quick research, my conclusion is: no way in hell. To quickly and reliably degauss a hard drive, you apparently need a magnetic field strength on the order of 15,000 Gauss (= 1.5 Tesla). Even if we assume that the Ordo hard drives were old and maybe specifically selected for low coercivity, we're still talking several thousand Gauss at least . For comparison , the field strength inside a typical MRI scanner is also around 1.5 Tesla, while the field at the surface of a modern neodymium–iron–boron (Nd 2 Fe 14 B) rare earth magnet — basically, the strongest permanent magnet you can get — is around 1.25 Tesla. Thus, someone walking through Stephenson's "degausser door" with a bunch of hard drives would experience something similar as if they tried carrying them through an MRI coil — or holding them while standing right next to a humongous door-sized Nd 2 Fe 14 B magnet slab. Now, if you've ever played with neodymium magnets, you'll know that even tiny ones are damn hard to pry off any ferromagnetic objects they touch. To quote the Wikipedia page I linked to above: "Neodymium magnets larger than a few cubic centimeters are strong enough to cause injuries to body parts pinched between two magnets, or a magnet and a metal surface, even causing broken bones." As for MRI scanners, there's a reason why the first and last thing they check, when you go and have an MRI scan, is that you have nothing potentially ferromagnetic on or in your body. The reason is that anything ferromagnetic that gets too close to an active MRI magnet is likely the get torn off your hands and violently slammed against the magnet. This has been known to happen to pretty much any wholly or partially ferromagnetic object you'd care to imagine, from wheelchairs , office chairs and floor polishers to scissors , oxygen bottles (which killed a small child) and even pistols (which, yes, went off when it hit the scanner). So, let's imagine what'll happen to your hapless policeman, as he's walking towards the magnetized door carrying a stack of hard drives. The first thing he's likely to notice, while still several meters away, is that something's pulling at the drives he's carrying (since they have a lot of ferromagnetic metal in the casing, and even some pretty strong magnets inside). If he's not careful, the drives might slip out of his hands and fly through the air towards the door, slamming against the door jamb with enormous force (and, yes, likely getting pretty well wiped in the process). The next thing he might notice, if that's not enough to make him stay well away from the door, is that the same force is also tugging at his badge, gun, zipper, belt buckle, the screwdriver in his pocket that he used to open the servers and extract the hard drives, and anything else metallic that he might have on him. If he's not careful, and keeps approaching the door, those items might either get pulled out of his pockets, or they might simply get drawn to the door and pull him along with them. If he's lucky, the only thing getting pinched between the door and the objects is his clothing. If he's not... 
Of course, that's all assuming that, when the magnet turned on, it didn't immediately turn any nearby chairs, tables, computer equipment and miscellaneous office supplies into flying missiles , with potentially lethal consequences to anyone standing between such an object and the door. Or that the intense magnetic field didn't simply mess up the unlucky officer's pacemaker , as it would surely do to any cell phones or other electronic equipment they might be carrying. | {
"source": [
"https://physics.stackexchange.com/questions/110870",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45702/"
]
} |
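A rough order-of-magnitude sketch of what the door-frame coil in the answer above would actually require. It uses the textbook expression for the field at the centre of a rectangular current loop and ignores the field fall-off away from the centre, the steel in the frame, and every other practical detail; the door dimensions are assumed, and the 1.5 T target is the figure quoted in the answer.

```python
import math

mu0 = 4 * math.pi * 1e-7   # vacuum permeability, T m / A

def b_center_rectangle(a, b, ampere_turns):
    """Magnetic field at the centre of an a x b rectangular loop carrying the given ampere-turns."""
    return 2 * mu0 * ampere_turns * math.hypot(a, b) / (math.pi * a * b)

a, b = 1.0, 2.0            # rough door-frame dimensions in metres (assumed)
target = 1.5               # tesla, the degaussing field strength quoted in the answer

ni = target / b_center_rectangle(a, b, 1.0)   # ampere-turns needed at the doorway centre
print(f"{ni:.2e} ampere-turns")               # ~1.7e6, e.g. ten thousand turns carrying ~170 A
```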
111,006 | When I close one eye and put the tip of my finger near my open eye, it seems as if the light from the background image bends around my finger slightly, warping the image near the edges of my blurry fingertip. What causes this? Is it the heat from my finger that bends the light? Or the minuscule gravity that the mass in my finger exerts? (I don't think so.) Is this some kind of diffraction? To reproduce: put your finger about 5 cm from your open eye, look through the fuzzy edge of your finger and focus on something farther away. Move your finger gradually through your view and you'll see the background image shift as your finger moves. For all the people asking, I made another photo. This time the backdrop is a grid I have on my screen (due to a lack of grid paper). You see the grid deform ever so slightly near the top of my finger. Here's the setup: Note that these distances are arbitrary. It worked just as well with my finger closer to the camera, but this happens to be the situation that I measured. Here are some photos of the side of a 2 mm thick flat opaque plastic object, at different aperture sizes. Especially notice how the grid fails to line up in the bottom two photos. | OK, it seems that user21820 is right ; this effect is caused by both the foreground and the background objects being out of focus , and occurs in areas where the foreground object (your finger) partially occludes the background, so that only some of the light rays reaching your eye from the background are blocked by the foreground obstacle. To see why this happens, take a look at this diagram: The black dot is a distant object, and the dashed lines depict light rays emerging from it and hitting the lens, which refocuses them to form an image on a receptor surface (the retina in your eye, or the sensor in your camera). However, since the lens is slightly out of focus, the light rays don't converge exactly on the receptor plane, and so the image appears blurred. What's important to realize is that each part of the blurred image is formed by a separate light ray passing through a different part of the lens (and of the intervening space). If we insert an obstacle between the object and the lens that blocks only some of those rays, those parts of the image disappear! This has two effects: first, the image of the background object appears sharper, because the obstacle effectively reduces the aperture of the lens. However, it also shifts the center of the aperture, and thus of the resulting image, to one side. The direction in which the blurred image shifts depends on whether the lens is focused a little bit too close or a little bit too far. If the focus is too close, as in the diagrams above, the image will appear shifted away from the obstacle. (Remember that the lens inverts the image, so the image of the obstacle itself would appear above the image of the dot in the diagram!) Conversely, if the focus is too far, the background object will appear to shift closer to the obstacle. Once you know the cause, it's not hard to recreate this effect in any 3D rendering program that supports realistic focal blur. I used POV-Ray , because I happen to be familiar with it: Above, you can see two renderings of a classic computer graphics scene: a yellow sphere in front of a grid plane. The first image is rendered with a narrow aperture, showing both the grid and the sphere in sharp detail, while the second one is rendered with a wide aperture, but with the grid still perfectly in focus. 
In neither case does the effect occur, since the background is in focus. Things change, however, once the focus is moved slightly. In the first image below, the camera is focused slightly in front of the background plane, while in the second image, it is focused slightly behind the plane: You can clearly see that, with the focus between the grid and the sphere, the grid lines close to the sphere appear shifted away from it, while with the focus behind the grid plane, the grid lines shift towards the sphere. Moving the camera focus further away from the background plane makes the effect even stronger: You can also clearly see the lines getting sharper near the sphere, as well as bending, because part of the blurred image is blocked by the sphere. I can even re-create the broken line effect in your photos by replacing the sphere with a narrow cylinder: To recap: This effect is caused by the background being (slightly) out of focus, and by the foreground object effectively occluding part of the camera/eye aperture, causing the effective aperture (and thus the resulting image) to be shifted. It is not caused by: Diffraction: As shown by the computer renderings above (which are created using ray tracing , and therefore do not model any diffraction effects), this effect is fully explained by classical ray optics. In any case, diffraction cannot explain the background images shifting towards the obstacle when the focus is behind the background plane. Reflection: Again, no reflection of the background from the obstacle surface is required to explain this effect. In fact, in the computer renderings above, the yellow sphere/cylinder does not reflect the background grid at all. (The surfaces have no specular reflection component, and no indirect diffuse illumination effects are included in the lighting model.) Optical illusion: The fact that this is not a perceptual illusion should be obvious from the fact that the effect can be photographed, and the distortion measured from the photos, but the fact that it can also be reproduced by computer rendering further confirms this. Addendum: Just to check, I went and replicated the renderings above using my old DSLR camera (and an LCD monitor, a yellow plastic spice jar cap, and some thread to hang it from): The first photo above has the camera focus behind the screen; the second one has it in front of the screen. The first photo below shows what the scene looks like with the screen in focus (or as close as I could get it with manual focus adjustment). Finally, the crappy cellphone camera picture below (second) shows the setup used to take the other three photos. Addendum 2: Before the comments below were cleaned out, there was some discussion there about the usefulness of this phenomenon as a quick self-diagnostic test for myopia (nearsightedness). While I Am Not An Opthalmologist , it does appear that, if you experience this effect with your naked eye, while trying to keep the background in focus, then you may have some degree of myopia or some other visual defect, and may want to get an eye exam . (Of course, even if you don't, getting one every few years or so isn't a bad idea, anyway. Mild myopia, up to the point where it becomes severe enough to substantially interfere with your daily life, can be surprisingly hard to self-diagnose otherwise, since it typically appears slowly and, with nothing to compare your vision to, you just get used to distant objects looking a bit blurry. 
After all, to some extent that's true for everyone; only the distance varies.) In fact, with my mild (about −1 dpt) myopia, I can personally confirm that, without my glasses, I can easily see both the bending effect and the sharpening of background features when I move my finger in front of my eye. I can even see a hint of astigmatism (which I know I have; my glasses have some cylindrical correction to fix it) in the fact that, in some orientations, I can see the background features bending not just away from my finger, but also slightly sideways. With my glasses on, these effects almost but not quite disappear, suggesting that my current prescription may be just a little bit off. | {
"source": [
"https://physics.stackexchange.com/questions/111006",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/23172/"
]
} |
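A minimal geometric-optics sketch of the effect described in the answer above: how large the background blur disc is for a slightly misfocused lens, and how far its centroid moves if an obstacle blocks half of the aperture. All of the numbers (focal length, aperture, distances) are illustrative assumptions, and taking the centroid of the remaining half-disc is only a crude proxy for the apparent image shift.

```python
import math

def image_dist(f, s):
    """Thin-lens image distance for an object at distance s."""
    return f * s / (s - f)

f = 0.05          # 50 mm lens (assumed)
D = 0.025         # aperture diameter, i.e. f/2 (assumed)
s_bg = 2.0        # distance to the background grid, m (assumed)
s_focus = 1.5     # plane the lens is actually focused on, m (slightly in front of the grid)

v_bg = image_dist(f, s_bg)          # where the background would come to focus
v_sensor = image_dist(f, s_focus)   # where the sensor actually sits

blur = D * abs(v_sensor - v_bg) / v_bg    # blur-circle diameter of the background on the sensor
shift = 4 * (blur / 2) / (3 * math.pi)    # centroid offset of a half-disc: the blocked-aperture case
print(blur, shift)                        # ~0.22 mm blur, ~0.05 mm centroid shift
```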
111,078 | There is a problem with my logic and I cannot seem to point out where.
There's a rocket ship travelling at close-to-c speed v without any acceleration (hypothetically), and there is an observer AA with a clock A on Earth, and there's another observer on the rocket BB with a clock B and these two clocks were initially in sync when the rocket was at rest with a FoR (frame of reference) attached to Earth.
Now, this rocket's moving and AA tells us that B is running slower than A by a factor of $$ \gamma = 1/(1-v^2/c^2)^{1/2} $$ $$ t_A/t_B = \gamma $$ where $v$ is the relative 1-d velocity between the two, i.e. the Earth and the rocket!
That would mean the time elapsed on A is greater than that on B but this will happen only in the FoR of AA?
So $t_B$ in this equation must be the time on B as observed by AA?
Is this correct? What do the terms mean in the equations?
If the symmetry holds and BB doesn't accelerate, then BB could say that $$ t_B/t_A = \gamma $$ right? where $t_B$ and $t_A$ are the times on B and A with respect to the FoR of BB? But I was solving this problem and I took the Earth FoR of A, but the prof took the rocket FoR of B? Like how will I know which FoR to solve the problem from? It'd greatly help if the terms in all the above equations were laid down neatly! Do we even need these FoRs?? Because in all the solved problems the prof isn't specifying any and is using random ones! Please help!!!! This is the question where I messed up.
The first rocket bound for Alpha Centauri leaves Earth at a velocity (3/5)c. To commemorate the ten year anniversary of the launch, the nations of Earth hold a grand celebration in which they shoot a powerful laser, shaped like a peace sign, toward the ship. According to Earth clocks, how long after the launch (of the rocket) does the rocket crew first see the celebratory laser light? This must be 25 years.
My reasoning is: If v = 3c/5
10v + vt = ct
where t is time taken by the light to reach the rocket from Earth as calculated from Earth.. and I solved that for t. and added 10 years to that because the time starts at the launch of the rocket! According to clocks on the rocket, how long after the launch does the rocket crew first see the celebratory laser light? This is 20 years. Here, I say:
If it takes 25 years as observed by clocks on earth for the laser to reach the rocket, what should be the corresponding time as seen on a clock on the rocket? Using the formula: 25 = $\gamma$ t where $\gamma$ = 5/4 solved for t! According to the rocket crew, how many years had elapsed on the rocket's clocks when the nations of Earth held the celebration? That is, based on the rocket crews' post-processing to determine when the events responsible for their observations took place, how many years have passed on the rocket's clocks when the nations of Earth hold the celebration? For this, I did the following:
10 years on earth = T years on rocket ship where T must be lesser than 10 as observed from Earth FoR!
Therefore, T= 4(10)/5 years = 8 years!
But, prof says,
10 years on Earth = T years in rocket ship where T must be GREATER than 10 as observed from the rocket FoR???
Therefore, T = 10(5/4) years = 12.5 years!! What does this question actually want? | According to clock A, clock B runs slow. According to clock B, clock A runs slow. This isn't a contradiction since events that are simultaneous in AA are not simultaneous in BB. This will all be clear if you draw a spacetime diagram . Update: to be clear, given the upvotes and comments, the spacetime diagram above is not mine but is instead found at the "spacetime diagram" link just above image and is most likely also contained in the author's book " An Illustrated Guide to Relativity ". I came upon this image for the first time this afternoon and it is one of the best I've run across to aid in visualizing the symmetric time dilation due to the relativity of simultaneity. | {
"source": [
"https://physics.stackexchange.com/questions/111078",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/43958/"
]
} |
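The three numbers discussed in this exchange (25 years, 20 years, and 12.5 years) can be checked with a few lines of arithmetic. This sketch works in units where c = 1, with distances in light-years and times in years; it only reproduces the reasoning already given above.

```python
v = 0.6                               # rocket speed as a fraction of c
gamma = 1 / (1 - v**2) ** 0.5         # = 1.25 for v = 0.6c

# Earth frame: the laser is fired at t = 10 y from x = 0, when the rocket is at x = 10*v.
# It catches the rocket when 10*v + v*t_chase = t_chase.
t_chase = 10 * v / (1 - v)            # years after firing
t_arrive_earth = 10 + t_chase         # Earth-clock time of arrival, measured from launch

# Rocket proper time at the arrival event (the rocket has moved inertially since launch):
tau_arrive = t_arrive_earth / gamma

# Rocket-frame coordinate time at which the Earth clocks read 10 y (the celebration),
# i.e. the "Earth clocks run slow by gamma in the rocket frame" statement:
tau_celebration = 10 * gamma

print(t_arrive_earth, tau_arrive, tau_celebration)   # 25.0 20.0 12.5
```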
111,279 | The moon orbits Earth at about $380,\!000 \,\mathrm{km}$ away from it, at around $3,600 \,\mathrm{km}$ an hour. I was thinking, with light traveling at $300,\!000 \,\mathrm{km/s}$, how close to earth (probably in the $\mathrm{nm}$ range is my guess) would light have to be to Earth to orbit it? Update: After reading the below answers, here's my reasoning to why I thought it would be in the nanometre range. I thought that if light was as close as possible to Earth (like, a planck length away or something), Earth's gravity would make it hit Earth immediately, but I forgot that the pull of gravity gets weaker when one is further away from Earth (i.e. at the surface of Earth is still somewhat "far away"). | You can take the Newtonian expression for the orbital speed as a function of orbital radius and see what radius corresponds to an orbital speed of $c$, but this is not physically relevant because you need to take general relativity into account. This does give you an orbital radius for light, though it is an unstable orbit. If the mass of your planet is $M$ then the radius of the orbit is: $$ r = \frac{3GM}{c^2} $$ where $G$ is Newton's constant . The mass of the Earth is about $5.97 \times 10^{24}$ kg, so the radius at which light will orbit works out to be about $13$ mm. Obviously this is far less than the radius of the Earth, so there is no orbit for light round the Earth. To get light to orbit an object with the mass of the Earth you would have to compress it to a radius of less than $13$ mm. You might think compressing the mass of the Earth this much would form a black hole, and you'd be thinking on the right lines. If $r_M$ is the radius of a black hole with a mass $M$ then the radius of the light orbit is $1.5 r_M$. So you can only get light to orbit if you have an object that is either a black hole or very close to one, but actually it's even harder than that. The orbit at $1.5r_M$ is unstable, that is the slightest deviation from an exactly circular orbit will cause the light to either fly off into space or spiral down into the object/black hole. If you're interested in finding out more about this, the light orbit round a black hole is called the photon sphere , and Googling or this will find you lots of articles on the subject. | {
"source": [
"https://physics.stackexchange.com/questions/111279",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11890/"
]
} |
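The 13 mm figure quoted in the answer above is easy to reproduce. A minimal check, using standard values for the constants and the Earth's mass:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # Earth's mass, kg

r_photon = 3 * G * M_earth / c**2   # radius of the (unstable) light orbit for an Earth-mass object
r_s = 2 * G * M_earth / c**2        # Schwarzschild radius for the same mass

print(r_photon * 1000, "mm")        # ~13.3 mm, as stated in the answer
print(r_photon / r_s)               # exactly 1.5: the photon orbit sits at 1.5 Schwarzschild radii
```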
111,296 | I am not sure what causes gas molecules to be invisible. This question may look silly but I really want to know the story behind it. | (photograph credit: Efram Goldberg) [Note: left-most ampule is cooled to -196°C and covered by a white layer of frost.] $NO_2$ is a good example of a colorful gas. $N_2O_4$ (colorless) exists in equilibrium with $NO_2$. At lower temperature (left in Wikipedia photo), $N_2O_4$ is favored, while at higher temperature $NO_2$ is favored. For a gas to have color, there needs to be an electronic transition corresponding to the energy of visible light. $F_2$ (pale yellow), $Cl_2$ (pale green), $Br_2$ (reddish), and $I_2$ (purple) are other examples of gases with color. A complete analysis of how visible or invisible a gas is would consider the density of the gas, the length of the light path, the Rayleigh scattering function of the gas, and the absorbance coefficients of any electronic transitions available to the gas molecules or atoms in the visible range. | {
"source": [
"https://physics.stackexchange.com/questions/111296",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41158/"
]
} |
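To make the phrase "an electronic transition corresponding to the energy of visible light" concrete, here is a small sketch converting the visible wavelength range into photon energies. The wavelength endpoints are the usual rough values, not figures from the answer itself.

```python
h = 6.626e-34     # Planck constant, J s
c = 2.998e8       # speed of light, m/s
eV = 1.602e-19    # joules per electronvolt

for wavelength_nm in (400, 550, 700):           # approximate visible range
    E = h * c / (wavelength_nm * 1e-9) / eV
    print(wavelength_nm, "nm ->", round(E, 2), "eV")
# Roughly 1.8-3.1 eV: a gas appears coloured only if it has an accessible
# electronic transition somewhere in this energy window.
```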
111,358 | According to the the definition of anti-particles, they are particles with same mass but opposite charge. Neutrinos by definition have no charge. So, how can it have an anti-particle? | There are other neutral particles with antiparticles, such as the neutron and the $K^0$ meson. In those cases we have a microscopic theory that says those particles are made of quarks: for instance, the $K^0$ is made of a down quark and an anti-strange quark, while its antiparticle the $\bar K^0$ is made of a strange quark and an anti-down. The neutrino is different from these because we have no evidence that it has any composite structure. While the neutrino doesn't have any electric charge, it does have a quantum number that appears to be conserved in the same way as electric charge: lepton number . We find in experiments that neutrinos are never created alone. A neutrino is always produced in conjunction with a positive lepton ($e$, $\mu$, or $\tau$), and an antineutrino is always produced in conjunction with a negative lepton. There is another key property of neutrinos that's important when thinking about their antiparticles, which is their spin. Weak decays break mirror symmetry (or "parity symmetry"). If you have a beta-decay source that doesn't have any spin to it at all, and you measure the spins of the decay electrons that come out, you'll find that they are strongly polarized: beta-decay electrons prefer to be "left-handed", or traveling so that their south poles point forwards and their north poles point backwards. Beta-decay antielectrons, by contrast, prefer to be right-handed. The neutrinos follow the same rule: neutrinos have left-handed spins, and antineutrinos have right-handed spins. If a neutrino had exactly zero mass, this polarization would be complete. However, we now have convincing evidence that at least two flavors of neutrino have finite mass. This means that it's possible, in theory, for an relativistic observer to "outrun" a left-handed neutrino, in which reference frame its north pole would be pointing along its momentum — that observer would consider it a right-handed neutrino. Would a right-handed neutrino act like an antineutrino? That would imply that the neutrino is actually its own antiparticle (an idea credited to Majorana ). Would the right-handed neutrino simply refuse to participate in the weak interaction? That would make them good candidates for dark matter (though I think there is other evidence against this). It's an open experimental question whether there is really a difference between neutrinos and antineutrinos, apart from their spin, and there are several active searches, e.g. for forbidden double-beta decays. | {
"source": [
"https://physics.stackexchange.com/questions/111358",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/40382/"
]
} |
111,374 | I was thinking about the Google XPrize for Space Travel the other day. In order to claim the prize of building a robot that goes to the moon, travels 500m, and relays data, I had the idea of building a tiny vessel the size of a marble and shooting it at the moon with a railgun. The theory is that since the mass of the marble-sized craft is only a fraction of the size of a regular shuttle, it should take exponentially less energy to get it to the moon. So, if you could aim it right, and shoot it with enough power to escape Earth's gravity (Let's say it weight 1oz) couldn't you shoot your own vessel to the moon with essentially a potato launcher? Why does this not work? | | {
"source": [
"https://physics.stackexchange.com/questions/111374",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/37871/"
]
} |
111,652 | Can we divide two vector quantities? For eg., Pressure( a scalar) equals force (a vector) divided by area (a vector). | No, in general you cannot divide one vector by another. It is possible to prove that no vector multiplication on three dimensions will be well-behaved enough to have division as we understand it. (This depends on exactly what one means by 'well-behaved enough', but the core result here is Hurwitz's theorem .) Regarding force, area and pressure, the most fruitful way is to say that force is area times pressure: $$
\vec F=P\cdot \vec A.
$$ As it turns out, pressure is not actually a scalar but a matrix (or, more technically, a rank 2 tensor). This is because, in certain situations, an area with its normal vector pointing in the $z$ direction can also experience forces along $x$ and $y$ , which are called shear stresses. In this case, the correct linear relation is that $$
\begin{pmatrix}F_x\\ F_y \\ F_z \end{pmatrix}
=
\begin{pmatrix}p_x & s_{xy} & s_{xz} \\ s_{yx} & p_y & s_{yz} \\ s_{zx} & s_{zy} & p_z\end{pmatrix}
\begin{pmatrix}A_x\\ A_y \\ A_z \end{pmatrix}.
$$ In a fluid, shear stresses are zero and the pressure is isotropic, so all the $p_j$ s are equal, and therefore the pressure tensor $P$ is a scalar matrix. In a solid, on the other hand, shear stresses can occur even in static situations, so you need the full matrix. In this case, the matrix is referred to as the stress tensor of the solid. | {
"source": [
"https://physics.stackexchange.com/questions/111652",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45557/"
]
} |
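The matrix relation above is easy to play with numerically. This is a small illustration, not anything from the original answer: the pressure value is just atmospheric pressure and the shear components are made-up numbers, chosen only to show that a fluid-like (scalar) stress tensor gives a force purely along the area vector while a solid-like tensor does not.

```python
import numpy as np

# Fluid at rest: isotropic pressure, no shear
p = 101_325.0                       # Pa (illustrative)
P_fluid = p * np.eye(3)

# A solid-like stress tensor with shear components (made-up numbers)
P_solid = np.array([[2.0e5, 1.0e4, 0.0],
                    [1.0e4, 2.0e5, 5.0e3],
                    [0.0,   5.0e3, 2.0e5]])

A = np.array([0.0, 0.0, 1.0e-4])    # area vector: 1 cm^2 facing the z direction

print(P_fluid @ A)   # force only along z: here the pressure behaves like a scalar
print(P_solid @ A)   # non-zero x and y components: shear forces acting on the same face
```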
111,670 | When solving the Einstein field equations, $$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R = 8\pi GT_{\mu\nu}$$ for a particular stress-energy tensor, we obtain the metric of the spacetime manifold, $g_{\mu\nu}$ which endows the manifold with some geometric structure. However, how can we deduce global properties of a spacetime manifold with the limited knowledge we usually have (i.e. simply the metric)? For example, how may we deduce: Whether the manifold is closed or exact Homology and de Rham cohomology Compactness I know if we can establish compactness, one can easily arrive at the Euler characteristic, and hence the genus of the manifold, using the Gauss-Bonnet-Chern theorem, $$\int_M \mathrm{Pf}[\mathcal{R}] = (2\pi)^n \chi(M)$$ where $\chi$ is the Euler characteristic and $n$ half the dimension of the manifold $M$. In addition, the Chern classes of the tangent bundle computed using the metric give some information regarding the cohomology. Note this question is really not limited to spacetime manifolds. There are many scenarios in physics wherein we may only know limited information up to the metric, e.g. moduli spaces. It would be interesting to see how one can deduce global properties. This question is inspired by brief discussions on the Physics S.E. with user Robin Ekman, and I would like to thank Danu for placing a bounty; a pleasant surprise! Resources, especially journal papers, which focus on addressing global properties of spacetimes (or more exotic spacetimes, e.g. orbifolds) are appreciated. | Well, the simplest case is that some topologies of spacetime may only allow a particular class of metrics. But unfortunately, it usually requires the knowledge of the metric at every point to be quite certain. Here's a few thing we can probably assume about the spacetime manifold : All the usual jazz about manifolds in general relativity (paracompactness, Hausdorff, etc). This is not necessarily the case, as some theories may allow weirder versions of it, or allow more general topologies like manifolds with boundaries and conifolds, but that is what is usually assumed to get a Lorentzian metric. The fact that spacetime is a connected manifold isn't really a physical constraint but more of a metaphysical one : you can't really say much about any disconnected piece of spacetime since it cannot affect our own. By the way, if your manifold is indeed compact, only spacetimes with Euler characteristics 0 admit a Lorentzian metric, since you need to have a line element. It is also usually assumed to have some causality conditions. It may not necessarily be true, but it seems like a rather remote possibility. If the spacetime is causal (no causal loops), it cannot be compact. If you also want it to be globally hyperbolic (No loops or naked singularities), it will fix the topology as $\mathbb{R} \times \Sigma$, $\Sigma$ some 3-manifold, per Geroch's theorem. To get the topology from the metric, another important constraint is geodesic completeness : no geodesic should have a finite range in its affine parameter. You can put de Sitter space in $\mathbb{R}^4$, but much like the stereographic projection of a sphere, you will reach the "edge" when a geodesic tries to go through the other pole but finds none. Because of the dependance of particle physics on time reversal and space reversal, it is assumed that spacetime is both time-orientable and space-orientable. 
If it was not, there would be no $SO^+(3,1)$ group and as such no spin groups, but only the Pin groups, with different properties for fermions. Those are some rather generic things you can say about spacetime from some, we hope, rather reasonable assumptions. Experimental evidence of topology is harder though. The Topology Censorship Theorem is a (classical) theorem regarding the ability to gauge the topology of spacetime. From Visser : "In any asymptotically flat, globally hyperbolic spacetime such that every inextendible null geodesic satisfies the averaged null energy condition, every causal curve from past null infinity to future null infinity is deformable to the trivial causal curve". Which rules out, if all conditions are met, the ability to send a particle along any trajectory along a topological handle (or wormhole, in the science). You can try to check the topology of the spacelike hypersurface of the spacetime by simply observing any repeating patterns, but so far this has not met with any success. The PLANCK space observatory had, among other missions, looking for any correlations in the CMB that might indicate some compactified dimension of space. Edit : Oh, and by the way, space being compact in some dimension will also affect the modes of any field on it (the so called topological Casimir effect). Non-orientable compactness also has some different effects. | {
"source": [
"https://physics.stackexchange.com/questions/111670",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/34382/"
]
} |
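The Gauss–Bonnet–Chern theorem quoted in the question can at least be sanity-checked in its simplest, two-dimensional form: integrating the Gaussian curvature over a round 2-sphere returns $2\pi\chi = 4\pi$ regardless of the radius. This toy sketch does only that; it does not address the much harder problem, discussed above, of extracting global information from a metric you only know locally.

```python
import math

R = 3.7                       # sphere radius; the result should be independent of it
K = 1.0 / R**2                # Gaussian curvature of a round sphere
n_theta, n_phi = 400, 400     # grid resolution for the surface integral

total = 0.0
for i in range(n_theta):
    theta = (i + 0.5) * math.pi / n_theta
    cell_area = (R**2) * math.sin(theta) * (math.pi / n_theta) * (2 * math.pi / n_phi)
    total += K * cell_area * n_phi      # integrand is independent of phi here

chi = total / (2 * math.pi)
print(chi)                    # ~2.0, the Euler characteristic of the sphere
```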
111,675 | How could one charge a spherical capacitor with a battery or any other emf source? | | {
"source": [
"https://physics.stackexchange.com/questions/111675",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41996/"
]
} |
111,761 | It is written everywhere that gravity is curvature of spacetime caused by the mass of the objects or something to the same effect. This raises a question with me: why isn't spacetime curved due to other forces or aspects of bodies? Why isn't it that there are curvatures related to the charge of a body or the spin of particles or any other characteristics? | Charge does curve spacetime. The metric for a charged black hole is different to an uncharged black hole. Charged (non-spinning) black holes are described by the Reissner–Nordström metric . This has some fascinating features, including acting as a portal to other universes, though sadly these are unlikely to be physically relevant. There is some discussion of this in the answers to the question Do objects have energy because of their charge? , though it isn't a duplicate. Anything that appears in the stress-energy tensor will curve spacetime. Spin also has an effect, though I have to confess I'm out of my comfort zone here. To take spin into account we have to extend GR to Einstein-Cartan theory . However on the large scale the net spin is effectively zero, and we wouldn't expect spin to have any significant effect until we get down to quantum length scales. | {
"source": [
"https://physics.stackexchange.com/questions/111761",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26063/"
]
} |
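For reference, the Reissner–Nordström line element mentioned in the answer above can be written (in Schwarzschild-like coordinates) as
$$ ds^2 = -\left(1-\frac{r_s}{r}+\frac{r_Q^2}{r^2}\right)c^2 dt^2 + \left(1-\frac{r_s}{r}+\frac{r_Q^2}{r^2}\right)^{-1} dr^2 + r^2\left(d\theta^2+\sin^2\theta\, d\phi^2\right), \qquad r_s=\frac{2GM}{c^2},\quad r_Q^2=\frac{G Q^2}{4\pi\epsilon_0 c^4}, $$
so the charge $Q$ enters the geometry directly through the $r_Q^2/r^2$ term; setting $Q=0$ recovers the uncharged (Schwarzschild) case.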
111,917 | Why is it that raindrops don't collide and 'stick together' on their descent to Earth, arriving in streams rather than separate drops? | Have a look at the Wikipedia article on raindrop formation . You'll also find lots of articles on raindrop formation and growth by Googling raindrop formation or something like that. Raindrops do coalesce, but they also fragment, and the eventual size is a balance of the two processes. The fragmentation occurs because of the forces from turbulent air flow. Turbulence can cause droplets to collide, in which case they may coalesce, however it can also break apart large droplets. Incidentally, a stream of water is unstable at low flow rates because of the Plateau-Rayleigh instability so it's very unlikely you could get a continuous stream of rain even under ideal atmospheric conditions. The closest you would get is a series of droplets in a line. However in the real world even the slightest turbulence would scatter the droplets and lead to the random distribution of droplets that we see. | {
"source": [
"https://physics.stackexchange.com/questions/111917",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/44026/"
]
} |
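A short quantitative note on the Plateau-Rayleigh instability invoked in the raindrop answer above: the classical result is that a cylindrical column of liquid of radius $R$ is unstable to axisymmetric perturbations whose wavelength exceeds the column's circumference,
$$ \lambda > 2\pi R, $$
so even an idealised thin stream of water breaks up into droplets with a spacing of roughly this order rather than reaching the ground as a continuous filament.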
111,955 | The oil, vinegar and other liquids in homemade salad dressing separate into layers after sitting for a while, making the mixture become more organized as time evolves. Why doesn't this violate the 2nd law of thermodynamics? I assume that the answer is that since the separation is due to gravity, the effect is due to an external force and so the system isn't closed, which is necessary for the 2nd law, but I'm not sure. If that's the answer, then what happens if I consider the whole system to include the salad dressing, gravitational field and whatever mass is generating the gravitational field? The entropy of the salad dressing seems to decrease and so the entropy of some other component of this system must be increasing, but it's hard to see what this other component would be. | The separation does not violate the 2nd law of thermodynamics, because the oil and water phases being separate is a lower energy state. The water molecules strongly interact with each other, forming hydrogen bonds. The protons of water are shared between two oxygen atoms of two different water molecules, forming a constantly changing network of molecules. Water molecules do not have strong intermolecular interactions with oil molecules. The more the two phases are mixed, the more water molecules are at an interface surface. The water molecules at an interface surface cannot fully participate in intermolecular interactions with other water molecules, so this is a higher energy state. For a process to spontaneously occur, the Gibbs free energy (G) must decrease. $\Delta G = \Delta H-T\Delta S$ So entropy (S) is only part of the consideration. Enthalpy (H) and temperature (T) must also be considered. In this case the decrease in enthalpy (H) due to energy of intermolecular interactions makes up for the decrease in entropy (S). The process is an exothermic process. Even absent gravity, it is still thermodynamically favorable for the phases to separate, to minimize the interfacial surface area, just like a spherical drop of water being the lowest energy state absent gravity. I would predict that absent any gravity, the lowest energy state of the salad dressing would be a sphere of water phase surrounded by a spherical shell of oil phase. | {
"source": [
"https://physics.stackexchange.com/questions/111955",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26866/"
]
} |
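A minimal numerical sketch of the $\Delta G = \Delta H - T\Delta S$ bookkeeping used in the answer above. The enthalpy and entropy values below are invented placeholders, chosen only to show how an exothermic de-mixing ($\Delta H<0$) can be spontaneous even though the entropy change is negative:

```python
# Hypothetical numbers for oil/water de-mixing (illustrative only, not measured data)
delta_H = -8.0e3   # J/mol, exothermic: water-water hydrogen bonds are re-formed
delta_S = -20.0    # J/(mol K), the separated system is more ordered
T = 298.0          # K

delta_G = delta_H - T * delta_S
print(f"Delta G = {delta_G:.0f} J/mol -> spontaneous: {delta_G < 0}")
# -8000 - 298*(-20) = -2040 J/mol < 0, so the separation proceeds
```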
112,354 | One of the possible ways to simulate gravity in outer space is to have a rotating spaceship, so that the centrifugal force experienced provides a gravity-like force. My question is: shouldn't this only work when our feet are touching the floor of the spaceship? Only in that case the floor is providing a contact force to balance the centrifugal force. If we jumped, there is no gravity within the spacecraft so what is it that would make us come back down? Also: imagine we had a shower: what would make the water fall down? | If you jumped "straight up", you would still have a horizontal component of velocity (relative to a nonrotating frame), so you would still end up coming "back down". Likewise, the shower water is moving horizontally in a nonrotating frame, which makes it collide with the floor eventually (since the floor is curving upwards in the nonrotating frame). But to a person on the ship, it looks as if the water was moving downwards, rather than the floor (and you) moving upwards. More dangerous would be if you were to try to run in the opposite direction of the rotation; if you ran fast enough, you would eventually find that you had become weightless. This would also mean that your feet would no longer be touching the ground, the world would be spinning underneath you, and you'd have no way of getting back down again. Fortunately, since the air is also moving due to the rotation, the "wind" would eventually "slow you down" (technically it would actually speed you up) and you would eventually regain "gravity" and fall to the ground. | {
"source": [
"https://physics.stackexchange.com/questions/112354",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/37677/"
]
} |
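To put numbers on the rotating-habitat picture in the answer above, here is a small sketch for an assumed habitat radius (the 100 m figure is a placeholder). It computes the spin rate that gives 1 g at the rim and the rim speed, which is also roughly how fast you would have to run against the spin to feel weightless:

```python
import math

g = 9.81    # m/s^2, target apparent gravity at the rim
R = 100.0   # m, assumed habitat radius

omega = math.sqrt(g / R)       # rad/s, from g = omega**2 * R
rim_speed = omega * R          # m/s, floor speed seen from the non-rotating frame
period = 2 * math.pi / omega   # s per revolution

print(f"spin rate: {omega:.3f} rad/s (one revolution every {period:.1f} s)")
print(f"rim speed: {rim_speed:.1f} m/s")
# Running at about rim_speed against the spin cancels your rotation, so the
# centripetal "gravity" you feel drops to zero, as described in the answer.
```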
112,367 | I'm confused about Noether's theorem applied to gauge symmetry. Say we have $$\mathcal L=-\frac14F_{ab}F^{ab}.$$ Then it's invariant under
$A_a\rightarrow A_a+\partial_a\Lambda.$ But can I say that the conserved current here is $$J^a=\frac{\partial\mathcal L}{\partial(\partial_aA_b)}\delta A_b=-\frac12 F^{ab}\partial_b\Lambda~?$$ Why do I never see such a current written? If Noether's theorem doesn't apply here, then is space-time translation symmetry the only candidate to produce Noether currents for this Lagrangian? | Indeed, nothing is wrong with Noether theorem here, $J^\mu = F^{\mu \nu} \partial_\nu \Lambda$ is a conserved current for every choice of the smooth scalar function $\Lambda$. It can be proved by direct inspection, since
$$\partial_\mu J^\mu = \partial_\mu (F^{\mu \nu} \partial_\nu \Lambda)=
(\partial_\mu F^{\mu \nu}) \partial_\nu \Lambda+ F^{\mu \nu} \partial_\mu\partial_\nu \Lambda = 0 + 0 =0\:.$$
Above, $\partial_\mu F^{\mu \nu}=0 $ due to field equations and $F^{\mu \nu} \partial_\mu\partial_\nu \Lambda=0$ because $F^{\mu \nu}=-F^{\nu \mu}$ whereas $\partial_\mu\partial_\nu \Lambda =\partial_\nu\partial_\mu \Lambda$. ADDENDUM . I show here that $J^\mu$ arises from the standard Noether theorem. The relevant symmetry transformation, for every fixed $\Lambda$, is
$$A_\mu \to A'_\mu = A_\mu + \epsilon \partial_\mu \Lambda\:.$$ One immediately sees that
$$\int_\Omega {\cal L}(A', \partial A') d^4x = \int_\Omega {\cal L}(A, \partial A) d^4x\tag{0}$$
since even ${\cal L}$ is invariant. Hence,
$$\frac{d}{d\epsilon}|_{\epsilon=0} \int_\Omega {\cal L}(A', \partial A') d^4x=0\:.\tag{1}$$
Swapping the symbol of derivative and that of integral (assuming $\Omega$ bounded) and exploiting Euler-Lagrange equations, (1) can be re-written as:
$$\int_\Omega \partial_\nu \left(\frac{\partial {\cal L}}{\partial \partial_\nu A_\mu} \partial_\mu \Lambda\right) \: d^4 x =0\:.\tag{2}$$
Since the integrand is continuous and $\Omega$ arbitrary, (2) is equivalent to
$$\partial_\nu \left(\frac{\partial {\cal L}}{\partial \partial_\nu A_\mu} \partial_\mu \Lambda\right) =0\:,$$
which is the identity discussed by the OP (I omit a constant factor):
$$\partial_\mu (F^{\mu \nu} \partial_\nu \Lambda)=0\:.$$ ADDENDUM 2 . The charge associated with any of these currents is related to the electric flux at spatial infinity. Indeed one has:
$$Q = \int_{t=t_0} J^0 d^3x = \int_{t=t_0} \sum_{i=1}^3 F^{0i}\partial_i \Lambda \,d^3x = \int_{t=t_0} \sum_{i=1}^3 \partial_i\left( F^{0i} \Lambda\right) d^3x -
\int_{t=t_0} \left( \sum_{i=1}^3 \partial_i F^{0i}\right) \Lambda \,d^3x \:.$$
As $\sum_{i=1}^3 \partial_i F^{0i} = -\partial_\mu F^{\mu 0}=0$, the last integral does not give any contribution and we have
$$Q = \int_{t=t_0} \sum_{i=1}^3 \partial_i\left(\Lambda F^{0i} \right) d^3x = \lim_{R\to +\infty}\oint_{t=t_0, |\vec{x}| =R} \Lambda \vec{E} \cdot \vec{n} \: dS\:.$$
If $\Lambda$ becomes constant in space outside a bounded region $\Omega_0$ and if, for instance, that constant does not vanish, $Q$ is just the flux of $\vec{E}$ at infinity up to a constant factor. In this case $Q$ is the electric charge up to a constant factor (as stressed by ramanujan_dirac in a comment below). In that case, however, $Q=0$ since we are dealing with the free EM field. | {
"source": [
"https://physics.stackexchange.com/questions/112367",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/46348/"
]
} |
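The identity $\partial_\mu(F^{\mu\nu}\partial_\nu\Lambda)=0$ derived in the answer above can also be checked mechanically on a concrete on-shell field. The sketch below (sympy, metric signature $(-,+,+,+)$, $c=1$) uses an illustrative plane-wave solution and an arbitrarily chosen smooth $\Lambda$; both choices are assumptions made purely for the check, not part of the original answer.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
coords = [t, x, y, z]
eta = sp.diag(-1, 1, 1, 1)   # Minkowski metric, signature (-,+,+,+), c = 1

# A solution of the free Maxwell equations: a plane wave polarised along x,
# travelling along z (it satisfies the Lorenz gauge automatically).
A = [sp.Integer(0), sp.cos(t - z), sp.Integer(0), sp.Integer(0)]   # A_mu (lower index)

# F_{mu nu} = d_mu A_nu - d_nu A_mu, then raise both indices with eta
F_dn = sp.Matrix(4, 4, lambda m, n: sp.diff(A[n], coords[m]) - sp.diff(A[m], coords[n]))
F_up = eta * F_dn * eta      # eta is its own inverse, so this gives F^{mu nu}

# Field equations d_mu F^{mu nu} = 0 hold for this A
print([sp.simplify(sum(sp.diff(F_up[m, n], coords[m]) for m in range(4)))
       for n in range(4)])   # -> [0, 0, 0, 0]

# An arbitrary smooth gauge function Lambda (illustrative choice)
Lam = x * t + sp.sin(z)

# Noether current J^mu = F^{mu nu} d_nu Lambda, and its divergence
J = [sum(F_up[m, n] * sp.diff(Lam, coords[n]) for n in range(4)) for m in range(4)]
print(sp.simplify(sum(sp.diff(J[m], coords[m]) for m in range(4))))  # -> 0
```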
112,483 | Electrons have a much bigger charge density than protons (and especially lead nuclei), aren't compound particles as protons (and especially lead nuclei) are, and are able to get a much bigger energy from the same fields than protons (and especially lead nuclei). Why does it seem common for the big colliders after LEP to use protons (and much bigger nuclei)? Reacting to comments: Yes, to get good experimental data about quark matter they need a lot of hot quarks (= collided big nuclei). But to create new particles, the energy per degree of freedom needs to be maximized, and this maximum is at the single particle with the highest charge density, and this is the electron. | Whenever you accelerate a charged particle it emits EM radiation known as Bremsstrahlung, and obviously charged particles moving in a circle are accelerating (towards the centre). This means that any circular collider emits a continual stream of Bremsstrahlung radiation. To counteract the energy lost to Bremsstrahlung you have to put energy in, and that costs money and annoys the local power companies. For a given beam energy the Bremsstrahlung losses increase with decreasing particle mass, so it costs a lot more to run an electron collider than to run a proton collider of the same energy and beam current. The LEP collider, with a maximum energy of about 200 GeV, consumed around 70 MW when running, while the LHC, with a vastly higher beam energy, only consumes around 120 MW. These figures are a bit misleading since they include the costs of cooling, etc., and not just running the beam. According to this article the power required for maintaining the beam at the LHC is only around 20 MW. I haven't been able to find the corresponding information for LEP. All the proposed future electron/positron colliders are linear. This avoids the Bremsstrahlung losses when you bend the particle beam. | {
"source": [
"https://physics.stackexchange.com/questions/112483",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/32426/"
]
} |
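To put rough numbers on the answer above: the classical energy radiated per turn by an ultra-relativistic particle of charge $e$ on a ring of bending radius $\rho$ is $U_0 \simeq e^2\gamma^4/(3\epsilon_0\rho)$. The sketch below evaluates this for LEP-like electron parameters and for a proton of the same energy; the 3026 m bending radius is an approximate figure used only for illustration:

```python
e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F/m
c = 299792458.0          # m/s
m_e = 9.1093837015e-31   # kg
m_p = 1.67262192369e-27  # kg

def loss_per_turn_GeV(E_GeV, mass_kg, rho_m):
    """Classical synchrotron energy loss per turn (beta ~ 1), returned in GeV."""
    gamma = E_GeV * 1e9 * e / (mass_kg * c**2)
    U0_joule = e**2 * gamma**4 / (3 * eps0 * rho_m)
    return U0_joule / (1e9 * e)

rho = 3026.0             # m, approximate LEP bending radius (assumed)
print(loss_per_turn_GeV(100.0, m_e, rho))  # electron at 100 GeV: ~3 GeV per turn
print(loss_per_turn_GeV(100.0, m_p, rho))  # proton at 100 GeV: ~13 orders of magnitude less
```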
112,615 | I've heard this in many quantum mechanics talks and lectures; nevertheless I don't seem to grasp the idea behind it. What I mean is, at which point did our modern understanding of quantum mechanics lead to a technological development so fundamental for today's computers that we could not have got it working any other way? Why is it not enough with Maxwell, Bohr, Lorentz, (Liénard)? | The reason is very simple. Computers depend on electronics. Even the first diodes and triodes that the first bulky computers were made up of depended on the quantum mechanical nature of matter. The present ones, with chip technology, are directly dependent on energy levels and bands of conduction, etc., in the electronics used. Semiconductivity is a quantum mechanical phenomenon. Edit after the editing of the question: What I mean is, at which point did our modern understanding of quantum mechanics lead to a technological development so fundamental for today's computers that we could not have got it working any other way? The crucial point where quantum mechanical calculations became necessary was with the use of transistor technology, which has morphed into chip technology. It was with the invention of the transistor that control of quantum mechanical calculations became necessary for the leaps in progress we have made. For the vacuum tube computers, it was not necessary except for explaining the tubes' existence. The chip designs have reached the point of even needing to foresee the Casimir effect (QM vacuum between charged plates). Why is it not enough with Maxwell, Bohr, Lorentz, (Liénard)? Maxwell is not enough because the classical theory cannot explain atoms, molecules and the solid state. Bohr is not enough because the primitive calculations could not be used in complicated lattices. Lorentz is irrelevant for solid state physics; the energies of the ions and electrons are low. | {
"source": [
"https://physics.stackexchange.com/questions/112615",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45735/"
]
} |
112,866 | Photons have no mass but they can push things, as evidenced by laser propulsion. Can photons push the source which is emitting them? If yes, will a more intense flashlight accelerate me more? Does the wavelength of the light matter? Is this practical for space propulsion? Doesn't it defy the law of momentum conservation? Note: As John Rennie mentioned, all in all the wavelength doesn't matter, but for a more accurate answer regarding that, see the comments in DavePhD's answer. Related Wikipedia articles: Ion thruster, Space propulsion | Can photons push the source which is emitting them? Yes. If yes, will a more intense flashlight accelerate me more? Yes Does the wavelength of the light matter? No Is this practical for space propulsion? Probably not Doesn't it defy the law of momentum conservation? No In fact that last question is the key one, because photons do carry momentum (even though they have no mass). Photons, like all particles, obey the relativistic equation: $$ E^2= p^2c^2 + m^2c^4 $$ where for a photon the mass, $m$, is zero. That means the momentum of the photon is given by: $$ p = \frac{E}{c} = \frac{h\nu}{c} $$ where $\nu$ is the frequency of the light. Let's suppose you have a flashlight that emits light with a power $W$ and a frequency $\nu$. The number of photons per second is the total power divided by the energy of a single photon: $$ n = \frac{W}{h\nu} $$ The momentum change per second is the number of photons multiplied by the momentum of a single photon: $$ P/sec = \frac{W}{h\nu} p = \frac{W}{h\nu} \frac{h\nu}{c} = \frac{W}{c} $$ But the rate of change of momentum is just the force, so we end up with an equation for the force created by your flashlight: $$ F = \frac{W}{c} $$ Now you can see why I've answered your questions above as I have. The force is proportional to the flashlight power, but the frequency $\nu$ cancels out so the frequency of the light doesn't matter. Momentum is conserved because it's the momentum carried by the photons that creates the force. As for powering spaceships, your 1 W flashlight creates a force of about $3 \times 10^{-9}$ N. You'd need a staggeringly intense light source to power a rocket. | {
"source": [
"https://physics.stackexchange.com/questions/112866",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/46604/"
]
} |
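A quick numerical check of the $F = W/c$ result above, plus a (hypothetical) estimate of how long that thrust would take to change the speed of a 100 kg astronaut by 1 m/s:

```python
c = 299_792_458.0      # m/s

def photon_thrust(power_watts):
    """Reaction force on the emitter, F = W / c."""
    return power_watts / c

F = photon_thrust(1.0)                     # a 1 W flashlight
print(f"thrust: {F:.2e} N")                # ~3.3e-9 N, as quoted in the answer

m, dv = 100.0, 1.0                         # assumed astronaut mass (kg) and speed change (m/s)
t = m * dv / F                             # seconds of continuous shining needed
print(f"time to gain 1 m/s: {t / (3600 * 24 * 365.25):.0f} years")   # ~950 years
```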
112,959 | Why is it that many of the most important physical equations don't have ugly numbers (i.e., "arbitrary" irrational factors) to line up both sides? Why can so many equations be expressed so neatly with small natural numbers while recycling a relatively small set of physical and mathematical constants? For example, why is mass–energy equivalence describable by the equation $E = mc^2$ and not something like $E \approx 27.642 \times mc^2$ ? Why is time dilation describable by something as neat as $t' = \sqrt{\frac{t}{1 - \frac{v^2}{c^2}}}$ and not something ugly like $t' \approx 672.097 \times 10^{-4} \times \sqrt{\frac{t}{1 - \frac{v^2}{c^2}}}$ . ... and so forth. I'm not well educated on matters of physics and so I feel a bit sheepish asking this. Likewise I'm not sure if this is a more philosophical question or one that permits a concrete answer ... or perhaps even the premise of the question itself is flawed ... so I would gratefully consider anything that sheds light on the nature of the question itself as an answer. EDIT : I just wished to give a little more context as to where I was coming from with this question based on some of the responses: @Jerry Schirmer comments: You do have an ugly factor of $2.997458 \times 10^8 m/s$ in front of everything. You just hide the ugliness by calling this number c. These are not the types of "ugly constants" I'm talking about in that this number is the speed of light. It is not just some constant needed to balance two side of an equation. @Carl Witthoft answers: It's all in how you define the units ... Of course this is true, we could in theory hide all sorts of ugly constants by using different units on the right and the left. But as in the case of $E=mc^2$ , I am talking about equations where the units on the left are consistent with the units on the right, irrespective of the units used. As I mentioned on a comment there: $E=mc^2$ could be defined using units like $m$ in imperial stones ( $\textsf{S}$ ), $c$ in cubits/fortnight ( $\textsf{CF}^{−1}$ ) and $E$ in ... umm ... $\textsf{SC}^2\textsf{F}^{−2}$ ... so long as the units are in the same system, we still don't need a fudge factor. When the units are consistent in this manner, there's no room for hiding fudge constants. | It's a side effect of the unreasonable effectiveness of mathematics . You are in good company thinking it is a little strange. Many quantities in physics can be related to each other by a few lines of algebra. These tend to be the models that we think of as "pretty."
Terms manipulated by pure algebra tend to pick up integer factors, or factors that are integers raised to integer powers; if only a few algebraic manipulations are involved, the integers and their powers tend to be small ones. Other quantities may be related by a few lines of calculus. From calculus you get the transcendental numbers, which can't be related to the integers by solving an algebraic equation. But there are lots of algebraic transformations you can do to relate one integral to another, and so many of these transcendental numbers can be related to each other by factors of small integers raised to small integer powers. This is why we spend a lot of time talking about $\pi$, $e$, and sometimes the Euler–Mascheroni constant $\gamma$, but don't really have a whole library of irrational constants for people to memorize. Most of the constants with many significant digits come from unit conversions, and are essentially historical accidents. Carl Witthoft gives the example of $E=mc^2$ having a numerical factor if you want the energy in BTUs. The BTU is the heat that's needed to raise the temperature of a pound of water by one degree Fahrenheit, so in addition to the entirely historical difference between kilograms and pounds and Rankine and Kelvin it's tied up with the heat capacity of water. It's a great unit if you're designing a boiler! But it doesn't have any place in the Einstein equation, because $E\propto mc^2$ is a fact of nature that is much simpler and more fundamental than the rotational and vibrational spectrum of the water molecule. There are several places where there are real, dimensionless constants of nature that, so far as anyone knows, are not small integers and familiar transcendental numbers raised to small integer powers. The most famous is probably the electromagnetic fine structure constant $\alpha \approx 1/137.036$, defined by the relationship $\alpha \hbar c = e^2/4\pi\epsilon_0$, where this $e$ is the electric charge on a proton. The fine structure constant is the "strength" of electromagnetism, and the fact that $\alpha\ll1$ is a big part of why we can claim to "understand" quantum electrodynamics. "Simple" interactions between two charges, like exchanging one photon, contribute to the energy with a factor of $\alpha$ out front, perhaps multiplied by some ratio of small integers raised to small powers. The interaction of exchanging two photons "at once," which makes a "loop" in the Feynman diagram, contributes to the energy with a factor of $\alpha^2$, as do all the other "one-loop" interactions. Interactions with two "loops" (three photons at once, two photons and a particle-antiparticle fluctuation, etc.) contribute at the scale of $\alpha^3$. Since $\alpha\approx0.01$, each "order" of interactions contributes roughly two more significant digits to whatever quantity you're calculating. It's not until sixth- or seventh-order that there begin to be thousands of topologically-allowed Feynman diagrams, contributing so many hundreds of terms at the level of $\alpha^{n}$ that it starts to clobber the calculation at $\alpha^{n-1}$. An entry point to the literature. The microscopic theory of the strong force, quantum chromodynamics, is essentially identical to the microscopic theory of electromagnetism, except with eight charged gluons instead of one neutral photon and a different coupling constant $\alpha_s$.
Unfortunately for us, $\alpha_s \approx 1$, so for systems with only light quarks, computing a few "simple" quark-gluon interactions and stopping gives results that are completely unrelated to the strong force that we see. If there is a heavy quark involved, QCD is again perturbative, but not nearly so successfully as electromagnetism. There is no theory which explains why $\alpha$ is small (though there have been efforts ), and no theory that explains why $\alpha_s$ is large. It is a mystery. And it will continue to feel like a mystery until some model is developed where $\alpha$ or $\alpha_s$ can be computed in terms of other constants multiplied by transcendental numbers and small integers raised to small powers, at which point it will again be a mystery why mathematics is so effective. A commenter asks Isn't α already expressible in terms of physical constants or did you mean to say mathematical constants like π or e? It's certainly true that
$$ \alpha \equiv \frac{e^2}{4\pi\epsilon_0} \frac1{\hbar c} $$
defines $\alpha$ in terms of other experimentally measured quantities. However, one of those quantities is not like the other. To my mind, the dimensionless $\alpha$ is the fundamental constant of electromagnetism; the size of the unit of charge and the polarization of the vacuum are related derived quantities. Consider the Coulomb force between two unit charges:
$$
F = \frac{e^2}{4\pi\epsilon_0}\frac1{r^2} = \alpha\frac{\hbar c}{r^2}
$$
This is exactly the sort of formulation that badroit was asking about: the force depends on the minimum lump of angular momentum $\hbar$, the characteristic constant of relativity $c$, the distance $r$, and a dimensionless constant for which we have no good explanation. | {
"source": [
"https://physics.stackexchange.com/questions/112959",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/46643/"
]
} |
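As a quick sanity check on the value quoted in the answer above, $\alpha = e^2/(4\pi\epsilon_0\hbar c)$ can be evaluated directly from the SI constants:

```python
import math

e = 1.602176634e-19       # C (exact in the 2019 SI)
eps0 = 8.8541878128e-12   # F/m
hbar = 1.054571817e-34    # J s
c = 299792458.0           # m/s (exact)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(alpha, 1 / alpha)   # ~0.0072974, ~137.036
```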
113,037 | Every piece of knowledge in science has a beginning lying in someone's experiment. I would like to know which experiment gave scientists the reason to believe nuclear fission/fusion existed and was instrumental in the development of the field of nuclear energy. I would also accept a thought experiment as an acceptable answer, as long as it answers the question. | which experiment gave scientists the reason to believe nuclear fission/fusion existed Fusion was first. Francis William Aston built a mass spectrometer in 1919 and measured the masses of various isotopes, realizing that the mass of helium-4 was less than 4 times that of hydrogen-1. From this information, Arthur Eddington proposed hydrogen fusion as a possible energy source of stars. "Certain physical investigations in the past year, which I hope we may hear about at this meeting, make it probable to my mind that some portion of this sub-atomic energy is actually being set free in the stars. F. W. Aston's experiments seem to leave no room for doubt that all the elements are constituted out of hydrogen atoms bound together with negative electrons. The nucleus of the helium atom, for example, consists of 4 hydrogen atoms bound with 2 electrons. But Aston has further shown conclusively that the mass of the helium atom is less than the sum of the masses of the 4 hydrogen atoms which enter into it; and in this at any rate the chemists agree with him. There is a loss of mass in the synthesis amounting to about 1 part in 120, the atomic weight of hydrogen being 1.008 and that of helium just 4." Eddington 24 August 1920 At that time it was not understood that a neutron was distinct from a proton. It was thought that the nucleus of helium 4 contained 4 protons and 2 electrons (instead of two protons and two neutrons), but Eddington's main idea that hydrogen fusing to helium released energy, thereby powering stars, was correct. Eric Doolittle proposed a vague fission process in stars in 1919, but of course this was incorrect: "It seems very probable that when subjected to these inconceivably great temperatures and pressures, atoms may be broken up, and a part, at least, of their sub-atomic energy may be liberated. And it is only necessary to suppose that a part of the energy of the atom is in this way radiated into space in order that the life of a sun, or star, may be almost indefinitely prolonged". Fission of heavy elements was discovered in the 1930s. Enrico Fermi's experiments caused fission in 1934, but he did not realize that fission was occurring. Otto Hahn and Fritz Strassmann concluded that upon neutron bombardment, uranium was broken into two lighter nuclei. Lise Meitner and Otto Frisch made calculations concerning the large amount of energy released and introduced the term "fission". | {
"source": [
"https://physics.stackexchange.com/questions/113037",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26122/"
]
} |
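Eddington's "one part in 120" can be checked against modern atomic masses; with today's values the deficit comes out nearer 1 part in 140, simply because his 1920 atomic weights were slightly off. A short sketch (masses in unified atomic mass units; one u of mass defect corresponds to about 931.5 MeV):

```python
m_H1 = 1.00782503    # u, atomic mass of hydrogen-1
m_He4 = 4.00260325   # u, atomic mass of helium-4
u_to_MeV = 931.494   # MeV of energy per u of mass defect

delta_m = 4 * m_H1 - m_He4
fraction = delta_m / (4 * m_H1)
print(f"mass defect: {delta_m:.4f} u  (~1 part in {1/fraction:.0f})")
print(f"energy released per helium-4 formed: {delta_m * u_to_MeV:.1f} MeV")
# ~0.0287 u, about 0.7% of the initial mass, ~26.7 MeV
```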
113,092 | In mechanics problems, especially one-dimensional ones, we talk about how a particle goes in a direction to minimize potential energy. This is easy to see when we use cartesian coordinates: For example, $-\frac{dU}{dx}=F$ (or in the multidimensional case, the gradient), so the force will go in the direction of minimizing the potential energy. However, it becomes less clear in other cases. For example, I read a problem that involved a ball attached to a pivot, so it could only rotate. It was then claimed that the ball would rotate towards minimal potential energy, however $-\frac{dU}{d\theta} \neq F$! I think in this case it might be equal to torque, which would make their reasoning correct, but it seems like regardless of the degrees of freedom of the problem, it is always assumed that the forces act in a way such that the potential energy is minimized. Could someone give a good explanation for why this is? Edit: I should note that I typed this in google and found this page. where it states that minimizing potential energy and increasing heat increases entropy. For one, this isn't really an explanation because it doesn't state why it increases entropy. Also, if possible, I would like an explanation that doesn't involve entropy. But if it is impossible to make a rigorous argument that doesn't involve entropy then using entropy is fine. As a side note, how does this relate to Hamilton's Principle? | This is a physical rather than a mathematical justification - ignore my answer if that isn't what you wanted! All systems have some thermal motion so they explore the phase space in their immediate vicinity. If there is a nearby point with a free energy lower by some amount $\Delta G$ then the relative probability of finding the system at that point will be $\exp(-\Delta G/RT)$. So if the energy is minimised by moving to that point, i.e.$\Delta G < 0$, we just have to wait and we'll find the system has moved there. The only place in phase space the system won't move is when the free energy is at a (local) minimum. That's why a system always (locally) minimises its free energy if you wait long enough. | {
"source": [
"https://physics.stackexchange.com/questions/113092",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/35734/"
]
} |
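A one-line illustration of the $\exp(-\Delta G/RT)$ weighting in the answer above, with a made-up free-energy difference:

```python
import math

R = 8.314          # J/(mol K), gas constant
T = 298.0          # K
delta_G = -5.0e3   # J/mol, hypothetical drop in free energy at the nearby point

print(math.exp(-delta_G / (R * T)))   # ~7.5: the lower-G configuration is ~7.5x more likely
```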
113,098 | I recently got into a discussion whether and how the Coriolis force/effect is related to gravitation. Is gravitation involved in the Coriolis force? If so, how could that be explained? | | {
"source": [
"https://physics.stackexchange.com/questions/113098",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/46715/"
]
} |
113,104 | Most of the solar panels that I have seen do not have any mirrors, etc., but usually solar cookers have mirrors. What is the reason for solar panels not having focusing mirrors? | The simple answer is that the two devices work in completely different ways. Solar cookers, as well as so-called ' solar thermal collectors ', focus the light of the sun to heat something (a pot in a cooker, some oil or ceramics) and the heat is then transferred somewhere, where it generates electricity, usually by some steam engine. So, the more heat, the better. Solar panels on the other hand use the photovoltaic effect , which directly converts light into electric energy. Light excites electrons to the conduction band and the current is then transmitted somewhere. Too much heat, however, destroys the materials used, so focusing might be a very bad idea. | {
"source": [
"https://physics.stackexchange.com/questions/113104",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/30245/"
]
} |
113,431 | All I found in Google was very broad. From a physics models perspective, why can photons emitted from a laser cut? Does this cut mean that the photons are acting like matter? | When lasers cut something, they're only cutting in the sense that they're making atoms less attracted to each other than they once were. When you get down to the nitty-gritty details, it is not really the same as mechanical cutting. Remember that lasers shoot photons, and when photons hit atoms, they excite electrons. If you excite these electrons enough, they'll have enough energy to dissociate from the atoms they formerly "belonged" to. This makes individual atoms dissociate from whatever other atoms they were once bonded to, and in the mad scramble to go to a lower energy state, they very likely do not go into the same configuration they were in before. Some atoms, like the ones directly hit by the laser beam, go to a vapor and float away. Others "choose" one side of the material to go to. Any bonds the material had with itself are then dissolved, so it is effectively cut. This is different than, say, taking shears or scissors to the material. Those methods of cutting are purely mechanical, and you don't have to worry about vapors as much as when cutting with a laser. (You also don't have to worry about reflections from materials, either!) | {
"source": [
"https://physics.stackexchange.com/questions/113431",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/46859/"
]
} |
113,749 | For instance evidence that a highly energetic laser beam attracts objects nearby? In the framework of Einstein's general relativity all energy curves spacetime and hence exerts an attraction, but my question is whether it is an experimentally verified fact that energy that doesn't come from mass (such as photons) indeed attracts massive objects? | As far as I know there has been no experimental evidence that light curves spacetime. We know that if GR is correct it must do, and all the experiments we've done have (so far) confirmed the predictions made by GR , so it seems very likely that light does indeed curve spacetime. The trouble is that spacetime is exceedingly hard to curve by any significant amount. Curving it is no problem if you have an astronomical body to hand, but measuring the curvature due to lab scale masses requires very fine measurements. Bearing in mind that mass is a very concentrated form of energy (by a factor of $c^2$) it's hard to see how we could ever get an intense enough source of light to create measurable curvature. There might be some indirect measurement possible, but none springs immediately to mind. | {
"source": [
"https://physics.stackexchange.com/questions/113749",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/44558/"
]
} |
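To make the "factor of $c^2$" point in the answer above concrete, here is the mass-equivalent of an energetic laser pulse; the 1 MJ figure is a hypothetical round number, not a reference to any particular facility:

```python
c = 299_792_458.0    # m/s

E_pulse = 1.0e6      # J, a hypothetical 1 MJ pulse
m_equiv = E_pulse / c**2
print(f"mass-equivalent: {m_equiv:.2e} kg (~{m_equiv * 1e12:.0f} ng)")
# ~1.1e-11 kg, i.e. about 11 nanograms - compared with the kilogram-scale
# masses needed just to detect Newtonian attraction in a Cavendish-type experiment.
```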
114,133 | In many physics divulgation books I've read, this seems to be a commonly accepted point of view (I'm making this quote up, as I don't remember the exact words, but this should give you an idea): Heisenberg's uncertainty principle is not a result of our lack of proper measurement tools. The fact that we can't precisely know both the position and momentum of an elementary particle is, indeed, a property of the particle itself. It is an intrinsic property of the Universe we live in. Then this video came out: Heisenberg's Microscope - Sixty Symbols (skip to 2:38 , if you're already familiar with the uncertainty principle). So, correct me if I'm wrong, what we may claim according to the video is: the only way to measure an elementary particle is to make it interact with another elementary particle: it is therefore incorrect to say that an elementary particle doesn't have a well defined momentum/position before we make our measurement. We cannot access this data (momentum/position) without changing it, therefore it is correct to say that our ignorance about this data is not an intrinsic property of the Universe (but, rather, an important limit of how we can measure it). Please tell me how can both of the highlighted paragraphs be true or how they should be corrected. | The first paragraph is basically right, but I wouldn't ascribe the uncertainty principle to particles, just to the universe/physics in general. You can no more get arbitrarily good, simultaneous measurements of position and momentum (of anything ) than you can construct a function with an arbitrarily narrow peak whose Fourier transform is also arbitrarily narrowly peaked. Physics tells us position and momentum are related via the Fourier transform, mathematics places hard limits on them based on this relation. The second paragraph is used to explain the uncertainty principle all too often, and it is at best misleading, and really more wrong than anything else. To reiterate, uncertainty follows from the mathematical definitions of position and momentum, without consideration for what measurements you might be making. In fact, Bell's theorem tells us that under the hypothesis of locality (things are influenced only by their immediate surroundings, generally presumed to be true throughout physics), you cannot explain quantum mechanics by saying particles have "hidden" properties that can't be measured directly. This takes some getting used to, but quantum mechanics really is a theory of probability distributions for variables, and as such is richer than classical theories where all quantities have definite, fixed, underlying values, observable or not. See also the Kochen-Specker theorem . | {
"source": [
"https://physics.stackexchange.com/questions/114133",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20852/"
]
} |
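The Fourier-transform statement in the answer above can be checked numerically: a Gaussian wave packet saturates $\sigma_x\,\sigma_k = 1/2$ (equivalently $\Delta x\,\Delta p = \hbar/2$ with $p=\hbar k$), and no packet does better. A minimal numpy sketch, with an arbitrarily chosen packet width:

```python
import numpy as np

sigma = 1.0                                     # assumed packet width (hbar = 1)
x = np.linspace(-50, 50, 2**14)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))            # |psi|^2 has standard deviation sigma
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalise in x

phi = np.fft.fftshift(np.fft.fft(psi)) * dx     # momentum-space amplitude
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(x.size, d=dx))
dk = k[1] - k[0]
phi /= np.sqrt(np.sum(np.abs(phi)**2) * dk)     # normalise in k

def spread(grid, prob, step):
    mean = np.sum(grid * prob) * step
    return np.sqrt(np.sum((grid - mean)**2 * prob) * step)

sx = spread(x, np.abs(psi)**2, dx)
sk = spread(k, np.abs(phi)**2, dk)
print(sx, sk, sx * sk)                          # ~1.0, ~0.5, ~0.5 (the minimum)
```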
114,270 | Are there any atmospheric conditions under which a visible raindrop can fail to hit the ground by evaporating first? I imagine this would require a large vertical temperature difference, and possibly the rain forming very high up. Has anything like this been observed experimentally, or if not - is it possible to perform a calculation to show whether this is plausible? | Yes, most certainly, and meteorologists call this kind of rain Virga (see the Wikipedia page of the same name). These are the salient and more interesting points of the Wiki article: Often it is falling ice crystals that undergo compressional heating as they fall from greater heights, where the pressure is very low; It is very common in desert and temperate climates: Western United States, Canadian Prairies, the Middle East and Australia. It plays a role in seeding non-Virgal (i.e. reaching the ground as liquid) rain when virgal material is blown into another supersaturated cloud and begets rain through nucleation; Its evaporation, with its high associated latent heat, means that virga draws a great deal of heat from the surrounding air, thus begetting violent up and downdraughts hazardous to aeroplanes; Almost all (sulphuric acid) rain on Venus is virga. Presumably all rain on the early Earth was too. | {
"source": [
"https://physics.stackexchange.com/questions/114270",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45256/"
]
} |
114,466 | Consider a system of the two identical positive point charges situated in free space (isolated from the influence of any other external fields) as shown in the attached diagram. Particle $1$ is at $(a,a,0)$ and particle $2$ is at $(0,a,0)$ . Their velocities, at the considered instant of time, are as shown in the diagram ( $\mathbf{v}_1$ along the $+x$ axis, $\mathbf{v}_2$ along the $+z$ axis). Now, by applying the Biot-Savart law, we find that the magnetic field due to particle $2$ at the position of particle $1$ is along $+y$ axis, which means that the force acting on particle $1$ is along $+z$ axis according to Fleming’s left-hand rule. A similar analysis shows that there is no magnetic force on particle $2$ as the magnetic field of particle $1$ should vanish at positions (relative to particle $1$ ) located along its velocity vector. Now, if we observe the net torque on the system of two particles about the $y$ axis then it is non-zero and is directed along the $-y$ axis. Here, there are no external forces or torques acting on the isolated two-particle system, and yet the net torque, as well as the net force on the particles, are nonzero. Why? Also, Newton's third law of motion seems to be broken in this scenario. Why? Edit $1$ I have come to know from the responses that the electromagnetic field itself takes away some momentum and angular momentum about the considered axis. However, I think that if I consider only two charged particles as my system then the Abraham-Lorentz force can be assumed to be acting upon the system, and that is sufficient to make sure that we have considered the momentum being carried away by the electromagnetic field itself. Even after considering the action of the Abraham-Lorentz force for the two-particle system, the scenario breaks both Newton's third law and the conservation of linear and angular momentum. This is because the Abraham Lorentz forces do not exactly counterbalance the force and the torques, on the two-particle system under consideration, due to the magnetic field. Edit $2$ The previous edit was a result of confusion and misunderstanding on my part. The force associated with the momentum carried away by the electromagnetic field as a result of the electromagnetic interaction described in the question is the Lorentz force itself which simply doesn't obey the third law of motion. The Abraham-Lorentz force is a different story. It is associated with the momentum carried away by the radiation emitted by the accelerated charged particles. This is an additional force apart from the Lorentz force and corresponds to an additional carriage of momentum by the electromagnetic field. The momentum carried away by the electromagnetic field in correspondence with the Lorentz force doesn't correspond to radiation. | You are correct in your assertion that pairs of charged point particles can interact magnetically in ways that seemingly violate Newton's 3 rd law, and therefore also seem to violate the conservation of both linear and angular momentum. This is a fundamental result and it is the decisive (thought) experiment which forces us to change our viewpoint on electrodynamics from something like charged particles interact with each other to a field-based one that says charged particles interact with the electromagnetic field. 
What this means, and the key point here, is that the electromagnetic field should be considered as a dynamical entity of its own, on par with material particles, and it can hold energy, momentum, and angular momentum of its own. The linear and angular momentum of the complete dynamical system, which includes the particles and the field, is indeed conserved. This means that in a situation like your diagram, where there is a net force and torque on the mechanical side of the system (i.e. the particles), there are corresponding and opposite net forces and torques on the electromagnetic field. So, how much linear and angular momentum are there? This is a solid piece of classical electrodynamics: these momenta are 'stored' throughout space, with densities
$$
\mathbf g =\epsilon_0 \mathbf E\times\mathbf B
$$
and
$$
\mathbf j =\epsilon_0\mathbf r\times\left( \mathbf E\times\mathbf B\right),
$$
respectively. Once you account for these, it follows from Maxwell's equations and the Lorentz force that, for an isolated system, the total momenta are conserved. The details of the calculation are a bit messy, and so are the actual conservation laws; I gave a nice derivation of the linear momentum one in this answer . | {
"source": [
"https://physics.stackexchange.com/questions/114466",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/87745/"
]
} |
114,525 | I had a left over coffee cup this morning, and I tried to wash it out. I realized I always instinctively use hot water to clean things, as it seems to work better. A Google search showed that other people get similar results, but this Yahoo answer is a bit confusing in terms of hot water "exciting" dirt. What is the physical interaction between hot water and oil or a material burnt onto another vs cold water interaction? | The other answers are correct, but I think that you might benefit from a more "microscopic" view of what is happening here. Whenever one substance (a solute) dissolves in another (a solvent), what happens on the molecular scale is that the solute molecules are surrounded by the solvent molecules. What causes that to happen? As @Chris described, there are two principles at work - thermodynamics , and kinetics . In plain terms, you could think of thermodynamics as an answer to the question "how much will dissolve if I wait for an infinite amount of time," whereas kinetics answers the question "how long do I have to wait before X amount dissolves." Both questions are not usually easy to answer on the macroscopic scale (our world), but they are both governed by two very easy to understand principles on the microscopic scale (the world of molecules): potential and kinetic energy . Potential Energy On the macroscopic scale, we typically only think about gravitational potential energy - the field responsible for the force of gravity. We are used to thinking about objects that are high above the earth's surface falling towards the earth when given the opportunity. If I show you a picture of a rock sitting on the surface of the earth: And then ask "Where is the rock going to go?" you have a pretty good idea: it's going to go to the lowest point (we are including friction here). On the microscopic scale, gravitational fields are extremely weak, but in their place we have electrostatic potential energy fields. These are similar in the sense that things try to move to get from high potential energy to lower potential energies, but with one key difference: you can have negative and positive charges, and when charges have the opposite sign they attract each other, and when they have the same sign, they repel each other. Now, the details of how each individual molecule gets to have a particular charge are fairly complicated, but we can get away with understanding just one thing: All molecules have some attractive potential energy between them, but the magnitude of that potential energy varies by a lot. For example, the force between the hydrogen atom on one water molecule ( $H_2O$ ) and the oxygen atom on another water molecule is roughly 100 times stronger than the force between two oxygen molecules ( $O_2$ ). This is because the charge difference on water molecules is much greater (about 100 times) than the charge difference on oxygen molecules. What this means is we can always think of the potential energy between two atoms as looking something like this: The "ghost" particle represents a stationary atom, and the line represents the potential energy "surface" that another atom would see. From this graph, hopefully you can see that the moving atom would tend to fall towards the stationary atom until it just touches it, at which point it would stop. Since all atoms have some attractive force between them, and only the magnitude varies, we can keep this picture in our minds and just change the depth of the potential energy "well" to make the forces stronger or weaker. 
Kinetic Energy Let's modify the first potential energy surface just a little bit: Now if I ask "where is the rock going to go?," It's a little bit tougher to answer. The reason is that you can tell the rock is "trapped" in the first little valley. Intuitively, you probably can see that if it had some velocity, or some kinetic energy , it could escape the first valley and would wind up in the second. Thinking about it this way, you can also see that even in the first picture, it would need a little bit of kinetic energy to get moving. You can also see that if either rock has a lot of kinetic energy, it will actually go past the deeper valley and wind up somewhere past the right side of the image. What we can take away from this is that potential energy surfaces tell use where things want (I use the term very loosely) to go, while kinetic energy tells us whether they are able to get there. Let's look at another microscopic picture: Now the atoms from before are at their lowest potential energy. In order for them to come apart, you will need to give them some kinetic energy. How do we give atoms kinetic energy? By increasing the temperature . Temperature is directly related to kinetic energy - as the temperature goes up, so does the average kinetic energy of every atom and molecule in a system. By now you might be able to guess how increasing the temperature of water helps it to clean more effectively, but let's look at some details to be sure. Solubility We can take the microscopic picture of potential and kinetic energies and extract two important guidelines from it: All atoms are "sticky," although some are stickier than others Higher temperatures mean that atoms have larger kinetic energies Going back to the coffee cup question, all we need to do now is think about how these will play out with the particular molecules you are looking at. Coffee is a mixture of lots of different stuff - oils, water-soluble compounds, burnt hydrocarbons (for an old coffee cup), etc. Each of these things has a different "stickiness." Oils are not very sticky at all - the attractive forces between them are fairly weak. Water-soluble compounds are very "sticky" - they attract each other strongly because they have large charges. Since water molecules also have large charges, this is what makes water-soluble compounds water-soluble - they stick to water easily. Burnt hydrocarbons are not very sticky, sort of like oils. Since molecules with large charges tend to stick to water molecules, we call them hydrophilic - meaning that they "love" water. Molecules that don't have large charges are called hydrophobic - they "fear" water. Although the name suggests they are repelled by water, it's important to know that there aren't actually any repelling forces between water and hydrophobic compounds - it's just that water likes itself so much, the hydrophobic compounds are excluded and wind up sticking to each other. Going back to the dirty coffee cup, when we add water and start scrubbing, a bunch of stuff happens: Hydrophilic Compounds Hydrophilic compounds dissolve quickly in water because they stick to water pretty well compared to how well they stick to each other and to the cup. In the case where they stick to each other or the cup better than water, the difference isn't huge, so it doesn't take much kinetic energy to get them into the water. So, warm water makes them dissolve more easily. Hydrophobic Compounds Hydrophobic compounds (oils, burnt stuff, most stains) don't stick to the water. 
They stick to each other a little bit (remember that the forces are much weaker compared to water since the charges are very small), but water sticks to itself so well that the oils don't have a chance to get between the water molecules. We can scrub them, which will provide enough energy to knock them loose and allow the water to carry them away, but if we were to increase the kinetic energy as well by increasing the water temperature, we could overcome both the weaker forces holding the hydrophobic compounds together, while simultaneously giving the water molecules more mobility so they can move apart and let the hydrophobic compounds in. And so, warmer water makes it easier to wash away hydrophobic compounds as well. Macroscopic View We can tie this back to the original thermodynamics vs. kinetics discussion. If you increase the temperature of the water, the answer to the question "How much will dissolve" is "more." (That was the thermodynamics part). The answer to "How long will it take" is "not as long" (kinetics). And as @anna said, there are other things you can do to make it even easier. Soap for example, is made of long chain molecules with one charged end and one uncharged end. This means one end is hydrophilic, while the other end is hydrophobic. When you add soap to the picture, the hydrophilic end goes into the water while the hydrophobic end tries to surround the oils and burnt stuff. The net result is little "bubbles" (called micelles ) made up of soap molecules surrounding hydrophobic molecules that are in turn surrounded by water. | {
"source": [
"https://physics.stackexchange.com/questions/114525",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/47311/"
]
} |
114,669 | Newton's third law states that every force has an equal and opposite reaction. But this doesn't seem like the case in the following scenario: For example, a person punches a wall and the wall breaks. The wall wasn't able to withstand the force, nor provide equal force in opposite direction to stop the punch. If the force was indeed equal, wouldn't the punch not break the wall? I.e., like punching concrete, you'll just hurt your hand. Doesn't this mean Newton's third law is wrong in these cases? | Despite 11 answers to this question already, I don't feel that any have answered the question well. (Note: This answer is simplified and assumes the punch is slow enough to ignore inertia and relativity) Firstly, let's look at force at the atomic level. This is where the force is really happening. The forces that we feel in everyday life are generally the forces between atoms and molecules ( intermolecular forces ). I'll use Helium atoms as an example, because they're easy to draw. When two He atoms get close together, their electron shells overlap and cause them to repel each other . Note that you never get a situation where one atom repels, and the other does nothing, or one repels and one attracts. Always they both repel each other, or both attract each other, and both atoms feel the same magnitude force, in exactly opposite directions . The force they feel is a function of the distance between them. The force between them behaves basically like a spring. In the illustration above, the two atoms are repelling each other, and will accelerate away from each other. As they move apart, the force decreases, until at a certain point, it reaches zero, and we consider them not to be 'touching' any more. Now imagine we start with one atom stationary, and throw another atom at it. When the moving atom gets close enough to the stationary one, they will feel the force of repulsion. Both will accelerate based on the force between them. They accelerate in opposite directions, so the stationary atom accelerates and flies off, while the moving one decelerates to a stop. Molecules behave in a similar way towards each other. Since a wall is made up of molecules, it behaves pretty much like the force between molecules, except in a solid object, neighboring molecules are bonded together, meaning that when you push them closer together, they repel, and when you pull them further apart, they attract. The wall is basically a very stiff spring. When you push on a wall, it bends. Bending is the only way it can push back on you. Bending means that some of the molecules in the wall are pushed closer together, and some are pulled further apart. The harder you push, the more it bends. It bends just so that it's pushing back on you as hard as you're pushing. If you're pushing with a constant force, everything is in equilibrium, and all the force vectors acting on each molecule add up to zero, so nothing is accelerating. If you push hard enough, you'll manage to stretch some molecules far enough apart that their bond breaks. At that point the force between them drops to zero. Now those molecules are not in equilibrium, and they will accelerate away from each other. If you push hard enough, and the wall breaks, it's no longer bending, it's accelerating away from your hand, just like the atoms in the example above. As it accelerates away, the force between your hand and the wall decreases and reaches zero when your hand and the wall are no longer 'touching'. 
When you punch a wall, the forces you and the wall are feeling are entirely made up of the forces between atoms and molecules. So whether the wall stands or falls, Newton's 3rd law holds the whole time. The wall can only push back on your hand to the extent that it can bend without breaking. But what if I push really hard on the wall ? The answer is you can't . You can put a lot of effort into the punch, but if you were to measure the actual force applied to the wall, it would increase up to the point, then the wall would break, then the force would drop back down to zero. Newton's 3rd law doesn't mean that everything is indestructible. Added: If you haven't already discovered Veritasium's excellent YouTube channel, you should. He has a good video helping us to understand Newton's Third Law: | {
"source": [
"https://physics.stackexchange.com/questions/114669",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/46233/"
]
} |
114,697 | Why is absolute zero considered to be asymptotic? Wouldn't regions such as massive gaps between galaxy clusters have temperatures of absolute zero? I just do not see why our model must work the way that it does. I mean there have to be regions with no thermal energy out there, the universe is massive. | We can only approach absolute zero asymptotically because we can't suck heat out of a system. The only way we can get heat out is to place our system in contact with something cooler and let the heat flow from hot to cold as it usually does. Since there is nothing colder than absolute zero, we can never get all the heat to flow out of a system. We can reduce the temperature by increasing the size of the system and diluting the heat. In fact this is why the CMB (cosmic microwave background) temperature is only 2.7K rather than gazillions of K as it was shortly after the Big Bang. The expansion of the universe has diluted the heat left over from the Big Bang and reduced the temperature. However achieving absolute zero this way would require infinite dilution and therefore infinite time, which is why the universe approaches absolute zero asymptotically. Actually, assuming the dark energy doesn't go away the universe will never cool to absolute zero even given infinite time. This is because the accelerated expansion caused by dark energy creates a cosmological horizon, and this produces Hawking radiation. The Hawking radiation will keep the temperature above absolute zero. | {
"source": [
"https://physics.stackexchange.com/questions/114697",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
116,290 | I'm an electrical engineer, and I understand wave propagation, interference patterns, and so on. But I'm missing something basic, so perhaps my understanding isn't as good as I believe. I'll show my thinking; please tell me where I am mistaken. Say that I have two guitars. Each one is precisely tuned. Each guitar has somebody play an open-string E note. The result is a louder note, which implies to me constructive interference. But, if this is true, why is there never a destructive "noise cancelling" effect? Obviously, both guitar strings aren't always going to vibrate in the same phase. | You correctly diagnosed the problem in the last sentence. The problem is with the phase. There is no interference happening here. The two sources do not maintain a constant phase difference. When interference occurs (with a constant phase relation between the two sources), you will have a net intensity of $(E_1 + E_2)^2$, which is four times either if they are equal. In the destructive case, the net result gives $0$ intensity (for a phase difference of $\pi$). However, when there is no constant phase relation, the phase difference would be randomly distributed between $0$ and $2\pi$, and in this case, you just have the average intensity adding up, to give $\langle E_1^2 \rangle + \langle E_2^2 \rangle$, which is 2 times either if they are equal. That's what you hear, and mistake it for constructive interference. | {
"source": [
"https://physics.stackexchange.com/questions/116290",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/48651/"
]
} |
116,452 | I have two questions on mirrors. I’ve read that in the past quality mirrors were coated with silver but that today vacuum evaporated coatings of aluminum are the accepted standard. When I look at the reflectance vs. wavelength plot , I see that silver has a higher reflectance than aluminum. So why use aluminum instead of silver? If one wants the highest quality mirrors I assume that cost is secondary, so what is the physics that I am missing here? Why are the more “technical mirrors” (I am not sure what this means but I assume more precise?) front-surfaced instead of back-surfaced? | Telescope mirrors and other mirrors used by scientists regularly do use a silver coating. See for instance here . However, aluminum coatings are the norm (certainly for the large primary mirrors deployed in telescopes) for durability reasons. I quote from the text linked to above: The challenge with using silver as a coating material is that, unlike aluminum, it tarnishes with exposure to air, specifically to sulfur. And "Like the family silver set," explains Tom Geballe, Gemini Senior Astronomer, "which slowly develops brown tarnish spots over time and must be regularly polished, the shiny silver on a telescope mirror also tarnishes rapidly reducing reflectivity and increasing emissivity. The observatory's engineers, however, can't just grab a cloth and some polish when the tarnish spots appear." On your second question: a back surface reflective coating implies an additional reflective surface: the air-glass interface. This leads to increased light losses and the need for anti-reflective coatings. | {
"source": [
"https://physics.stackexchange.com/questions/116452",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11361/"
]
} |
116,595 | Why is the application of probability in Quantum Mechanics (QM) fundamentally different from its application in other areas? QM applies probability according to the same probability axioms as in other areas of physics, engineering, etc. Why is there a difference? Naively one would assume one of these possibilities: It is not the same probability (theory?) It is a matter of interpretation (of the formalism?) Something else? Many answers (which I am still studying) focus on the fact that the combined probability of two mutually exclusive events in QM is not equal to the sum of the probabilities of each event (which holds classically by definition). This fact (appears to) makes the formulation of another probability (a quantum one) a necessity. Yet this again breaks down to assumed independent or assumed mutually exclusive , if this is not so, the "classical probability" is applicable (as indeed in other areas). This is one of the main points of the question. | The theory of probability used in QM is intrinsically different from the one commonly used for the following reason: The space of events is non-distributive (more properly non-Boolean ) and this fact deeply affects the conditional probability theory. The probability that A happens if B happened is computed differently in classical probability theory and in quantum theory, when A and B are quantum incompatible events . In both cases probability is a measure on a lattice , but, in the classical case, the lattice is a Boolean one (a $\sigma$ -algebra), in the quantum case it is not. To be clearer, classical probability is a map $\mu: \Sigma(X) \to [0,1]$ such that $\Sigma(X)$ is a class of subsets of the set $X$ including $\emptyset$ , closed with respect to the complement and the countable union, and such that $\mu(X)=1$ and: $$\mu(\cup_{n\in \mathbb N}E_n) = \sum_n \mu(E_n)\quad \mbox{if $E_k \in \Sigma(X)$ with $E_p\cap E_q= \emptyset$ for $p\neq q$.}$$ The elements of $\Sigma(X)$ are the events whose probability is $\mu$ . In this view, for instance, if $E,F \in \Sigma(X)$ , $E\cap F$ is logically interpreted as the event " $E$ AND $F$ ".
Similarly $E\cup F$ corresponds to " $E$ OR $F$ " and $X\setminus F$ has the meaning of "NOT $F$ " and so on.
The probability of $P$ when $Q$ is given verifies $$\mu(P|Q) = \frac{\mu(P \cap Q)}{\mu(Q)}\:.\tag{1}$$ If you instead consider a quantum system, there are "events", i.e. elementary "yes/no" propositions experimentally testable, that cannot be joined by logical operators AND and OR. An example is $P=$ "the $x$ component of the spin of this electron is $1/2$ " and $Q=$ "the $y$ component is $1/2$ ". There is no experimental device able to assign a truth value to $P$ and $Q$ simultaneously , so that elementary propositions such as " $P$ and $Q$ " make no sense. Pairs of propositions like $P$ and $Q$ above are physically incompatible . In quantum theories (the most elementary version due to von Neumann), the events of a physical system are represented by the orthogonal projectors of a separable Hilbert space $H$ . The set ${\cal P}(H)$ of those operators replaces the classical $\Sigma(X)$ . In general, the meaning of $P\in {\cal P}(H)$ is something like
"the value of the observable $Z$ belongs to the subset $I \subset \mathbb R$ " for some observable $Z$ and some set $I$ . There is a procedure to integrate such a class of projectors labelled on real subsets to construct a self-adjoint operator $\hat{Z}$ associated to the observable $Z$ , and this is nothing but the physical meaning of the spectral decomposition theorem . If $P, Q \in {\cal P}(H)$ , there are two possibilities: $P$ and $Q$ commute or they do not . Von Neumann's fundamental axiom states that commutativity is the mathematical counterpart of physical compatibility . When $P$ and $Q$ commute, $PQ$ and $P+Q-PQ$ are still orthogonal projectors, that is, elements of ${\cal P}(H)$ . In this situation, $PQ$ corresponds to " $P$ AND $Q$ ", whereas $P+Q-PQ$ corresponds to " $P$ OR $Q$ " and so on, in particular "NOT $P$ " is always interpreted as the orthogonal projector onto $P(H)^\perp$ (the orthogonal subspace of $P(H)$ ), and all classical formalism holds true this way.
As a matter of fact, a maximal set of pairwise commuting projectors has formal properties identical to those of classical logic: it is a Boolean $\sigma$ -algebra. In this picture, a quantum state is a map assigning the probability $\mu(P)$ that $P$ is experimentally verified to every $P\in {\cal P}(H)$ .
It has to satisfy: $\mu(I)=1$ and $$\mu\left(\sum_{n\in \mathbb N}P_n\right) = \sum_n \mu(P_n)\quad \mbox{if $P_k \in {\cal P}(H)$ with $P_p P_q= P_qP_p =0$ for $p\neq q$.}$$ Celebrated Gleason's Theorem , establishes that, if $\text{dim}(H)\neq 2$ , the measures $\mu$ are all of the form $\mu(P)= \text{tr}(\rho_\mu P)$ for some mixed state $\rho_\mu$ (a positive trace-class operator with unit trace), biunivocally determined by $\mu$ .
In the convex set of states, the extremal elements are the standard pure states . They are determined, up to a phase, by unit vectors $\psi \in H$ , so that, with some trivial computation (completing $\psi_\mu$ to an orthonormal basis of $H$ and using that basis to compute the trace), $$\mu(P) = \langle \psi_\mu | P \psi_\mu \rangle = ||P \psi_\mu||^2\:.$$ (Nowadays, there is a generalized version of this picture, where the set ${\cal P}(H)$ is replaced by the class of bounded positive operators in $H$ (the so-called "effects") and Gleason's theorem is replaced by Busch's theorem with a very similar statement.) Quantum probability is therefore given by the map, for a given generally mixed state $\rho$ , $${\cal P}(H) \ni P \mapsto \mu(P) =\text{tr}(\rho_\mu P) $$ It is clear that, as soon as one deals with physically incompatible
propositions , (1) cannot hold just because there is nothing like $P \cap Q$ in the set of physically sensible quantum propositions.
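A small illustrative sketch (not part of the original answer) of the two incompatible spin propositions mentioned above, written as projectors on $\mathbb C^2$ with NumPy; the last line evaluates the quantum conditional probability $\text{tr}(\rho\, QPQ)/\text{tr}(\rho\, Q)$ introduced as formula (2) just below, for the maximally mixed state:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

P = (I2 + sx) / 2    # proposition "spin-x component is +1/2"
Q = (I2 + sy) / 2    # proposition "spin-y component is +1/2"
rho = I2 / 2         # maximally mixed state

print(np.allclose(P @ Q, Q @ P))   # False: the projectors do not commute
cond = np.trace(rho @ Q @ P @ Q) / np.trace(rho @ Q)
print(cond.real)                   # ≈ 0.5
```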
All that is due to the fact that the space of events ${\cal P}(H)$ is now a non-commutative set of projectors, giving rise to a non-Boolean lattice. The formula replacing (1) is now: $$\mu(P|Q) = \frac{\text{tr}(\rho_\mu QPQ)}{\text{tr}(\rho_\mu Q)}\tag{2}\:.$$ Therein, $QPQ$ is an orthogonal projector and can be interpreted as " $P$ AND $Q$ " (i.e., $P\cap Q$ ) when $P$ and $Q$ are compatible. In this case (1) holds true again. (2) gives rise to all "strange things" showing up in quantum experiments (as in the double slit one). In particular the fact that, in QM, probabilities are computed by combining complex probability amplitudes arises from (2). (2) just relies upon the von Neumann-Luders reduction postulate stating that, if the outcome of the measurement of $P\in {\cal P}(H)$ is YES when the state was $\mu$ (i.e., $\rho_\mu$ ), the state immediately after the measurement is $\mu'$ associated to $\rho_{\mu'}$ with $$\rho_{\mu'} := \frac{P\rho_\mu P}{\text{tr}(\rho_\mu P)}\:.$$ ADDENDUM . Actually, it is possible to extend the notion of logical operators AND and OR for all pairs of elements in ${\cal P}(H)$ and that was the program of von Neumann and Birkhoff (the quantum logic ). In fact just the lattice structure of ${\cal P}(H)$ permits it, or better, is it. With this extended notion of AND and OR, " $P$ AND $Q$ " is the orthogonal projector onto $P(H)\cap Q(H)$ whereas " $P$ OR $Q$ " is the orthogonal projector onto the closure of the space $P(H)+Q(H)$ . When $P$ and $Q$ commute these notions of AND and OR reduce to the standard ones. However, with the extended definitions, ${\cal P}(H)$ becomes a lattice in the proper mathematical sense, where the partial order relation is given by the standard inclusion of closed subspaces ( $P \geq Q$ means $P(H) \supset Q(H)$ ).
The point is that the physical interpretation of this extension of AND and OR is not clear. The resulting lattice is however non-Boolean. In other words, for instance, these extended AND and OR are not distributive as the standard AND and OR are (this reveals their quantum nature). However, also keeping the definition of "NOT $P$ " as the orthogonal projector onto $P(H)^\perp$ , the structure found for ${\cal P}(H)$ is well known: a $\sigma$ -complete, bounded, orthomodular, separable, atomic, irreducible lattice verifying the covering property. Around 1995, Solér definitively proved a conjecture due to von Neumann stating that there are only three possibilities for practically realizing such lattices: The lattice of orthogonal projectors in a separable complex Hilbert space, the lattice of orthogonal projectors in a separable real Hilbert space, the lattice of orthogonal projectors in a separable quaternionic Hilbert space. Gleason's theorem is valid in the three cases. The extension to the quaternionic case was obtained by Varadarajan in his famous book 1 on the geometry of quantum theory, however a gap in his proof has been fixed in this published paper I have co-authored 2 . Assuming Poincaré symmetry, at least for elementary systems (elementary particles), the case of real and quaternionic Hilbert spaces can be ruled out (here is a pair of published works I have co-authored on the subject: 3 and 4 ). ADDENDUM2 . After a discussion with Harry Johnston, I think that an interpretative remark is worth mentioning about the probabilistic content of the state $\mu$ within the picture I illustrated above. In QM $\mu(P)$ is the probability that, if I performed a certain experiment (in order to check $P$ ), $P$ would turn out to be true.
It seems that there is here a difference with respect to the classical notion of probability applied to classical systems. There, probability mainly refers to something already existent (and to our incomplete knowledge of it). In the formulation of QM I presented above, probability instead refers to that which will happen if... ADDENDUM3 . For $n=1$ the theorem of Gleason is valid and trivial. For $n=2$ there is known counterexample. $\mu_\nu(P)= \frac{1}{2}(1+ (v \cdot n_P)^3)$ where $v$ is a unit vector in $\mathbb R^3$ and $n_P$ is the unit vector in $\mathbb R^3$ associated to the orthogonal projector $P: \mathbb C^2 \to \mathbb C^2$ in the Bloch sphere: $P= \frac{1}{2} \left(I+\sum_{j=1}^3 n_j \sigma_j \right)$ . ADDENDUM4 . From the perspective of quantum probability, the von Neumann-Luders reduction postulate has a very natural interpretation. Suppose that $\mu$ is a probability measure over the quantum lattice ${\cal P}(H)$ representing a quantum state and assume that the measurement of $P \in {\cal P}(H)$ , on that state, has outcome $1$ . The post measurement state is therefore represented by $\mu_P(\cdot) = \mu(P \cdot P)$ , just in view of the aforementioned postulate. It is easy to prove that $\mu_P : {\cal P}(H) \to [0,1]$ is the only probability measure such that $$\mu_P(Q) = \frac{\mu(Q)}{\mu(P)} \quad \mbox{if $Q \leq P$}\:.$$ | {
"source": [
"https://physics.stackexchange.com/questions/116595",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/44176/"
]
} |
116,608 | The electromagnetic force and strong and weak forces require particles like photons and gluons . But in case of gravity there is no such particle found. Every mass bearing object creates a gravitational field around it, and whenever another mass bearing object enters its field the gravitational force comes into operation. If all other forces of nature have some particles associated with them why should gravity be an exception? And if there is no such particle, what exactly is the gravitational field and how does it spread over an infinite distance and cause the gravitational force to operate? Note: I am a high school student and have not studied quantum mechanics . | You're quite right that the other fundamental forces of Nature possess mediator particles, e.g. the photon for the electromagnetic force. For gravity, a graviton particle has been postulated, and is included in the
five standard string theories which are candidates for quantum gravity. From a quantum field theory perspective, the graviton arises as an excitation of the gravitational field. String theory, of course, postulates it arises in the spectrum of a closed string. Mass certainly gives rise to a gravitational field, but many other quantities do as well, according to the field equations of general relativity. As you're a high school student, I'll present them as, $$\underbrace{G_{\mu\nu}}_{\text{geometry}}\sim \underbrace{T_{\mu\nu}}_{\text{matter}}$$ Spacetime geometry, and hence the gravitational effects, are equated to the matter present in a system, which may include energy , pressure and other quantities other than mass. From a general relativity standpoint, the gravitational field may be viewed, or interpreted, as the curvature of spacetime, which is a manifold, i.e. surface. If we take space to be infinitely large, then the gravitational field must extend indefinitely; otherwise where would we choose to truncate? Even from a Newtonian perspective, we see that given the equation, $$F_g \sim \frac{1}{r^2}$$ gravity must extend infinitely, as we never reach the point $r=\infty$ where it is truly zero. As you asked, if the graviton is postulated, what is the need for a field? Well, we know that particle number is not conserved ; we can have virtual particle and anti-particle pair production, and as such the idea that a field propagates throughout space, and the particles are excitations of the field, is a more compatible viewpoint. In addition, the concept of a field arises because of locality . From empirical evidence we know gravitation and electromagnetism do not act instantaneously, at every point. | {
"source": [
"https://physics.stackexchange.com/questions/116608",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/48733/"
]
} |
116,921 | What happens to all of the electrons and protons in the material of a neutron star? Could there ever be an electron star or a proton star? | If a dense, spherical star were made of uniformly charged matter, there'd be an attractive gravitational force and a repulsive electrical force. These would balance for a very small net charge:
$$
dF = \frac1{r^2}\left( - GM_\text{inside} dm + \frac1{4\pi\epsilon_0}Q_\text{inside} dq
\right)
$$
which balances if
$$
\frac{dq}{dm} = \frac{Q_\text{inside}}{M_\text{inside}} = \sqrt{G\cdot 4\pi\epsilon_0} \approx 10^{-18} \frac{e}{\mathrm{GeV}/c^2}.
$$
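A quick numerical check of the figure above (illustrative only, using rounded constants):

```python
# sqrt(G * 4*pi*eps0) expressed as elementary charges per GeV/c^2
G = 6.674e-11            # m^3 kg^-1 s^-2
eps0 = 8.854e-12         # F/m
e = 1.602e-19            # C
GeV_per_c2 = 1.783e-27   # kg

ratio = (4 * 3.14159265 * eps0 * G) ** 0.5   # ~8.6e-11 C per kg
print(ratio * GeV_per_c2 / e)                # ~1e-18 charges per GeV/c^2
```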
This is approximately one extra fundamental charge per $10^{18}$ nucleons, or a million extra charges per mole — not much. Any more charge than this and the star would be unbound and fly apart. What actually happens is that the protons and electrons undergo electron capture to produce neutrons and electron-type neutrinos. | {
"source": [
"https://physics.stackexchange.com/questions/116921",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/31340/"
]
} |
117,091 | If a ball hits the floor after an acceleration then why does it bounces lower? I mean the Energy is passed to the floor then why does the floor give back less Energy? | Assuming for a moment an infinitely hard and smooth surface, let's look at the energy of the ball. When the ball is dropped from a height $h$, initial potential energy is $mgh$. You would expect it to accelerate to a velocity $v=\sqrt{2gh}$. However, during the fall, it will experience drag from the air. This will cause the dissipation of some of the energy of the ball into energy of the air (turbulence, heating, flow). How large this effect is will depend on the ball, the height, ... For example a ping pong ball (light for its size) will experience a much greater effect than a golf ball (same size, but heavier). Then we get to the impact. The ball will deform during the impact - the center of mass tries to keep going, but the surface it hits try to stop it. This leads to elastic deformation like this: (Image credit: http://ej.iop.org/images/0143-0807/34/2/345/Full/ejp450030f2_online.jpg ) The potential energy of the ball was converted to elastic energy. You can think of it as the mass of the ball being mounted to a spring that compresses when you hit the floor - but there will be some friction (both inside the ball, and particularly between the ball and the floor) which will dissipate energy: . The picture on the left is the "in flight" state - the spring is uncoiled. The picture in the middle is the "fully compressed" state in which all energy of motion would have been converted to elastic energy. The picture on the right is the "actual" state: the spring did not compress all the way down because energy was lost due to friction (and thus was not available for compressing the spring). Why do I say that friction between ball and floor is important for vertical impact? Look at this picture: Image credit: http://deansomerset.com/wp-content/uploads/2011/11/tennis-ball-impact.jpg This is a tennis ball bouncing on the ground. See how distorted it is? Imagine taking that spherical surface, and pushing it into this new shape. The distortion requires you to change the contact area. As you do so, ball rubs against floor. The lateral force that this generates dissipates heat - so energy is lost instead of being stored in the elasticity of the ball. Of course for an air filled ball, there are losses associated with the compression of the air: while the air is (adiabatically) compressed, it heats up; while it is hot, it dissipates heat to the environment; and when it expands, it cools down again. This ought to mean that when you hit a ball it gets cold: but we know that a squash ball, for example, gets HOT, not cold, when you hit it (this is why squash players "warm the ball up" before playing: as it becomes hotter, the pressure in the ball rises and it becomes bouncier). This heating is due to the extreme distortion (and thus again friction) of the ball during impact: Source: screen shots from https://www.youtube.com/watch?v=5IOvqCHTS7o There are other loss mechanisms - internal friction of the rubber in the ball, internal friction in the surface you are hitting (sand vs concrete), ... All this combines to give a particular ball and surface combination something called the coefficient of restitution - a number that expresses how much of the energy of the ball before the impact is "given back" (restitution (noun): the restoration of something lost or stolen to its proper owner.) after the impact. 
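As a rough illustration (using the common convention, not anything specific to this answer, in which $e$ is the ratio of rebound speed to impact speed): each bounce then keeps a fraction $e^2$ of the kinetic energy, so the successive peak heights follow $$h_{n+1} = e^2\, h_n \quad\Rightarrow\quad h_n = e^{2n}\, h_0\:,$$ and with, say, $e \approx 0.8$ the ball is already down to about $0.8^{10} \approx 0.11$ of its starting height after five bounces.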
This coefficient is always less than 1 (unless you have flubber ). Since the height to which the ball will bounce is directly proportional to its energy (barring effects of air friction), with a coefficient of restitution of less than one the ball will bounce less and less high. | {
"source": [
"https://physics.stackexchange.com/questions/117091",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/44025/"
]
} |
117,101 | The sun's light can cast the shadow of another object, but does it ever cast its own shadow? | Yes. For example, on October 8th 1970 Earth was in the Sun's radiofrequency shadow with respect to quasar 3C 279 . In other words, quasar 3C 279 was occluded by the sun. Observations from just before and after the occultation permitted measurement of the bending of radio waves as a test of general relativity. The sun would also block other frequencies of electromagnetic radiation including visible light. | {
"source": [
"https://physics.stackexchange.com/questions/117101",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/47304/"
]
} |
117,102 | We all know that in an idealised world all objects accelerate at the same rate when dropped regardless of their mass. We also know that in reality (or more accurately, in air) a lead feather falls much faster than a duck's feather with exactly the same dimensions/structure etc. A loose explanation is that the increased mass of the lead feather somehow defeats the air resistance more effectively than the duck's feather. Is there a more formal mathematical explanation for why one falls faster than the other? | We also know that in reality a lead feather falls much faster than a
duck's feather with exactly the same dimensions/structure etc No, not in reality, in air . In a vacuum, say, on the surface of the moon ( as demonstrated here ), they fall at the same rate. Is there a more formal mathematical explanation for why one falls
faster than the other? If the two objects have the same shape, the drag force on the each object, as a function of speed $v$, is the same. The total force accelerating the object downwards is the difference between the force of gravity and the drag force: $$F_{net} = mg - f_d(v)$$ The acceleration of each object is thus $$a = \frac{F_{net}}{m} = g - \frac{f_d(v)}{m}$$ Note that in the absence of drag, the acceleration is $g$. With drag, however, the acceleration, at a given speed, is reduced by $$\frac{f_d(v)}{m}$$ For the much more massive lead feather, this term is much smaller than for the duck's feather. | {
"source": [
"https://physics.stackexchange.com/questions/117102",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/181630/"
]
} |
118,460 | I have always wondered and once I even got it, but then completely forgot. I understand that gravity causes high and low tides in oceans, but why does it occur on the other side of Earth? | Imagine that we have a very massive object in space. At some distance away (call it ten units) we release three tennis balls in a row: The tennis balls all fall towards the massive object. But because gravity goes like distance squared, the nearer balls feel a stronger attraction than the farther balls, and they move apart from each other: You're riding on the middle tennis ball. You feel like you're in free fall, in a good inertial frame. You look towards the heavy object and you see the leading tennis ball moving away from you. You look away from the heavy object and you see the following tennis ball moving away from you. The heavy object is pulling the three tennis balls apart. Likewise, if you had three objects at the same distance falling towards the massive object, you'd see them converge as they all fell along slightly different rays towards the same center. This gives the tidal compression. You can imagine the process of launching a whole constellation of tennis balls, choosing the center one as your "rest frame," and having their motions approximate the arrow pattern in Joshua's figure. The situation stays essentially the same if you add angular momentum, except that then your tennis ball constellation doesn't crash onto the massive object. | {
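A tiny numerical illustration of that stretching (toy units, not from the original answer): the Newtonian pull $GM/r^2$ on the three balls at slightly different distances.

```python
GM = 100.0                   # arbitrary units
for r in (9.0, 10.0, 11.0):  # near, middle and far tennis ball
    print(r, GM / r**2)
# 9.0  -> ~1.23  (pulled hardest)
# 10.0 -> 1.00
# 11.0 -> ~0.83  (pulled least), so the row of balls stretches apart
```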
"source": [
"https://physics.stackexchange.com/questions/118460",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/30901/"
]
} |
118,734 | I work in a 4 story building that is approx. 150 feet away from a set of train tracks. When a large (40+ car) freight train goes by, the shaking in the building is perceptible. As I've watched the train go by, there does not appear to be any side to side movement and the speed is constant. What exactly is causing the vibrations in the ground? Is it simply the train's traversal over each segment of track? Surely the train itself is not "bouncing" along the track with enough energy transfer to shake the ground, right? | There's a nice article on exactly this subject in this PDF file . Summarising from the article: the vibration arises because the track is not completely smooth and the train wheels are not perfectly circular. As the train moves along the track, the result is an oscillating force at each wheel/track contact, and this is transmitted to the ground at each sleeper/ground contact. It's this force that shakes the ground. | {
"source": [
"https://physics.stackexchange.com/questions/118734",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/50263/"
]
} |
118,825 | I see this all the time* that there still doesn't exist a mathematical proof for confinement. What does this really mean and what would a sketch of a proof look like? What I mean by that second question is: what are the steps one needs to prove in order to "mathematically prove confinement"? * See e.g. Scherer's "Introduction to Chiral Perturbation Theory" middle of page 7 | The problem In case you were not aware of this, finding a proof for confinement is one of the Millennium Problems by the Clay Mathematics Institute. You can find the (detailed) answer to your question in the official problem description by Arthur Jaffe and Edward Witten. In short: proving confinement is essentially equivalent to showing that a quantum Yang-Mills theory exists and is equipped with a "mass gap". The latter manifests itself in the fact that the lowest state in the spectrum of the theory cannot have an arbitrarily low energy, but can be found at some energy $\Delta>0$.
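Stated slightly more formally (this is only a paraphrase of the Jaffe-Witten problem description, not an additional claim): the requirement is that the Hamiltonian $H$ of the quantized Yang-Mills theory satisfies $$\mathrm{spec}(H) \subset \{0\} \cup [\Delta, \infty) \quad \text{for some } \Delta > 0\:,$$ i.e. apart from the vacuum there are no states of arbitrarily small energy.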
Proving this means formulating the theory in the framework of axiomatic quantum field theory and deducing systematically all of its properties. Mass gap implies confinement In order to understand why proving that the theory has a mass gap is equivalent to proving confinement, we first have to understand what confinement is. In technical language it means that all observable states of finite energy are singlets under transformations of the global colour $\text{SU}(3)$. In simple terms this means that all observable particles are colour-neutral. Since quarks and gluons themselves carry colour charge, this implies that they cannot propagate freely, but occur only in bound states, namely hadrons. Proving that the states in the theory cannot have arbitrarily low energies, i.e. there is a mass gap, means that there are no free particles. This in turn means that there cannot be free massless gluons which would have no lower bound on their energy. Hence, a mass gap implies confinement. Motivation The existence of confinement, while phenomenologically well-established, is not fully understood on a purely theoretical level. Confinement is a low energy phenomenon and is as such not accessible by perturbative QCD. There exist various low energy effective theories such as chiral perturbation theory which, while giving good phenomenological descriptions of hadron physics, do not teach us much about the underlying mechanism. Lattice QCD, albeit good for certain qualitative and quantitative predictions, also does not allow us to prove something on a fundamental level. Furthermore, there is the AdS/CFT correspondence, which allows us to describe theories which are similar to QCD in many respects, but a description of QCD itself is not accessible at this point. To conclude: there are many open questions to answer before we have a full understanding of QCD. | {
"source": [
"https://physics.stackexchange.com/questions/118825",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/31965/"
]
} |
118,960 | My layman understanding of the Uncertainty Principle is that you can't determine the both the position and momentum of a particle at the same point in time, because measuring one variable changes the other, and both cannot be measured at once. But what happens if I measure a charged particle with any number of detectors over a period of time? Can I use a multitude of measurements to infer these properties for some point in the past? If not, how close can we get? That is, how precise can our estimate be? | The uncertainty principle should be understood as follows: The position and momentum of a particle are not well-defined at the same time. Quantum mechanically, this is expressed through the fact that the position and momentum operators don't commute: $[x,p]=i\hbar$. The most intuitive explanation, for me, is to think about it in terms of wave-particle duality. De Broglie introduced the idea that every particle also exhibits the properties of a wave. The wavelength then determines the momentum through $$p=\frac{h}{\lambda}$$ where $\lambda$ is the De Broglie wavelength associated with the particle. However, when one thinks about a wave, it is clear that the object described by it will not be easy to ascribe a position to. In fact, one needs a specific superposition of waves to create a wave that is essentially zero everywhere except at some position $x$. However, if one creates such a wave packet, one loses information about the exact wavelength (since a wave with a single, well-defined wavelength will simply extend throughout space). So, there is an inherent limitation to knowing the wavelength (i.e. momentum) and position of a particle. On a more technical level, one could say that the uncertainty principle is simply a consequence of wave-particle duality combined with properties of the Fourier transform. The uncertainty is made precise by the famous Heisenberg uncertainty principle,
$$\sigma_x\sigma_p\geq \frac{\hbar}{2}$$
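For instance (a standard example, added here for concreteness), a Gaussian wave packet $\psi(x)\propto e^{-x^2/4\sigma^2}$ has $\sigma_x=\sigma$ and, via its Fourier transform, $\sigma_p=\hbar/2\sigma$, so it exactly saturates this bound: $\sigma_x\sigma_p=\hbar/2$.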
More generally, for two non-commuting observables $A$ and $B$ (represented by Hermitian operators), the generalized uncertainty principle reads
$$\sigma_A^2\sigma_B^2\geq \left(\frac{1}{2i}\langle [A,B] \rangle\right)^2\ \implies \ \sigma_A\sigma_B \geq \frac{|\langle [A,B]\rangle| }{2}$$
Here, $\sigma$ denotes the standard deviation and $\langle\dots\rangle$ the expectation value. This holds at any time. Therefore, the measurement occurring right now, having occurred in the past or occurring in the future has nothing to do with it: The uncertainty principle always holds. | {
"source": [
"https://physics.stackexchange.com/questions/118960",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/42888/"
]
} |
119,387 | From the second law of thermodynamics: The second law of thermodynamics states that the entropy of an
isolated system never decreases, because isolated systems always
evolve toward thermodynamic equilibrium, a state with maximum entropy. Now I understand why the entropy can't decrease, but I fail to understand why the entropy tends to increase as the system reaches thermodynamic equilibrium. Since an isolated system can't exchange work and heat with the external environment, and the change in entropy of a system is the heat exchanged divided by the temperature, and since the total heat of a system will always be the same because it doesn't receive heat from the external environment, it's natural for me to think that the change of entropy for an isolated system is always zero. Could someone explain to me why I am wrong? PS: There are many questions with a similar title, but they're not asking the same thing. | Take a room and an ice cube as an example. Let's say that the room is the isolated system. The ice will melt and the total entropy inside the room will increase.
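To put rough numbers on the ice-cube example before the general argument below (the values are purely illustrative): if a small amount of heat $Q = 1\,\mathrm{J}$ flows from the room at $T_2 = 293\,\mathrm{K}$ into the ice at $T_1 = 273\,\mathrm{K}$, the total entropy change is $$\Delta S = \frac{Q}{T_1} - \frac{Q}{T_2} \approx 3.66\times10^{-3} - 3.41\times10^{-3} \approx +2.5\times10^{-4}\ \mathrm{J/K} > 0\:.$$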
This may seem like a special case, but it's not. All I'm really saying is that the room as a whole is not at equilibrium, meaning that the system is exchanging heat, etc. inside itself, increasing entropy. That means that the subsystems of the whole system are increasing their entropy by exchanging heat with each other, and since entropy is extensive the system as a whole is increasing entropy. The cube and the room will exchange, at any infinitesimal moment, heat $Q$ , so the cube will gain entropy $\frac{Q}{T_1}$ , where $T_1$ is the temperature of the cube because it gained heat $Q$ , and the room will lose entropy $\frac{Q}{T_2}$ , where $T_2$ is the temperature of the room because it lost heat $Q$ . Since $\frac{1}{T_1}>\frac{1}{T_2}$ the total change in entropy will be positive. This exchange will continue until the temperatures are equal, meaning that we have reached equilibrium. If the system is at equilibrium it already has maximum entropy. | {
"source": [
"https://physics.stackexchange.com/questions/119387",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21017/"
]
} |
119,394 | In regards to the 'conservation of angular momentum' being the explanation of why celestial objects spin... If you fill a ball or any other container with a liquid and try to spin it, you will not see any more than 5 or 6 revolutions because of the frictional losses of the liquid inside the container. I first discovered this during lunch in grade school when I tried to spin my milk carton by throwing it up into the air while giving it a 'spin'. At best, I could get only 3 rotations out of it. This same principle can be seen when you spin a raw egg and a hard boiled egg on a table top. The cooked 'solid' egg will spin the raw one won't. Because of my early experiences with milk cartons and eggs, somewhere in the intervening years I found it hard to believe some of the accepted 'facts' about our planet: So here is my question: If the age of the earth 4.5 BILLION years, how can it be spinning freely in space with a liquid or semi liquid core for that length of time? Combine the effects of the liquid core with the effects of liquid oceans and a gaseous atmosphere, all of which are creating resistance to rotation, these frictional losses would have stopped any rotation long ago. If the earth had a solid core, I could understand it... If the earth was less that 4.5 billion years old I might understand it... But given the accepted age of 4.5 billion years with a liquid core and a fluid outer shell I say there is a fly in the ointment somewhere! | Your intuition about spinning fluids is wrong for a couple reasons. Angular momentum is conserved so an isolated system of any shape
will keep on spinning unless it has a way to transfer that momentum
elsewhere. If you spun an egg levitating in a vacuum it would spin
forever. The more bumps, flaws, or non-spherical features your container has
the faster it can transfer the angular momentum of the fluid to the
container, and then from the container to the environment. The Earth has these features, but they are very very tiny compared to the overall (spherical and smooth) size of the planet. Most containers you've spun have probably caused your brain to
over-estimate the amount of angular momentum they have when you spin
them because you don't get all of the fluid spinning. When you
twist a container, the momentum gets transferred from the interface
between the fluid and the container rather inefficiently. It takes
a lot of revolutions to get everything "up to speed". The Earth, besides having been "up to speed" since the beginning, having been consolidated from an already-spinning volume of dust and gas, is also spinning in isolation, per #1 above. The Exploratorium exhibit in San Francisco has a great demo of a fluid spinning in a spherical container called the Turbulent Orb : Their description: The Turbulent Orb is a large polycarbonate sphere full of special, colored, flow-visualization fluid. The sphere is mounted on top of a pedestal and can be spun in either direction and at different speeds. The fluid in the sphere shows swirls and waves of internal fluid motions produced by the actions of the visitors. The turbulence of the fluid in the sphere is reminiscent of the turbulent flows that occur in planetary atmospheres. This exhibit shows the complexities of fluid motion that can be produced by very simple circumstances. My own experience playing with the exhibit is that you must spin the outer sphere around dozens of revolutions before all of the fluid in it is moving uniformly. Once you do that, the fluid inside of it continues to spin for quite some time. Because the fluid has a pearlescent additive you can even find evidence that the central portion of the fluid keeps spinning faster than the outer fluid (which slows faster due to friction with the stationary shell). If you first spin the orb in one direction and then let the outer fluid slow a bit, and then spin it in the opposite direction you'll see vortices form with axes tangential to the outer sphere. You do not get this effect if you start spinning it when the fluid is stationary. | {
"source": [
"https://physics.stackexchange.com/questions/119394",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/50507/"
]
} |
119,415 | OK, so $\ce{Fe}$ is the most 'stable element'. As such, why do all elements above it not decay into $\ce{Fe}$? In all cases, would it not lead to an increase in binding energy and therefore energy being released, meaning it is energetically feasible, and should happen spontaneously (given enough time)? | There are two separate issues to consider. Firstly there is usually an energy barrier to decay. Radioactive decay occurs due to quantum tunnelling through the barrier, and the rate therefore depends on the barrier height. One of the very first studies of this was by George Gamow back in 1928, who studied the alpha decay of uranium-238. Even though alpha decay produces about 5 MeV of energy (nearly 500 gigajoules per mole!!) the half life of uranium-238 is about the same as the age of the Solar System. Gamow's calculation is discussed in this PDF , or Google for many similar articles. The decay is slow because there is a barrier of around 25 MeV that prevents the decay. So while it may be energetically favourable for a nucleus to decay to iron, a kinetic barrier may reduce the rate to a negligibly small value. Secondly, although for example nickel-60 may have a lower binding energy per nucleon than iron-56 $^1$ this does not mean the reaction: $$ \mathrm{^{60}Ni \rightarrow {}^{56}Fe + \alpha }$$ is exothermic because the $\alpha$ particle also has a lower binding energy per nucleon than iron $^2$. If you took 56 nickel nuclei, disassembled them into individual nucleons then reassembled them into 60 iron nuclei you might get an overall energy decrease, but this route isn't available. Decay pathways are limited to $\alpha$, $\beta$ and fission, and if any step is not energetically favourable the decay process will stop at that step. $^1$ Actually, according to Wikipedia nickel-62 is the most stable nucleus not iron-56 $^2$ I have no idea whether this reaction is exothermic or not | {
"source": [
"https://physics.stackexchange.com/questions/119415",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
119,636 | Our class is learning about hydrostatic water pressure and we have been told that we can calculate the force of the liquid on an object at any depth using "the density x 9.8 x the depth". However, as the depth increases, wouldn't the density of the liquid increase because of the weight of the liquid above it compressing it? So should't there be something in the equation to account for the varying density? To me, "density x 9.8 x depth" seems like it is saying that the density will be constant... | The density does increase with depth, but only to a tiny extent. At the bottom of the deepest ocean the density is only increased by about 5% so the change can be ignored in most situations. If you're dealing with these sorts of depths you also need to take temperature into account because the water temperature changes with depth and the density also changes with temperature. | {
"source": [
"https://physics.stackexchange.com/questions/119636",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/48745/"
]
} |
119,688 | (I apologize for this elementary question. I don't know much about physics.) Let's say that I put a metal pot in the refrigerator for several hours. At this point, I guess, the pot and the air (in the refrigerator) have the same temperature. Now, I touch this pot. It feels very cold. But when I "touch" the air (that is inside the fridge) it doesn't "feel" as cold. I don't feel the same "ouch!" that I feel when I touch the pot. Why is that? Why does the metal seem colder than air although they both have the same temperature? (I know that gas has less particles in it in one unit of volume compared to solids and liquids, but since "temperature" means "the average kinetic energy", these fewer air particles are supposed to hit my hand in a velocity that's going to compensate for their lower number, aren't they?) A related question, for clarification: If I use a thermometer to measure the temperature of the pot & air (let's assume it's a thermometer that has a probe that can touch objects), will it show the same reading for both? If so, what makes the thermometer different than my hand? I mean, my hand is sort of a thermometer, so why would it fail whereas a non-human thermometer would work? | Short answer: The thermometer measures actual temperature (which is the same for both), while your hand measures the transfer of energy (heat), which is higher for the pot than the air. Long answer: Keyword: Thermal Conductivity The difference is a material-specific parameter called thermal conductivity . If you are in contact with some material (gas, liquid, solid), heat, which is a form of energy, will flow from the medium with higher temperature to the one with low temperature. The rate at which this happens is determined by a parameter called thermal conductivity. Metals are typically good heat conductors, which is why metal appears colder than air, even though the temperature is the same. Regarding your second question: the thermometer will show the same temperature. The only difference is the time at which thermal equilibrium is achieved, i.e. when the thermometer shows the correct temperature. Final remark: the rate at which heat (energy) is drained from your body determines whether you perceive a material as cold or not, even if the temperature is the same. For reference, here is a table which lists thermal conductivities for several materials: | {
"source": [
"https://physics.stackexchange.com/questions/119688",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/50647/"
]
} |
119,729 | Well, as I am learning about quantum physics, one of the first topics I came across was De Broglie's wave equation. $$\frac{h}{mc} = \lambda$$
As is obvious, it relates the wavelength to the mass of an object. However, what came to my mind is the photon. Doesn't the photon have zero mass? Therefore, won't the wavelength be infinity and the particle nature of the particle non existent? Pretty sure there is a flaw in my thinking, please point it out to me! | What you have there isn't actually de Broglie's equation for wavelength. The equation you should be using is $$\lambda = \frac{h}{p}$$ And although photons have zero mass, they do have nonzero momentum $p = E/c$. So the wavelength relation works for photons too, you just have to use their momentum. As a side effect you can derive that $\lambda = hc/E$ for photons. The equation you included in your question is something different: it gives the Compton wavelength of a particle, which is the wavelength of a photon that has the same electromagnetic energy as the particle's mass energy. In other words, a particle of mass $m$ has mass energy $mc^2$, and according to the formulas in my first paragraph, a photon of energy $mc^2$ will have a wavelength $\lambda = hc/mc^2 = h/mc$. The Compton wavelength is not the actual wavelength of the particle; it just shows up in the math of scattering calculations. | {
"source": [
"https://physics.stackexchange.com/questions/119729",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/50416/"
]
} |
119,732 | According to the Wikipedia page on the electron : The electron has no known substructure. Hence, it is defined or assumed to be a point particle with a point charge and no spatial extent. Does point particle mean the particle should not have a shape, surface area or volume? But when I searched Google for the "electron shape" I got many results (like this and this ) that says electrons are round in shape. | As far as we know the electron is a point particle - this is addressed in the question Qmechanic suggested: What is the mass density distribution of an electron? However an electron is surrounded by a cloud of virtual particles, and the experiments in the links you provided have been studying the distribution of those virtual particles. In particular they have been attempting to measure the electron electric dipole moment , which is determined by the distribution of the virtual particles. In this context the word shape means the shape of the virtual particle cloud not the shape of the electron itself. The Standard Model predicts that the cloud of virtual particles is spherically symmetric to well below current experimental error. However supersymmetry predicts there are deviations from spherical symmetry that could be measurable. The recent experimentals have found the electric dipole moment to be zero, i.e. the virtual particle cloud spherically symmetric, to an accuracy that is challenging the supersymmetric calculations. However there are many different theories based upon supersymmetry, so the result doesn't prove supersymmetry doesn't exist - it just constrains it. | {
"source": [
"https://physics.stackexchange.com/questions/119732",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21920/"
]
} |
119,745 | In quantum field theory we have the concepts of regularization and renormalization . I'm a little confused about these two. In my understanding regularization is a way to make divergent integrals convergent and in renormalization you add terms to the Lagrangian which in turn cancel the divergent integrals. Is there a connection between the two of these, or are these two separate ways of dealing with the infinities, i.e. do you use them both together to make the divergences disappear? | Regularization and renormalization are conceptually distinct. As you essentially indicate, regularization is the process by which one renders divergent quantities finite by introducing a parameter $\Lambda$ such that the "original divergent theory" corresponds to a certain value of that parameter. I put "original divergent theory" in quotations because strictly speaking, the theory is ill-defined before regularization. Once you regularize your theory, you can calculate any quantity you want in terms of the "bare" quantities appearing in the original lagrangian (such as masses $m$, couplings $\lambda$, etc.) along with the newly introduced regularization parameter $\Lambda$. The bare quantities are not what is measured in experiments. What is measured in experiments are corresponding physical quantities (the physical masses $m_P$, couplings $\lambda_P$, etc.). Renormalization is the process by which you take the regularized theory, a theory written in terms of bare quantities and the regularization parameter $(\Lambda, m, \lambda, \dots)$, and you apply certain conditions (renormalization conditions) which cause physical quantities you want to compute, such as scattering amplitudes, to depend only on physical quantities $(m_P, \lambda_P, \dots)$, and in performing this procedure on a renormalizable quantum field theory, the dependence on the cutoff disappears. So, in a sense, renormalization can be thought of as more of a procedure for writing your theory in terms of physical quantities than as a procedure for "removing infinities." The removing infinities part is already accomplished through regularization. Beware that what I have described here is not the whole conceptual story of regularization and renormalization. I'd highly recommend that you try to read about the following topics which give a more complete picture of how renormalization is thought about nowadays: effective field theory wilsonian renormalization renormalization group renormalization group flows You may also find the following physics.SE posts interesting/illuminating: What exactly is regularization in QFT? Regulator-scheme-independence in QFT Why is renormalization necessary in finite theories? Why do we expect our theories to be independent of cutoffs? | {
"source": [
"https://physics.stackexchange.com/questions/119745",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36946/"
]
} |
119,950 | I am taking a quantum physics class, and for the life of me, I can not remember why we would need a vast amount of energy to understand the microscopic universe. | Frequently one probes matter by bombarding it with radiation or with other pieces of matter, and then looking at the products. This is called a scattering experiment. Since the probe system is quantum, whether or not it is made of light or matter, it is associated with a wavelength. The de Broglie relation tells you that this wavelength is $\lambda = h/p$, where $h$ is Planck's constant and $p$ is the momentum of the probe. This de Broglie wavelength gives a lower limit to the resolution of the probe. Any feature smaller than this will simply be smeared/averaged over the probe's wavelength, and so will not be visible. For the same reason, normal microscopes only work down to a few hundred nm (the wavelength of visible light). Electron microscopes allow you to see smaller features because the electrons have a smaller wavelength. Therefore, the smaller the feature you would like to see, the higher must be the momentum, and thus the energy, of your probe system. This is one (oversimplified) reason why the LHC has to be so large and achieve such high energies (relative to usual microscopic scales). | {
"source": [
"https://physics.stackexchange.com/questions/119950",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/47559/"
]
} |
120,163 | Recently, I was pondering over the thought that is most of the elementary particles have intrinsic magnetism, then can gravity be just a weaker form of electromagnetic attraction? But decided the idea was silly. But I then googled it and found this article. Is this idea really compatible with other theories as the article mentions? Is there any chance of this proposition being true? | Short answer: No. Long answer: Nooooooooooooooooooooooooooooooooooooooooooooooooooooo. Moral of the story: Gravity and EM are two very different things that look similar to some people because they both fall off like $\frac{1}{r^2}$. Be careful what you trust. When someone makes a claim like that, check their references. If there are no references, ignore it. | {
"source": [
"https://physics.stackexchange.com/questions/120163",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/40382/"
]
} |
121,337 | The density matrix $\hat{\rho}$ is often introduced in textbooks as a mathematical convenience that allows us to describe quantum systems in which there is some level of missing information. $\hat{\rho} = \sum_{i=1}^N c_i \rvert\psi_i\rangle \langle \psi_i \rvert$ I have two questions regarding density matrices. First, it is clear that a generic pure state $\rvert \psi \rangle$ belongs to some Hilbert space $\mathcal{H}$. But what mathematical space do density matrices belong to? It is clear that the expression $\hat{\rho} \in \mathcal{H}$ is incorrect, as a mixed state cannot possibly be described in terms of a state in a Hilbert space. Can we think of the space of all possible density matrices (of a given dimension) as a metric space? Does this space have the topological properties of a manifold? Secondly, can the density matrix be considered 'physical'? For example, if we take a single photon described in the Fock basis (neglecting polarization), can the fundamental description of that photon ever be the completely mixed state $\frac{1}{2}(\rvert 0 \rangle \langle 0 \rvert+\rvert 1 \rangle \langle 1 \rvert)$? Or is this only a reflection of some ignorance on the part of the experimentalist, and in reality the photon must be described by a pure state? | A mixed state is mathematically represented by a bounded , positive trace-class operator with unit trace $\rho : \cal H \to \cal H$ .
Here $\cal H$ denotes the complex Hilbert space of the system (it may be nonseparable). The set of mixed states $S(\cal H)$ is a convex body in the complex linear space of trace class operators $B_1(\cal H)$, which is a two-sided $*$-ideal of the $C^*$-algebra of bounded operators $B(\cal H)$. Convex means that if $\rho_1,\rho_2 \in S(\cal H)$ then a convex combination of them, i.e. $p\rho_1 + q\rho_2$ if $p,q\in [0,1]$ with $p+q=1$, satisfies $p\rho_1 + q\rho_2 \in S(\cal H)$. Two-sided $*$-ideal means that linear combinations of elements of $B_1(\cal H)$ belong to that space (the set is a subspace), the adjoint of an element of $B_1(\cal H)$ stays in that space as well, and $AB, BA \in B_1(\cal H)$ if $A\in B_1(\cal H)$ and $B \in B(\cal H)$. I stress that, instead, the subset of states $S(\cal H)\subset B_1(\cal H)$ is not a vector space since only convex combinations are allowed therein. The extremal elements of $S(\cal H)$, namely the elements which cannot be decomposed as nontrivial convex combinations of other elements, are exactly the pure states. They are of the form $|\psi \rangle \langle \psi|$ for some unit vector of $\cal H$. (Notice that, since phases are physically irrelevant, the operators $|\psi \rangle \langle \psi|$ biunivocally determine the pure states, i.e. $|\psi\rangle$ up to a phase.) The space $B_1(\cal H)$, and thus the set $S(\cal H)$, admits at least three relevant normed topologies induced by corresponding norms. One is the standard operator norm $||T||= \sup_{||x||=1}||Tx||$ and the remaining ones are: $$||T||_1 = \mathrm{tr}\left(\sqrt{T^*T}\right)\qquad \mbox{the trace norm}$$ $$||T||_2 = \sqrt{\mathrm{tr}\left(T^*T\right)} \qquad \mbox{the Hilbert-Schmidt norm}\:.$$ It is possible to prove that: $$||T|| \leq ||T||_2 \leq ||T||_1 \quad \mbox{if $T\in B_1(\cal H)$.}$$ Moreover, it turns out that $B_1(\cal H)$ is a Banach space with respect to $||\cdot||_1$ (it is not closed with respect to the other two topologies; in particular, the closure with respect to $||\cdot||$ coincides with the ideal of compact operators $B_\infty(\cal H)$). $S(\cal H)$ is closed with respect to $||\cdot ||_1$ and, more strongly, it is a complete metric space with respect to the distance $d_1(\rho,\rho'):= ||\rho-\rho'||_1$.
When $dim(\cal H)$ is finite the three topologies coincide (though the norms do not), as a general result on finite dimensional Banach spaces. Concerning your last question, there are many viewpoints. My opinion is that a density matrix is physical exactly as pure states are. It is disputable whether or not a mixed state encompasses a sort of physical ignorance, since there is no way to distinguish between "classical probability" and "quantum probability" in a quantum mixture as soon as the mixture is created. See my question Classical and quantum probabilities in density matrices and, in particular Luboš Motl's answer.
See also my answer to Why is the application of probability in QM fundamentally different from application of probability in other areas? ADDENDUM . In finite dimension, barring the trivial case $dim({\cal H})=2$ where the structure of the space of the states is pictured by the Poincaré-Bloch ball as a manifold with boundary , $S(\cal H)$ has a structure which generalizes that of a manifold with boundary. A stratified space . Roughly speaking, it is not a manifold but is the union of (Riemannian) manifolds with different dimension (depending on the range of the operators) and the intersections are not smooth. When the dimension of $\cal H$ is infinite, one should deal with the notion of infinite dimensional manifold and things become much more complicated. | {
"source": [
"https://physics.stackexchange.com/questions/121337",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/51896/"
]
} |
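A finite-dimensional numerical illustration of the statements in the answer above, assuming an arbitrary dimension of 4 and a randomly generated state: it checks positivity, unit trace, the chain of norm inequalities, and the purity Tr(rho^2). This is only a sketch of the finite-dimensional case, not the infinite-dimensional operator theory the answer is really about.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4                                      # assumed Hilbert-space dimension

A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = A @ A.conj().T                         # positive semidefinite by construction
rho /= np.trace(rho).real                    # normalize to unit trace

eigs = np.linalg.eigvalsh(rho)
op_norm    = np.linalg.norm(rho, 2)          # operator norm: largest singular value
hs_norm    = np.linalg.norm(rho, 'fro')      # Hilbert-Schmidt norm
trace_norm = np.abs(eigs).sum()              # trace norm (= 1 for a density matrix)

print(eigs.min() >= -1e-12, np.isclose(np.trace(rho).real, 1.0))   # positivity, unit trace
print(op_norm <= hs_norm <= trace_norm)                            # ||T|| <= ||T||_2 <= ||T||_1
print(np.trace(rho @ rho).real)                                    # purity < 1 for a mixed state
```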
121,353 | Does a man on the moon (or any other similar body) experience daytime like we do? By daytime I mean looking upward and seeing a bright "sky," not dark space. If not, then what is the reason behind it? | like even when light gets on the moon why does the space appears dark from the moon? For the same reason it appears dark from the Earth (when flying at an altitude of 80,000 feet or so): Image credit: View from the SR-71 Blackbird . The fact is, we can't 'see space' from the Earth's surface during the day because the atmosphere is 'in the way'- the atmosphere scatters light from the Sun and other sources of light and we see that rather than the darkness of space. If we fly high enough so that most of the atmosphere is below us, we can 'see space' through the little bit of atmosphere above us. Since the Moon has no atmosphere, the light from the Sun is not scattered and the sky appears dark from the surface of the Moon. | {
"source": [
"https://physics.stackexchange.com/questions/121353",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45039/"
]
} |
121,451 | At what pressure will particles in a medium be unable to form a sound wave when disturbed? How can this pressure be described mathematically? My guess is that this would correspond to the point at which the restoring force due to pressure is unable to create a transverse wave and the disturbed particles travel infinitely far away before the hypothetical wave reaches its amplitude. But I have no idea how you would even begin to start finding a quantitative value for this point. | It's obviously not a sharp cut-off, but as a general guide sound waves cannot propagate if their wavelength is equal to or less than the mean free path of the gas molecules. This means that even for arbitrarily low pressures sound will still propagate provided the wavelength is long enough. Possibly this is stretching a point, but even in interstellar gas clouds sound waves (more precisely shock waves) will propagate, but their length scale is on the order of light years. | {
"source": [
"https://physics.stackexchange.com/questions/121451",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/43055/"
]
} |
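A rough numerical sketch of the answer's criterion that sound needs a wavelength longer than the molecular mean free path. The gas parameters (room-temperature air, an effective molecular diameter of 0.37 nm) and the use of a fixed sound speed are simplifying assumptions for illustration only.

```python
import math

k_B = 1.381e-23          # Boltzmann constant, J/K
T   = 293.0              # assumed temperature, K
d   = 3.7e-10            # assumed effective molecular diameter, m
c_s = 343.0              # speed of sound in air at ~20 C, m/s

def mean_free_path(pressure_Pa):
    """Kinetic-theory estimate: lambda_mfp = k_B T / (sqrt(2) pi d^2 P)."""
    return k_B * T / (math.sqrt(2) * math.pi * d**2 * pressure_Pa)

for p in (1.0e5, 1.0e2, 1.0e-1):             # 1 atm, ~1 mbar, ~1 microbar (in Pa)
    mfp = mean_free_path(p)
    f_max = c_s / mfp                         # crude upper frequency for propagation
    print(f"P = {p:8.1e} Pa   mean free path ~ {mfp:.2e} m   f_max ~ {f_max:.2e} Hz")
```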
121,453 | In the book Quantum Mechanics , Volume 1, written by Albert Messiah, page no. 142-143, the author says: but the diaphragm is a quantum object, just like the electron. Its momentum is not defined to better than $dp$ I do not understand why diaphragm is a quantum object. Also, it is not clear to me what author says below: One must postulate that the measuring apparatus is a quantized object which also obeys uncertainty relations On the other hand, N. Bohr says that the measuring apparatus is always a classical object (i.e. an instrument which follows the rules of classical mechanics). Any clear explanation? | It's obviously not a sharp cut-off, but as a general guide sound waves cannot propagate if their wavelength is equal to or less than the mean free path of the gas molecules. This means that even for arbitrarily low pressures sound will still propagate provided the wavelength is long enough. Possibly this is stretching a point, but even in interstellar gas clouds sound waves (more precisely shock waves) will propagate, but their length scale is on the order of light years. | {
"source": [
"https://physics.stackexchange.com/questions/121453",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/51950/"
]
} |
121,775 | Using $g = \frac{Gm}{r^2}$, the force on a point mass located at 1 AU from the Sun ($m = 2 \cdot 10^{30} \text{ kg}$) is about ~0.006 N/kg. Does that mean that, e.g., a 70 kg person is ~42g lighter during the day, and ~42g heavier at night? That seems like it could make a big difference for things like measuring gold bars or other weight-sensitive items. (Gold arbitrage: buy your gold during the day and sell it during the night! Risk-free profit!) This makes me suspect that I'm overlooking something obvious, because a ~0.05% weight difference seems like something everyone would have noticed long ago. So what am I missing? Edit: A few answers below indicate that there shouldn't be any weight difference, because the Earth orbits the Sun in free-fall. But if that's the reason, does that mean that a 1:1 tidally-locked Earth-Sun system wouldn't experience differential gravity from the Sun on opposite sides? That doesn't seem right. | This diagram shows the Earth rotating round the Sun at it's orbital velocity $v$. That is the centre of the Earth is orbiting around the Sun at velocity $v$. NB the scale is rather fanciful - don't take it literally! I'll also assume the orbit is circular, and for convenience I'll ignore the Earth's rotation i.e. assume it's tidally locked. To calculate the orbital velocity at the centre of the Earth, $v$, we just note that the centripetal acceleration must be the same as the gravitational acceleration of the Sun so: $$ \frac{v^2}{r} = \frac{GM}{r^2} $$ which gives: $$ v^2 = \frac{GM}{r} \tag{1} $$ which is a well known result . Now consider the point on the Earth's surface nearest the Sun i.e. the black dot. The acceleration due to Earth's gravity is the usual $9.81 m/s^2$, but there will be a correction due to the fact the point is $r_e$ metres nearer the Sun. Let's calculate that correction. The gravitational acceleration due to the Sun at the black dot is: $$ a_g = \frac{GM}{(r - r_e)^2} $$ The centripetal acceleration due to the motion of the point around the Sun is: $$ a_c = \frac{v^2}{r - r_e} $$ where because I've assumed the Earth is tidally locked the velocity $v$ is just the Earth's orbital velocity given by equation (1). If we substitute for this we get: $$ a_c = \frac{GM}{r(r - r_e)} $$ So the correction to the acceleration at the black dot is: $$\begin{align}
\Delta a &= a_g - a_c \\
&= \frac{GM}{(r - r_e)^2} - \frac{GM}{r(r - r_e)} \\
&= GM \left( \frac{r_e}{r(r - r_e)^2} \right) \\
&\approx GM \frac{r_e}{r^3}
\end{align}$$ where the last approximation is because $r \gg r_e$ so $r - r_e \approx r$. Putting in the numbers we get: $$ \Delta a \approx 2.5 \times 10^{-7} m/s^2 $$ So the fractional change in the weight of an object due to the Sun is: $$ \frac{2.5 \times 10^{-7}}{g} \approx 2.6 \times 10^{-8} $$ and the object is 0.0000026% lighter. Interestingly if you go through the working for the far side of the Earth you get exactly the same result i.e. the object on the far side is also 0.0000026% lighter. In fact this is why the tidal forces of the Sun (and Moon of course) raise a bulge on both the near and far sides of the Earth. Incidentally, I note that Christoph guesstimated a correction of $10^{-7}$ and he was pretty close :-) | {
"source": [
"https://physics.stackexchange.com/questions/121775",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41566/"
]
} |
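A short numerical check of the estimate in the answer above, reproducing both the exact difference a_g - a_c and the approximation GM r_e / r^3 ~ 2.5e-7 m/s^2, together with the ~2.6e-8 fractional weight change. The constants are standard textbook values.

```python
G   = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M   = 1.989e30         # solar mass, kg
r   = 1.496e11         # 1 AU, m
r_e = 6.371e6          # Earth radius, m
g   = 9.81             # surface gravity, m/s^2

delta_a_exact  = G * M / (r - r_e)**2 - G * M / (r * (r - r_e))
delta_a_approx = G * M * r_e / r**3

print(delta_a_exact, delta_a_approx)          # both ~ 2.5e-7 m/s^2
print(delta_a_approx / g)                     # fractional weight change ~ 2.6e-8
```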
121,830 | The bit that makes sense – tidal forces My physics teacher explained that most tidal effect is caused by the Moon rotating around the Earth, and some also by the Sun. They said that in the Earth - Moon system, the bodies are in free-fall about each other. But that points on the surface of Earth, not being at Earth's centre of gravity, experience slightly different pulls towards the Moon. The pull is a little greater if they are on the Moon's side, and a little less on the side away from the Moon. Once free-fall is removed, on the Moon side this feels like a pull towards the Moon and on the the opposite side it feels like a repulsion from the Moon. This makes sense to me, and is backed up by other questions and answers here, like this and also this Phys.SE question. The bit that doesn't make sense – tidal bulges They also said that there are "tidal bulges" on opposite sides of the Earth caused by these forces. The bulges are stationary relative to the Moon, and the Earth rotating through the bulges explains why we get two tides a day. They drew a picture like this one… An image search for tidal bulges finds hundreds of similar examples, and here's an animation from a scientist on Twitter. …But, if there is a tidal bulge on both sides of Earth, like a big wave with two peaks going round and around, how can an island, like Great Britain where I live, simultaneously have a high tide on one side and a low tide on the other? For example: Holyhead tide times on the West coast Whitby tide times on the East Two ports with tides 6 hours, or 180º apart. It's high tide at one while low tide at the other. But they are only 240 miles distant by road. Great Britain is much smaller than Earth. It's probably not even as big as the letter "A" in the word "TIDAL" in that picture. To prove this isn't just Britain being a crazy anomaly, here is another example from New Zealand: WESTPORT, New Zealand South Island Kaikoura Peninsula, New Zealand South Island Two ports that are 180º (6 hours) apart, but separated by just 200 delightful miles through a national park. New Zealand, unlike the UK, is in fairly open ocean. | There is no tidal bulge. This was one of Newton's few mistakes. Newton did get the tidal forcing function correct, but the response to that forcing in the oceans: completely wrong. Newton's equilibrium theory of the tides with its two tidal bulges is falsified by observation. If this hypothesis was correct, high tide would occur when the Moon is at zenith and at nadir. Most places on the Earth's oceans do have a high tide every 12.421 hours, but whether those high tides occur at zenith and nadir is sheer luck. In most places, there's a predictable offset from the Moon's zenith/nadir and the time of high tide, and that offset is not zero. One of the most confounding places with regard to the tides is Newton's back yard. If Newton's equilibrium theory was correct, high tide would occur at more or less the same time across the North Sea. That is not what is observed. At any time of day, one can always find a place in the North Sea that is experiencing high tide, and another that is simultaneously experiencing low tide. Why isn't there a bulge? Beyond the evidence, there are a number of reasons a tidal bulge cannot exist in the oceans. The tidal bulge cannot exist because the way water waves propagate. If the tidal bulge did exist, it would form a wave with a wavelength of half the Earth's circumference. 
That wavelength is much greater than the depth of the ocean, which means the wave would be a shallow wave. The speed of a shallow wave at some location is approximately $\sqrt{gd}$ , where $d$ is the depth of the ocean at that location. This tidal wave could only move at 330 m/s over even the deepest oceanic trench, 205 m/s over the mean depth of 4267 m, and less than that in shallow waters. Compare with the 465 m/s rotational velocity at the equator. The shallow tidal wave cannot keep up with the Earth's rotation. The tidal bulge cannot exist because the Earth isn't completely covered by water. There are two huge north-south barriers to Newton's tidal bulge, the Americas in the western hemisphere and Afro-Eurasia in the eastern hemisphere. The tides on the Panama's Pacific coast are very, very different from the tides just 100 kilometers away on Panama's Caribbean coast. A third reason the tidal bulge cannot exist is the Coriolis effect. That the Earth is rotating at a rate different from the Moon's orbital rate means that the Coriolis effect would act to sheer the tidal wave apart even if the Earth was completely covered by a very deep ocean. What is the right model? What Newton got wrong, Laplace got right. Laplace's dynamic theory of the tides accounts for the problems mentioned above. It explains why it's always high tide somewhere in the North Sea (and Patagonia, and the coast of New Zealand, and a few other places on the Earth where tides are just completely whacko). The tidal forcing functions combined with oceanic basin depths and outlines results in amphidromic systems. There are points on the surface, "amphidromic points", that experience no tides, at least with respect to one of the many forcing functions of the tides. The tidal responses rotate about these amphidromic points. There are a large number of frequency responses to the overall tidal forcing functions. The Moon is the dominant force with regard to the tides. It helps to look at things from the perspective of the frequency domain. From this perspective, the dominant frequency on most places on the Earth is 1 cycle per 12.421 hours, the M 2 tidal frequency. The second largest is the 1 cycle per 12 hours due to the Sun, the S 2 tidal frequency. Since the forcing function is not quite symmetric, there are also 1 cycle per 24.841 hours responses (the M 1 tidal frequency), 1 cycle per 24 hours responses (the S 1 tidal frequency), and a slew of others. Each of these has its own amphidromic system. With regard to the North Sea, there are three M 2 tidal amphidromic points in the neighborhood of the North Sea. This nicely explains why the tides are so very goofy in the North Sea. Images For those who like imagery, here are a few key images. I'm hoping that the owners of these images won't rearrange their websites. The tidal force Source: https://physics.mercer.edu/hpage/tidal%20asymmetry/asymmetry.html This is what Newton did get right. The tidal force is away from the center of the Earth when the Moon (or Sun) is at zenith or nadir, inward when the Moon (or Sun) is on the horizon. The vertical component is the driving force behind the response of the Earth as a whole to these tidal forces. This question isn't about the Earth tides. The question is about the oceanic tides, and there it's the horizontal component that is the driving force. The global M 2 tidal response Source: https://en.wikipedia.org/wiki/File:M2_tidal_constituent.jpg 
Source: http://volkov.oce.orst.edu/tides/global.html The M2 constituent of the tides is the roughly twice per day response to the tidal forcing function that results from the Moon. This is the dominant component of the tides in many parts of the world. The first image shows the M2 amphidromic points, points where there is no M2 component of the tides. Even though these points have zero response to this component, these amphidromic points are nonetheless critical in modeling the tidal response. The second image, an animated gif, shows the response over time. The M2 tidal response in the North Sea Archived source: www.geog.ucsb.edu/~dylan/ocean.html I mentioned the North Sea multiple times in my response. The North Atlantic is where 40% of the M2 tidal dissipation occurs, and the North Sea is the hub of this dissipation. Energy flow of the semi-diurnal, lunar tidal wave (M2) Source: http://www.altimetry.info/thematic-use-cases/ocean-applications/tides/ http://www.altimetry.info/wp-content/uploads/2015/06/flux_energie.gif The above image displays transfer of energy from places where tidal energy is created to places where it is dissipated. This energy transfer explains the weird tides in Patagonia, one of the places on the Earth where tides are highest and most counterintuitive. Those Patagonian tides are largely a result of energy transfer from the Pacific to the Atlantic. It also shows the huge transfer of energy to the North Atlantic, which is where 40% of the M2 tidal dissipation occurs. Note that this energy transfer is generally eastward. You can think of this as a representing "net tidal bulge." Or not. I prefer "or not." Extended discussions based on comments (... because we delete comments here) Isn't a Tsunami a shallow water wave as well as compared to the ocean basins? I know the wavelength is smaller but it is still a shallow water wave and hence would propagate at the same speed. Why don't they suffer from what you mentioned regarding the rotational velocity of the earth. Firstly, there's a big difference between a tsunami and the tides. A tsunami is the the result of a non-linear damped harmonic oscillator (the Earth's oceans) to an impulse (an earthquake). The tides are the response to a cyclical driving force. That said, As is the case with any harmonic oscillator, the impulse response is informative of the response to a cyclical driving force. Tsunamis are subject to the Coriolis effect. The effect is small, but present. The reason it is small is because tsunami are, for the most part, short term events relative to the Earth's rotation rate. The Coriolis effect becomes apparent in the long-term response of the oceans to a tsunami. Topography is much more important for a tsunami. The link that follows provides an animation of the 2004 Indonesian earthquake tsunami . References for the above: Dao, M. H., & Tkalich, P. (2007). Tsunami propagation modelling? a sensitivity study. Natural Hazards and Earth System Science , 7(6), 741-754. Eze, C. L., Uko, D. E., Gobo, A. E., Sigalo, F. B., & Israel-Cookey, C. (2009). Mathematical Modelling of Tsunami Propagation. Journal of Applied Sciences and Environmental Management , 13(3). Kowalik, Z., Knight, W., Logan, T., & Whitmore, P. (2005). Numerical modeling of the global tsunami: Indonesian tsunami of 26 December 2004. Science of Tsunami Hazards , 23(1), 40-56. This is an interesting answer full of cool facts and diagrams, but I think it's a little overstated. Newton's explanation wasn't wrong, it was an approximation. 
He knew it was an approximation -- obviously he was aware that the earth had land as well as water, that tides were of different heights in different places, and so on. I don't think it's a coincidence that the height of the bulge in the equipotential is of very nearly the right size to explain the observed heights of the tides. Newton's analysis was a good start. Newton certainly did describe the tidal force properly. He didn't have the mathematical tools to do any better than what he did. Fourier analysis, proper treatment of non-inertial frames, and fluid dynamics all post-date Newton by about a century. Besides the issues cited above, Newton ignored the horizontal component of the tidal force and only looked at the vertical component. The horizontal component wouldn't be important if the Earth was tidally locked to the Moon. The dynamical theory of the tides essentially ignores the vertical component and only looks at the horizontal component. This gives a very different picture of the tides. I'm far from alone in saying the tidal bulge doesn't exist. For example, from this lecture , the page on dynamic tides rhetorically asks "But how can water confined to a basin engage in wave motion at all like the “tidal bulges” that supposedly sweep around the globe as depicted in equilibrium theory?" and immediately responds (emphasis mine) " The answer is – it can’t. " In Affholder, M., & Valiron, F. (2001). Descriptive Physical Oceanography. CRC Press , the authors introduce Newton's equilibrium tide but then write (emphasis mine) "For the tidal wave to move at this enormous speed of 1600 km/h, the ideal ocean depth would have to be 22 km. Taking the average depth of the ocean as 3.9 km, the speed of the tidal elevations can only be 700 km/h. Therefore the equilibrium position at any instant required by this theory cannot be established. " Oceanographers still teach Newton's equilibrium tide theory for a number of reasons. It does give a proper picture of the tidal forcing function. Moreover, many students do not understand how many places can have two tides a day. For that matter, most oceanography instructors and textbook authors don't understand! Many oceanographers and their texts still hold that the inner bulge is a consequence of gravity but the other bulge is a consequence of a so-called centrifugal force. This drives geophysicists and geodocists absolutely nuts. That's starting to change; in the last ten years or so, some oceanography texts have finally started teaching that the only force that is needed to explain the tides is gravitation. | {
"source": [
"https://physics.stackexchange.com/questions/121830",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45649/"
]
} |
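A quick numerical companion to the "why isn't there a bulge" argument above: the shallow-water wave speed sqrt(g*d) is compared with the Earth's equatorial rotation speed. The depths used are representative values (deepest trench, mean ocean, a continental shelf), chosen here for illustration.

```python
import math

g = 9.81                        # m/s^2
earth_circumference = 4.0075e7  # m
sidereal_day = 86164.0          # s

rotation_speed = earth_circumference / sidereal_day     # ~465 m/s at the equator

for depth in (10_994.0, 4_267.0, 200.0):                # trench, mean ocean, shelf (m)
    print(depth, math.sqrt(g * depth))                   # ~328, ~205, ~44 m/s

print(rotation_speed)                                    # ~465 m/s: a fixed bulge cannot keep up
```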
121,879 | The other day, I bumped my bookshelf and a coin fell down. This gave me an idea. Is it possible to compute the mass of a coin, based on the sound emitted when it falls? I think that there should be a way to do it. But how? | So, I decided to try it out. I used Audacity to record ~5 seconds of sound that resulted when I dropped a penny, nickel, dime, and quarter onto my table, each 10 times. I then computed the power spectral density of the sound and obtained the following results: I also recorded 5 seconds of me not dropping a coin 10 times to get a background measurement. In the plot, I've plotted all 50 traces on top of one another with each line being semi-transparent. There are several features worth noticing. First, there are some very distinct peaks, namely the 16 kHz and 9 kHz quarter spikes, as well as the 14 kHz nickel spike. But, it doesn't appear as though the frequencies follow any simple relationship like the $ \propto m^{-1/3}$ scaling the order of magnitude result Floris suggests. But, I had another idea. For the most part, we could make the gross assumption that the total energy radiated away as sound would be a fixed fraction of the total energy of the collision. The precise details of the fraction radiated as sound would surely depend on a lot of variables outside our control in detail, but for the most part, for a set of standard coins (which are all various, similar, metals), and a given table, I would expect this fraction to be fairly constant. Since the energy of a coin, if it's falling from a fixed height, is proportional to its mass, I would expect the sound energy to be proportional to its mass as well. So, this is what I did. I integrated the power spectral densities and fit them into a linear relationship with respect to the mass. I obtained: I did a Bayesian fit to get an estimate of the errors. On the left, I'm plotting the joint posterior probability distribution for the $\alpha$ intercept parameter and the $\beta$ slope parameter, and on the right, I'm plotting the best fit line, as well as $2\sigma$ contours around it to either side. For my priors, I took Jeffrey's priors. The model seems to do fairly well, so assuming you knew the height that coins were dropping and had already calibrated to the particular table and noise conditions in the room under consideration, it would appear as though, from a recording of the sound the coin made as it fell, you could expect to estimate the mass of the coin to within about a 2-gram window. For specificity, I used the following coins: Penny: 1970 Nickel: 1999 Dime: 1991 Quarter: 1995 Edit: Scaling Collapse Following Floris, we can check to see how accurate the model $ f \sim E^{1/2} m^{-1/3} \eta^{-1} $ is. We will use the data provided, and plot our observed power density versus a scaled frequency $f m^{1/3} \eta E^{-1/2}$ . We obtain: which looks pretty good. In order to see a little better how well they overlap, I will reproduce the plot but introduce an offset between each of the coins: It is pretty impressive how well the spectra line up. As for the secondary peaks for the quarter and nickel, see Floris' afterthought. Landing Material Someone in the comments asked what happens if we change the thing the coins fall onto. So, I did some drops where instead of falling onto the table directly, I had the coins fall onto a piece of paper on the table. If you ask me, these two cases sounded very different, but their spectra are very similar. This was for the quarter. 
You'll notice that the paper traces are noticeably below the table ones. Coin Materials The actual composition of the coin seems to have a fairly large effect. Next, I tried three different pennies, each dropped 5 times. A 1970s brass penny, A 2013 zinc penny and a 1956 bronze penny. Large Coins Hoping to better resolve the second harmonic, I tried some other larger coins: Notice that the presidential dollar has a nicely resolved second harmonic. Notice also that the Susan B dollars not only look and feel like quarters, they sound like them too. Repeatability Lastly, I worried about just how repeatable this all was. Could you actually hope to measure some of these spectra and then given any sound of a coin falling determine which coins were present, or perhaps as in spectroscopy tell the ratios of coins present in the fall. The last thing I tried was to drop 10 pennies at once, and 10 nickels at once to see how well resolved the spectra were. While it is fair to say that we can still nicely resolve the penny peak, it seems nickels in the real world have a lot of variations. For more on nickels, see Floris' second answer. | {
"source": [
"https://physics.stackexchange.com/questions/121879",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9220/"
]
} |
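A sketch of the analysis pipeline the answer describes, under stated assumptions: it uses scipy's Welch estimator for the power spectral density and an ordinary least-squares line in place of the answer's Bayesian fit. The sample rate and the `recordings`/`masses` containers are hypothetical placeholders, not the author's actual data.

```python
import numpy as np
from scipy.signal import welch

fs = 44100                                   # assumed sample rate, Hz

def total_acoustic_power(samples):
    """Integrate the power spectral density of one recorded drop."""
    freqs, psd = welch(samples, fs=fs, nperseg=4096)
    return np.trapz(psd, freqs)

def fit_power_vs_mass(recordings, masses):
    """`recordings`: {coin label: list of 1-D sample arrays}; `masses`: {coin label: grams}."""
    xs, ys = [], []
    for coin, drops in recordings.items():
        for samples in drops:
            xs.append(masses[coin])
            ys.append(total_acoustic_power(samples))
    slope, intercept = np.polyfit(xs, ys, 1)   # simple linear fit standing in for the Bayesian one
    return slope, intercept
```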
121,920 | (c.f Di Francesco, Conformal Field Theory chapters 2 and 4). The expression for the full generator, $G_a$, of a transformation is $$iG_a \Phi = \frac{\delta x^{\mu}}{\delta \omega_{a}} \partial_{\mu} \Phi - \frac{\delta F}{\delta \omega_a}$$
For an infinitesimal special conformal transformation (SCT), the coordinates transform like $$x'^{\mu} = x^{\mu} + 2(x \cdot b)x^{\mu} - b^{\mu}x^2$$ If we now suppose the field transforms trivially under an SCT across the entire space, then $\delta F/\delta \omega_a = 0$. Geometrically, an SCT comprises an inversion, a translation, and then a further inversion.
An inversion of a point in space just looks like a translation of the point. So the constant vector $b^{\mu}$ parametrises the SCT. Then $$\frac{\delta x^{\mu}}{\delta b^{\nu}} = \frac{\delta x^{\mu}}{\delta (x^{\rho}b_{\rho})} \frac{\delta (x^{\gamma}b_{\gamma})}{\delta b^{\nu}} = 2 x^{\mu}x_{\nu} - x^2 \delta_{\nu}^{\mu}.$$
Now moving on to my question: Di Francesco makes a point of not showing how the finite transformation of the SCT comes from but just states it. $$x'^{\mu} = \frac{x^{\mu} - b^{\mu}x^2}{1-2x\cdot b + b^2 x^2}$$ I was wondering if somebody could point me to a link or explain the derivation. Is the reason for its non appearance due to complication or by being tedious? I am also wondering how, from either of the infinitesimal or finite forms, we may express the SCT as $$\frac{x'^{\mu}}{x'^2} = \frac{x^{\mu}}{x^2} - b^{\mu},$$ which is to say the SCT is an inversion $(1/x^2)$ a translation $-b^{\mu}$ and then a further inversion $(1/x'^2)$ which then gives $x'^{\mu}$, i.e the transformed coordinate. | In order to determine the finite SCT from its infinitesimal version, we need to solve for the integral curves of the special conformal killing field $X$ defined by
\begin{align}
X(x) = 2(b\cdot x) x - x^2 b.
\end{align}
I explain why this is equivalent to "integrating" the infinitesimal transformation below. This means we need to solve the differential equation $X(x(t)) = \dot x(t)$ for the function $x$. Explicitly, this differential equation is
\begin{align}
\dot x = 2(b\cdot x) x - x^2 b.
\end{align}
This can be done with a trick, namely a certain change of variables. Define
\begin{align}
y = \frac{x}{x^2},
\end{align}
then the resulting differential equation satisfied by $y$ becomes simple;
\begin{align}
\dot y = -b.
\end{align}
I urge you to perform the algebra yourself to confirm this. It's kind of magic that it works if you ask me, and the change of variables is precisely an inversion, so I think there's something deeper going on here, but I'm not sure what it is. The solution to this equation is simply $y = y_0 -tb$, so we find that the original function $x$ satisifes
\begin{align}
\frac{x}{x^2} = \frac{x_0}{x_0^2} - tb.
\end{align}
In other words, we've turned a monstrous nonlinear system of ODEs into a simple algebraic equation. In fact, one can show that the algebraic eqution $x/x^2 = A$ has the solution $x = A/A^2$, from which it follows that the solution to our original problem is
\begin{align}
x(t) = \frac{x_0 - x_0^2(tb)}{1-2x_0\cdot(tb) + x_0^2(tb)^2},
\end{align}
as desired, since this is precisely the form of the "finite" SCT. Note that these only are local integral curves; the solution hits a singularity when $t$ is such that the denominator vanishes. Why solve for integral curves? If you're wondering what your original question has anything to do with solving the integral curves of the special conformal vector field I wrote down, then read on. It helps to start with the concept of a flow . Transforming points via flows. Let a point $x\in\mathbb R^d$ be given, then for each $b\in\mathbb R^d$, we assume, at least in a neighborhood of that point, that there is a $\epsilon$-parameter family of transformations $\Phi_b(\epsilon):\mathbb R^d \to \mathbb R^d$ with $\epsilon\in [0,\bar\epsilon)$ such that $\Phi_b(\epsilon)(x)$ tells you what an SCT corresponding to the vector $b$ does to the point $x$ is you ``flow" in $\epsilon$. At $\epsilon = 0$, this flow just maps the point to itself;
\begin{align}
\Phi_b(0)(x) = x,
\end{align}
namely it starts at the identity. For $\epsilon >0$, the flow translates the point along a curve in $\mathbb R^d$. If you change $b$, then this corresponds to moving way from $x$ in a different initial direction under the flow. Infinitesimal generator of a flow. When we talk of an infinitesimal generator of such a transformation, we are talking about the term that generates the linear approximation for the flow in the parameter $\epsilon$. In other words, we expand
\begin{align}
\Phi_b(\epsilon)(x) = x + \epsilon G_b(x) + O(\epsilon^2),
\end{align}
and the vector field $G_b$ is called the infinitesimal generator of the flow. What you have pointed out in your question is that we know this infinitesimal generator;
\begin{align}
G_b(x) = 2(b\cdot x)x - x^2 b,
\end{align}
and we now want to reconstruct the whole flow simply by knowing this information corresponding to its linear approximation at every point. This is equivalent to solving some first order ordinary differential equations, which is why people often say we want to "integrate" the infinitesimal transformation to determine the finite one; integration is a perhaps somewhat archaic way of solving the corresponding differential equation. Finding the whole flow given its generator. Ok, so what differential equation do we solve? Well, note that the vector field $G_b$ is tangent to the curves generated by the flow by its very construction (we took a derivative with respect to $\epsilon$ with is the "velocity" of the curve generated by the flow), so the differential equation we want to solve is
\begin{align}
\dot x(\epsilon) = G_b(x(\epsilon)),
\end{align}
and we want to solve for $x(\epsilon)$. The solutions to this differential equation are referred to as integral curves of the vector field $G_b$. Acknowledgements. I didn't figure out the first part of this answer completely on my own. The idea for making the substitution $y=x/x^2$, which is really the crux of everything, came from here http://www.physicsforums.com/showthread.php?t=518316 , namely from user Bill_K. The idea for how to solve the algebraic equation $x/x^2 = A$ came from math.SE user @ HansLundmark after I posted essentially your question in mathy language on math.SE here . I should another math.SE user @ Kirill solved for the integral curves in a totally different way in his answer to the question I posted. Addendum. How does one get $\dot y = -b$ from the change of variable $y=x/x^2$ as claimed in the first section? Well, let's calculate:
\begin{align}
\dot y
&= \frac{x^2\dot x - x(2x\cdot \dot x)}{(x^2)^2} \\
&= \frac{x^2(2(b\cdot x) x - x^2 b) - x(2x\cdot (2(b\cdot x) x - x^2 b))}{(x^2)^2} \\
&= \frac{2x^2(b\cdot x) x - (x^2)^2b - 4x^2(b\cdot x) x + 2x^2(b\cdot x)x}{(x^2)^2} \\
&= -\frac{(x^2 )^2b}{(x^2)^2} \\
&= -b
\end{align}
Magic! | {
"source": [
"https://physics.stackexchange.com/questions/121920",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/17364/"
]
} |
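A numerical sanity check of the derivation above: it integrates the flow of the special conformal vector field with scipy and compares the endpoint with the closed-form finite SCT. The Euclidean signature and the particular choices of b and x0 are arbitrary test values (chosen so that the denominator never vanishes along the flow).

```python
import numpy as np
from scipy.integrate import solve_ivp

b  = np.array([0.3, -0.1])      # assumed SCT parameter
x0 = np.array([0.5, 0.2])       # assumed starting point

def X(t, x):
    """Special conformal vector field: dx/dt = 2(b.x) x - x^2 b."""
    return 2 * np.dot(b, x) * x - np.dot(x, x) * b

def finite_sct(x, t):
    """Closed-form finite SCT with parameter t*b."""
    tb = t * b
    num = x - np.dot(x, x) * tb
    den = 1 - 2 * np.dot(x, tb) + np.dot(x, x) * np.dot(tb, tb)
    return num / den

t_end = 0.8
sol = solve_ivp(X, (0.0, t_end), x0, rtol=1e-10, atol=1e-12)

print(sol.y[:, -1])            # numerically integrated flow at t_end
print(finite_sct(x0, t_end))   # closed-form result -- should match to high precision
```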
122,102 | This isn't a question of how a wing works -- vortex flow, Bernoulli's principle, all of that jazz. Instead, it's a question of why we need a wing at all. A wing produces lift, but why is that necessary? I got to this by thinking of an airplane at a coarse level. The wing produces lift through some interesting physics , but it needs energy to do this. The engine is what ultimately provides all of this energy (let's assume no headwind, and in "ultimately" I'm not including chemical energy in the fuel, yadda yadda "it all comes from the sun"). That means the engine pushes enough air, and fast enough, to (a) offset gravity and (b) still propel the plane forward. So the question is: why can't we just angle the engine down a bit and get the same effect? To slightly reword: why do wings help us divert part of an engine's energy downward in a way that's more efficient than just angling the engine? One answer is that we can do exactly that; I'm guessing it's what helicopters and VTOL airplanes like the Harrier do. But that's less efficient. Why? One analogy that comes to mind is that of a car moving uphill. The engine doesn't have the strength to do it alone, so we use gears; for every ~2.5 rotations the engine makes, the wheel makes one, stronger rotation. This makes intuitive sense to me: in layman's terms, the gears convert some of the engine's speed-energy into strength-energy. Is this analogy applicable -- is the wing on a plane like the gearbox in my transmission? And if so, what's the wing doing, more concretely? If a gear converts angular speed to increased force, what X does a wing convert to what Y? None of the answers I could guess at satisfied my intuition. If the wing converts horizontal speed to vertical speed, tipping the engine downward would seem to have the same effect. If it's changing the volume/speed of the air (more air blown slower, or less air blown faster), it would still have to obey the conservation of energy, meaning that the total amount of kinetic energy of the air is the same -- again suggesting that the engine could just be tipped down. EDIT In thinking about this more from the answers provided, I've narrowed down my question. Let's say we want a certain amount of forward force $S$ (to combat friction and maintain speed) and a certain amount of lift $L$ (to combat gravity and maintain altitude). If we tilt our engine, the forces required look like this: The total amount of force required is $F = \sqrt{S^2 + L^2}$. That seems pretty efficient to me; how can a horizontal engine + wing produce the same $S$ and $L$ with a smaller $F'$? | Let's look at the relationship between momentum and energy. As you know, for a mass $m$ kinetic energy is $\frac12mv^2$ and momentum is $mv$ - in other words energy is $\frac{p^2}{2m}$ Now to counter the force of gravity we need to transfer momentum to the air: $F\Delta t = \Delta(mv)$ The same momentum can be achieved with a large mass, low velocity as with small mass, high velocity. But while the momentum of these two is the same, THE ENERGY IS NOT. And therein lies the rub. A large wing can "move a lot of air a little bit" - meaning less kinetic energy is imparted to the air. This means it is a more efficient way to stay in the air. This is also the reason that long thin wings are more efficient: they "lightly touch a lot of air", moving none of it very much. 
Trying to replicate this efficiency with an engine is very hard: you need compressors for it to work at all (so you can mix air with fuel and have the thrust come out the back) and this means you will have a small volume of high velocity gas to develop thrust. That means a lot of energy is carried away by the gas. Think about the noise of an engine - that's mostly that high velocity gas. Now think of a glider: why is it so silent? Because a lot of air moves very gently. I tried to stay away from the math but hope the principle is clear from this. | {
"source": [
"https://physics.stackexchange.com/questions/122102",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/52234/"
]
} |
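A tiny numerical illustration of the momentum-versus-energy point in the answer above: the same downward momentum flux (lift) can be produced by pushing a lot of air slowly or a little air quickly, at very different kinetic-energy cost. The lift and mass-flow numbers are invented for illustration.

```python
# Required lift ~ weight of the aircraft: this is the downward momentum
# that must be given to the air every second (kg*m/s per second = N).
weight = 70_000.0          # N, an assumed aircraft weight

for m_dot in (7_000.0, 700.0, 70.0):      # kg of air pushed down per second
    v = weight / m_dot                     # downwash speed needed for that lift
    power = 0.5 * m_dot * v**2             # kinetic energy given to the air per second
    print(f"{m_dot:7.0f} kg/s of air  ->  v = {v:6.1f} m/s,  power ~ {power/1000:8.1f} kW")
```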
122,231 | Newton's second law says $F=ma$.
Now if we put $F=0$ we get $a=0$ which is Newton's first law.
So why do we need Newton's first law ? Before asking I did some searching and I got this: Newtons first law is necessary to define inertial reference frame on which the second law can be applied. But why can't we just use Newton's second law to define an inertial frame? So if $F=0$ but $a$ is not equal to 0 (or vice versa), the frame is non-inertial. One can say (can one?) we cannot apply the second law to define a reference frame because it is only applicable to inertial frames. Thus unless we know in advance that a frame is inertial, we can't apply the second law. But then why this is not the problem for the first law? We don't need to know it in advance about the frame of reference to apply the first law. Because we take the first law as definition of an inertial reference frame. Similarly if we take the second law as the definition of an inertial frame, it should not be necessary to know whether the frame is inertial or not to apply the second law (to check that the frame is inertial). | Newton's second law says $F = ma$. Now if we put $F = 0$ we get $a = 0$ which is Newton's first law. So why do we need Newton's first law? I don't think this is obvious from Newton's statement of the Second Law. In his Principia Mathematica , Newton says that a force causes an acceleration. Without the first law, this doesn't necessarily imply that zero force means zero acceleration. One could conceive of other things that also cause acceleration. A modern person might be concerned about non-inertial reference frames. Someone from Newton's time would probably be more concerned about Aristotelian ideas of objects seeking their own level. But in either case, its necessary to emphasize that forces not only cause acceleration, but that they are the only things that do so (or in the modern formulation, that there exists a frame in which they are the only things that do so). | {
"source": [
"https://physics.stackexchange.com/questions/122231",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41250/"
]
} |
122,239 | How does energy remain conserved in a transformer if the emf is increased or decreased?
Does the current decrease to accommodate?
Does Ohm's law still hold here? Although we know, Ohm's law is not universal. | Newton's second law says $F = ma$. Now if we put $F = 0$ we get $a = 0$ which is Newton's first law. So why do we need Newton's first law? I don't think this is obvious from Newton's statement of the Second Law. In his Principia Mathematica , Newton says that a force causes an acceleration. Without the first law, this doesn't necessarily imply that zero force means zero acceleration. One could conceive of other things that also cause acceleration. A modern person might be concerned about non-inertial reference frames. Someone from Newton's time would probably be more concerned about Aristotelian ideas of objects seeking their own level. But in either case, its necessary to emphasize that forces not only cause acceleration, but that they are the only things that do so (or in the modern formulation, that there exists a frame in which they are the only things that do so). | {
"source": [
"https://physics.stackexchange.com/questions/122239",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/50363/"
]
} |
122,353 | This question (and answer) is an attempt to clear the air on what appears to be a very simple issue, with conflicting or unclear explanations on the internet. Arguments, negations, etc are invited. I'm classifying this as a physics question, since it has to do with resonances, attenuation, etc. Question: I recently Googled this, and found a ton of articles with confusing explanations: One of the top Google results from a site called Live "Science", says - "....their wavelengths stay the same regardless of whether the tract is filled with helium gas or air....That means the frequencies of the resonant harmonics must increase in a helium-filled cavity instead." The above appears to completely contradict the source university article as well as this other article in Scientific American that say the pitch of the sound (and the actual frequency of oscillations) doesn't change, only the timbre (and the distribution of power between low and high frequencies) changes. Additionally, neither the university article quoted nor Scientific American explain HOW the presence of helium leads to the presence of higher frequencies. For example, the university article simply says - "Inhaling helium changes the frequencies of the resonances, and therefore of the formants they produce" --- Okay, how? There also appears to no consensus on whether a LISTENER in a helium atmosphere would hear the same frequencies as normal (it's helium all the way so it shouldn't make any difference right?) OR hear the squeaky voices associated with helium. This should be explainable by physics, what is the answer? | In order to properly understand this without any unnecessary "controversy", let's break the whole process of sound generation and perception into 5 important, but completely separate parts. We'll then proceed to explain each part using a few different examples and pieces of derivative logic: Vibration of the vocal folds Transmission of energy from vocal folds to air in the vocal tract Resonance and Attenuation in the vocal tract Transmission of energy from the end of the vocal tract (mouth) to the surrounding medium Reception and perception of sound by another human. Now: 1. The frequency generated by the vocal folds depends on the tension exerted on them and surrounding muscles. This is a neuromuscular process and is NOT affected by Helium or any other gas (at least in the short term). So our vocal folds continue to vibrate at the same frequency in helium as in normal air. 2. Sound is produced by the transmission of the vibrations produced in the vocal fold, to the air in the vocal tract. This "transmission" doesn't occur by any magic. The vocal folds - as they vibrate - push and pull columns of air in their immediate vicinity, not very different from the way you may push a child on a swing at specific intervals, so as to produce sustained oscillations, and brief enjoyment. (The "pull" in this analogy though, is provided by gravity). The point is, the child oscillates at the same frequency at which you are pushing the swing. i.e. If you are pushing the swing once every N seconds, the child also completes a swing once every N seconds. This is true regardless of the weight of the child, correct? Similarly, the air in the vocal tract, also vibrates at the same frequency as the vocal chords . This fact, is also true regardless of the mass of the air particles . In other words, the frequency of sound does not change, regardless of the medium in which it is transmitted. Time-Out The last one was a doozy. 
Frequency of sound does not change? Then
why on earth does helium sound different from normal air? While the frequency of sound does not change, the SPEED of sound does .
Why? Consider this old classical physics equation: Kinetic Energy = 0.5 * m * v^2, where m = mass and v = speed (let's not say 'velocity' for now). Now the vocal cords vibrate with the same force at the same frequency, so the energy they convey must be the same
in ALL media. In other words, for a given constant value of Kinetic Energy, v^2 is
inversely proportional to the mass of the particles. This naturally means sound travels faster in Helium than in
air. Now, we know the other old equation: Speed = Wavelength x Frequency. Now, since we know that the FREQUENCY of sound is the same in Helium
and in Air, and the speed of sound is greater in Helium, it follows
that the Wavelength of sound is greater in Helium than in Air .
This is a very important conclusion, that bears directly on our next
deduction. 3. Now, we have a very important conclusion in our kitty - "Wavelength of sound is greater in Helium than in Air". Remember that the vocal tract is often modelled (simplistically) as an open or closed tube. To refresh why that's important, see Wikipedia . The vocal tract is actually not really a cylinder, but a fairly complex shape. This means it has areas of constriction and expansion that change depending on the position of your tongue, tension in the tract, and several other factors. So in a sense, in these complex configurations, the vocal tract can be modelled as a series of tubes of varying diameters and varying levels of "closure" of either openings. Now this means, that different parts of the vocal tract , depending on their geometrical configuration and their material characteristics, resonate with different WAVELENGTHS of sound . Notice I said WAVELENGTHS and not FREQUENCIES. In common parlance, "Frequencies" is often used since W and F are directly inter-related in a common medium. However, even if we change the medium through which sound is being propagated, the interaction of sound waves with open and closed tubes depends strictly on its wavelength and not its frequency . Now would be a good point to return to the marquee conclusion we drew from point 2 - "Wavelength of sound is greater in Helium than in Air" . This leads us to the following KEY/FINAL CONCLUSIONS: In a vocal tract filled with Helium: 1. The frequencies of sound do not change 2. The wavelengths of sound DO change 3. Because the wavelengths have changed, the portions of the sound spectrum produced by the vocal chords that are attenuated and resonated by different portions of the vocal tract, also change. 4. This results in the sound spectrum output by the combination of the vocal chords and vocal tract in Helium, being different from the sound spectrum output in normal Air. 5. This means, the net distribution of energies among high and low frequencies (or the timbre ) changes with a change in sound medium. Whereas the fundamental frequency of the sound (closely related to pitch ) does not change . Let's look at the spectrogram of two sample sounds helpfully provided in the NSW article. Unfortunately due to the experimental conditions the two sounds do not have the same content (different sentences are spoken) and therefore the spectrogram cannot be exactly relied upon. However, the fundamental frequency in both is roughly the same and therefore supports our conclusion that the pitch is the same. Since different words are used in either sound, a timbre comparison cannot be made (since the difference in energy distributions visible in the spectrogram can be attributed to the different words spoken). Also, for simplicity and ease of understanding a "Melodic Spectrogram" has been used in favor of the raw, noisier spectrogram. It was generated using Sonic Visualizer. We are not Done! We started with the promise of explaining sound transmission and reception/perception in FIVE parts. We are done with only three. Let's get through the remaining parts very quickly. 4. Transmission of sound from mouth to air - As covered by point 2, with a change in medium, the sound frequency does not change, but the wavelength does. This means that the only effect of filling a room with helium as well (rather than just the vocal tract) is to increase the wavelength of the sound. 5. The above has no impact on sound perception . The ear and brain together are primarily a FREQUENCY receiver. 
The ear translates air pulsations into hair cell oscillations, which then translate to synchronous pulses on attached neurons. Since the timing of the pulses is correlated ONLY to frequency, and the timing of the pulses is what produces notions of pitch, timbre etc, we can safely assume that the ear transcribes sound to the brain faithfully based on frequency. Wavelength has no impact on this process. However, the ear, just like the vocal tract, is non-linear. Which means that it too, is going to attenuate/resonate some sounds (the specific non-linear properties of the cochlea are still being studied). However, UNLIKE the vocal tract, the ear/cochlea is a sealed, fluid-filled chamber. The properties of the cochlea are not affected by surrounding air but only by the fluid, which of course could be affected by blood composition and other biological factors. But NOT the immediate environment. Thus at the root of all the confusion around production and reception of sound in alternative media like Helium, is that the vocal tract's non-linear characteristics are affected by the surrounding medium , whereas the ear's are not. That's it. | {
"source": [
"https://physics.stackexchange.com/questions/122353",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/28073/"
]
} |
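A small companion calculation for the answer above, assuming an ideal gas and a simple closed-open tube model of the vocal tract: the speed of sound c = sqrt(gamma*R*T/M) and the first tube resonance are compared for air and helium. The 17 cm tract length and body-temperature gas are assumptions, not values from the answer.

```python
import math

R = 8.314          # gas constant, J/(mol K)
T = 310.0          # assumed body temperature, K

gases = {          # (adiabatic index gamma, molar mass in kg/mol)
    "air":    (1.40, 0.0290),
    "helium": (1.66, 0.0040),
}

tract_length = 0.17                       # m, rough adult vocal tract (assumption)

for name, (gamma, M) in gases.items():
    c = math.sqrt(gamma * R * T / M)
    f1 = c / (4 * tract_length)           # first resonance of a closed-open tube
    print(f"{name:6s}  c ~ {c:5.0f} m/s   first resonance ~ {f1:4.0f} Hz")
```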
122,505 | Why is it that when you first fill up a balloon, it's hard to get air through, but after inflating it a bit, it becomes much easier to further inflate the balloon? | I think that most of the answers here are incorrect since it has nothing to do with decreasing resistance of rubber. In fact, the force required to stretch the balloon increases, not decreases while inflating. It's similar to stretching a string, ie. the reaction force is proportional to the increase in length of the string - this is why there is a point when you can no longer stretch a chest expander. The real reason that initially it's hard to inflate the balloon is that in the beginning, ie. with the first blow, you increase the total surface of the balloon significantly, thus the force (pressure on the surface) increases also significantly. With each subsequent blow, the increase of the total surface is smaller and so is the increase of force. This is the result of two facts: constant increase of volume with each blow volume of the balloon is proportional to the cube of radius while surface of the balloon is proportional to the square of the radius For a sphere you have: $$
A={4}\pi R^2 \\
V={4\over3}\pi R^3
$$
These equations say that the increase in surface area, and hence the stretching work, required to add one unit of volume is smaller if the balloon is already inflated. | {
"source": [
"https://physics.stackexchange.com/questions/122505",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20168/"
]
} |
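A short numerical check of the geometric argument in the answer above: for a sphere, each equal increment of volume requires a smaller and smaller increment of surface area (dA/dV = 2/R). The per-breath volume is an arbitrary illustrative value.

```python
import math

def radius_from_volume(V):
    return (3 * V / (4 * math.pi)) ** (1 / 3)

def area(R):
    return 4 * math.pi * R**2

blow = 5e-4                      # m^3 of air added per breath (~0.5 litre, assumed)
V, prev_area = 0.0, 0.0
for n in range(1, 6):
    V += blow
    R = radius_from_volume(V)
    A = area(R)
    print(f"breath {n}:  radius {R*100:5.1f} cm,  new surface {A - prev_area:8.5f} m^2")
    prev_area = A                # the added surface area shrinks with every breath
```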
122,570 | My confusion about quantum theory is twofold: I lack an adequate understanding of how the mathematics of quantum theory is supposed to correspond to phenomena in the physical world I still have an incomplete picture in my mind of how cause and effect relationships occur at the quantum level of reality. This is why phenomena such as "entanglement" make absolutely no sense to me. So, in an attempt to come to some understanding of all of this, I would like to know that if what we conceptualize as a "field" is merely an interaction among particles, and particles themselves are actually fluctuations in "fields", then which comes first, particles or fields? | This is a tricky question because it asks about the meaning of words. People use the word "particle" to refer to various, not always well defined, notions in physics. In the end, I think the simplest and more correct single way to categorize the terms is to interpret "particle" as "excitation of a field". For example, if someone says There are two electrons in this box" I would mentally translate that to The electron field in this box has two units of excitation. This is all much easier to think about if you're familiar with the so-called "second quantization". $^{[1]}$ Second quantization Consider a one-dimensional infinite wall potential (i.e. "particle in a box"). The system has a set of discrete energy levels, which we can index as $$\left\{ A, B, C, D, \ldots \right\}$$ If we have only one particle, we can denote its state as e.g. $|\Psi \rangle_1 = |B\rangle + |D\rangle$ . $^{[2]}$ This is the so-called first quantization . If we have two particles, the situation is significantly more complex because, as you have probably learned, quantum particles are indistinguishable. You probably learned that you have to symmetrize (bosons) or antisymmetrize (fermions) the state vector to account for the fact that the particles are indistinguishable. For example, if you say that particle #1 is in state $|\Psi\rangle_1$ as written above, and particle #2 is in state $|\Psi\rangle_2=|C\rangle$ , then the total system state is (assuming boson particles): \begin{align}
\left \lvert \Phi \right \rangle
&= (|B\rangle_1 + |D\rangle_1)|C\rangle_2 + |C\rangle_1 (|B\rangle_2 + |D\rangle_2) \\
&= |B\rangle_1 |C\rangle_2 + |D\rangle_1 |C\rangle_2 + |C\rangle_1 |B\rangle_2 + |C\rangle_1 |D\rangle_2 \, .
\end{align} This notation is horrible. In symmetrization/antisymmetrization you are basically saying: "My notation contains information that it shouldn't, namely the independent states of particles which are actually indistinguishable, so let me add more terms to my notation to effectively remove the unwanted information." This should seem really awkward and undesirable, and it is. Let us consider an analogy for why the symmetrized state is such a bad representation. Consider a violin string with a set of vibrational modes. If we want to specify the state of the string, we enumerate the modes and specify the amplitude of each one, i.e. we write a Fourier series $$\text{string displacement}(x) = \sum_{\text{mode }n=0}^{\infty}c_n \,\,\text{[shape of mode }n](x).$$ The vibrational modes are like the quantum eigenstates, and the amplitudes $c_n$ are like the number of particles in each state. With this analogy, the first quantization notation, in which we index over the particles and specify each one's state, is like indexing over units of amplitude and specifying each one's mode. That's obviously backwards. In particular, you now see why particles are indistinguishable. If a particle is just a unit of excitation of a quantum state, then just like units of amplitude of a vibrating string, it doesn't make any sense to say that the particle has identity. Units of excitation have no identity because they're just mathematical constructs to keep track of how excited a particular mode is. A better way to specify a quantum state is to list each possible state and say how excited it is. In quantum mechanics, excitations come in discrete units $^{[3]}$ , so we could specify a state like this: $$|n_A\rangle_A |n_B\rangle_B |n_C\rangle_C |n_D\rangle_D$$ where $n_i$ is an integer. In this notation, the state $|\Psi\rangle_1$ from before is written $$|\Psi\rangle_1 = |0\rangle_A |1\rangle_B |0\rangle_C |0\rangle_D +
|0\rangle_A |0\rangle_B |0\rangle_C |1\rangle_D.$$ For compactness this would often be written $|\Psi\rangle_1=|0100\rangle + |0001\rangle$ . The more complex two particle state would be $$\left \lvert \Phi \right \rangle = |0\rangle_A |1\rangle_B |1\rangle_C |0\rangle_D + |0\rangle_A |0\rangle_B |1\rangle_C |1\rangle_D$$ or, more compactly, $$\left \lvert \Phi \right \rangle = |0110\rangle + |0011\rangle \, .$$ This is the so-called second quantization notation. Note that it has less terms than the first quantized version. This is because it doesn't need to undo information that it's not supposed to have. Back to fields vs. particles The second quantized notation is far better because it naturally accounts for the "indistinguishable" particles. But, what we really learned, is that particles are actually units of excitation of quantum states. In the field theory language, we'd say that the particle is a unit of excitation of the various modes of the field. I won't say that either fields or particles are more fundamental because one has little meaning without the other, but now that we understand what "particle" really means, the whole situation is hopefully much clearer to you. P.S. I do hope you'll ask for clarification as needed. [1] The term "second quantization" is stupid, so don't try to interpret it. [2] We ignore normalization. [3] Hence the term "quantum". | {
"source": [
"https://physics.stackexchange.com/questions/122570",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/52467/"
]
} |
122,575 | I was watching Allan Adams' lecture on energy eigenfunctions , and there's one part (around 43 minutes into the lecture) that confuses me. Suppose we have the initial wave function $\Psi (x,0)$ such that $\hat{E}\,\Psi (x,0)=E \,\Psi (x,0)$ for some constant $E$. Then, plugging this into the Schrödinger equation, we'd get: \begin{align}
i \hbar \frac{\partial}{\partial t} \Psi (x, 0) &= E \, \Psi (x,0) \\
\frac{\partial}{\partial t} \Psi (x, 0) &= \frac{E}{i \hbar} \, \Psi (x,0) \tag{1}\\
\therefore \Psi (x, t) &= \exp\left({-i \frac{E\,t}{\hbar}}\right) \Psi(x,0) \tag{2}
\end{align} I'm a bit confused about how to go from $(1)$ to $(2)$. Now if we make the additional assumption that $\hat{E}\,\Psi (x,t)=E \,\Psi (x,t)$ for all $t$, then the Schrödinger equation becomes:
\begin{align}
\frac{\partial}{\partial t} \Psi (x, t) &= \frac{E}{i \hbar} \, \Psi (x,t)
\end{align} and I can solve this differential equation easily and get $(2)$. But from watching that part of the lecture, it seems we only need to assume a weaker statement - that the initial wave function is an energy eigenfunction. But then, it's not clear to me how I can get the solution $(2)$ from $(1)$. Am I missing something? Update: Thanks for all the answers. After reading through the accompanying lecture note , we indeed need to assume that the energy operator is a constant over time. | This is a tricky question because it asks about the meaning of words. People use the word "particle" to refer to various, not always well defined, notions in physics. In the end, I think the simplest and more correct single way to categorize the terms is to interpret "particle" as "excitation of a field". For example, if someone says There are two electrons in this box" I would mentally translate that to The electron field in this box has two units of excitation. This is all much easier to think about if you're familiar with the so-called "second quantization". $^{[1]}$ Second quantization Consider a one-dimensional infinite wall potential (i.e. "particle in a box"). The system has a set of discrete energy levels, which we can index as $$\left\{ A, B, C, D, \ldots \right\}$$ If we have only one particle, we can denote its state as e.g. $|\Psi \rangle_1 = |B\rangle + |D\rangle$ . $^{[2]}$ This is the so-called first quantization . If we have two particles, the situation is significantly more complex because, as you have probably learned, quantum particles are indistinguishable. You probably learned that you have to symmetrize (bosons) or antisymmetrize (fermions) the state vector to account for the fact that the particles are indistinguishable. For example, if you say that particle #1 is in state $|\Psi\rangle_1$ as written above, and particle #2 is in state $|\Psi\rangle_2=|C\rangle$ , then the total system state is (assuming boson particles): \begin{align}
\left \lvert \Phi \right \rangle
&= (|B\rangle_1 + |D\rangle_1)|C\rangle_2 + |C\rangle_1 (|B\rangle_2 + |D\rangle_2) \\
&= |B\rangle_1 |C\rangle_2 + |D\rangle_1 |C\rangle_2 + |C\rangle_1 |B\rangle_2 + |C\rangle_1 |D\rangle_2 \, .
\end{align} This notation is horrible. In symmetrization/antisymmetrization you are basically saying: "My notation contains information that it shouldn't, namely the independent states of particles which are actually indistinguishable, so let me add more terms to my notation to effectively remove the unwanted information." This should seem really awkward and undesirable, and it is. Let us consider an analogy for why the symmetrized state is such a bad representation. Consider a violin string with a set of vibrational modes. If we want to specify the state of the string, we enumerate the modes and specify the amplitude of each one, i.e. we write a Fourier series $$\text{string displacement}(x) = \sum_{\text{mode }n=0}^{\infty}c_n \,\,\text{[shape of mode }n](x).$$ The vibrational modes are like the quantum eigenstates, and the amplitudes $c_n$ are like the number of particles in each state. With this analogy, the first quantization notation, in which we index over the particles and specify each one's state, is like indexing over units of amplitude and specifying each one's mode. That's obviously backwards. In particular, you now see why particles are indistinguishable. If a particle is just a unit of excitation of a quantum state, then just like units of amplitude of a vibrating string, it doesn't make any sense to say that the particle has identity. Units of excitation have no identity because they're just mathematical constructs to keep track of how excited a particular mode is. A better way to specify a quantum state is to list each possible state and say how excited it is. In quantum mechanics, excitations come in discrete units $^{[3]}$ , so we could specify a state like this: $$|n_A\rangle_A |n_B\rangle_B |n_C\rangle_C |n_D\rangle_D$$ where $n_i$ is an integer. In this notation, the state $|\Psi\rangle_1$ from before is written $$|\Psi\rangle_1 = |0\rangle_A |1\rangle_B |0\rangle_C |0\rangle_D +
|0\rangle_A |0\rangle_B |0\rangle_C |1\rangle_D.$$ For compactness this would often be written $|\Psi\rangle_1=|0100\rangle + |0001\rangle$ . The more complex two particle state would be $$\left \lvert \Phi \right \rangle = |0\rangle_A |1\rangle_B |1\rangle_C |0\rangle_D + |0\rangle_A |0\rangle_B |1\rangle_C |1\rangle_D$$ or, more compactly, $$\left \lvert \Phi \right \rangle = |0110\rangle + |0011\rangle \, .$$ This is the so-called second quantization notation. Note that it has less terms than the first quantized version. This is because it doesn't need to undo information that it's not supposed to have. Back to fields vs. particles The second quantized notation is far better because it naturally accounts for the "indistinguishable" particles. But, what we really learned, is that particles are actually units of excitation of quantum states. In the field theory language, we'd say that the particle is a unit of excitation of the various modes of the field. I won't say that either fields or particles are more fundamental because one has little meaning without the other, but now that we understand what "particle" really means, the whole situation is hopefully much clearer to you. P.S. I do hope you'll ask for clarification as needed. [1] The term "second quantization" is stupid, so don't try to interpret it. [2] We ignore normalization. [3] Hence the term "quantum". | {
"source": [
"https://physics.stackexchange.com/questions/122575",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/44860/"
]
} |
122,785 | The video “How Far Can Legolas See?” by MinutePhysics recently went viral. The video states that although Legolas would in principle be able to count $105$ horsemen $24\text{ km}$ away, he shouldn't have been able to tell that their leader was very tall. I understand that the main goal of MinutePhysics is mostly educational, and for that reason it assumes a simplified model for seeing. But if we consider a more detailed model for vision, it appears to me that even with human-size eyeballs and pupils $^\dagger$, one might actually be able to (in principle) distinguish smaller angles than the well known angular resolution :
$$\theta \approx 1.22 \frac \lambda D$$ So here's my question—using the facts that: Elves have two eyes (which might be useful as in e.g. the Very Large Array ). Eyes can dynamically move and change the size of their pupils. And assuming that: Legolas could do intensive image processing. The density of photoreceptor cells in Legolas's retina is not a limiting factor here. Elves are pretty much limited to visible light just as humans are. They had the cleanest air possible on Earth on that day. How well could Legolas see those horsemen? $^\dagger$ I'm not sure if this is an accurate description of elves in Tolkien's fantasy | Fun question! As you pointed out, $$\theta \approx 1.22\frac{\lambda}{D}$$ For a human-like eye, which has a maximum pupil diameter of about $9\ \mathrm{mm}$ and choosing the shortest wavelength in the visible spectrum of about $390\ \mathrm{nm}$ , the angular resolution works out to about $5.3\times10^{-5}$ (radians, of course). At a distance of $24\ \mathrm{km}$ , this corresponds to a linear resolution ( $\theta d$ , where $d$ is the distance) of about $1.2\ \mathrm m$ . So counting mounted riders seems plausible since they are probably separated by one to a few times this resolution. Comparing their heights which are on the order of the resolution would be more difficult, but might still be possible with dithering . Does Legolas perhaps wiggle his head around a lot while he's counting? Dithering only helps when the image sampling (in this case, by elven photoreceptors) is worse than the resolution of the optics. Human eyes apparently have an equivalent pixel spacing of something like a few tenths of an arcminute , while the diffraction-limited resolution is about a tenth of an arcminute, so dithering or some other technique would be necessary to take full advantage of the optics. An interferometer has an angular resolution equal to a telescope with a diameter equal to the separation between the two most widely separated detectors. Legolas has two detectors (eyeballs) separated by about 10 times the diameter of his pupils , $75\ \mathrm{mm}$ or so at most. This would give him a linear resolution of about $15\ \mathrm{cm}$ at a distance of $24\ \mathrm{km}$ , probably sufficient to compare the heights of mounted riders. However, interferometry is a bit more complicated than that. With only two detectors and a single fixed separation, only features with angular separations equal to the resolution are resolved, and direction is important as well. If Legolas' eyes are oriented horizontally, he won't be able to resolve structure in the vertical direction using interferometric techniques. So he'd at the very least need to tilt his head sideways, and probably also jiggle it around a lot (including some rotation) again to get decent sampling of different baseline orientations. Still, it seems like with a sufficiently sophisticated processor (elf brain?) he could achieve the reported observation. Luboš Motl points out some other possible difficulties with interferometry in his answer, primarily that the combination of a polychromatic source and a detector spacing many times larger than the observed wavelength lead to no correlation in the phase of the light entering the two detectors. While true, Legolas may be able to get around this if his eyes (specifically the photoreceptors) are sufficiently sophisticated so as to act as a simultaneous high-resolution imaging spectrometer or integral field spectrograph and interferometer. 
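(As a quick numerical check of the resolution figures quoted earlier in this answer — this is my own sketch, not part of the original post; it just reuses the assumed 9 mm pupil, 75 mm eye separation, 390 nm wavelength and 24 km distance.)

```python
import math

wavelength = 390e-9   # m, shortest visible wavelength (assumed above)
pupil      = 9e-3     # m, assumed maximum pupil diameter
baseline   = 75e-3    # m, assumed separation of the two eyes
distance   = 24e3     # m, distance to the riders

def rayleigh(lam, aperture):
    """Diffraction-limited angular resolution, theta ~ 1.22 * lambda / D, in radians."""
    return 1.22 * lam / aperture

for label, d in [("single eye", pupil), ("two-eye baseline", baseline)]:
    theta = rayleigh(wavelength, d)
    print(f"{label:16s}: {theta:.2e} rad -> {theta * distance:.2f} m at 24 km")

# For comparison, 1 arcsecond of atmospheric 'seeing':
print(f"{'1 arcsec seeing':16s}: {math.radians(1 / 3600):.2e} rad")
```

This reproduces the numbers above: roughly $1.3\ \mathrm m$ per eye and $0.15\ \mathrm m$ for the two-eye baseline, both comparable to arcsecond-level seeing.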
This way he could pick out signals of a given wavelength and use them in his interferometric processing. A couple of the other answers and comments mention the potential difficulty drawing a sight line to a point $24\rm km$ away due to the curvature of the Earth. As has been pointed out, Legolas just needs to have an advantage in elevation of about $90\ \mathrm m$ (the radial distance from a circle $6400\ \mathrm{km}$ in radius to a tangent $24\ \mathrm{km}$ along the circumference; Middle-Earth is apparently about Earth-sized, or may be Earth in the past, though I can't really nail this down with a canonical source after a quick search). He doesn't need to be on a mountaintop or anything, so it seems reasonable to just assume that the geography allows a line of sight. Finally a bit about "clean air". In astronomy (if you haven't guessed my field yet, now you know.) we refer to distortions caused by the atmosphere as "seeing" . Seeing is often measured in arcseconds ( $3600'' = 60' = 1^\circ$ ), referring to the limit imposed on angular resolution by atmospheric distortions. The best seeing, achieved from mountaintops in perfect conditions, is about $1''$ , or in radians $4.8\times10^{-6}$ . This is about the same angular resolution as Legolas' amazing interferometric eyes. I'm not sure what seeing would be like horizontally across a distance of $24\ \mathrm{km}$ . On the one hand there is a lot more air than looking up vertically; the atmosphere is thicker than $24\ \mathrm{km}$ but its density drops rapidly with altitude. On the other hand the relatively uniform density and temperature at fixed altitude would cause less variation in refractive index than in the vertical direction, which might improve seeing. If I had to guess, I'd say that for very still air at uniform temperature he might get seeing as good as $1\rm arcsec$ , but with more realistic conditions with the Sun shining, mirage-like effects probably take over limiting the resolution that Legolas can achieve. | {
"source": [
"https://physics.stackexchange.com/questions/122785",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/24791/"
]
} |
122,802 | Thank you for taking the time to read this.
I am a layperson and will understand if the question is closed. My understanding is that in a vacuum, a constant force of $n$ applied to an object of mass $m$ will cause the object to accelerate.
However, at some point (relativistic) mass increases and I assume this affects inertia and therefore reduces acceleration. My question is: Given a certain set of values for $m$ and $n$, would inertia and the applied force ever be in equilibrium, resulting in zero acceleration (constant velocity) or does the object continue to accelerate albeit at an ever decreasing rate? | Fun question! As you pointed out, $$\theta \approx 1.22\frac{\lambda}{D}$$ For a human-like eye, which has a maximum pupil diameter of about $9\ \mathrm{mm}$ and choosing the shortest wavelength in the visible spectrum of about $390\ \mathrm{nm}$ , the angular resolution works out to about $5.3\times10^{-5}$ (radians, of course). At a distance of $24\ \mathrm{km}$ , this corresponds to a linear resolution ( $\theta d$ , where $d$ is the distance) of about $1.2\ \mathrm m$ . So counting mounted riders seems plausible since they are probably separated by one to a few times this resolution. Comparing their heights which are on the order of the resolution would be more difficult, but might still be possible with dithering . Does Legolas perhaps wiggle his head around a lot while he's counting? Dithering only helps when the image sampling (in this case, by elven photoreceptors) is worse than the resolution of the optics. Human eyes apparently have an equivalent pixel spacing of something like a few tenths of an arcminute , while the diffraction-limited resolution is about a tenth of an arcminute, so dithering or some other technique would be necessary to take full advantage of the optics. An interferometer has an angular resolution equal to a telescope with a diameter equal to the separation between the two most widely separated detectors. Legolas has two detectors (eyeballs) separated by about 10 times the diameter of his pupils , $75\ \mathrm{mm}$ or so at most. This would give him a linear resolution of about $15\ \mathrm{cm}$ at a distance of $24\ \mathrm{km}$ , probably sufficient to compare the heights of mounted riders. However, interferometry is a bit more complicated than that. With only two detectors and a single fixed separation, only features with angular separations equal to the resolution are resolved, and direction is important as well. If Legolas' eyes are oriented horizontally, he won't be able to resolve structure in the vertical direction using interferometric techniques. So he'd at the very least need to tilt his head sideways, and probably also jiggle it around a lot (including some rotation) again to get decent sampling of different baseline orientations. Still, it seems like with a sufficiently sophisticated processor (elf brain?) he could achieve the reported observation. Luboš Motl points out some other possible difficulties with interferometry in his answer, primarily that the combination of a polychromatic source and a detector spacing many times larger than the observed wavelength lead to no correlation in the phase of the light entering the two detectors. While true, Legolas may be able to get around this if his eyes (specifically the photoreceptors) are sufficiently sophisticated so as to act as a simultaneous high-resolution imaging spectrometer or integral field spectrograph and interferometer. This way he could pick out signals of a given wavelength and use them in his interferometric processing. A couple of the other answers and comments mention the potential difficulty drawing a sight line to a point $24\rm km$ away due to the curvature of the Earth. 
As has been pointed out, Legolas just needs to have an advantage in elevation of about $90\ \mathrm m$ (the radial distance from a circle $6400\ \mathrm{km}$ in radius to a tangent $24\ \mathrm{km}$ along the circumference; Middle-Earth is apparently about Earth-sized, or may be Earth in the past, though I can't really nail this down with a canonical source after a quick search). He doesn't need to be on a mountaintop or anything, so it seems reasonable to just assume that the geography allows a line of sight. Finally a bit about "clean air". In astronomy (if you haven't guessed my field yet, now you know.) we refer to distortions caused by the atmosphere as "seeing" . Seeing is often measured in arcseconds ( $3600'' = 60' = 1^\circ$ ), referring to the limit imposed on angular resolution by atmospheric distortions. The best seeing, achieved from mountaintops in perfect conditions, is about $1''$ , or in radians $4.8\times10^{-6}$ . This is about the same angular resolution as Legolas' amazing interferometric eyes. I'm not sure what seeing would be like horizontally across a distance of $24\ \mathrm{km}$ . On the one hand there is a lot more air than looking up vertically; the atmosphere is thicker than $24\ \mathrm{km}$ but its density drops rapidly with altitude. On the other hand the relatively uniform density and temperature at fixed altitude would cause less variation in refractive index than in the vertical direction, which might improve seeing. If I had to guess, I'd say that for very still air at uniform temperature he might get seeing as good as $1\rm arcsec$ , but with more realistic conditions with the Sun shining, mirage-like effects probably take over limiting the resolution that Legolas can achieve. | {
"source": [
"https://physics.stackexchange.com/questions/122802",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/37047/"
]
} |
122,911 | I know black conducts heat while white reflects it. But they are colors after all. If a metal is painted black, it conducts more heat or at a rapid speed than it would do before it was coated. But, as far as I know, colors don't have any special "substance" in them, which might trigger the sudden absorption of heat or reflection of the same. What is the physics behind this? Are colors by themselves, some catalyst kinda thing? | I know black conducts heat while white reflects it. The correct term is "black absorbs light while white reflects it". We have named colors of light we see in the visible spectrum . White reflects most of the energy falling from the visible spectrum, black absorbs it. When the energy of light is absorbed it turns into heat . Any material painted black will absorb this heat further and its temperature will be raised but it will depend on the material how far the heat is transferred. If it is metal painted black, metal is a good conductor of heat and will distribute the energy fast on the whole body. But they are colors after all. They change the surface properties of materials on which they are painted thus changing the ability of absorption and emission of radiation. The energy coming from the sun covers a much larger electromagnetic spectrum than the visible. The visible has about half of the energy coming from the sun on the surface, as seen in the link. So a metal door in the sun will transfer the heat of the visible spectrum to the interior if painted black, will reflect it back and keep the interior cooler if painted white. It is a good reason for painting roofs and walls white in hot countries. A white car is also better in hot countries for this reason . It is not always sure that the color properties ( absorption/reflection) are followed by the invisible part of the sun spectrum, infrared or ultraviolet. Each paint has to be studied as far as its response to the impinging radiation to be used efficiently for thermal protection. | {
"source": [
"https://physics.stackexchange.com/questions/122911",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/40854/"
]
} |
122,933 | Suppose a man falls into very cold water and gets their foot stuck under a heavy rock. Fortunately, his head is above water and someone is able to call for help. The paramedics want to keep him warm while they work on freeing his foot. They put a hat on his head. Should they also wrap him in a blanket? | Yes. There are three mechanisms of heat loss (this applies generally, not just to the man in this example) radiation, conduction and convection. In most everyday cases radiation can be neglected so we just have conduction and convection. Conduction is just the transfer of heat along a static object. For example if you hold the end of a metal bar in a flame pretty soon the heat will conduct along the bar and the end you're holding will get hot too. In this case if our man is surrounded by a region of still water he will lose heat by conduction into the water. Convection is the transport of heat by a moving fluid. For example if you stand in still air on a winter day it may not feel too cold, but if a gale is blowing you'll start feeling cold very quickly. This happens because your body heats the air immediately around it then the wind whisks that warmer air away and replaces it with cold air. Now back to your question. We can't do anything about conduction. The man is in the water and we can't change the thermal conductivity of the water. But we can do something about convection. Our man's body heat will soon heat the water immediately around him, and we don't want water currents carrying away that warm water and replacing it with cold water. Wrapping a blanket tightly around the man will trap a layer of water near his skin and prevent convection from cooling him. | {
"source": [
"https://physics.stackexchange.com/questions/122933",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/52719/"
]
} |
122,934 | From the Gross-Pitaevskii equation \begin{equation}i\hbar\frac{\partial\psi}{\partial t}=\left(-\frac{\hbar^2}{2m}\nabla^2+V+g|\psi|^2\right)\psi\end{equation}
using the variational relation
\begin{equation}i\hbar\frac{\partial\psi}{\partial t}=\frac{\partial\varepsilon}{\partial \psi^*}\end{equation}
we find the energy density
\begin{equation}\varepsilon=\frac{\hbar^2}{2m}|\nabla\psi|^2+V|\psi|^2+\frac{g}{2}|\psi|^4\end{equation}
The energy would be $E=\int d^3r \varepsilon$
and this is a prime integral of the motion, meaning it is a conserved quantity. My questions are: 1) How do we get the variational relation? 2)How can we prove that $E$ is a conserved quantity? | Yes. There are three mechanisms of heat loss (this applies generally, not just to the man in this example) radiation, conduction and convection. In most everyday cases radiation can be neglected so we just have conduction and convection. Conduction is just the transfer of heat along a static object. For example if you hold the end of a metal bar in a flame pretty soon the heat will conduct along the bar and the end you're holding will get hot too. In this case if our man is surrounded by a region of still water he will lose heat by conduction into the water. Convection is the transport of heat by a moving fluid. For example if you stand in still air on a winter day it may not feel too cold, but if a gale is blowing you'll start feeling cold very quickly. This happens because your body heats the air immediately around it then the wind whisks that warmer air away and replaces it with cold air. Now back to your question. We can't do anything about conduction. The man is in the water and we can't change the thermal conductivity of the water. But we can do something about convection. Our man's body heat will soon heat the water immediately around him, and we don't want water currents carrying away that warm water and replacing it with cold water. Wrapping a blanket tightly around the man will trap a layer of water near his skin and prevent convection from cooling him. | {
"source": [
"https://physics.stackexchange.com/questions/122934",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/22866/"
]
} |
123,643 | In astrophysics there is a lot going on about strong, large scale magnetic fields: in stars (prominences), magnetic dynamos, compact accretors collimating jets, etc. There's even a special computational formalism called magnetohydrodynamics (MHD), which allows to deal with space plasma. Yet I've never read about large scale electric fields. I know that most of the matter we model in astrophysics is plasma but, naively, one would assume that this introduces both $\mathbf{E}$ and $\mathbf{B}$ fields on an equal footing. So where does this asymmetry come from? | Many astrophysical plasmas are well modeled as perfect conductors. Ideal MHD assumes this limit. As a result, there is no electric field in the fluid's rest frame. In other frames, we generally have $\vec{E} = -\vec{v} \times \vec{B}$, so there is an electric field. However, the perfect conductivity constraint means we don't have to model the electric field - if we evolve just the magnetic field (and the other properties of the fluid like its velocity and density), then we have the complete picture. The natural followup question is, "Why can we assume infinite conductivity?" Most people's intuition about space is that it is mostly vacuum, and vacuum seems like as good an insulator as one can find. The thing about vacuum is that even though there are few charge carriers per unit volume, what charge carriers there are can proceed uninterrupted and respond to any electric field. The book Physics of the Interstellar and Intergalactic Medium (Bruce Draine) gives some equations to quantify this. In eq. 35.48 it gives the conductivity of a pure hydrogen fully ionized plasma at temerature $T$ as
$$ \sigma = 4.6\times10^{9}\ \mathrm{s}^{-1} \left(\frac{T}{100\ \mathrm{K}}\right)^{3/2} \left(\frac{30}{\log\Lambda}\right) $$
(CGS units), where kinetic effects and the Debye length are approximately captured by the Coulomb logarithm
$$ \log\Lambda = 22.1 + \log\left(\left(\frac{E}{kT}\right) \left(\frac{T}{10^4\ \mathrm{K}}\right)^{3/2} \left(\frac{n_e}{\mathrm{cm}^{-3}}\right)^{-1}\right) $$
(eq. 2.17). Here $E$ is the particle kinetic energy, and $n_e$ is the number density of electrons. To give some sense to these numbers, take a look at the conductivities in this table on Wikipedia. Copper has a conductivity of about $6.0\times10^7\ \mathrm{S/m} = 5.4\times10^{17}\ \mathrm{s}^{-1}$, so a $100\ \mathrm{K}$ hydrogen plasma is not nearly as conductive. However, drinking water has a conductivity of no more than $5\times10^{-2}\ \mathrm{S/m} = 4.5\times10^{8}\ \mathrm{s}^{-1}$, and air's conductivity is at most $8\times10^{-15}\ \mathrm{S/m} = 7\times10^{-5}\ \mathrm{s}^{-1}$. Thus astrophysical plasmas are not particularly insulating. Bruce Draine's book also quotes a timescale for a magnetic field to decay over a lengthscale $L$:
$$ \tau = 5\times10^{8}\ \mathrm{yr}\ \left(\frac{T}{100\ \mathrm{K}}\right)^{3/2} \left(\frac{30}{\log\Lambda}\right) \left(\frac{L}{\mathrm{AU}}\right)^2 $$
(eq. 35.49). Thus if the smallest length scales in your problem are at least $10\ \mathrm{AU}$ (and you are working around $100\ \mathrm{K}$), the magnetic field decay time due to the finite conductivity of the plasma (due in turn to e.g. ion collisions) is well over the current age of the universe. On smaller scales you may have to model such effects, and indeed many astrophysicists do just that. | {
"source": [
"https://physics.stackexchange.com/questions/123643",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/40621/"
]
} |
123,644 | The question is: if I were to insert a brass plate between two charges, what will happen to the force between the charges? Would it increase, decrease or stay the same? Does the brass plate increase the value of permittivity of the medium and therefore the force decreases? The correct answer is that it will increase. But I do not understand how. | Many astrophysical plasmas are well modeled as perfect conductors. Ideal MHD assumes this limit. As a result, there is no electric field in the fluid's rest frame. In other frames, we generally have $\vec{E} = -\vec{v} \times \vec{B}$, so there is an electric field. However, the perfect conductivity constraint means we don't have to model the electric field - if we evolve just the magnetic field (and the other properties of the fluid like its velocity and density), then we have the complete picture. The natural followup question is, "Why can we assume infinite conductivity?" Most people's intuition about space is that it is mostly vacuum, and vacuum seems like as good an insulator as one can find. The thing about vacuum is that even though there are few charge carriers per unit volume, what charge carriers there are can proceed uninterrupted and respond to any electric field. The book Physics of the Interstellar and Intergalactic Medium (Bruce Draine) gives some equations to quantify this. In eq. 35.48 it gives the conductivity of a pure hydrogen fully ionized plasma at temerature $T$ as
$$ \sigma = 4.6\times10^{9}\ \mathrm{s}^{-1} \left(\frac{T}{100\ \mathrm{K}}\right)^{3/2} \left(\frac{30}{\log\Lambda}\right) $$
(CGS units), where kinetic effects and the Debye length are approximately captured by the Coulomb logarithm
$$ \log\Lambda = 22.1 + \log\left(\left(\frac{E}{kT}\right) \left(\frac{T}{10^4\ \mathrm{K}}\right)^{3/2} \left(\frac{n_e}{\mathrm{cm}^{-3}}\right)^{-1}\right) $$
(eq. 2.17). Here $E$ is the particle kinetic energy, and $n_e$ is the number density of electrons. To give some sense to these numbers, take a look at the conductivities in this table on Wikipedia. Copper has a conductivity of about $6.0\times10^7\ \mathrm{S/m} = 5.4\times10^{17}\ \mathrm{s}^{-1}$, so a $100\ \mathrm{K}$ hydrogen plasma is not nearly as conductive. However, drinking water has a conductivity of no more than $5\times10^{-2}\ \mathrm{S/m} = 4.5\times10^{8}\ \mathrm{s}^{-1}$, and air's conductivity is at most $8\times10^{-15}\ \mathrm{S/m} = 7\times10^{-5}\ \mathrm{s}^{-1}$. Thus astrophysical plasmas are not particularly insulating. Bruce Draine's book also quotes a timescale for a magnetic field to decay over a lengthscale $L$:
$$ \tau = 5\times10^{8}\ \mathrm{yr}\ \left(\frac{T}{100\ \mathrm{K}}\right)^{3/2} \left(\frac{30}{\log\Lambda}\right) \left(\frac{L}{\mathrm{AU}}\right)^2 $$
(eq. 35.49). Thus if the smallest length scales in your problem are at least $10\ \mathrm{AU}$ (and you are working around $100\ \mathrm{K}$), the magnetic field decay time due to the finite conductivity of the plasma (due in turn to e.g. ion collisions) is well over the current age of the universe. On smaller scales you may have to model such effects, and indeed many astrophysicists do just that. | {
"source": [
"https://physics.stackexchange.com/questions/123644",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/53053/"
]
} |
123,674 | This article claims that because the universe appears to be flat, it must be infinite. I've heard this idea mentioned in a few other places, but they never explain the reasoning at all. | We need to be precise about the phrase the size of the universe . Specifically I'm going to take it to mean the maximum possible separation between any two points. In an infinite universe two points can be separated by an arbitrarily large distance, so if the maximum distance between two points is finite this means the universe must not be infinite. The point of all this is that the distance between any two points is calculated using the metric. For a Friedmann universe like ours (at least we believe our universe to be a Friedmann universe) the metric is (in polar coordinates): $$ ds^2 = -dt^2 + a^2(t) \left[ \frac{dr^2}{1 - kr^2} + r^2d\Omega^2 \right] $$ The value of the parameter $k$ determines whether the universe is closed, flat or open. Specifically $k > 0$ is a closed universe, $k = 0$ is a flat universe and $k < 0$ is an open universe. The variable $s$ is the proper distance. Now, suppose we choose an origin at some starting point, choose a fixed time, and calculate the proper distance, $s$ as we move radially away from the starting point. The question is whether $s$ can reach infinity or not. Because only $r$ is changing $dt = d\Omega = 0$, so the expression for the proper distance simplifies to: $$ ds^2 = a^2(t) \frac{dr^2}{1 - kr^2} $$ We'll choose our units of distance to make $a = 1$, and we'll consider only closed or flat space, $k \ge 0$, in which case we can integrate the above equation to give: $$ s(r) = \frac{\sin^{-1}(\sqrt{k}r)}{\sqrt{k}} $$ So the maximum possible value for $s(r)$ is when $\sqrt{k}r = 1$, in which case: $$ s_{max} = \frac{\pi}{2\sqrt{k}} $$ And there's the result we want. For a closed universe $k > 0$ and therefore the maximum possible distance between two points is finite. However as $k \rightarrow 0$ the maximum possible distance $s_{max} \rightarrow\infty$. That's why a flat universe is infinite. However we should note that, As Rexcirus points out in his answer, even a flat universe can be finite if it has a non-trivial topology. | {
"source": [
"https://physics.stackexchange.com/questions/123674",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/37632/"
]
} |
123,753 | At a simple level, speeding in a car attempts to minimize the time required to travel a distance by utilizing the basic relationship: $$d=st$$ So for a given distance, time should be inversely proportional to speed: the faster you go the less time it takes. My question is, on a practical level, does this actually help you get to your destination much faster? Say you are travelling, on average, 5 mph (sorry for non SI units, as an American driving in kph just seems wrong) over the speed limit, which is often considered 'safe' for avoiding a traffic ticket. Does this shave seconds off your commute time? Minutes? I believe that traffic can be modeled using fluid dynamics (haven't actually seen those models myself just have been told this is the case) so, how does this come to in to play while speeding? It seems to me that it would depend on where you speed rather than how fast, i.e. speeding to avoiding getting stuck at a traffic light. Any insights or calculations would be greatly appreciated,
Thanks! | Alright, let's start with your direct question. Since $d = vt$ the time it takes to travel a certain distance is inversely proportional to your speed $ t \propto v^{-1} $, and so the fractional change in time is proportional to the negative fractional change in your speed. $$ \frac{dt}{t} = - \frac{dv}{v} $$
So, if we consider a typical highway speed of 65 mph and a 5 mph difference, this is a fractional change of about 8%. So going 5 mph faster on the highway will shave 8% off your travel time. So if you had an hour drive, it'll shave off 5 minutes. Drag But. Let's try to consider the added cost of going faster. If you go faster, the wind drag is higher, so your car needs more power to maintain speed. More power means more energy, more energy means more fuel, more fuel means more money. If we consider just the contribution of wind resistance, we know that $ F \propto v^2 $ for cars, and power is $ P = F v $ so the power consumed by drag goes as $ P \propto v^3 $. Energy consumption is $ E = P t $, so if we consider a drive of fixed length, since $ t \propto v^{-1}$ we have for the contribution of air drag, $ E \propto v^2 $. Now, the energy you get from fuel is proportional to the number of gallons you buy, and the cost scales as the number of gallons so $ \text{fuel cost} \propto v^2 $. So the fractional change in the fuel consumed due to wind drag is:
$$ \frac{ d(\text{fuel cost}_{\text{drag}} ) }{ \text{ fuel cost}_{\text{drag}} } = 2 \frac{ dv }{ v } $$
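To make the two scalings concrete, here is a minimal sketch (mine, not part of the original answer) that evaluates the exact ratios for the 65 mph to 70 mph example used above, alongside the differential estimates:

```python
v1, v2 = 65.0, 70.0              # mph, assumed cruise speeds from the example above

time_saved = 1 - v1 / v2         # fixed distance, so t scales as 1/v
extra_drag = (v2 / v1) ** 2 - 1  # fixed distance, so drag energy scales as v^2

print(f"time saved        : {time_saved:.1%} (differential estimate dv/v  = {(v2 - v1) / v1:.1%})")
print(f"extra drag energy : {extra_drag:.1%} (differential estimate 2dv/v = {2 * (v2 - v1) / v1:.1%})")
```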
So, for the same 8% increase in speed, you pay an additional 16% in fuel costs due to the loss to air drag. Naturally air drag isn't the only way we use up fuel to keep a car running, there are all kinds of losses in a car, from inefficiencies in the engine itself, to friction in the various components of the engine, etc. As a simple model, let's say that the power a car consumes is the sum of the air drag and a constant term independent of speed: $P \sim \alpha v^3 + \beta $ for some appropriate choices of $\alpha$ and $\beta$ This would mean that our energy consumption would still be $ E = P t $, so for a constant distance drive, we're talking
$$ E \sim \alpha v^2 + \frac{\beta}{v} $$. We can test this model against data from a government study (figure from wikipedia:Fuel_economy_in_automobiles ) where for our model, we have
$$ \text{mpg} = \frac{ \alpha }{ v^2 + \frac{\beta}{v} } $$ Here I've shown the figure as well as an example fit of our model: The fit is overlayed in red, and corresponds to $\alpha = 1.5 \times 10^5, \beta = 1.28 \times 10^5 $. Notice that our simple model does pretty well and corresponds to a car that has a highway mph of about 25 mpg. Notice that at high speeds we are seeing the scaling we expect due to air drag alone, for at high speed our operating costs are dominated by air resistance, but it was useful to create the simple model and do the fit in this case because the region of interest is in the overlap region. Per hour Now that we know how the efficiency of our car varies with speed, knowing the average price of gas of $\$3.752$/gallon (from wolfram alpha ) we can compute the cost to operate an average car at a given speed: An in particular, we can compute the additional cost per hour per 5mph increase in speed as a function of speed: Per 10 miles Here I've shown the operating costs as a function of the time spent driving, so as to give costs per hour, which I think is useful for longer drives and something people have a handle on from other areas of life. If we instead want to look at it as function of the distance travelled, we can look at the car efficiency as the cost to travel 10 miles as a function of speed. Or we can consider again the change created by a 5 mph increase in speed for a fixed travel distance. Where here it becomes clear that for a fixed travel distance, as long as you are going less than 40 miles per hour (which for our model was the maximum fuel efficiency speed, and varies per car but the data seems to indicate is about 40 mph across the board), you can always justify speeding by 5 mph from purely economic terms, but at something like highway speeds, it costs you an additional 15 cents or so per 10 miles to go 5 over. Traffic Lights So, up till now we have considered the effectiveness of speeding from an economic perspective in the limit that we are travelling unimpeded down the road. As people have requested in the comments, let's try to figure out how effective speeding is in a more city type environment. This is a difficult problem to address, as traffic lights can have fairly complicated controllers. In particular, in some regions there are Green waves where the lights are designed to allow people travelling at the proper speed to pass unimpeded down long stretches of road. Obviously in this case, you would want to travel at the speed of the green wave and speeding wouldn't help you and would in fact hurt you. But, sophisticated traffic light controllers are not all that common outside of rich large cities. So, let's try to adopt a spherical cow type approximation to traffic lights and assume that traffic lights are independent and just operating on some fixed cycle of green and red. $p$ will be the fraction of time the average traffic light is green, $\tau$ will be the length of a red light, $d$ will be the average distance between traffic lights. If the lights are all operating independently, the distribution of waiting times when we reach a light can be modeled as
$$ P(t) = p \delta(t) + \frac{1}{\tau} ( 1 - p ) \quad 0 \leq t \leq \tau $$ or in words, with probability $p$ we don't have to wait at all, otherwise our waiting time will be uniform up to $\tau$. This distribution has mean and variance
$$ \mu = \frac{\tau}{2} ( 1 - p ) $$
$$ \sigma^2 = \frac{\tau^2}{12} ( 1 - p) ( 3 p + 1 ) $$ Now, if we travel for $N$ blocks, we will have for the average time it takes
$$ \langle t \rangle = N \left( \frac{d}{v} + \frac{\tau}{2} ( 1 - p ) \right) $$
$$ \sigma^2_t = N \frac{\tau^2}{12} ( 1- p) ( 3 p + 1 ) $$
where we have added in the travel time between the lights themselves. So, for instance, with $d = 1/10$ mile between lights on average, $\tau = 30$ seconds, and $p = 0.65$ we get for an average city speed as function of target speed: So, for a target speed of about 45 mph for a major road in a city, we get for an average speed something like 28 mph, which seems to agree moderately well with observations . Now, as we have modeled it, if you speed you will get there faster, but what we should compare against is the intrinsic variability introduced by the traffic lights, and a case could be made that speeding 5 mph over is really only worth it if the gains you get in timing are larger than the natural variations in times you would have given the lights, otherwise you'll hardly notice the effect. So in particular, we can compare the fractional reduction in your travel time for going 5 mph over, versus the fractional change in your travel time due to the intrinsic variation due to the random light timings $(\sigma/\mu)$ for different number of blocks. We obtain: Here the solid line shows the fractional change in your travel time you'd get by going 5 mph over the target speed at the bottom. Notice that it scales as $1/v$ just as the very top of the post. The dotted lines show fractional change in travel time induced by a 1 sigma variation in the behavior of the traffic lights, for different number of blocks. Notice that at around 40 mph, the time you would shave off by going 5 mph is comparable to the natural variations you would expect in travel times due to your luck with the traffic lights if you are travelling 10 blocks, and both of these are at about the 10% level. At this point it starts to become difficult to justify speeding as its effect will be hard to notice over the natural variation. But, notice that if you are travelling a longer distance, there is a clear gain given by speeding, as the variations in travel time start to be suppressed through averaging. On the flip side, for very short trips of a few blocks, the variations in your travel time given by your luck at the lights completely dominates any gain you'd get by speeding. Here I've simulated traveling for 5, 15 or 50 blocks according to our model, both at 45 mph and going 50 mph. I ran the simulation 10,000 times and here I show the time gained from speeding in different trials. Here we can really see that there are no noticeable gains for going 5 over over 5 blocks, it's completely washed out by our luck with the lights, but a noticeable change in our expected time over 50 blocks. The code used for this answer is available as an ipython notebook Addendum: I've explore a more realistic model of power losses in cars in this more recent answer | {
"source": [
"https://physics.stackexchange.com/questions/123753",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/52768/"
]
} |
123,758 | Two equal amplitude wave pulses approaching each other through some medium such as a string may form a region of zero amplitude when they overlap completely. At this point, the location of overlap is (apparently) indistinguishable from any other region in the medium with zero amplitude. However, the two pulses will emerge from the blank region and continue to travel through the medium. How is the region at which total destructive interference occurs different from any other region of zero amplitude in the medium? Where is the energy and information present within each wave pulse stored during superposition? I assume that the molecules in a string gain potential energy during superposition, but where are wave energy and information stored in superimposed states at the molecular and quantum levels? | Alright, let's start with your direct question. Since $d = vt$ the time it takes to travel a certain distance is inversely proportional to your speed $ t \propto v^{-1} $, and so the fractional change in time is proportional to the negative fractional change in your speed. $$ \frac{dt}{t} = - \frac{dv}{v} $$
So, if we consider typical a typical highway speed of 65 mph and a 5 mph difference, this is a fractional change of about 8%. So going 5 mph faster on the highway will shave 8% off your travel time. So if you had an hour drive, it'll shave off 5 minutes. Drag But. Let's try to consider the added cost of going faster. If you go faster, the wind drag is higher, so your car needs more power to maintain speed. More power means more energy, more energy means more fuel, more fuel means more money. If we consider just the contribution of wind resistance, we know that $ F \propto v^2 $ for cars, and power is $ P = F v $ so the power consumed by drag goes as $ P \propto v^3 $. Energy consumption is $ E = P t $, so if we consider a drive of fixed length, since $ t \propto v^{-1}$ we have for the contribution of air drag, $ E \propto v^2 $. Now, the energy you get from fuel is proportional to the number of gallons you buy, and the cost scales as the number of gallons so $ \text{fuel cost} \propto v^2 $. So the fractional change in the fuel consumed due to wind drag is:
$$ \frac{ d(\text{fuel cost}_{\text{drag}} ) }{ \text{ fuel cost}_{\text{drag}} } = 2 \frac{ dv }{ v } $$
So, for the same 8% increase in speed, you pay an additional 16% in fuel costs due to the loss to air drag. Naturally air drag isn't the only way we use up fuel to keep a car running, there are all kinds of losses in a car, from inefficiencies in the engine itself, to friction in the various components of the engine, etc. As a simple model, let's say that the power a car consumes is the sum of the air drag and a constant term independent of speed: $P \sim \alpha v^3 + \beta $ for some appropriate choices of $\alpha$ and $\beta$ This would mean that our energy consumption would still be $ E = P t $, so for a constant distance drive, we're talking
$$ E \sim \alpha v^2 + \frac{\beta}{v} $$. We can test this model against data from a government study (figure from wikipedia:Fuel_economy_in_automobiles ) where for our model, we have
$$ \text{mpg} = \frac{ \alpha }{ v^2 + \frac{\beta}{v} } $$ Here I've shown the figure as well as an example fit of our model: The fit is overlayed in red, and corresponds to $\alpha = 1.5 \times 10^5, \beta = 1.28 \times 10^5 $. Notice that our simple model does pretty well and corresponds to a car that has a highway mph of about 25 mpg. Notice that at high speeds we are seeing the scaling we expect due to air drag alone, for at high speed our operating costs are dominated by air resistance, but it was useful to create the simple model and do the fit in this case because the region of interest is in the overlap region. Per hour Now that we know how the efficiency of our car varies with speed, knowing the average price of gas of $\$3.752$/gallon (from wolfram alpha ) we can compute the cost to operate an average car at a given speed: An in particular, we can compute the additional cost per hour per 5mph increase in speed as a function of speed: Per 10 miles Here I've shown the operating costs as a function of the time spent driving, so as to give costs per hour, which I think is useful for longer drives and something people have a handle on from other areas of life. If we instead want to look at it as function of the distance travelled, we can look at the car efficiency as the cost to travel 10 miles as a function of speed. Or we can consider again the change created by a 5 mph increase in speed for a fixed travel distance. Where here it becomes clear that for a fixed travel distance, as long as you are going less than 40 miles per hour (which for our model was the maximum fuel efficiency speed, and varies per car but the data seems to indicate is about 40 mph across the board), you can always justify speeding by 5 mph from purely economic terms, but at something like highway speeds, it costs you an additional 15 cents or so per 10 miles to go 5 over. Traffic Lights So, up till now we have considered the effectiveness of speeding from an economic perspective in the limit that we are travelling unimpeded down the road. As people have requested in the comments, let's try to figure out how effective speeding is in a more city type environment. This is a difficult problem to address, as traffic lights can have fairly complicated controllers. In particular, in some regions there are Green waves where the lights are designed to allow people travelling at the proper speed to pass unimpeded down long stretches of road. Obviously in this case, you would want to travel at the speed of the green wave and speeding wouldn't help you and would in fact hurt you. But, sophisticated traffic light controllers are not all that common outside of rich large cities. So, let's try to adopt a spherical cow type approximation to traffic lights and assume that traffic lights are independent and just operating on some fixed cycle of green and red. $p$ will be the fraction of time the average traffic light is green, $\tau$ will be the length of a red light, $d$ will be the average distance between traffic lights. If the lights are all operating independently, the distribution of waiting times when we reach a light can be modeled as
$$ P(t) = p \delta(t) + \frac{1}{\tau} ( 1 - p ) \quad 0 \leq t \leq \tau $$ or in words, with probability $p$ we don't have to wait at all, otherwise our waiting time will be uniform up to $\tau$. This distribution has mean and variance
$$ \mu = \frac{\tau}{2} ( 1 - p ) $$
$$ \sigma^2 = \frac{\tau^2}{12} ( 1 - p) ( 3 p + 1 ) $$ Now, if we travel for $N$ blocks, we will have for the average time it takes
$$ \langle t \rangle = N \left( \frac{d}{v} + \frac{\tau}{2} ( 1 - p ) \right) $$
$$ \sigma^2_t = N \frac{\tau^2}{12} ( 1- p) ( 3 p + 1 ) $$
where we have added in the travel time between the lights themselves. So, for instance, with $d = 1/10$ mile between lights on average, $\tau = 30$ seconds, and $p = 0.65$ we get for an average city speed as function of target speed: So, for a target speed of about 45 mph for a major road in a city, we get for an average speed something like 28 mph, which seems to agree moderately well with observations . Now, as we have modeled it, if you speed you will get there faster, but what we should compare against is the intrinsic variability introduced by the traffic lights, and a case could be made that speeding 5 mph over is really only worth it if the gains you get in timing are larger than the natural variations in times you would have given the lights, otherwise you'll hardly notice the effect. So in particular, we can compare the fractional reduction in your travel time for going 5 mph over, versus the fractional change in your travel time due to the intrinsic variation due to the random light timings $(\sigma/\mu)$ for different number of blocks. We obtain: Here the solid line shows the fractional change in your travel time you'd get by going 5 mph over the target speed at the bottom. Notice that it scales as $1/v$ just as the very top of the post. The dotted lines show fractional change in travel time induced by a 1 sigma variation in the behavior of the traffic lights, for different number of blocks. Notice that at around 40 mph, the time you would shave off by going 5 mph is comparable to the natural variations you would expect in travel times due to your luck with the traffic lights if you are travelling 10 blocks, and both of these are at about the 10% level. At this point it starts to become difficult to justify speeding as its effect will be hard to notice over the natural variation. But, notice that if you are travelling a longer distance, there is a clear gain given by speeding, as the variations in travel time start to be suppressed through averaging. On the flip side, for very short trips of a few blocks, the variations in your travel time given by your luck at the lights completely dominates any gain you'd get by speeding. Here I've simulated traveling for 5, 15 or 50 blocks according to our model, both at 45 mph and going 50 mph. I ran the simulation 10,000 times and here I show the time gained from speeding in different trials. Here we can really see that there are no noticeable gains for going 5 over over 5 blocks, it's completely washed out by our luck with the lights, but a noticeable change in our expected time over 50 blocks. The code used for this answer is available as an ipython notebook Addendum: I've explore a more realistic model of power losses in cars in this more recent answer | {
"source": [
"https://physics.stackexchange.com/questions/123758",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/24699/"
]
} |
125,903 | I have seen many questions of this type but I could nowhere find the answer to "why". I know this is a phenomenon which has been seen and discovered and we know it happens and how it happens. But my question is why would wavelength affect the amount of diffraction? I am looking for a very simple logical explanation rather than a complex mathematical answer. Why will a blue ray bend lesser than a red ray through a slit of the size a little bigger than the wavelength of the blue ray? I need an answer that will answer "why" does diffraction depend on wavelength of light. Image sources: http://www.olympusmicro.com/primer/java/diffraction/index.html | Why will a blue ray bend lesser than a red ray through a slit of the size a little bigger than the wavelength of the blue ray? Don't think of bending. Think of diffraction like this: if you have a plane wave incident on a slit, then you can think about the space in the slit as being a line of infinitely many point sources that radiate in phase. If you are looking straight down the slit, then all those point sources are in phase. There's not much unusual going on here. However, if you move a bit to the side, then all those point sources aren't in phase. They are, really, but since they are not at equal distances to you, the radiation from each is delayed by a different amount. Depending on your position, the point sources interfere constructively or destructively, and this is what yields the diffraction pattern. If you look closely at this image, it appears it was generated by an approximation of four point sources in the slit. Now, the number of these point sources there are, and the maximum difference in phase between them, is a function of the size of the slit, obviously. If the slit is wider, then when viewed from some direction slightly off center, the phase difference from the left-most source and the right-most source will be greater, because the difference in distance between them is greater. Compare a small slit: To a bigger slit: The significance of the size of the slit is apparent, right? Well, changing the wavelength is equivalent to changing the size of the slit. If we make the slit bigger, and make the wavelength bigger by the same amount, then the difference in distance between the sources is greater, but the rate of change in the wave function is slower, so the phase difference between the two extremes of the slit is the same. But, if we just make the wavelength smaller, and leave the slit the same, the rate of change in the wave function is faster, which is equivalent to making the slit bigger without changing the wavelength. Images from Wikipedia | {
"source": [
"https://physics.stackexchange.com/questions/125903",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/35837/"
]
} |
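A short numerical sketch of the point-source picture in the answer above (editorial addition; the slit width, wavelengths and number of sources are illustrative choices, not values from the original post): summing the phasors from a line of sources across the slit reproduces the single-slit pattern, and the longer (red) wavelength puts its first dark fringe at a larger angle, i.e. it spreads more.

```python
import numpy as np

# Editorial sketch of the "line of point sources in the slit" picture.
a = 5e-6                                   # slit width (m), illustrative
y = np.linspace(-a / 2, a / 2, 200)        # point sources spanning the slit
theta = np.radians(np.linspace(0.01, 30, 4000))

for lam, name in [(450e-9, "blue"), (650e-9, "red")]:
    k = 2 * np.pi / lam
    # Far-field phasor sum: each source contributes exp(i * k * y * sin(theta)).
    field = np.exp(1j * k * np.outer(np.sin(theta), y)).mean(axis=1)
    intensity = np.abs(field) ** 2
    i_min = np.flatnonzero(np.diff(intensity) > 0)[0]   # first local minimum
    print(f"{name}: first dark fringe near {np.degrees(theta[i_min]):.1f} deg, "
          f"textbook arcsin(lambda/a) = {np.degrees(np.arcsin(lam / a)):.1f} deg")
```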
126,077 | Related: Why does earth have a minimum orbital period? I was learning about GPS satellite orbits and came across that Low Earth Orbits ( LEO ) have a period of about 88 minutes at an altitude of 160 km. When I took a mechanics course a couple of years ago, we were assigned a problem that assumed that if one could drill a hole through the middle of the Earth and then drop an object into it, what would your period of oscillation be? It just happens to be a number that I remembered and it was 84.5 minutes (see Hyperphysics ). So if I fine-tuned the LEO orbit to a vanishing altitude, in theory, I could get its period to be 84.5 minutes as well. Of course, I am ignoring air drag. My question is: why are these two periods (oscillating through the earth and a zero altitude LEO) the same? I am sure that there is some fundamental physical reason that I am missing here. Help. | Intuitive explanation Suppose you drill two , perpendicular holes through the center of the Earth. You drop an object through one, then drop an object through the other at precisely the time the first object passes through the center. What you have now are two objects oscillating in just one dimension, but they do so in quadrature. That is, if we were to plot the altitude of each object, one would be something like $\sin(t)$ and the other would be $\cos(t)$. Now consider the motion of a circular orbit, but think about the left-right movement and the up-down movement separately. You will see it is doing the same thing as your two objects falling through the center of the Earth, but it is doing them simultaneously. image source caveat: an important assumption here is an Earth of uniform density and perfect spherical symmetry, and a frictionless orbit right at the surface. Of course all those things are significant deviations from reality. Mathematical proof Let's consider just the vertical acceleration of two points, one inside the planet and another on the surface, at equal vertical distance ($h$) from the planet's center: $R$ is the radius of the planet $g$ is the gravitational acceleration at the surface $a_p$ and $a_q$ are just the vertical components of the acceleration on each point If we can demonstrate that these vertical accelerations are equal, then we demonstrate that the differing horizontal positions have no relevance to the vertical motion of the points. Then we can free ourselves to think of vertical and horizontal motion independently, as in the intuitive explanation. Calculating $a_q$ is simple trigonometry. It's at the surface, so the magnitude of its acceleration must be $g$. Just the vertical component is simply: $$ a_q = g (\sin \theta) $$ If you have worked through the "dropping an object through a tunnel in Earth" problem , then you already know that in the case of $p$, its acceleration linearly decreases with its distance from the center of the planet (this is why the "uniform density" assumption is important): $$ a_p = g \frac{h}{R} $$ $h$ is equal for our two points, and finding it is again simple trigonometry: $$ h = R (\sin \theta) $$ So: $$ \require{cancel}
a_p = g \frac{\cancel{R} (\sin \theta)}{\cancel{R}} \\
a_p = g (\sin \theta) = a_q $$ Q.E.D. This also gives some insight to an unfortunate consequence: this method can be applied only to orbits on or inside the surface of the planet. Outside of the planet, $p$ no longer experiences an acceleration proportional to the distance from the center of mass ($a_p \propto h$), but instead proportional to the inverse square of distance ($a_p \propto 1/h^2$), according to Newton's law of universal gravitation . | {
"source": [
"https://physics.stackexchange.com/questions/126077",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11361/"
]
} |
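A quick numerical check of the coincidence discussed above (editorial addition, using standard values for G, Earth's mass and mean radius): the zero-altitude circular orbit and the drop through a uniform-density Earth give the same period, since g = GM/R^2 makes the two expressions algebraically identical.

```python
import math

# Editorial numeric check: grazing orbit vs. tunnel oscillation period.
G = 6.674e-11          # m^3 kg^-1 s^-2, standard value
M = 5.972e24           # kg, Earth's mass
R = 6.371e6            # m, Earth's mean radius

g = G * M / R**2
T_orbit = 2 * math.pi * math.sqrt(R**3 / (G * M))   # zero-altitude circular orbit
T_tunnel = 2 * math.pi * math.sqrt(R / g)           # SHM through the tunnel

print(f"g = {g:.3f} m/s^2")
print(f"orbital period: {T_orbit / 60:.1f} min")
print(f"tunnel period : {T_tunnel / 60:.1f} min")   # identical, since g = GM/R^2
```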
126,079 | Reading about the Maxwell relations has left me confused, and I want a basic sanity check regarding the notation. The Wikipedia article breezes over the following switch of notation without really describing it: $$ \left(\frac{\partial T}{\partial V}\right)_S = \frac{\partial^2 U }{\partial S \partial V} $$ Probably, it is also true to say: $$ \left(\frac{\partial T}{\partial V}\right)_S = \frac{\partial^2 T }{\partial S \partial V} $$ There are many examples, this is just the first case-in-point. I'm familiar with the subscripts being presented as "at constant x". In the above case, that would be saying, "dT/dV at constant entropy". Firstly, is that a correct interpretation? Secondly (and if it is), how are the above two things the same? For another example, how would the following two be the same things: differentiating pressure with respect to temperature at constant volume vs. differentiating temperature with respect to volume and then pressure? While I ask this, I can already see that my mental picture is incomplete. It's nonsense to ask to differentiate temperature with respect with volume with no other specifiers because that doesn't respect the total degrees of freedom. So could you introduce another auxiliary variable, and then and then explicitly show the above equality? I'm sure this is often taken as trivial, but I find it non-trivial for myself. | Intuitive explanation Suppose you drill two , perpendicular holes through the center of the Earth. You drop an object through one, then drop an object through the other at precisely the time the first object passes through the center. What you have now are two objects oscillating in just one dimension, but they do so in quadrature. That is, if we were to plot the altitude of each object, one would be something like $\sin(t)$ and the other would be $\cos(t)$. Now consider the motion of a circular orbit, but think about the left-right movement and the up-down movement separately. You will see it is doing the same thing as your two objects falling through the center of the Earth, but it is doing them simultaneously. image source caveat: an important assumption here is an Earth of uniform density and perfect spherical symmetry, and a frictionless orbit right at the surface. Of course all those things are significant deviations from reality. Mathematical proof Let's consider just the vertical acceleration of two points, one inside the planet and another on the surface, at equal vertical distance ($h$) from the planet's center: $R$ is the radius of the planet $g$ is the gravitational acceleration at the surface $a_p$ and $a_q$ are just the vertical components of the acceleration on each point If we can demonstrate that these vertical accelerations are equal, then we demonstrate that the differing horizontal positions have no relevance to the vertical motion of the points. Then we can free ourselves to think of vertical and horizontal motion independently, as in the intuitive explanation. Calculating $a_q$ is simple trigonometry. It's at the surface, so the magnitude of its acceleration must be $g$. 
Just the vertical component is simply: $$ a_q = g (\sin \theta) $$ If you have worked through the "dropping an object through a tunnel in Earth" problem , then you already know that in the case of $p$, its acceleration linearly decreases with its distance from the center of the planet (this is why the "uniform density" assumption is important): $$ a_p = g \frac{h}{R} $$ $h$ is equal for our two points, and finding it is again simple trigonometry: $$ h = R (\sin \theta) $$ So: $$ \require{cancel}
a_p = g \frac{\cancel{R} (\sin \theta)}{\cancel{R}} \\
a_p = g (\sin \theta) = a_q $$ Q.E.D. This also gives some insight to an unfortunate consequence: this method can be applied only to orbits on or inside the surface of the planet. Outside of the planet, $p$ no longer experiences an acceleration proportional to the distance from the center of mass ($a_p \propto h$), but instead proportional to the inverse square of distance ($a_p \propto 1/h^2$), according to Newton's law of universal gravitation . | {
"source": [
"https://physics.stackexchange.com/questions/126079",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1255/"
]
} |
126,471 | Can anyone please explain this matter to me, along with day to day examples? | A pendulum is a day to day example of this. If you watch a pendulum swinging from left to right, then as it passes the mid point the velocity and acceleration are: The acceleration always points towards the mid point, so as the pendulum passes through the mid point the acceleration reverses direction but the velocity does not. | {
"source": [
"https://physics.stackexchange.com/questions/126471",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/46712/"
]
} |
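A tiny numerical illustration of the answer above (editorial addition, treating the small-angle pendulum as simple harmonic motion with arbitrary amplitude and frequency): just before and just after the mid-point crossing, the velocity keeps its sign while the acceleration flips, always pointing back towards the mid point.

```python
import numpy as np

# Editorial sketch: x(t) = A sin(w t), v = A w cos(w t), a = -A w^2 sin(w t).
A, w = 1.0, 1.0                          # arbitrary amplitude and angular frequency
for t in (np.pi - 0.1, np.pi + 0.1):     # just before / after the mid-point crossing
    x = A * np.sin(w * t)
    v = A * w * np.cos(w * t)
    a = -A * w**2 * np.sin(w * t)
    print(f"t = {t:5.3f}: x = {x:+.3f}, v = {v:+.3f}, a = {a:+.3f}")
# v stays negative through the crossing, while a changes sign (always towards x = 0).
```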
126,512 | The ghostly passage of one body through another is obviously out of the question if the continuum assumption were valid, but we know that at the micro, nano, pico levels (and beyond) this is not even remotely the case. My understanding is that the volume of the average atom actually occupied by matter is a vanishingly small fraction of the atom's volume as a whole. If this is the case, why can't matter simply pass through other matter? Are the atom's electrons so nearly omnipresent that they can simultaneously prevent collisions/intersections from all possible directions? | Things are not empty space. Our classical intuition fails at the quantum level. Matter does not pass through other matter mainly due to the Pauli exclusion principle and due to the electromagnetic repulsion of the electrons. The closer you bring two atoms, i.e. the more the areas of non-zero expectation for their electrons overlap, the stronger will the repulsion due to the Pauli principle be, since it can never happen that two electrons possess exactly the same spin and the same probability to be found in an extent of space. The idea that atoms are mostly "empty space" is, from a quantum viewpoint, nonsense. The volume of an atom is filled by the wavefunctions of its electrons, or, from a QFT viewpoint, there is a localized excitation of the electron field in that region of space, which are both very different from the "empty" vacuum state. The concept of empty space is actually quite tricky, since our intuition "Space is empty when there is no particle in it" differs from the formal "Empty space is the unexcited vacuum state of the theory" quite a lot. The space around the atom is definitely not in the vacuum state, it is filled with electron states. But if you go and look, chances are, you will find at least some "empty" space in the sense of "no particles during measurement". Yet you are not justified in saying that there is "mostly empty space" around the atom, since the electrons are not that sharply localized unless some interaction (like measurements) takes place that actually forces them to. When not interacting, their states are "smeared out" over the atom in something sometimes called the electron cloud , where the cloud or orbital represents the probability of finding a particle in any given spot. This weirdness is one of the reasons why quantum mechanics is so fundamentally different from classical mechanics – suddenly, a lot of the world becomes wholly different from what we are used to at our macroscopic level, and especially our intuitions about "empty space" and such fail us completely at microscopic levels. Since it has been asked in the comments, I should probably say a few more words about the role of the exclusion principle: First, as has been said, without the exclusion principle, the whole idea of chemistry collapses: All electrons fall to the lowest 1s orbital and stay there, there are no "outer" electrons, and the world as we know it would not work. Second, consider the situation of two equally charged classical particles: If you only invest enough energy/work, you can bring them arbitrarily close. The Pauli exclusion principle prohibits this for the atoms – you might be able to push them a little bit into each other, but at some point, when the states of the electrons become too similar, it just won't go any further. 
When you hit that point, you have degenerate matter , a state of matter which is extremely difficult to compress, and where the exclusion principle is the sole reason for its incompressibility. This is not due to Coulomb repulsion; it is that we also need to invest the energy to catapult the electrons into higher energy levels, since the number of electrons in a volume of space increases under compression, while the number of available energy levels does not. (If you read the article, you will find that the electrons at some point will indeed prefer to combine with the protons and form neutrons, which then exhibit the same kind of behaviour. Then, again, you have something almost incompressible, until the pressure is high enough to break the neutrons down into quarks (that is merely theoretical). No one knows what happens when you increase the pressure on these quarks indefinitely, but we probably cannot know that anyway, since a black hole will form sooner or later) Third, the kind of force you need to create such degenerate matter is extraordinarily high . Even metallic hydrogen , probably the simplest kind of such matter, has not been reliably produced in experiments. However, as Mark A has pointed out in the comments (and as is very briefly mentioned in the Wikipedia article, too), a very good model for the free electrons in a metal is that of a degenerate gas, so one could take metal as a room-temperature example of the importance of the Pauli principle. So, in conclusion, one might say that at the levels of our everyday experience, it would probably be enough to know about the Coulomb repulsion of the electrons (if you don't look at metals too closely). But without quantum mechanics, you would still wonder why these electrons do not simply go closer to their nuclei, i.e. reduce their orbital radius/drop to a lower energy state, and thus reduce the effective radius of the atom. Therefore, Coulomb repulsion already falls short at this scale to explain why matter seems "solid" at all – only the exclusion principle can explain why the electrons behave the way they do. | {
"source": [
"https://physics.stackexchange.com/questions/126512",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/46211/"
]
} |
126,536 | I tried to mimic the mechanism of typical screens to produce white color out of red, green and blue. What I did was display the attached image on the screen and move far away, so as to let the diffraction effects take place, so that the three colors appear as if they're coming from the same point. Nonetheless, quite paradoxically, what I saw was black instead of white. I don't know if this question fits this place, so excuse me. | What you are seeing at a distance is not black. It is a darkish shade of gray, RGB gray 85,85,85. The reason you aren't seeing "white" is that each of those three rectangles has an HSV value of only 33% and you are seeing that merged square against a white background. That merged square will appear to be whitish if you make the background black rather than white and view the screen in a very dark setting. | {
"source": [
"https://physics.stackexchange.com/questions/126536",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41996/"
]
} |
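A two-line check of the number quoted in the answer above (editorial addition): averaging equal areas of pure red, green and blue, which is roughly what the eye does once it can no longer resolve the patches, gives RGB (85, 85, 85), a dark gray rather than white.

```python
import numpy as np

# Editorial sketch: blur three adjacent patches of pure red, green and blue together.
patches = np.array([[255, 0, 0],
                    [0, 255, 0],
                    [0, 0, 255]], dtype=float)

merged = patches.mean(axis=0)                        # what the eye sees from far away
print(merged)                                        # [85. 85. 85.]
print(f"value/brightness: {merged[0] / 255:.0%}")    # ~33%, hence a dark gray
```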
126,919 | While answering the question GPS Satellite - Special Relativity it occurred to me that time would run more slowly at the equator than at the North Pole, because the surface of the Earth is moving at about 464 m/s compared to the North Pole. The difference should be given by: $$ \frac{1}{\gamma} \approx 1 - \frac{1}{2}\frac{v^2}{c^2} $$ and at $v = 464\ \mathrm{m/s}$ we get: $$ \frac{1}{\gamma} \approx 1 - 1.2 \times 10^{-12} $$ This is a tiny difference – about 4 days over the 13.7 billion year lifetime of the universe – but according to Wikipedia the accuracy of current atomic clocks is about $1$ part in $10^{14}$ , so the difference should be measurable. However, I have never heard of any measurements of the difference. Is there a flaw in my reasoning or have I simply not been reading the right journals? | The difference would indeed be measurable with state-of-the-art atomic clocks but it's not there: it cancels. The reasons actually boil down to the very first thought experiments that Einstein went through when he realized the importance of the equivalence principle for general relativity – it was in Prague around 1911-1912. See e.g. the end of http://motls.blogspot.com/2012/09/albert-einstein-1911-12-1922-23.html?m=1 to be reminded about Einstein's original derivation of the gravitational red shift involving the carousel. The arguments for John's setup may be seen e.g. in this paper: http://arxiv.org/abs/gr-qc/0501034 There is a sense in which the "geocentric" reference frame rotating along with the Earth every 24 hours is more inertial than the frame in which the Earth is spinning. Consider one liter of water somewhere – near the poles or the equator – at the sea level. Keep its speed relatively to the (rotating) Earth's surface tiny, just like what is easy to get in practice. Now, let's check the energy conservation in the Earth's rotating frame. The energy is conserved because this background – even in the "seemingly non-inertial" rotating coordinates – is asymptotically static, invariant under translations in time. The energy is conserved but the potential energy of one static (in this frame) liter of water may be calculated as
$$m c^2 \sqrt{|g_{00}|}. $$
Because the $00$-component of the metric tensor is essentially the gravitational (which is normally called "gravitational plus centrifugal" in the "naive inertial" frame where the Earth is spinning) potential and it is constant at the sea level across the globe, $g_{00}$ which encodes the gravitational slow down as a function of the place in the gravitational field must be constant in everywhere at the sea level, too. In the "normal inertial" frame where the Earth is spinning, the special relativistic time dilation is compensated by the fact that the Earth isn't spherical, and the gravitational potential is therefore less negative i.e. "less bound" at the sea level near the equator. Some calculations involving the ellipsoid shape of the Earth may yield an inaccurate cancellation. (That error may be attributed to not quite correct assumptions that the Earth's mass density is uniform etc., assumptions that are usually made to make the problem tractable.) But a more conceptual argument shows that the non-spherical shape of the Earth is a consequence of the centrifugal force. Quantitatively, this force is derived from the centrifugal potential, and this centrifugal potential must therefore be naturally added to the normal gravitational potential to calculate the full special-relativistic-plus-gravitational time dilation. That makes it clear why this particular calculation is easier to do in the frame that rotates along with the Earth's surface and the effect cancels exactly. Let me mention that the spacetime metric in the frame rotating along with the Earth isn't the flat Minkowski metric. If we allow the frame to rotate with the Earth, we just "maximally" get rid of the effects linked to the centrifugal force and the corresponding corrections to the red shift. However, in this frame spinning along with the Earth, there is still the Coriolis force. In the language of the general relativistic metric, the Coriolis acceleration adds some nontrivial off-diagonal elements to the metric tensor. These deviations from the flatness are responsible for the geodetic effect as well as frame dragging. Every argument showing the exact cancellation of the special relativistic effect must use the equivalence principle at one point or another; any argument avoiding this principle – or anything else from general relativity – is guaranteed to be incorrect because separately (without gravity and its effects), the special relativistic effect is certainly there. | {
"source": [
"https://physics.stackexchange.com/questions/126919",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1325/"
]
} |
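A small numeric sketch related to the answer above (editorial addition; c and the 464 m/s equatorial speed are the values from the question): to first order, the kinematic slowdown equals the centrifugal potential divided by c^2, which is exactly the piece absorbed by the sea-level surface being an equipotential of gravity plus centrifugal potential, hence the cancellation.

```python
# Editorial numeric sketch of the cancellation argument above.
c = 2.998e8              # m/s
v = 464.0                # equatorial surface speed, m/s (value from the question)
year = 3.156e7           # seconds in a year

kinematic = 0.5 * v**2 / c**2              # first-order special-relativistic slowdown
centrifugal_over_c2 = 0.5 * v**2 / c**2    # centrifugal potential / c^2 at the equator

print(f"kinematic term           : {kinematic:.2e}")
print(f"centrifugal potential/c^2: {centrifugal_over_c2:.2e}")
print(f"naive drift if uncancelled: {kinematic * year * 1e6:.0f} microseconds per year")
```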
127,282 | While playing racquetball, I frequently hear a very prominent "boing" sound (or more formally, a chirp ). For example, you can hear it in this video when the ball hits the front wall. Does anyone know what the origin of this sound is, and why the pitch rises? Here is the spectrogram from the above video: A careful examination shows that there are at least four linear chirps, which I've highlighted below. If you really listen carefully, all four of these are audible. (However I can only distinguish between the two high frequency chirps when the audio is played at half speed.) | After much investigation, simulation and a deep literature search, I've figured out the true answer. You perceive a chirp because you are being hit with the echos of the sharp noise that generated the sound. The times between the arrival of those echos is decreasing inversely with time, so it sounds as if it were a tone with a fundamental frequency increasing linearly in time, hence the chirp. To get a feel for the phenomenon, consider a simulation : Above you see a slowed down version of the simulated pressure wave inside a 2D racquetball court. I threw up the generated sound on soundcloud . If you watch the simulation, pick a particular point and watch the reflected sounds go by, you'll notice the different instances of the multiple echos arrive faster and faster as time goes on. You can clearly hear the chirps in the generated sound, and if you listen closely you can hear secondary chirps as well. These are also visible in the spectrogram: This phenomenon was studied and published recently by Kenji Kiyohara, Ken'ichi Furuya, and Yutaka Kaneda: "Sweeping echoes perceived in a regularly shaped reverberation room ," J. Acoust. Soc. Am. Vol.111, No.2, 925-930 (2002). more info In particular, they explain not only the main sweep, but the appearance of the secondary sweeps using some number theory. Worth reading in full. This suggests that for the best sweep one should both stand and listen in the center of the room, though they should be generic at any location. Simple geometric argument Following the paper, we can give a simple geometric argument. If you imagine standing in the middle of a standard racquetball court, which is twice as long as it is tall or wide, and clap, your clap will start propagating and reflecting off the walls. A simple way to study the arrival times is with the method of images, so you imagine other claps generated by reflecting your clap across the walls, and then reflections of those claps and so on. This will generate a whole set of "image" claps, located at positions
$$ ( m, l, 2k) L $$
where $m,l,k$ are integers and $L$ is 20 feet for a racquetball court, the time for any particular clap to reach you is $t = d/c$ and so we have
$$ t = \sqrt{m^2 + l^2 + 4k^2} \frac{L}{c} $$
for our arrival times. If we look at how these distribute in time: It becomes clear why we perceive a chirp. The various sets of missing bars, which themselves are spaced like a chirp, give rise to our perceived subchirps. Details of the 2D Simulation For the simulation, I numerically solved the wave equation:
$$ \frac{\partial^2 p}{dt^2} = c^2 \nabla^2 p $$
and used impedance boundary conditions on the walls
$$ \nabla p \cdot \hat n = -c \eta \frac{\partial p}{\partial t} $$
I used a collocation method spatially, with a Chebyshev basis of order 64 in the short axis and 128 on the long axis, and used RK4 for the time integration. I modeled the room as 20 feet by 40 feet and started it off with a gaussian pressure pulse in one corner of the room. I listened near the back wall towards the top corner. I put up an ipython notebook of my code , with the embedded audio and video. I recommend playing with it yourself. On my desktop it takes about a minute to do a full simulation of the sound. Effect of listening location I've updated the code to generate the sound at multiple listening locations. I can't seem to embed audio on stackexchange, but if you click through to the IPython notebook view, you can listen to all of the generated sounds. But what I can do here is show the spectrograms: These are laid out in roughly their locations inside of the room. Here the noise was generated in the lower left, but the chirps should be generic for any listening and generation location. | {
"source": [
"https://physics.stackexchange.com/questions/127282",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9083/"
]
} |
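A short sketch of the image-source arrival times used in the answer above (editorial addition; the arrival-time formula and the listener-at-the-centre geometry follow the answer, while the lattice truncation and the standard speed of sound are illustrative choices): the gaps between successive echoes shrink with time, which is what the ear hears as a rising chirp.

```python
import numpy as np

# Editorial sketch: arrival times t = sqrt(m^2 + l^2 + 4 k^2) * L / c in a
# 20 ft x 20 ft x 40 ft court, source and listener at the centre.
L = 20 * 0.3048          # 20 feet in metres
c = 343.0                # speed of sound, m/s
N = 16                   # how far out to take image sources (truncation choice)

idx = np.arange(-N, N + 1)
m, l, k = np.meshgrid(idx, idx, idx, indexing="ij")
t = np.sqrt(m**2 + l**2 + 4 * k**2) * L / c
t = np.unique(np.round(t, 6))            # distinct echo arrival times
t = t[(t > 0) & (t < 0.25)]              # keep the first quarter second

gaps = np.diff(t)
print("first few arrival times (ms):", np.round(t[:8] * 1e3, 1))
print("mean gap between echoes (ms), early vs late:",
      round(gaps[:5].mean() * 1e3, 2), "->", round(gaps[-5:].mean() * 1e3, 2))
# The echo spacing shrinks with time, which is perceived as a rising "boing".
```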
127,326 | So this is possibly a misunderstanding of the meaning of work , but all the Physics texts, sites, and wiki that I've read don't clear this up for me: In the simplest case with the simplest statement, work is force times distance. If you push with a force $F_{1}$ on an object that doesn't budge because of friction, you do no work. If your friend helps push and you still apply the same force $F_{1}$ and the thing moves, all the sudden you're doing work and it's not really because of what you're doing. Moreover, if you continue applying the same force, and your friend increases her force so that the thing moves faster, and covers a greater distance, again you're doing more work and by no fault of your own. This just seems paradoxical, and maybe the only sensible answer to this paradox is "Well, the physical notion of work is not the same as the everyday notion of work," but I'm wondering if anyone can say anything about this to make it feel more sensible than just accepting a technical definition for a word that doesn't seem like the right word to use. | If you're pushing a 10-ton truck and it's not moving, you are not doing any work on the truck because the distance $ds=0$ and the nonzero force $F$ isn't enough for the product $F\cdot ds$ to be nonzero. Your muscles may get tired so you feel that you're "doing something" and "spending energy" but it's not the work done on truck. You're just burning the energy from your breakfast by hopelessly stretching your muscles. The energy gets converted to heat and your body is really losing it, but when we talk about "work", we usually mean "mechanical work" done on an external object, and it is zero. If someone loosens the brakes and you suddenly manage to move the truck, your perception how "hard" it is may be the same as before. You may be spending the same amount of energy obtained from the breakfast. But there is a difference. A part of this energy is converted not to useless heat of your muscles but to the kinetic energy of the truck. Your impression that the work changes "not because of what you're doing" is an artifact of the fact that a big part of the energy is spent on heat in the muscles in one way or another. But it's really the usefully spent part of the energy, however small, that does the mechanical work. It may be a small part so it may be hard to notice it. Physical terms often deviate – and they are more accurate than – their counterparts in everyday English (or another language). But I would argue that the physics definition of (mechanical) work does agree with the everyday life usage. If you're hired to do some work with the truck and move it and the truck doesn't move an inch, your boss will conclude that you haven't done your work and you won't be paid a penny, just like what physics seems to calculate. You may have spent your energy by stretching and heating muscles but that's not called (mechanical) work. Work is actually supposed to be something useful – both in everyday life and in physics. In both cases, the conversion of energy into useless heat isn't included to "work". Just to re-emphasize this insight. There are many forms of energy and work and many "quantities with the units of one joule". But the words denoting them are not synonymous. So energy isn't quite the same thing as work and it isn't the same thing as heat or mechanical work or something else (also, debt and profit aren't the same despite the same unit of one U.S. dollar). 
The energy conservation law says that the sums of several quantities of this kind are zero or equal etc., but the different terms have to be distinguished, and in these contexts, "work" really means "mechanical work". | {
"source": [
"https://physics.stackexchange.com/questions/127326",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/46758/"
]
} |
127,329 | In classical physics, we have second-order equations like Newton's laws, so we need to specify both position (zeroth order) and velocity (first order) of a particle as initial conditions, in order to pick out a unique solution. In non-relativistic quantum mechanics, we have Schrödinger's equation, which is first-order. As initial data we can therefore choose only the wavefunction's value at each point in space, but not its time derivative. This seems to dovetail with the uncertainty principle, which says a quantum particle does not have independent position and momentum degrees of freedom. We can choose the wavefunction to specify either a position or a momentum, but not both. (Or we can also choose a wavefunction that has uncertain position and momentum, within the bounds of the uncertainty principle.) In quantum field theory of fermions, we have the Dirac equation which is again first-order like Schrödinger's. But not every quantum particle is a fermion, and, AFAIK relativistic non-fermion particles obey the Klein-Gordon equation, which is second-order! So with the Klein-Gordon equation we can apparently choose both the wavefunction and its time derivative at each point in space, giving more freedom than the Schrödinger equation. Why do we have this extra freedom, and how can it be reconciled with the uncertainty principle as applied to relativistic bosons? | If you're pushing a 10-ton truck and it's not moving, you are not doing any work on the truck because the distance $ds=0$ and the nonzero force $F$ isn't enough for the product $F\cdot ds$ to be nonzero. Your muscles may get tired so you feel that you're "doing something" and "spending energy" but it's not the work done on truck. You're just burning the energy from your breakfast by hopelessly stretching your muscles. The energy gets converted to heat and your body is really losing it, but when we talk about "work", we usually mean "mechanical work" done on an external object, and it is zero. If someone loosens the brakes and you suddenly manage to move the truck, your perception how "hard" it is may be the same as before. You may be spending the same amount of energy obtained from the breakfast. But there is a difference. A part of this energy is converted not to useless heat of your muscles but to the kinetic energy of the truck. Your impression that the work changes "not because of what you're doing" is an artifact of the fact that a big part of the energy is spent on heat in the muscles in one way or another. But it's really the usefully spent part of the energy, however small, that does the mechanical work. It may be a small part so it may be hard to notice it. Physical terms often deviate – and they are more accurate than – their counterparts in everyday English (or another language). But I would argue that the physics definition of (mechanical) work does agree with the everyday life usage. If you're hired to do some work with the truck and move it and the truck doesn't move an inch, your boss will conclude that you haven't done your work and you won't be paid a penny, just like what physics seems to calculate. You may have spent your energy by stretching and heating muscles but that's not called (mechanical) work. Work is actually supposed to be something useful – both in everyday life and in physics. In both cases, the conversion of energy into useless heat isn't included to "work". Just to re-emphasize this insight. There are many forms of energy and work and many "quantities with the units of one joule". 
But the words denoting them are not synonymous. So energy isn't quite the same thing as work and it isn't the same thing as heat or mechanical work or something else (also, debt and profit aren't the same despite the same unit of one U.S. dollar). The energy conservation law says that the sum of several quantities of this kind are zero or equal etc. but the different terms have to be distinguished and in these contexts, "work" really means "mechanical work". | {
"source": [
"https://physics.stackexchange.com/questions/127329",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/19917/"
]
} |
127,382 | A mirror is under normal circumstances used to reflect electromagnetic radiation, also known as photons (light), and in airport security or medical facilities, they use X-rays to detect anomalies inside objects or bodies to detect narcotics or injuries. However, I always wonder: what if I add a mirror inside the luggage or put a mirror in front of me during scanning? With that in mind, how would an X-ray scanner see the mirror? Would it be invisible? I am sure I am not the first one to think of this, as a lot of security staff and criminals have thought of this, however I never got an answer, so can someone tell me please? Are there X-ray reflecting mirrors? Why doesn't airport security ban these items and mirrors altogether? Would an X-ray mirror look like a normal mirror? Do they reflect the visible light spectrum as well? | The thing that makes a mirror a mirror is that it has a high reflectivity (and is very smooth of course, but that doesn't enter into this issue), but all optical properties including reflectivity are functions of wavelength. The mirror is not reflective in the x-ray band, so it looks like a layer of glass (moderately dense) and a very thin layer of heavy metal (rather denser). It can be seen in the image but is in no way remarkable. It will look just like any other thin layer of moderately dense material. | {
"source": [
"https://physics.stackexchange.com/questions/127382",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
127,531 | Apparently (first paragraph of this article ) the lack of inversion symmetry is some crystals allows all sort of nonlinear optic phenomena. Now. Does anyone know of an intuitive or just physical explanation as to why this is the case? What does inversion symmetry mean and what is so special about it? | The presence or absence of inversion symmetry in a medium has a direct impact on the types of nonlinear interactions that it can support; specifically, media which do have inversion symmetry cannot support nonlinear effects of even order. The reason for this is that adding an even harmonic to the fundamental will yield an asymmetric dependence of the electric field, and this is only possible if the medium itself is asymmetric. Unfortunately, the double-negative formulation of the symmetry result makes you do a bunch of mental gymnastics, so it is normally phrased in its contrapositive form, the lack of inversion symmetry permits a medium to support nonlinear effects of even order, so long as there are no other external factors that also impede those effects. In terms of the result that you get from symmetry considerations this is the more convoluted statement, but it is more useful practically so that's the way people usually phrase it. "Inversion symmetry" is the property that the material remain the same when you change the position $\mathbf r_j$ of each particle $j$ to its 'inverse', $-\mathbf r_j$. Because we can typically move materials around, this is equivalent to saying that the medium is identical to its mirror image. This is true, for example, for a gas (if the $\mathbf r_j$ are random, then the $-\mathbf r_j$ will also be random), or for crystals like body-centred cubic lattices: On the other hand, certain lattices do not have this symmetry, like you get when you displace the atom in the centre towards one of the faces of the cubic unit cell : Here the symmetry is broken, and if you invert all the coordinates with respect to (say) the middle atom, you no longer recover the original lattice. It is fairly easy to see why this asymmetry is necessary for second-harmonic generation. Suppose that the second harmonic has a maximum in the $+z$ direction at the same time as the fundamental, so that they add constructively. If you wait for half a period of the fundamental, its electric field will be in the $-z$ direction, but the second harmonic will have undergone a full period and will be pointing towards $+z$, so that the two interfere destructively. This means that the maximum of the total field is stronger in the $+z$ direction than it is in the $-z$ direction. This is actually quite remarkable! In particular, the medium itself needs to be asymmetric to "know" which direction the stronger fields need to go towards. If the medium has inversion symmetry, then the $+z$ and $-z$ directions are equivalent, and an asymmetric output like this is impossible. Consider, on the other hand, a process with odd order like third harmonic generation. Here a half-integral period of the fundamental is also a half-integral period of the harmonic, which means that they add in the same direction in each half-cycle, and the output is symmetric. In fact, this selection rule - the forbiddenness of even harmonics in inversion-symmetric media - goes all the way up the harmonic scale, including phenomena where the field is strong enough to break out of the perturbative treatment in David's answer . 
The nicest example is high-harmonic generation , which you get in gas jets when the driving laser is intense enough that the laser's electric field roughly equals the internal electric fields of the atom. In that case, you get a reasonable response at very high harmonic orders (the record is around 5000 ( doi )), and you get a very flat plateau in which the response doesn't really drop with the harmonic order: Note, in particular, that all the even harmonics are missing. In here I am plotting the response of a single, symmetrical atom, which means that even orders cannot appear. For inversion-symmetric media, then, this relation holds all the way up the even-integers scale. | {
"source": [
"https://physics.stackexchange.com/questions/127531",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/37677/"
]
} |
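A compact numerical illustration of the selection rule described above (editorial addition; the toy susceptibilities are arbitrary numbers, not material data): an odd, inversion-symmetric response to a sinusoidal drive contains only odd harmonics, and adding a quadratic term, which requires broken inversion symmetry, switches on the even ones.

```python
import numpy as np

# Editorial sketch of the even-harmonic selection rule.
t = np.linspace(0, 1, 4096, endpoint=False)
E = np.sin(2 * np.pi * 8 * t)                  # fundamental at "8 Hz" (bin 8)

def harmonic_amplitudes(P, n_max=6):
    spec = np.abs(np.fft.rfft(P)) / len(P)
    return [round(float(spec[8 * n]), 4) for n in range(1, n_max + 1)]

P_symmetric = E + 0.3 * E**3                   # odd response: inversion-symmetric medium
P_broken    = E + 0.3 * E**2 + 0.3 * E**3      # even term added: symmetry broken

print("harmonics 1..6, symmetric medium:", harmonic_amplitudes(P_symmetric))
print("harmonics 1..6, broken symmetry :", harmonic_amplitudes(P_broken))
# First case: only harmonics 1 and 3 are non-zero; second case: harmonic 2 appears too.
```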
127,590 | There is an idea that the geometry of physical space is not observable(i.e. it can't be fixed by mere observation). It was introduced by H. Poincare. In brief it says that we can formulate our physical theories with the assumption of a flat or curved space by changing some assumptions and these two formulations are empirically indistinguishable. Here is a topic related to it: Can general relativity be completely described as a field in a flat space? . This idea is also accepted by many contemporary physicists including Kip Thorne, see: Thorne, K. S. 1996. Black Holes and Time Warps, New York: W.W. Norton, pp.400-2, But here is my question: Is the topology of the universe observable? By observable, I mean can we find it by observations, or is the topology also subject to indeterminacy by experiment (similar to geometry)? What do physicists claim about the topology of the universe? This question is of importance because when we move from a formulation in a curved and closed geometry to a flat and infinite geometry and we claim both formulations are equivalent, then it seems that, at least ,global topological properties like boundedness are not observable. In the post mentioned it has been stated that only differential topology is observable, in agreement to what I said. I want to know what physicists say about observability of topology, and why?
(I am not asking what is the topology of space, I know that question was asked before. I am asking about a deeper issue: whether it can be observed in principle or not?) | I'm not going to provide a full answer here, because I don't know the answer , but I want to give some statements that illustrate quite nicely the kind of problems one would face when determining topology of anything : We know spacetime is a manifold. That means, locally, it looks just like $\mathbb{R}^4$. That's already a bummer. We can't do jack at one place to find out anything about topology. But, as soon as we move, we get into all the complications of reference frames and whatnot. So, experimentally, whether or not we can principally detect topology, it's going to be one hell of a challenge. But it gets worse. You know how we always suppose that fields fall off at infinity? That's one of the natural reasons principal bundles arise in gauge theories. If we want to make precise the notion of a field $A$ falling off at infinity, we say it has to be a smooth function and have a well-defined value $A(\infty)$. And what's $\mathbb{R}^n$ together with $\infty$? The one-point compactification, also known as the sphere $S^n$. But it is not quite feasible to find global solutions to the equations of motion of a gauge theory on $S^n$, thanks to the hairy ball theorem and others. So we say: Alright, let's solve the e.o.m. locally on some open sets $U_\alpha,U_\beta$ homeomorphic to the disk (think of the hemispheres overlapping a bit at the equator), and patch the solutions $A_\alpha,A_\beta$ together on the overlap by a gauge transformation on $U_\alpha \cap U_\beta$. Now, we've got our field living naturally on the sphere $S^n$ if we want a global solution. Does this mean that we actually live on an $S^n$, or just that we are inept to find a coherent description of the physics on $\mathbb{R}^n$? What would that even mean ? I can hear the people saying "We can always examine what the curvature is - $S^n$ has non-vanishing one, $\mathbb{R}^n$ has vanishing one.". That's alright, but the above gauge argument forces us the either accept that there is no globally well-defined gauge potential $A$ on $\mathbb{R}^n$ or to think of some $S^n$ on which a patched-together solution lives. What's more real? What would it even mean to say one of these views is more meaningful than the other? So, you might be inclined to say: "Screw these weird gauge potentials, we're living on a spacetime, and not some bundle!" But there are topological effects of these bundles such as instantons or the Aharonov-Bohm effect . Spacetime alone is not enough. And what would be a meaningful distinction between "These bundles are not where we live, they're 'above' spacetime" and "We live on the bundles, and most often only experience the projection on spacetime"? What I am trying to say is that it is not even clear what we should regard as the universe we live in. The ordinary, 4D spacetime is not enough to account for all the strange things that might happen. And as I said in the beginning, don't take this as an answer. I am biased from being immersed in gauge theories, and only having superficial knowledge of the intricacies of GR. But from what I see, all "non-trivial topology" can also be seen as arising from patching together local solutions to the physical laws that otherwise don't match well. | {
"source": [
"https://physics.stackexchange.com/questions/127590",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
127,695 | Apparently, the air inside a soap bubble is under higher pressure than the surrounding air. This is for instance apparent in the sound bubbles make when they burst. Why is the pressure inside the bubble higher in the first place? | I drew an image to illustrate the forces at play. For any curved surface of the bubble, the tension pulls parallel to the surface. These forces mostly cancel out, but create a net force inward. This compresses the gas inside the bubble, until the pressure inside is large enough to counteract both the outside pressure and this additional inward force from the surface tension. | {
"source": [
"https://physics.stackexchange.com/questions/127695",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45640/"
]
} |
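A worked number to go with the answer above (editorial addition; the answer itself doesn't quote the formula): balancing the inward pull of surface tension against the pressure excess gives the Young–Laplace result Δp = 4γ/R for a soap bubble, the factor 4 arising because the film has two surfaces. The surface tension and radius below are typical illustrative values.

```python
# Editorial worked example: pressure excess inside a soap bubble.
gamma = 0.025      # surface tension of soapy water, N/m (typical order of magnitude)
R = 0.02           # bubble radius, m (assumed)

delta_p = 4 * gamma / R        # two surfaces -> factor 4 instead of 2
print(f"pressure excess inside the bubble: {delta_p:.1f} Pa")   # ~5 Pa
```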
127,752 | Just as circumference of circle will remain $\pi$ for unit diameter, no matter what standard unit we take, are the speeds of light and sound irrational or rational in nature ? I'm talking about theoretical speeds and not empirical, which of course are rational numbers. | Something I posted on reddit answers this question quite well, I think: "Rational" and "irrational" are properties of numbers . Quantities with units aren't numbers, so they're neither rational nor irrational. A quantity with units is the product of a number and something else (the unit) that isn't a number. By choosing the unit you use to express a quantity, you can arrange for the numeric part of the quantity to be pretty much any number you want (though switching units won't let you change its sign or direction). In particular, it can be rational or irrational. And choices of units are a human convention, so it wouldn't make any sense to extend the idea of rationality or irrationality to the quantity itself. You can use a natural unit system, where certain physical quantities are represented by pure numbers. For example, if you use the same units to measure time and space, $c = 1$. In such a unit system, it does make sense to say the speed of light is rational, but that's kind of a special case. That reasoning doesn't really work with other physical quantities. And you really do have to be using natural units. (Technically, you could make a natural unit system where $c = \pi$, but it would have very complicated and perhaps even inconsistent behavior under Lorentz transforms, so nobody does that.) By the way, empirical measurements always have some uncertainty associated with them, so they're not really numbers either and are also neither rational nor irrational. A measurement is probably better thought of as a range (or better yet, a probability distribution) which will necessarily include both rational and irrational numbers. | {
"source": [
"https://physics.stackexchange.com/questions/127752",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/28258/"
]
} |
127,791 | This is a very well known problem, but I can't find an answer for the specific case I'm looking at. Let's consider two balls: Ball 1 weighs 10 kg, Ball 2 weighs 1 kg. The balls have identical volumes (so Ball 1 is much more dense) and identical shapes (perfect spheres). Let's drop them from a fairly large height, on earth, WITH air. (That's the important thing, because all the proofs that I browse take place in a vacuum). I am arguing with a colleague. He thinks that ball 1 will fall faster in air, and that the two balls will fall at the same speed in a vacuum. I think that the identical shapes and volumes make air friction identical too and that the vacuum has no importance here. Could someone tell me who's right and provide a small proof? | I am sorry to say, but your colleague is right. Of course, air friction acts in the same way. However, the friction is, to a good approximation, proportional to the square of the velocity, $F=kv^2$. At terminal velocity, this force balances gravity, $$ m g = k v^2 $$ And thus $$ v=\sqrt{\frac{mg}{k}}$$ So, the terminal velocity of a ball 10 times as heavy will be approximately three times higher. In vacuum $k=0$ and there is no terminal velocity (and no friction), thus $ma=mg$ instead of $ma=mg-F$. | {
"source": [
"https://physics.stackexchange.com/questions/127791",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/55945/"
]
} |
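A small simulation sketch of the answer above (editorial addition; the drag constant k, drop height and the simple Euler integration are illustrative assumptions): with the same k for both balls, the 10 kg ball has a terminal speed about sqrt(10) ≈ 3.2 times higher and reaches the ground noticeably earlier.

```python
import math

# Editorial sketch: two identical-shape balls falling with quadratic drag F = k v^2.
g = 9.81
k = 0.05                      # drag constant, kg/m (same for both balls, assumed value)
dt, h0 = 1e-3, 100.0          # time step (s) and drop height (m)

for m in (10.0, 1.0):
    v, z, t = 0.0, h0, 0.0
    while z > 0:
        a = g - k * v**2 / m                  # downward taken as positive
        v += a * dt
        z -= v * dt
        t += dt
    print(f"m = {m:4.1f} kg: terminal speed {math.sqrt(m * g / k):5.1f} m/s, "
          f"hits the ground after {t:5.2f} s")
```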
128,374 | When discussing physics with laypersons, I'm often in the situation where I have to explain what the Standard Model is, and why it's a successful theory of particle physics. To help in such situations, I'm looking for explicit examples of where the Standard Model has been successful - i.e. the prediction of certain particles, incredible agreement between experimental/theoretical results and so on. Conversely, what, if any, are the failures of the Standard Model (apart from the the obvious not-including-gravity one). Are there theoretical predictions which differ significantly from experimental results? | The lists will end up being huge, therefore I will only mention a few of each. This is my attempt of an answer: Successes of the Standard Model: Perhaps the biggest success of the Standard Model is the prediction of the Higgs Boson. The particle has been experimentally verified in 2012 (if my memory serves me well) after it has been theorised for over 50 years. Other successes of the Standard model include the prediction of the W and Z bosons, the gluon, and the top and charm quark, before they have even been observed. Another prediction also includes the anomalous magnetic dipole moment of the electron, which is given by $a = 0.001 159 652 180 73(28)$ which results in our most precise value of the fine structure constant : $α^{−1} = 137.035 999 070 (98)$, which is a precision of better than one part in a billion ! Wikipedia has a table with the prediction of the masses of the W and Z boson compared with experimental data. It is evident that those are extremely accurate predictions: $$
\begin{align*}
&\textrm{Quantity}&&\textrm{Measured (GeV)}&&\textrm{SM prediction (GeV)}\\
\hline
&\textrm{Mass of W boson}&&80.387\phantom0\pm0.019\phantom0&&80.390\phantom0\pm0.018\phantom0\\
&\textrm{Mass of Z boson}&&91.1876\pm0.0021&&91.1874\pm0.0021
\end{align*}
$$ Failures of the Standard Model: The biggest one in my opinion is the complete absence of gravity in the SM. As you mentioned in your question though, you are interested in other failures, perhaps less known. These include: The SM predicts neutrinos to be massless. We have observed neutrino oscillations, which imply that neutrinos are massive (by massive I mean they have mass; their actual mass is tiny!). The Hierarchy Problem. In a nutshell, the SM cannot explain the large differences in the coupling constants of forces at low energy scales. The contribution of Dark Energy arising from the SM is many, many orders of magnitude higher than observed. CP violation in Cosmology | {
"source": [
"https://physics.stackexchange.com/questions/128374",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41448/"
]
} |
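A worked example to go with the g-2 figure quoted in the answer above (editorial addition; the measured value and α⁻¹ are the numbers quoted in the answer, and only the leading-order QED term is computed here): Schwinger's first-order result a = α/2π already lands within about 0.2% of the measured anomalous magnetic moment; the full SM prediction quoted in the answer needs the higher-order loop corrections.

```python
import math

# Editorial worked example: leading-order QED vs. the measured electron g-2.
alpha_inv = 137.035999070          # value quoted in the answer
a_measured = 0.00115965218073      # value quoted in the answer

a_leading = 1.0 / (alpha_inv * 2 * math.pi)    # Schwinger term alpha / (2 pi)
print(f"leading-order a = alpha/(2 pi) = {a_leading:.8f}")
print(f"measured a                     = {a_measured:.8f}")
print(f"relative difference            = {abs(a_leading - a_measured) / a_measured:.2%}")
```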