128,408
In this blog post, I found this picture: There are other similar photos. Does the water really form a golden-ratio spiral in such cases? Or is the photo just a provocative example, without any physical grounds for claims about the "goldenness" of the spiral?
Firstly, a Fibonacci spiral and a golden spiral are not quite the same thing, although they are pretty close. In this image from Wikipedia, the green curve is a Fibonacci spiral and the red curve a golden spiral, with overlapping areas in yellow: They are close enough that for the purposes of your question we can consider them to be the same. In any case, the image provided is not close to either one. Consider some basic properties of the golden spiral, and see if they hold in the image:

1. each section is a square
2. the sizes of successive squares are in the golden ratio
3. the spiral is tangent to the square at each corner

What we can say, however, is that this is a very attractive picture of some kind of spiral. It's also notable that just about any spiral-like curve can be made to fit within some kind of recursive subdivision of rectangles, as long as one is not too careful about the rules and does not subdivide too many times. For example, take this completely arbitrary spiral I just drew with my fist: Notice how it works for about three rectangles, and then gets confused.
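To make the "pretty close" claim concrete, here is a minimal numerical sketch of my own (not from the original answer): the ratios of successive Fibonacci numbers converge to the golden ratio $\varphi$, and a golden spiral $r(\theta) = \varphi^{2\theta/\pi}$ grows by exactly a factor of $\varphi$ every quarter turn, which is why the two curves track each other so closely.

```python
import math

phi = (1 + math.sqrt(5)) / 2  # the golden ratio, ~1.618

# Ratios of successive Fibonacci numbers converge to phi
fib = [1, 1]
for _ in range(30):
    fib.append(fib[-1] + fib[-2])
ratio = fib[-1] / fib[-2]

# A golden spiral r(theta) = phi**(2*theta/pi) grows by phi per quarter turn
def r(theta):
    return phi ** (2 * theta / math.pi)

growth_per_quarter_turn = r(math.pi / 2) / r(0)

print(ratio, growth_per_quarter_turn)
```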
{ "source": [ "https://physics.stackexchange.com/questions/128408", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/45688/" ] }
128,409
Several years ago, I was lying on my bed with a CD-shaped transparent plastic disk (the kind that covers a 100-CD stack), basically a transparent CD. I don't know why, but I took my phone and photographed the light bulb in my room through the hole of that plastic disk. Here is the result: Why does it appear like that? Does it have anything to do with thin-film interference? And would it look the same if there were no hole in the middle?
{ "source": [ "https://physics.stackexchange.com/questions/128409", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/44617/" ] }
128,468
Consider the experiment in this link. The experiment uses a ruler as a lever, with an inflated balloon on one side and an uninflated balloon on the other. The aim of the experiment is to show that air has mass. I have seen many kids perform similar experiments. But if the air pressure inside the balloon is equal to that outside, then the buoyant force will cancel out the weight of the air inside the balloon, won't it?
I can think of at least four things going on in this experiment that need pointing out:

1. When you inflate a balloon by mouth, the air is warm: this makes the air inside the inflated balloon slightly lighter than the air it displaced.
2. The air inside the balloon has 100% relative humidity at 37°C, and condensation will quickly form on the inside of the balloon as the air inside cools down.
3. The air inside the balloon contains carbon dioxide, which has a higher density than room air (molecular mass of 12+16+16 = 44 amu, vs. oxygen at 32 amu and nitrogen at 28 amu - ignoring small isotopic effects, and ignoring argon).
4. The pressure inside the balloon is larger than outside - this increases the density.

So how large is each of these effects?

Warm air: 37°C vs 20°C results in a drop in density by a factor of 0.945 (293/310), or -5.5%.

Moisture: the partial pressure of water at 37°C is 47.1 mm Hg (source), which is about 0.061 atmospheres. Assuming that total pressure is constant, this water (mass 18 amu) displaces air (mean mass 29 amu), so the density of the air decreases by 0.061 × (29 − 18)/29 = 2.3%. If we allow the air outside the balloon to have 60% relative humidity (with a saturated vapor pressure of 10.5 mm Hg), it is slightly less dense than dry air (10.5 × 0.6/760 × (29 − 18)/29 = 0.3%), making the net difference -2.0%. Note that much of this moisture will condense when the balloon cools down - little droplets will form on the inside of the balloon. With the air inside still saturated, its density will be 0.1% lower than on the outside; the net result (counting the condensed droplets, which remain inside the balloon) amounts to +2.9% of the mass of the air in the balloon.

Carbon dioxide: exhaled air contains 4-5% carbon dioxide (source: Wikipedia), with an equivalent drop in oxygen. The density of exhaled air is therefore higher than that of inhaled air by 0.045 × (44 − 32)/29 = +1.9%.

Pressure in the balloon: from this YouTube video (time point 3:43) I estimate the pressure increase in the balloon at 23 mm Hg, resulting in an increase in density of 2.9%.

Summarizing in a table:

factor        effect   at room T
temperature   -5.5%      0.0%
moisture      -2.0%     +2.9%
CO2           +1.9%     +1.9%
pressure      +2.9%     +2.9%
net           -2.7%     +7.7%

A freshly inflated balloon will thus have only a slightly lower density than the air it displaced, because the temperature + moisture effect is greater than the other two. After you wait a little while, the temperature will equalize and the density of the air inside the balloon will be greater - by 7.7%, with more than half of that not caused by the pressure in the balloon.

In summary: the experiment described in your link measures the difference in density between air in a balloon and ambient air. Since the density of the air inside the balloon is higher than the density outside the balloon, one may conclude that the air inside the balloon has finite density. One may NOT conclude that the medium outside the balloon (which we believe to be "dry air") has any density at all - since nothing in this measurement tells us about the air outside the balloon. If you did the experiment carefully with a balloon initially filled with warm air, and you allowed the air to cool down, you might be able to tell that the balance shifts - in other words, that there must be a change in the buoyancy experienced by the balloon as it cools down. THAT would be an experiment to demonstrate "air has mass" (the volume of the balloon decreases, and it experiences less buoyancy). From the experiment as described (popping the balloon), we learn that "exhaled air has mass". That is not the same thing.
If you used an air pump (balloon pump) to inflate the balloon, the first three components would go away and you are left with the difference due to the pressure only - 2.9% of the mass of the air in the balloon.
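As a sanity check on the numbers above, here is a small sketch of my own (not part of the original answer) that recomputes the temperature, CO2, and pressure contributions from first principles and sums the table; the two moisture figures are taken directly from the answer's estimates rather than rederived.

```python
# Recompute the density effects quoted above (ideal-gas approximations).

# Temperature: at fixed pressure, density scales as T_room / T_breath
temperature_effect = 293 / 310 - 1            # about -5.5%

# CO2: 4.5% of the air swaps O2 (32 amu) for CO2 (44 amu); mean air mass 29 amu
co2_effect = 0.045 * (44 - 32) / 29           # about +1.9%

# Pressure: ~23 mm Hg overpressure relative to the ~783 mm Hg inside
pressure_effect = 23 / (760 + 23)             # about +2.9%

# Moisture effects, taken directly from the answer's estimates
moisture_fresh, moisture_room = -0.020, 0.029

net_fresh = temperature_effect + moisture_fresh + co2_effect + pressure_effect
net_room = 0.0 + moisture_room + co2_effect + pressure_effect

print(f"net (fresh): {net_fresh:+.1%}, net (room T): {net_room:+.1%}")
```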
{ "source": [ "https://physics.stackexchange.com/questions/128468", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/51843/" ] }
128,512
In EM radiation, the magnetic field is $3\times10^8$ times smaller than the electric field, but is it valid to say it's "weaker"? These fields have different units, so I don't think you can compare them, but even so it seems like we only interact with the electric field of EM radiation, not the magnetic field. Why is this?
As you already indicated, physical units need to be considered. When working in SI units, the ratio of electric field strength to magnetic field strength in EM radiation equals 299 792 458 m/s, the speed of light $c$. However, the numerical value of $c$ depends on the units used. When working in units in which the speed of light $c=1$, one would conclude that both fields are equal in magnitude. A better way to look at this is to consider the energy carried by an electromagnetic wave. It turns out that the energy associated with the electric field is equal to the energy associated with the magnetic field. So in terms of energy, the electric and magnetic fields are equals.
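A quick numerical check of that last statement (my own sketch, not from the answer): for a plane wave with $B = E/c$, the electric energy density $\varepsilon_0 E^2/2$ equals the magnetic energy density $B^2/(2\mu_0)$, because $c^2 = 1/(\varepsilon_0\mu_0)$.

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
mu0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m (pre-2019 exact value)
c = 1 / math.sqrt(eps0 * mu0)

E = 100.0       # electric field amplitude, V/m (arbitrary choice)
B = E / c       # magnetic field amplitude of the same plane wave, tesla

u_E = 0.5 * eps0 * E**2   # electric energy density, J/m^3
u_B = B**2 / (2 * mu0)    # magnetic energy density, J/m^3

print(u_E, u_B)
```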
{ "source": [ "https://physics.stackexchange.com/questions/128512", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/51971/" ] }
128,517
This is motivated by this question on the Puzzling.SE beta about measuring 90 minutes of time using two candles that each burn for one hour. (Feel free to read up on that before I spoil the puzzle!) The solution given was to light one of the candles at both ends by holding it horizontally, but as one user mentioned, the candle's burning rate would be significantly affected by its orientation. This reminded me that I have heard this puzzle before, but with sticks of incense instead of candles. Does that actually improve the rigour of the puzzle, though? How would the burn rate of incense be affected by its orientation?
{ "source": [ "https://physics.stackexchange.com/questions/128517", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/27481/" ] }
128,627
Since $\renewcommand{\unit}[1]{\,\mathrm{#1}} 1\unit{dm} = 10^{-1}\unit{m}$, it follows that $1\unit{dm^3} = 10^{-1} \times 10^{-1} \times 10^{-1} \unit{m^3} = 10^{-3} \unit{m^3}$. However, in regular mathematics the following equation holds true: $$a\,b^{3} = a\,b\,b\,b$$ By the above, the cube unit should expand as follows: $$\mathrm{dm^3} = \mathrm{dmmm}$$ While in actual usage (as seen in the second equation) the expansion is $\mathrm{dddmmm}$, which would arise from using $\mathrm{(dm)^3}$ instead: $$\mathrm{(dm)^3} = \mathrm{dddmmm}$$ In short: why aren't parentheses (commonly) used in units?
The thing is that $\mathrm{dm}$ is a single symbol, not a combination of two symbols. Yes, it can be understood in terms of a prefix and a base unit, but it is still a single symbol. The analogy to the concatenation of variables is inappropriate. Reference to an authoritative statement: The grouping formed by a prefix symbol attached to a unit symbol constitutes a new inseparable unit symbol (forming a multiple or submultiple of the unit concerned) that can be raised to a positive or negative power and that can be combined with other unit symbols to form compound unit symbols. Example: $\renewcommand{\unit}[1]{\,\mathrm{#1}} 2.3\unit{cm^3} = 2.3\unit{(cm)^3} = 2.3 \unit{(10^{-2}\,m)^3} = 2.3 \times 10^{-6} \unit{m^3}$
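To illustrate the quoted rule numerically (a trivial sketch of my own): $\mathrm{cm^3}$ must be read as $\mathrm{(cm)^3}$, i.e. the prefix is cubed along with the metre.

```python
cm = 1e-2   # one centimetre, expressed in metres
dm = 1e-1   # one decimetre, expressed in metres

volume_cm3 = 2.3 * cm**3   # 2.3 cm^3, i.e. 2.3 (10^-2 m)^3
volume_dm3 = 1.0 * dm**3   # 1 dm^3 = one litre = 10^-3 m^3

print(volume_cm3, volume_dm3)
```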
{ "source": [ "https://physics.stackexchange.com/questions/128627", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/43412/" ] }
128,650
We can think that the electric field and the gravitational field operate similarly, in the sense that the forms of their governing laws (namely, Coulomb's law and Newton's law respectively) are strikingly similar. The only difference one can point out is that while electric charges come in two flavors, gravitational masses come in just one. Now, I have read that when a charged particle moves, the electric field lines associated with it are distorted because of the finite time required for information about the change in the position of the charge to propagate. And I have been led to the understanding that this is the cause of the existence of the magnetic field (and that, using calculus, it can be proven mathematically). So (if this is true) why doesn't the same thing happen to the gravitational field? Why is there nothing like a gravitational magnetic field? Or is there? Note: I have changed the language and the tone of the question massively. Although the question was fairly well received, I believe it was really ill-posed. As pointed out by ACuriousMind in the comments, the "reason" described here for the existence of the magnetic field is something that cannot be well supported. But still, due to the similarity between the equations describing the static behavior of electric and gravitational fields, one can still ask whether a boost would create some sort of gravitational magnetic field if the original frame only had a static gravitational field. As the accepted answer points out, the answer is, roughly, yes - but one should note that the Maxwell-type equations for gravity aren't as well behaved as the original Maxwell equations of electromagnetism. In particular, the equations for gravity take the Maxwell-like form only in appropriate weak-field limits, in appropriately chosen gauges - and they aren't Lorentz covariant.
There is a sort of analog called gravitomagnetism (or gravitoelectromagnetism ), but it is not discussed that often because it applies only in a special case. It is an approximation of general relativity (i.e. the Einstein Field Equations ) in the case where: The weak field limit applies. The correct reference frame is chosen (it's not entirely clear to me exactly what conditions the reference frame must fulfill). In this special case, the equations of GR reduce to: $$ \begin{align} \nabla\cdot \vec{E}_g &~=~ -4\pi G \rho_g \\[5px] \nabla\cdot \vec{B}_g &~=~ 0 \\[5px] \nabla\times \vec{E}_g &~=~ -\frac{\partial \vec{B}_g}{\partial t} \\[5px] \nabla\times \vec{B}_g &~=~ 4\left(-\frac{4\pi G}{c^2}\vec{J}_g+\frac{1}{c^2}\frac{\partial \vec{E}_g}{\partial t}\right) \end{align} $$ These are of course a close analogy to Maxwell's equations of electromagnetism.
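As a small consistency sketch of my own (not from the answer): in the static case the first GEM equation reproduces Newtonian gravity. The flux of $\vec{E}_g = -\frac{GM}{r^2}\hat{r}$ through a sphere of any radius equals $-4\pi G M$, matching $\nabla\cdot\vec{E}_g = -4\pi G\rho_g$ integrated over the enclosed mass.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # a point mass (Earth's mass, as an example), kg

def flux_through_sphere(r):
    """Flux of the gravito-electric field through a sphere of radius r (m).
    The field points radially inward with magnitude G*M/r**2."""
    E_g = -G * M / r**2          # radial component (negative = inward)
    return E_g * 4 * math.pi * r**2

# The flux is independent of r and equals -4*pi*G*M (Gauss's law for gravity)
for radius in (1e6, 6.371e6, 4e8):
    print(radius, flux_through_sphere(radius))
```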
{ "source": [ "https://physics.stackexchange.com/questions/128650", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/87745/" ] }
128,705
How did they take photos of Jupiter? I mean, Jupiter is illuminated, and that's a lot of light to produce. Am I missing something - was some sort of low-light photo technology used, or was there simply enough light from the Sun to begin with? Or is this photo a fake?
You can see Jupiter in the night sky with your naked eyes due to its reflected sunlight (although I believe that in July and August of 2014 Jupiter is very close to the Sun in the sky and is visible only for a little while near twilight). You can take a picture of Jupiter in the sky with any old camera. If you want a high-quality picture, your camera needs to have a lens arrangement that will make the image of Jupiter on the camera's CCD larger than the image of Jupiter on your retina. The thing to look for is a lens with a long focal length. If the focal length of the lens$^1$ is long enough, it will need to stand some distance away from the camera's CCD on a rigid mount; this is usually called a telescope. You can replace the camera with your eye and see Jupiter's cloud bands directly.

$^1$ Actually most telescopes use a curved mirror rather than a lens, for several technical reasons.

Images as nice as that one usually come from professional astronomical observatories on the ground, or from the Hubble Space Telescope, probably NASA's most successful instrument ever (after a rocky start). Your particular image seems to have been taken by the robotic spacecraft Cassini when it passed near Jupiter en route to Saturn, where it has been orbiting and collecting data for the last ten years. In that case the camera had the advantage of being much closer to Jupiter than I'll ever be :-(
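To put the "long focal length" advice in numbers (a sketch of my own, not from the answer): the linear size of Jupiter's image on the sensor is roughly the focal length times Jupiter's angular diameter; 40 arcseconds is an assumed round number for Jupiter near opposition.

```python
import math

angular_diameter = 40 / 3600 * math.pi / 180   # 40 arcsec in radians

def image_size_mm(focal_length_mm):
    """Approximate linear image size on the sensor (small-angle approximation)."""
    return focal_length_mm * angular_diameter

# A 50 mm camera lens vs a 2000 mm telescope
size_camera = image_size_mm(50)       # ~0.01 mm: a few pixels at best
size_telescope = image_size_mm(2000)  # ~0.4 mm: enough to resolve cloud bands

print(size_camera, size_telescope)
```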
{ "source": [ "https://physics.stackexchange.com/questions/128705", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/56328/" ] }
128,717
Looking to read up on the impact the discovery of the Higgs boson has on string theory, I came upon these two paragraphs in an article about the Higgs boson Nobel Prize: One possibility has been brought up that even physicists don’t like to think about. Maybe the universe is even stranger than they think. Like, so strange that even post-Standard Model models can’t account for it. Some physicists are starting to question whether or not our universe is natural. This cuts to the heart of why our reality has the features that it does: that is, full of quarks and electricity and a particular speed of light. This problem, the naturalness or unnaturalness of our universe, can be likened to a weird thought experiment. Suppose you walk into a room and find a pencil balanced perfectly vertical on its sharp tip. That would be a fairly unnatural state for the pencil to be in because any small deviation would have caused it to fall down. This is how physicists have found the universe: a bunch of rather well-tuned fundamental constants have been discovered that produce the reality that we see. I thought this was a gross exaggeration of how weird and unnatural the universe is (it was, after all, written by someone who starts his sentences with "Like"), so I wanted to get other opinions: is the state of our universe really as weird as "finding a pencil balanced perfectly vertical on its sharp tip"?
{ "source": [ "https://physics.stackexchange.com/questions/128717", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2628/" ] }
128,785
Different kinds of white light have different spectra. Light from a white LED has its peak intensity in the blue, while white light from a CFL or another source has a different-looking spectrum. I don't understand how this works. Shouldn't pure white light have a unique spectrum, no matter what? For example, a certain white LED's spectrum looks like this:
Shouldn't pure white light have a unique spectrum no matter what?

White is not a spectral color. It's a perceived color. The human eye has three kinds of color receptors, commonly called red, green, and blue. Note that there's no receptor for yellow. A spectral yellow light source will trigger both the red and green receptors in a certain way. We see "yellow" even though we don't have yellow receptors. Any spectrum of light that triggers the same response will also be seen as "yellow". Computer and TV screen manufacturers depend on this spoofing. Those displays have only three kinds of light sources: red, green, and blue. They generate the perception of other colors by emitting a mix of light that triggers the desired response in the human eye. What about white? White isn't a spectral color. There's no point on the spectrum that you could label "white". White is a mixture of colors such that our eyes and brain can't distinguish which of red, green, or blue is the winner. Just as any mix of colors that triggers our eyes and mind to see "yellow" will be perceived as yellow, so any mixture of colors that triggers that balanced response will be perceived as white. Aside: There is a mistake in the above image in the labels "bluish purple" and "purplish blue". Those should be "blue-violet" and "violet-blue" (or possibly "indigo"). Purple is a beast of a very different color. It is a non-spectral color. The spectrum is just that, a linear range. Our eyes don't perceive it as such. We perceive color as a wheel, with blue circling back to red via the purples.
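Here is a toy metamerism sketch of my own illustrating the point. The cone sensitivities are modeled as Gaussians with made-up parameters (not real physiological data), and we solve for the intensities of three narrow LED-like lines so that the cone responses exactly match those produced by a flat "equal energy" spectrum - two completely different spectra, the same perceived "white":

```python
import numpy as np

wavelengths = np.arange(400, 701)  # nm grid, 1 nm steps

# Toy cone sensitivities modeled as Gaussians (illustrative numbers only):
# S, M, L cones with assumed peaks and widths
def cone(peak, width):
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

cones = np.stack([cone(420, 25), cone(530, 40), cone(560, 45)])  # 3 x N

# Target: cone responses to a flat ("equal energy") spectrum
flat = np.ones_like(wavelengths, dtype=float)
target = cones @ flat

# Three narrow LED-like lines; solve for their intensities so that the
# cone responses match the flat spectrum exactly (a metamer)
lines_nm = [450, 540, 610]
M = np.stack([cones[:, w - 400] for w in lines_nm], axis=1)  # 3 x 3
weights = np.linalg.solve(M, target)

mix_response = M @ weights
print(weights, mix_response, target)
```

Both spectra trigger identical cone responses, so both are seen as the same color, even though one is flat and the other is just three spikes. (Real metamer design also has to enforce non-negative intensities; this toy solve just happens to produce positive ones.)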
{ "source": [ "https://physics.stackexchange.com/questions/128785", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/132357/" ] }
128,947
In our everyday experience, temperature is due to the motion of atoms, molecules, etc. A neutron star, where protons and electrons are fused together to form neutrons, is nothing but a huge nucleus made of neutrons. So how does the concept of temperature arise?
First, strictly speaking a neutron star is not a nucleus, since it is bound together by gravity rather than the strong force. Measuring a surface temperature for any star is deceptively simple. All that is needed is a spectrum, which gives the luminous flux (or a similar quantity) as a function of photon wavelength. There will be a broad thermal peak somewhere in the spectrum, whose peak wavelength can be converted to a temperature using Wien's displacement law: $$T=\frac{b}{\lambda_{\rm max}}$$ with $b\approx2.9\times10^{-3}\,\rm m\,K$. Neutron stars peak in the X-ray, and picking a wavelength of $1\;\rm nm$ (roughly in the middle of the logarithmic X-ray band) gives a temperature of about $3$ million $\rm K$, which is in the ballpark of what is typically quoted for a neutron star. More broadly than the motion of atoms or molecules, you can think of temperature as a measure of the internal (not bulk) kinetic energy of a collection of particles, and energy is trivially related to temperature via Boltzmann's constant (though getting to a more carefully defined concept of temperature requires a bit more work; see e.g. any derivation of Wien's displacement law).
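The estimate above can be reproduced in two lines (my own sketch of the arithmetic in the answer):

```python
b = 2.898e-3             # Wien's displacement constant, m*K
peak_wavelength = 1e-9   # ~1 nm, mid-logarithmic X-ray band as in the answer

T = b / peak_wavelength  # temperature in kelvin, ~3 million K
print(f"{T:.2e} K")
```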
{ "source": [ "https://physics.stackexchange.com/questions/128947", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/55513/" ] }
129,048
According to my understanding of SR, if I travel at 0.8c relative to a line of clocks, I should see the clocks in front of me running 3 times faster than my own, and those behind me running 3 times slower than my own (Doppler effect). OK, so what happens at my exact location? I reckon that as I look nearer and nearer to my origin, I would see a discontinuity between the forwards and backwards directions. That's bad enough, but at my origin there is no light-travel delay, so my local time is my proper time, and the rate of the clocks should then be in accordance with gamma (5/3 at 0.8c). So my question is: how do I reconcile the 3 contradictory rates that I should observe for a clock at my origin? I think it should be the value given by gamma, but I can't explain the discontinuity resulting from the Doppler effect both forward and back in close proximity to the origin.
I assume you used the formulae $f_o = f_s\sqrt{\frac{1+v/c}{1-v/c}}$ for the clocks ahead of you and $f_o = f_s\sqrt{\frac{1-v/c}{1+v/c}}$ for the clocks behind you. Those formulae do imply a singularity for the clock that is closest to you. Which equation to use? The answer is neither. Those expressions assume the travel is along the line of sight to the source. There is a singularity because collisions are singularities. Your spaceship is plowing through the line of clocks. What you'll see in front of you is a series of clocks ticking faster than yours. Behind you, you'll see a cloud of pulverized clocks. Your spaceship had better have some very good forward shields. Your spacecraft presumably isn't doing that. Instead, you are flying parallel to the line of clocks, with some constant, non-zero distance between the spacecraft and the line of clocks. You need to use the more generic expression $$f_o = f_s \frac{\sqrt{1-\left(\frac v c\right)^2}} {1-\frac v c \cos \theta_o}$$ where $\theta_o$ is the angle between the clock in question and your line of travel, as observed by you. The convention here is that $\theta_o$ ranges from $0$ (directly ahead) to $\pi$ (directly behind), so $\cos\theta_o$ is positive for clocks in front of you and negative for clocks behind you. The above expression reduces to the simpler expressions at the start of my answer for clocks very far in front of you and very far behind you. In between, you'll get a nice continuous change from faster in front to slower behind. The clock right next to you? It's a bit redshifted, and hence slower. Here $\cos \theta_o=0$, so in this case $f_o=f_s\sqrt{1-(v/c)^2}$. This is called the transverse Doppler redshift. This means that there's a clock just slightly ahead of you that is ticking at exactly your own clock's rate.
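The continuity is easy to check numerically. A sketch of my own evaluating the general formula at $\beta = 0.8$: far ahead gives 3, far behind gives 1/3, abeam gives the transverse redshift 0.6, and there is indeed an angle slightly ahead where the observed rate exactly matches your own.

```python
import math

beta = 0.8  # v/c

def freq_ratio(theta):
    """Observed/emitted frequency ratio for a source at viewing angle theta
    (0 = directly ahead, pi = directly behind)."""
    return math.sqrt(1 - beta**2) / (1 - beta * math.cos(theta))

ahead = freq_ratio(0.0)          # 3:   clocks far ahead run 3x faster
behind = freq_ratio(math.pi)     # 1/3: clocks far behind run 3x slower
abeam = freq_ratio(math.pi / 2)  # 0.6: transverse Doppler redshift

# Angle at which a clock ticks at exactly your rate (ratio = 1):
# 1 = sqrt(1-beta^2)/(1-beta*cos(theta))  =>  cos(theta) = (1-sqrt(1-beta^2))/beta
theta_equal = math.acos((1 - math.sqrt(1 - beta**2)) / beta)
print(ahead, behind, abeam, math.degrees(theta_equal))
```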
{ "source": [ "https://physics.stackexchange.com/questions/129048", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/24362/" ] }
129,134
I have seen similar posts, but I haven't seen what seems to be a clear and direct answer. Why do only a certain number of electrons occupy each shell? Why are the shells arranged at certain distances from the nucleus? Why don't electrons just collapse into the nucleus or fly away? It seems there are lots of equations and theories that describe HOW electrons behave (the Pauli exclusion principle), predictions about WHERE they may be located (the Schrödinger equation, the uncertainty principle), etc. But it's hard to find the WHY and/or the causality behind these descriptive properties. What is it about the nucleus and the electrons that causes them to attract/repel in the form of these shells, at regular intervals and with fixed numbers of electrons per shell? Please be patient with me; I'm new to this forum and just an amateur fan of physics.
Any answer based on analogies rather than mathematics is going to be misleading, so please bear this in mind when you read this. Most of us will have discovered that if you tie one end of a rope to a wall and wave the other you can get standing waves on it like this: Depending on how fast you wave the end of the rope you can get half a wave (A), one wave (B), one and a half waves (C), and so on. But you can't have 3/5 of a wave or 4.4328425 waves. You can only have a half-integral number of waves. The number of waves is quantised. This is basically why electron energies in an atom are quantised. You've probably heard that electrons behave as waves as well as particles. Well, if you're trying to cram an electron into a confined space, you'll only be able to do so if the electron wavelength fits neatly into the space. This is a lot more complicated than just waving a rope, because an atom is a 3D object, so you have 3D waves. However, take for example the first three $s$ wavefunctions, which are spherically symmetric, and look at how they vary with distance - you get (these are for a hydrogen atom)$^1$: Unlike the rope, the waves aren't all the same size and length, because the potential around a hydrogen atom varies with distance; however, you can see a general similarity with the first three modes of the rope. And that's basically it. Energy increases with decreasing wavelength, so the "half wave" $1s$ level has a lower energy than the "one wave" $2s$ level, and the $2s$ has a lower energy than the "one and a half wave" $3s$ level. $^1$ The graphs are actually the electron probability distribution $P(r) = \psi\psi^*4\pi r^2$. I did try plotting the wavefunction, but it was less visually effective.
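The curves described in the footnote can be reproduced directly. A sketch of my own using the standard textbook hydrogen radial wavefunctions (in units of the Bohr radius, $a = 1$): each radial probability density integrates to 1, and the $1s$ density peaks exactly at the Bohr radius.

```python
import numpy as np

# Radial probability densities P(r) = r^2 R(r)^2 for hydrogen s-states,
# with r in units of the Bohr radius (a = 1). Standard textbook forms.
r = np.linspace(1e-6, 40, 200_000)
dr = r[1] - r[0]

R_1s = 2 * np.exp(-r)
R_2s = (1 / np.sqrt(2)) * (1 - r / 2) * np.exp(-r / 2)

P_1s = r**2 * R_1s**2
P_2s = r**2 * R_2s**2

# Each P(r) integrates to 1, and the 1s density peaks at the Bohr radius
norm_1s = np.sum(P_1s) * dr
norm_2s = np.sum(P_2s) * dr
peak_1s = r[np.argmax(P_1s)]
print(norm_1s, norm_2s, peak_1s)
```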
{ "source": [ "https://physics.stackexchange.com/questions/129134", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/56511/" ] }
129,137
If the radius of a black hole is zero, then how does it have a surface, and how does it absorb things? $$\begin{align} g &= \frac{Gm}{0^2} \\ &= \infty \end{align}$$ Without a surface and a volume, how does it absorb things?
{ "source": [ "https://physics.stackexchange.com/questions/129137", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/42035/" ] }
129,492
Background: I lived in the Philippines for several years and visited other parts of Asia occasionally (Singapore, Indonesia, Hong Kong). I moved to Western Australia a few months ago, and I realized that the sun is brighter here, in the sense that just after sunrise and just before sunset the sun shines so bright that it is blinding. This happens almost every day, so it isn't just some one-off thing. In Asia, this never occurred to me. The sun was always bearable to the eyes. Why is this so?
Clean dry air lets sunlight through; dirty moist air scatters it. Aerosols (small airborne particulate contamination) are more prominent near areas of dense population - due to power plants, cars, fires, ... These particles form nucleation sites for moisture, and the resulting small water drops become very effective scatterers of sunlight. Humidity is high in the Philippines, and it's low in Western Australia (Perth). A map of the nitrogen dioxide concentrations in the earth's atmosphere (a proxy for man-made pollution) shows that the region around Western Australia is quite low in pollution, while a lot of South East Asia is quite high (map from http://www.esa.int - European Space Agency): A map of the particulate pollution (PM2.5 - particulate matter less than 2.5 micron) confirms the picture (credit: Aaron van Donkelaar, Dalhousie University. Source at http://www.nasa.gov/images/content/483910main1_Global-PM2.5-map-670.jpg ): Although it's not terribly easy to see on this map, the air in Western Australia is quite clear - so there will be less "stuff" for light to travel through and scatter off. This is especially noticeable near sunrise/sunset, when the length of the path through the atmosphere is longest. This amplifies the difference. A bit more data to back this up: Map of typical humidity distribution in Manila (source: http://weatherspark.com/averages/33313/Metro-Manila-Philippines ): And for Perth (source: http://weatherspark.com/averages/34080/Redcliffe-Western-Australia ): These plots show the distribution of the "average daily high and low" values of humidity as a function of date, for both locations. Thus, you can see that the average high for humidity is lowest on April 23 - at which point it's still 89%. The inner (darker colored) band represents the 25-75 percentile of the distribution, and the outer (lighter colored) band represents the 10-90 percentile.
In other words - on April 23, maximum humidity in Manila might be at or below 82% one day in four; but on August 17 it is above 95% more than half the time. Note that the vertical scale on the two plots is different - the minimum values in Perth are considerably lower than for Manila... Here is a link to a very interesting and unusual photo sequence of a setting sun showing the phenomenon of the "green flash". This particular sequence was taken in Libya, and the photographer states: The air was so clean and dry that it was difficult to look directly at the Sun even when it was only a sliver above the horizon. I have never seen the sky quite like this before. As the sun was going down, you could not look at it at all naked-eye; even to the very last moment it was too bright. That supports my understanding that dry, clean air == bright sunsets. UPDATE in the comments, somebody asked the question: "what is this stuff that is doing the absorbing?". As was pointed out, water vapor is not a very good absorber of light in the optical regime - the vibration modes of water molecules are excited in the infrared. However, on page 12 of http://www.learner.org/courses/envsci/unit/pdfs/unit11.pdf we read: Air molecules are inefficient scatterers because their sizes are orders of magnitude smaller than the wavelengths of visible radiation (0.4 to 0.7 micrometers). Aerosol particles, by contrast, are efficient scatterers. When relative humidity is high, aerosols absorb water, which causes them to swell and increases their cross-sectional area for scattering, creating haze. 
Without aerosol pollution our visual range would typically be about 200 miles, but haze can reduce visibility significantly. This agrees with @WhatRoughBeast's observation that haze aerosols are ultimately the "stuff" that scatters the light - a combination of particles in the air (many of which are man-made, and will be present in higher concentrations near densely populated regions - especially ones where coal-fired power plants operate) and humidity which causes the aerosols to increase in size, making them more effective scatterers.
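To see how a clean vs hazy sky amplifies the effect at low sun, here is a minimal Beer-Lambert sketch. The two zenith optical depths are illustrative assumptions (not measured values for Manila or Perth), and the airmass values are rough textbook figures, with about 38 often quoted for the horizon:

```python
import math

def transmitted_fraction(tau_zenith, airmass):
    """Beer-Lambert law: fraction of direct sunlight surviving a slant path."""
    return math.exp(-tau_zenith * airmass)

# Illustrative aerosol optical depths at the zenith (assumed, not measured):
tau_clean, tau_hazy = 0.05, 0.5

for airmass, label in [(1.0, "sun overhead"), (10.0, "low sun"), (38.0, "sun at horizon")]:
    clean = transmitted_fraction(tau_clean, airmass)
    hazy = transmitted_fraction(tau_hazy, airmass)
    print(f"{label:15s} clean air passes {clean:8.2%}, hazy air passes {hazy:.2%}")
```

With the sun overhead the two skies differ only modestly, but near the horizon the hazy sky scatters away essentially the entire direct beam, which is consistent with the low sun staying blinding only where the air is clean and dry.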
{ "source": [ "https://physics.stackexchange.com/questions/129492", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/55991/" ] }
129,626
Why can't cables used for computer networking transfer data really fast, say at the speed of light? I ask this because electricity travels at the speed of light. Take Ethernet cables, for example; I looked them up on Wikipedia: Propagation speed 0.64 c. Why only 64%? What does propagation speed mean? I know there are other variables affecting the latency and perceived speed of computer network connections, but surely this is a bottleneck. In other words, I'm asking: what is it about a fiber-optics cable that makes it faster than an Ethernet cable?
As you've probably guessed the speed of light isn't the limitation. Photons in a vacuum travel at the speed of light ($c_o$). Photons in anything else travel slower, like in your cable ($0.64c_o$). The amount the speed is reduced by depends on the material's permittivity . Information itself is slower still. One photon doesn't carry much information. Information is typically encoded in the change of states of the energy. And these changes of states can only be propagated at lower rates than the fundamental transmission speed. Detecting both the energy and the rates of change requires physical materials to convert the photons into something more usable. This is because the channel used for transmission usually conducts energy at a maximum rate called bandwidth. The bandwidth of the channel is the first limit in network speeds. Fiber optics can transmit signals with high bandwidths with less loss than copper wires. Secondly, the encoded signals have a lot of overhead. There is a lot of extra data transmitted with error correction, routing information, encryption and other protocol data in addition to the raw data. This overhead also slows down data throughput. Lastly, the amount of traffic on a network can slow down the overall system speed as data gets dropped, collisions occur and data has to be resent. EDIT: I see you've changed your question some.... In other words I'm asking, what is it about a fiber-optics cable that makes it faster than an Ethernet cable? Fiber optics has the ability to conduct higher energy charges. Photons with higher energies are, by definition, at higher frequencies. $E_{photon}=hf$ where $h$ is the Planck constant ($h = 6.63\times 10^{-34}\,\mathrm{J\,s}$) and $f$ is the frequency of the photon. Why does frequency matter? Because of how communication systems work. Typically we set up a strong signal oscillating at the most efficient frequency for the transmission channel to conduct it. 
If the frequency is too low we lose our signal's power, and likewise if it's too high. This is due to how the medium responds to different levels of charge energy. So there's a $F_{max}$ and a $F_{min}$. Then we add information to the oscillation by changing it at some rate. There are many ways to add information but in general the amount of information you can add is proportional to the rate the channel can respond to, or bandwidth, of the system. Basically you have to stay in between $F_{max}$ and $F_{min}$. It just so happens that the higher the operating frequency the easier it is to get wider and wider bandwidths. For example a radio at 1 GHz with 10% channel width only allows for 100 MHz max switching rates. But for a fiber optic signal at 500 THz, a 10% channel width means a 50 THz max switching rate. Big difference! You might be wondering why channels have frequency limits and why 10%. I just picked 10% as a typical example. But transmission channels of all types have limits to what kind of energy levels they absorb, reflect, and conduct. For a rough example, x-rays, which are high-frequency (high-energy) charges, go right through a lot of materials, whereas heat, which is at a frequency lower than optical light, doesn't transmit well through paper but does through glass. So there are frequencies where photons can be used to carry energy and frequencies where they can't. Yes they do all travel at $c_o$ in free space and slower in other media, but they can't carry information at that same rate or higher. You might be interested to read the Shannon-Hartley theorem.
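A quick back-of-the-envelope script for the numbers above ($E = hf$ and the 10% channel-width comparison, where 10% is the answer's illustrative choice rather than a property of any real standard):

```python
h = 6.626e-34  # Planck constant, J*s

def photon_energy(freq_hz):
    """E = h*f, in joules."""
    return h * freq_hz

f_radio, f_optical = 1e9, 500e12  # ~1 GHz carrier vs ~500 THz optical carrier

# The same 10% fractional channel width gives wildly different absolute bandwidths:
bw_radio = 0.10 * f_radio      # 100 MHz
bw_optical = 0.10 * f_optical  # 50 THz

print(f"radio photon:   {photon_energy(f_radio):.2e} J, channel width {bw_radio:.1e} Hz")
print(f"optical photon: {photon_energy(f_optical):.2e} J, channel width {bw_optical:.1e} Hz")
print(f"optical/radio bandwidth ratio: {bw_optical / bw_radio:.0f}")
```

The optical carrier buys a factor of half a million in raw channel width for the same fractional bandwidth, which is the point of the paragraph above.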
{ "source": [ "https://physics.stackexchange.com/questions/129626", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/56699/" ] }
129,731
Suppose we have a cube of metal inside a room at temperature 27°. If we heat the metal up to 600° using uniform radiation of that energy, no part of it should have a higher temperature, but nevertheless it will start emitting visible light, that is, thermal photons with a temperature in excess of 4000°. How is that possible? I am aware of thermal radiation, blackbodies etc., so the first question is: 1) when the metal is at thermal equilibrium with the room (27°), are there inside it or inside the room any molecules/atoms with energy in the range of 4000°? If the answer is affirmative the question has been fully answered; if it is negative we need a follow-up question
If you heat a metal (or anything else) up to a temperature $T$ then the average energy of any degree of freedom of the metal will be of order $kT$. At 600ºC this is about 0.075eV, and as you say the energy of visible light is around 2 - 3eV, which is a factor of 30 or so higher. The reason that visible light can be produced is because the thermal energy is randomly distributed. That means some bits of the metal will have substantially lower energy than 0.075eV and some bits will have substantially higher energy than 0.075eV. You get small parts of the metal where the energy is as high as 2 - 3eV, and it's those parts that are emitting the light. The intensity of the emitted radiation is given by Planck's law : $$ B(\lambda) = \frac{2hc^2}{\lambda^5} \frac{1}{\exp\left( \frac{hc}{\lambda k_B T} \right) - 1} $$ If you take your temperature of 600ºC (873 K) and calculate the intensity as a function of wavelength using Planck's law, you find that the intensity peaks between 3 and 4 microns, which is in the infra-red. On the resulting curve the intensity appears to fall to zero at about 1 micron, which is still in the infra-red. However if you calculate the intensity at the wavelength of red light (0.7 microns) you find it isn't zero, but it's pretty small at about 0.002% of the peak intensity. So at 600ºC the metal will be producing a little red light.
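Since the plot can't be reproduced in text, here is a short script that recomputes the two quoted numbers: the 3 to 4 micron peak (via Wien's displacement law) and the roughly 0.002% relative intensity at 0.7 microns:

```python
import math

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # SI units

def planck(lam, T):
    """Spectral radiance B(lambda, T) from Planck's law."""
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

T = 873.0                    # 600 C in kelvin
lam_peak = 2.898e-3 / T      # Wien's displacement law, ~3.3 microns
ratio = planck(0.7e-6, T) / planck(lam_peak, T)  # red light vs the peak

print(f"peak wavelength: {lam_peak * 1e6:.2f} microns")
print(f"intensity at 0.7 microns relative to peak: {ratio:.4%}")
```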
{ "source": [ "https://physics.stackexchange.com/questions/129731", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/56578/" ] }
129,745
I am trying to get a better understanding of why positronium decays while a hydrogen atom is stable. In the case of positronium, I can write an elementary process where the leptons annihilate into two photons. But for the case of a hydrogen atom, I cannot write a similar simple process where the quark and electron would annihilate, due to charge conservation. Is it indeed impossible to write such a process at any order? More speculatively, if we lived in a different world where quarks had integer charge, would a quark/electron bound state be stable?
{ "source": [ "https://physics.stackexchange.com/questions/129745", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1559/" ] }
129,873
In a video (here), I saw crocodiles jump vertically about three meters without using any solid surface. The wonderful thing is that when they start to jump, their vertical velocity is approximately zero, unlike fish, who jump using initial velocity. It seems that crocodiles create an upward force that counteracts gravity, because when they are rising, their velocity seems to be constant. How is this possible? Could anyone explain this phenomenon using the laws of physics?
If you look closely at the crocodiles' tails you'll see that they wave their tails from side to side to provide propulsion for the jump. Compare this to a fish swimming: The side to side motion of the fish's tail propels it forward, and the crocodiles are using exactly the same sort of side to side motion to propel themselves upwards.
{ "source": [ "https://physics.stackexchange.com/questions/129873", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/50712/" ] }
130,098
My understanding of pseudovectors vs vectors is pretty basic. Both transform in the same way under a rotation, but differently upon reflection. I might even be able to summarize that using an equation, but that's about it. Similarly, I can follow arguments that pseudovectors behave differently in "mirrors" than vectors. But my response to this is always: Okay, so what? When would I ever "do physics" in a mirror? The usefulness eludes me. I'd like to gain a better understanding of the importance of this difference. When is it useful for an experimental physicist to distinguish between the two? When is it useful for a theoretical physicist to distinguish between the two? I believe symmetry is important to at least one of these, but would appreciate a practical rather than abstract argument of when one has to be careful about the distinction.
[Disclaimer: I'm not providing an argument where the distinction would be useful. I am providing an argument that pseudovectors and vectors describe intrinsically different geometrical concepts, and should, for clarity of argument, never be conflated just because they look so similar] The point is that pseudovectors, by their very nature, are not the same objects as vectors: A vector, as commonly understood in physics, is an element of the vector space $\mathbb{R}^n$ spanned by the standard basis $e_i$. It points in a direction, and is geometrically connected to a line, i.e. a one-dimensional subspace of $\mathbb{R}^n$. A pseudovector, as almost no one will ever explicitly tell you, is an element of the sub-top degree of the exterior algebra $\Lambda^{n-1}\mathbb{R}^n$, the space spanned by $e_{i_1} \wedge \dots \wedge e_{i_{n-1}}$. This does not directly point into a direction, but is geometrically the $(n-1)$-dimensional hyperplane spanned by the vectors $e_{i_1},\dots,e_{i_{n-1}}$, and can then be interpreted as pointing in the direction perpendicular to that hyperplane. Formally, this translation from hyperplanes into normal vectors is the Hodge dual mapping $\Lambda^k\mathbb{R}^n$ to $\Lambda^{n-k}\mathbb{R}^n$. And there you see why pseudovectors are different from vectors under reflection, geometrically: In $\mathbb{R}^3$, i.e. our ordinary world, the planes are spanned by two vectors - if both change their signs, the pseudovector described by them will not (since the wedge $\wedge$ is linear and anticommutative). One importance of these considerations is when you want to step from $\mathbb{R}^3$ to higher dimensions. You lose the cross product (which is really just the concatenation of the wedge and the Hodge), and your former pseudovectors are now suddenly no vectors in the ordinary sense at all anymore, since $\Lambda^2 \mathbb{R}^n$ (the "space of planes") does not map to unique normal vectors by the Hodge dual in dimensions that are not three. 
Now you need to genuinely tell your former pseudovectors and vectors apart, since they now have a different number of independent coordinate entries.
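The extra sign a pseudovector picks up under reflection can also be checked numerically: for an orthogonal map $R$ one has $(Ra)\times(Rb) = \det(R)\,R(a\times b)$, so with $\det R = -1$ the cross product transforms with an extra minus sign that a true vector does not get. A small self-contained sketch (the two vectors are arbitrary examples):

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

def reflect_x(v):
    """Mirror across the x = 0 plane: orthogonal, det = -1."""
    return [-v[0], v[1], v[2]]

a, b = [1.0, 2.0, 3.0], [-2.0, 0.5, 4.0]  # arbitrary example vectors

lhs = cross(reflect_x(a), reflect_x(b))     # reflect first, then take the cross product
rhs = [-x for x in reflect_x(cross(a, b))]  # det(R) * R(a x b), with det(R) = -1

print(lhs == rhs)  # True: the pseudovector picks up the extra det(R) sign
```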
{ "source": [ "https://physics.stackexchange.com/questions/130098", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/29216/" ] }
130,116
This might be a stupid question, but nonetheless, it has been bothering me. If you take a photon, make it go through some atoms in a solid, liquid or whatever, then you have the chance of this photon being absorbed by an electron, and thereby exciting the electron. This requires the photon to have enough energy to actually excite the electron to another energy level. My question is then: How does the photon know if it has enough energy or not? Do they interact very quickly to determine if it's okay or not, or is it just something it "knows?"
If you take an isolated hydrogen atom then the electron sits in well-defined atomic orbitals that are eigenfunctions of the Schrodinger equation. This is a stable system that doesn't change with time. If you now introduce an oscillating electromagnetic field (i.e. light) then this changes the potential term in the Schrodinger equation and the hydrogen atomic orbitals are no longer eigenfunctions of the Schrodinger equation. So the electron can no longer be described as a $1s$ or $2s$ or whatever orbital, but rather the electron and the photon now have a single time-dependent wavefunction that describes both. What happens next depends on how this new wavefunction evolves with time. As the photon moves away we expect the new wavefunction to evolve into one of three possible final states: (1) the electron orbital is unchanged; (2) the electron is in a different atomic orbital (i.e. it's been excited) and there is no photon; (3) the electron is in a different atomic orbital (i.e. it's been excited) and there is a photon with a different energy. You can't predict which will happen, but you can calculate the probability of the three final states. What you find is that the probability of (2) is only high when the photon energy is the same as the energy spacing between atomic orbitals, the probability of (1) approaches unity when the photon energy doesn't match an energy spacing in the atom, and the probability of (3) is generally negligible. So the photon doesn't need to know whether or not it has the correct energy. The photon and atom interact to form a single system, and this evolves with time in accordance with Schrodinger's equation.
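As a concrete illustration of "the photon energy matches an energy spacing": using the simple Bohr formula $E_n = -13.6\,\text{eV}/n^2$ (fine structure ignored), the lowest spacing from the ground state of hydrogen is 10.2 eV, far above any visible photon, so for visible light on ground-state hydrogen outcome (1) dominates:

```python
def level_ev(n):
    """Bohr energy levels of hydrogen, in eV."""
    return -13.6 / n**2

hc_ev_nm = 1239.84  # h*c in eV*nm

gap_12 = level_ev(2) - level_ev(1)  # ground state to first excited state: 10.2 eV
lam_12 = hc_ev_nm / gap_12          # wavelength of a resonant photon

print(f"1->2 spacing: {gap_12:.1f} eV, resonant wavelength: {lam_12:.0f} nm")
# A visible photon carries only 2-3 eV, missing every spacing from the
# ground state, so outcome (1) -- no absorption -- dominates.
```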
{ "source": [ "https://physics.stackexchange.com/questions/130116", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/23816/" ] }
130,128
We all know that matter converts into energy, but will energy convert into matter? Does it form antimatter by converting? Please illustrate with an example.
{ "source": [ "https://physics.stackexchange.com/questions/130128", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/52076/" ] }
130,209
As far as I know, a black body is an ideal emitter. So how can it be that a non-ideal emitter emits more radiation than a black body? This happens only in a very limited region at around 500 nm, but it still happens: it looks like at the maximum it is around 15% above the black body. This seems impossible given my understanding of a black body, especially because just for that there is the emissivity value ε, or better ε(λ). From Wikipedia's article on emissivity: Quantitatively, emissivity is the ratio of the thermal radiation from a surface to the radiation from an ideal black surface at the same temperature. The ratio varies from 0 to 1. That means 0 ≤ ε(λ) ≤ 1. What is the right interpretation? What is the sun doing there? It seems like ε(500nm) = 1.15?
The total radiative power emitted by the Sun is equivalent to the total radiative power emitted by an ideal black body with a temperature of 5778 K and a surface area equal to that of the Sun. This 5778 K is the Sun's effective temperature. The spectrum of the Sun is very close to that of a 5778 K black body, but there are deviations. Some are due to absorption and emission, but others result from three key items: (1) There is no such thing as a black body. The concept of a black body is an idealization based on some simplifying assumptions, and the Sun doesn't exactly satisfy those simplifying assumptions. (2) That effective temperature of 5778 K is based on total radiative power, the area under the curve of the Planck distribution. If the spectrum of sunlight falls short of the 5778 K black body spectrum at some wavelengths it must necessarily rise above the 5778 K black body spectrum at others. (3) The primary reason the Sun fails to satisfy the assumptions that underlie the Planck distribution is that we are seeing light from multiple temperature sources. The rest of this answer goes into this in detail. The Sun is not a solid body. It doesn't have a surface from which the radiation originates. The radiation we see from the Sun comes primarily from the Sun's photosphere, a roughly 500 kilometer thick layer near the top of the Sun. The chromosphere, transition region, and corona are above the photosphere. While these higher layers do make solar radiation deviate from the ideal black body curve, the primary source is the photosphere itself. The amount of light that is transmitted into empty space is a sharply increasing function of distance from the center. However, it is not a delta distribution. The light that does get through from those deeper layers has a higher temperature than the layers above it. The bulk of the radiation we see from the Sun comes from a ~500 km thick layer called the photosphere. 
The top of the photosphere has a temperature of about 4400 K and has a pressure of about 86.8 pascals. The bottom has a temperature of about 6000 K and a pressure of about 12500 pascals. What we see is a blend of the radiation from throughout the photosphere. Some of the light comes from the top of the photosphere, some from the middle, some from the bottom, roughly weighted by pressure. The total spectrum looks close to that of a 5778 K black body, but the contribution from the bottommost part of the photosphere tilts the spectrum away from the ideal a bit, making it a tiny bit heavy at shorter wavelengths.
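The "blend of temperatures" point can be illustrated with Planck curves alone. This is a deliberately crude comparison (the real photosphere needs radiative-transfer modelling), but it shows that light escaping from the ~6000 K bottom of the photosphere exceeds the 5778 K effective-temperature curve at 500 nm by an amount comparable to the ~15% in the question:

```python
import math

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # SI units

def planck(lam, T):
    """Spectral radiance from Planck's law."""
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * kB * T))

lam = 500e-9  # the wavelength singled out in the question
excess = planck(lam, 6000.0) / planck(lam, 5778.0) - 1.0
print(f"6000 K layer vs 5778 K black body at 500 nm: {excess:+.0%}")  # about +20%
```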
{ "source": [ "https://physics.stackexchange.com/questions/130209", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/56709/" ] }
130,231
The temperature and pressure everywhere inside the Sun reach the critical point to start nuclear reactions - there is no reason for it to take such a long time to complete the reaction process. A nuclear bomb, for example, will complete all its reactions within $10^{-6}$ seconds. Why does most of the hydrogen of the Sun still not react even though it reaches the critical point, and why do stars take billions of years to run out of fuel?
The bottleneck in Solar fusion is getting two hydrogen nuclei, i.e. two protons, to fuse together. Protons collide all the time in the Sun's core, but there is no bound state of two protons because there aren't any neutrons to hold them together. Protons can only fuse if one of them undergoes beta plus decay to become a neutron at the moment of the collision. The neutron and the remaining proton fuse to form a deuterium nucleus, and this can react with another proton to form $^{3}\text{He}$. The beta plus decay is mediated by the weak force so it's a relatively slow process anyway, and the probability of the beta plus decay happening at just the right time is extremely low, which is why proton fusion is relatively slow in the Sun. It takes gazillions of proton-proton collisions to form a single deuterium nucleus. Nuclear fusion weapons fuse fast because they use a mixture of deuterium and tritium. They don't attempt to fuse $^{1}\text{H}$ so they don't have the bottleneck that the Sun has to deal with.
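That bottleneck is what stretches the Sun's life out to billions of years. Even a crude energy-budget estimate, assuming roughly 10% of the hydrogen ever fuses and that hydrogen-to-helium fusion releases about 0.7% of the rest mass, lands on the right timescale:

```python
M_sun = 1.989e30   # kg
L_sun = 3.828e26   # W, present solar luminosity
c = 2.998e8        # m/s

burnable_fraction = 0.10  # rough assumption: only the core's hydrogen ever fuses
efficiency = 0.007        # fraction of rest mass released by H -> He fusion

fuel_energy = burnable_fraction * efficiency * M_sun * c**2  # joules
lifetime_yr = fuel_energy / L_sun / 3.156e7  # divide by seconds per year

print(f"estimated main-sequence lifetime: {lifetime_yr:.1e} years")  # ~1e10
```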
{ "source": [ "https://physics.stackexchange.com/questions/130231", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/56975/" ] }
130,236
In looking at the answers to this question regarding light from distant galaxies ever being visible to us: Expansion of the Universe, will light from some galaxies never reach us? I came across a few concepts that were quite surprising to me. In particular: Movement faster than the speed of light The big bang was not an explosion outwards from a single point. Granted I am just a rank beginner and self-studier, yet I did study a QM course from Oxford, have read several sets of notes on SR, and readily went through the first hundred pages of "Student Friendly QFT." Yet I have never encountered these notions. My question is where does one acquire this type of information. Not necessarily the technicalities (of, e.g., GR); but just a correct awareness.
{ "source": [ "https://physics.stackexchange.com/questions/130236", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
130,552
I understand that a black hole bends the fabric of spacetime to the point that no object can escape. I understand that light travels in a straight line along spacetime unless distorted by gravity. If spacetime is being curved by gravity then light should follow that bend in spacetime. In Newton's Law of Universal Gravitation, the mass of both objects must be entered, but a photon has no mass, so why should a massless photon be affected by gravity in Newton's equations? What am I missing?
Newton's law does predict the bending of light. However it predicts a value that is a factor of two smaller than actually observed. The Newtonian equation for gravity produces a force: $$ F = \frac{GMm}{r^2} $$ so the acceleration of the smaller mass, $m$, is: $$ a = \frac{F}{m} = \frac{GM}{r^2}\frac{m}{m} $$ If the particle is massless then $m/m = 0/0$ and this is undefined, however if we take the limit of $m \rightarrow 0$ it's clear that the acceleration for a massless object is just the usual $a = GM/r^2$. That implies a photon will be deflected by Newtonian gravity, and you can use this result to calculate the deflection due to a massive object with the result: $$ \theta_{Newton} = \frac{2GM}{c^2r} $$ The calculation is described in detail in this paper . The relativistic calculation gives: $$ \theta_{GR} = \frac{4GM}{c^2r} $$ The point of Eddington's 1919 expedition was not to show that light was bent when no bending was expected, but rather to show that the bending was twice as great as expected.
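Plugging the Sun's parameters into the two formulas reproduces the classic numbers Eddington's expedition was testing (about 0.87 arcseconds Newtonian versus 1.75 arcseconds from GR, for light grazing the solar limb):

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m/s
M = 1.989e30    # kg, solar mass
R = 6.957e8     # m, solar radius (closest approach for grazing light)

rad_to_arcsec = 180.0 / math.pi * 3600.0

theta_newton = 2 * G * M / (c**2 * R) * rad_to_arcsec
theta_gr = 2 * theta_newton

print(f"Newtonian deflection: {theta_newton:.2f} arcsec")  # ~0.87
print(f"GR deflection:        {theta_gr:.2f} arcsec")      # ~1.75
```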
{ "source": [ "https://physics.stackexchange.com/questions/130552", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/56925/" ] }
130,594
Have there been any experiments, or are there any references, demonstrating gravity between atoms? If so, what are the key experiments/papers? Or if not, what is the smallest thing that has actually experimentally been shown to be affected by gravity? I don't know of specific papers demonstrating gravity between larger objects, but I can vaguely remember learning about them in my classical physics class as an undergraduate. However, I have never heard of experiments demonstrating gravity at atomic or subatomic levels. I don't have a physics background so it's not obvious to me, so I'm just looking to see the actual research/evidence behind it, so I can start to try to imagine how gravity works at a quantum level.
Groups in Seattle, Colorado, and perhaps others managed to measure and verify Newton's inverse-square law at submillimeter distances comparable to 0.1 millimeters, see e.g. Sub-millimeter tests of the gravitational inverse-square law: A search for "large" extra dimensions : "Motivated by higher-dimensional theories that predict new effects, we tested the gravitational $\frac{1}{r^{2}}$ law at separations ranging down to 218 micrometers using a 10-fold symmetric torsion pendulum and a rotating 10-fold symmetric attractor. We improved previous short-range constraints by up to a factor of 1000 and find no deviations from Newtonian physics." This is a 14-year-old paper (with 600+ citations) and I think that these experiments were very hot at that time because the warped- and large-dimensions models in particle physics that may predict violations of Newton's law had been proposed in the preceding two years. But I believe that there's been some extra progress in the field. At that time, the very fine measurement down to 200 microns etc. allowed them to deduce something about the law of gravity down to 10 microns. These are extremely clever, fine mechanical experiments with torsion pendulums, rotating attractors, and resonances. The force they are able to see is really tiny. To see the gravitational force of a single atom is obviously too much to ask (so far?) – the objects whose gravity is seen in the existing experiments contain billions or trillions of atoms. Note that the (attractive) gravitational force between two electrons is about $4\times 10^{42}$ times weaker than the (repulsive) electrostatic one! Most of the research in quantum gravity has nothing whatever to do with proposals to modify Newton's laws at these distance scales. Indeed, gravity is the weakest force and it's so weak that for all routinely observable phenomena involving atoms, it can be safely neglected. 
The research in quantum gravity is dealing with much more extreme phenomena – like the evaporation of tiny black holes – that can't be seen in the lab. Plots and links to new papers are available over here (thanks, alemi).
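The "gravity is absurdly weak" claim is easy to verify from the constants: both forces fall off as $1/r^2$, so for two electrons the separation cancels and the ratio is a pure number, roughly $4\times 10^{42}$:

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k = 8.988e9      # Coulomb constant, N m^2 C^-2
m_e = 9.109e-31  # electron mass, kg
e = 1.602e-19    # elementary charge, C

# Both forces scale as 1/r^2, so the ratio is independent of separation:
ratio = (k * e**2) / (G * m_e**2)
print(f"electrostatic / gravitational, two electrons: {ratio:.2e}")  # ~4.2e42
```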
{ "source": [ "https://physics.stackexchange.com/questions/130594", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/16731/" ] }
130,688
I found the problem described in the attached picture on the internet. In the comment section there were two opposing solutions, so it made me wonder which of those would be the actual solution. Basically the question is the following. Assume we have two identical beakers, filled with the same amount of the same liquid, let's say water. In the left beaker a ping pong ball is attached to the bottom of the beaker with a string, and above the right beaker a steel ball of the same size (volume) as the ping pong ball is hung by a string, submerging the steel ball in the water as shown in the picture. If both beakers were put onto a scale, which side would tip? According to the internet, either of the following answers was believed to be the solution. (1) The left side would tip down, because the ping pong ball and the cord add mass to the left side, since they are actually connected to the system. (2) The right side would tip down, because the buoyancy of the water on the steel ball pushes the steel ball up and the scale down. Now what would the solution be according to physics?
Here is a free body diagram of the balls: … and one of the water volume: The four balance equations are $$ \begin{align} B_1 - T_1 - m_1 g & =0 \\ B_2 + T_2 - m_2 g & = 0 \\ F_1 + T_1 - B_1 - M g & = 0 \\ F_2 - B_2 - M g & = 0 \end{align} $$ where $\color{magenta}{B_1}$,$\color{magenta}{B_2}$ are the buoyancy forces, $\color{red}{T_1}$,$\color{red}{T_2}$ are the cord tensions and $M g$ is the weight of the water, $m_1 g$ the weight of the ping pong ball and $m_2 g$ the weight of the steel ball. Solving the above gives $$\begin{align} F_1 & = (M+m_1) g \\ F_2 & = M g + B_2 \\ T_1 & = B_1 - m_1 g \\ T_2 & = m_2 g - B_2 \end{align} $$ So it will tip to the right if the buoyancy of the steel ball $B_2$ is more than the weight of the ping pong ball $m_1 g$. $$\boxed{F_2-F_1 = B_2 - m_1 g > 0}$$ This is the same answer as @rodrigo but with diagrams and equations.
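The boxed result is easy to check numerically. The values below are illustrative assumptions (1 kg of water per beaker, a standard 2.7 g ping-pong ball, and a steel ball of the same 40 mm diameter), not numbers from the question:

```python
g = 9.81             # m s^-2
rho_w = 1000.0       # density of water, kg m^-3
V = 3.35e-5          # ball volume, m^3 (40 mm diameter sphere)
M = 1.0              # mass of water in each beaker, kg
m1 = 2.7e-3          # ping-pong ball mass, kg
m2 = 0.27            # steel ball mass, kg

B1 = B2 = rho_w * V * g      # buoyant force on each submerged ball
T1 = B1 - m1 * g             # left cord tension (holds ball down)
T2 = m2 * g - B2             # right cord tension (holds ball up)
F1 = (M + m1) * g            # reading on the left pan
F2 = M * g + B2              # reading on the right pan

print(F2 - F1, B2 - m1 * g)  # identical: the right side tips down
```

With any steel ball whose buoyant force exceeds the ping-pong ball's weight, $F_2 > F_1$, matching the boxed condition.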
{ "source": [ "https://physics.stackexchange.com/questions/130688", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/50465/" ] }
130,721
I have no background in physics but there is a question that has been bothering me, so I'm asking you. Are there at least 2 physical theories that are : Mathematically identical, which means that they would yield identical predictions for EVERY situation that these theories can cover, and therefore can not be compared through experimentation : the validity of one of them is equivalent to the validity of the other. Physically different, that is to say, based on different spatio-temporal-whatever realities, whose differences are not only semantic. If there are at least two theories that satisfy those requirements, it would mean that the "absolute", "metaphysical" reality can never be known. However, if we are capable of mathematically demonstrating that such theories can not mathematically exist, it would mean that absolute reality can be known. When I say "mathematically identical", I am not speaking of theories that can not be experimented on, due to technological constraints (like atomism at the time when this was still debated) but really of theories that can theoretically not be compared, even by a Laplace demon. Do you agree with my assumptions? If so, are there such theories and/or a demonstration that they can not exist?
Special relativity and Lorentz ether theory (LET). From the linked Wikipedia article: Because the same mathematical formalism occurs in both, it is not possible to distinguish between LET and SR by experiment. However, in LET the existence of an undetectable aether is assumed and the validity of the relativity principle seems to be only coincidental, which is one reason why SR is commonly preferred over LET.
{ "source": [ "https://physics.stackexchange.com/questions/130721", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/44346/" ] }
130,800
This is a question I've been asked several times by students and I tend to have a hard time phrasing it in terms they can understand. This is a natural question to ask and it is not usually well covered in textbooks, so I would like to know of various perspectives and explanations that I can use when teaching. The question comes up naturally in what is usually students' second course in quantum physics / quantum mechanics. At that stage one is fairly comfortable with the concept of wavefunctions and with the Schrödinger equation, and has had some limited exposure to operators. One common case, for example, is to explain that some operators commute and that this means the corresponding observables are 'compatible' and that there exists a mutual eigenbasis; the commutation relation is usually expressed as $[A,B]=0$ but no more is said about that object. This naturally leaves students wondering what is, exactly, the physical significance of the object $[A,B]$ itself? and this is not an easy question. I would like answers to address this directly, ideally at a variety of levels of abstraction and required background. Note also that I'm much more interested in the object $[A,B]$ itself than what the consequences and interpretations are when it is zero, as those are far easier and explored in much more depth in most resources. One reason this is a hard question (and that commutators are such confusing objects for students) is that they serve a variety of purposes, with only thin connecting threads between them (at least as seen from the bottom-up perspective). Commutation relations are usually expressed in the form $[A,B]=0$ even though, a priori , there appears to be little motivation for the introduction of such terminology. A lot of stock is placed behind the canonical commutation relation $[x,p]=i \hbar$, though it is not always clear what it means. 
(In my view, the fundamental principle that this encodes is essentially de Broglie's relation $\lambda=h/p$; this is made rigorous by the Stone-von Neumann uniqueness theorem but that's quite a bit to expect a student to grasp at a first go.) From this there is a natural extension to the Heisenberg Uncertainty Principle, which in its general form includes a commutator (and an anticommutator, to make things worse). Canonically-conjugate pairs of observables are often introduced, and this is often aided by observations on commutators. (On the other hand, the energy-time and angle-angular momentum conjugacy relations cannot be expressed in terms of commutators, making things even fuzzier.) Commutators are used very frequently, for example, when studying the angular momentum algebra of quantum mechanics. It is clear they play a big role in encoding symmetries in quantum mechanics but it is hardly made clear how and why, and particularly why the combination $AB-BA$ should be important for symmetry considerations. This becomes even more important in more rigorous treatments of quantum mechanics, where the specifics of the Hilbert space become less important and the algebra of observable operators takes centre stage. The commutator is the central operation of that algebra, but again it's not very clear why that combination should be special. An analogy is occasionally made to the Poisson brackets of hamiltonian mechanics, but this hardly helps - Poisson brackets are equally mysterious. This also ties the commutator in with time evolution, both on the classical side and via the Heisenberg equation of motion. I can't think of any more at the moment but there are a huge number of opposing directions which can make everything very confusing, and there is rarely a uniting thread. So: what are commutators, exactly, and why are they so important?
Self-adjoint operators enter QM, described in complex Hilbert spaces, in two logically distinct ways. This leads to a corresponding pair of meanings of the commutator. The former way is in common with the two other possible Hilbert space formulations (real and quaternionic one): Self-adjoint operators describe observables . Two observables can be compatible or incompatible , in the sense that they can or cannot be measured simultaneously (corresponding measurements disturb each other when looking at the outcomes). Up to some mathematical technicalities, the commutator is a measure of incompatibility , in view of the generalizations of Heisenberg principle you mention in your question. Roughly speaking, the more the commutator is different from $0$, the more the observables are mutually incompatible. (Think of inequalities like $\Delta A_\psi \Delta B_\psi \geq \frac{1}{2} |\langle \psi | [A,B] \psi\rangle|$. It prevents the existence of a common eigenvector $\psi$ of $A$ and $B$ - the observables are simultaneously defined - since such an eigenvector would verify $\Delta A_\psi =\Delta B_\psi =0$.) The other way self-adjoint operators enter the formalism of QM (here real and quaternionic versions differ from the complex case) regards the mathematical description of continuous symmetries. In fact, they appear to be generators of unitary groups representing (strongly continuous) physical transformations of the physical system. Such a continuous transformation is represented by a unitary one-parameter group $\mathbb R \ni a \mapsto U_a$. A celebrated theorem by Stone indeed establishes that $U_a = e^{iaA}$ for a unique self-adjoint operator $A$ and all reals $a$. This approach to describe continuous transformations leads to the quantum version of Noether theorem just in view of the (distinct!) fact that $A$ also is an observable .
The action of a symmetry group $U_a$ on an observable $B$ is made explicit by the well-known formula in Heisenberg picture: $$B_a := U^\dagger_a B U_a$$ For instance, if $U_a$ describes rotations of the angle $a$ around the $z$ axis, $B_a$ is the analog of the observable $B$ measured with physical instruments rotated of $a$ around $z$. The commutator here is a first-order evaluation of the action of the transformation on the observable $B$, since (again up to mathematical subtleties especially regarding domains): $$B_a = B -ia [A,B] +O(a^2) \:.$$ Usually, information encompassed in commutation relations is very deep. When dealing with Lie groups of symmetries, it permits to reconstruct the whole representation (there is a wonderful theory by Nelson on this fundamental topic) under some quite mild mathematical hypotheses. Therefore commutators play a crucial role in the analysis of symmetries.
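The first-order formula above is easy to verify numerically for finite-dimensional operators (a sketch using randomly generated Hermitian matrices; this example is not part of the original answer):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

def hermitian(n):
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (m + m.conj().T) / 2

A, B = hermitian(4), hermitian(4)
a = 1e-4                                   # small group parameter

U = expm(1j * a * A)                       # U_a = e^{iaA}
B_a = U.conj().T @ B @ U                   # Heisenberg-picture action
approx = B - 1j * a * (A @ B - B @ A)      # B - ia[A,B]

# The residual is O(a^2), far smaller than the O(a) change in B itself.
print(np.abs(B_a - approx).max())
```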
{ "source": [ "https://physics.stackexchange.com/questions/130800", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/8563/" ] }
130,918
I've seen in a documentary that when a star collapses and becomes a black hole, it starts to eat the planets around it. But it has the same mass, so how does its gravitational field strength increase?
Actually, it doesn't have the same mass, it has significantly less mass than its precursor star. Something like 90% of the star is blown off in the supernova event (Type II) that causes the black holes. The Schwarzschild radius is the radius at which, if an object's mass were compressed to a sphere of that size, the escape velocity at the surface would be the speed of light $c$; this is given by $$ r_s=\frac{2Gm}{c^2} $$ For a 3-solar mass black hole, this amounts to about 10 km. If we measure the gravitational acceleration from this point, $$ g_{BH}=\frac{Gm_{BH}}{r_s^2}\simeq10^{13}\,{\rm m/s^2} $$ and compare this to the acceleration due to the precursor 20 solar mass star with radius of $r_\star=5R_\odot\simeq3.5\times10^9$ m, we have $$ g_{M_\star}=\frac{Gm_\star}{r_\star^2}\simeq2\times10^2\,{\rm m/s^2} $$ Note that this is the acceleration due to gravity at the surface of the object, and not at some distance away. If we measure the gravitational acceleration of the smaller black hole at the distance of the original star's radius, you'll find it is a lot smaller (by a factor of about 7).
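Plugging in standard values reproduces these estimates (a quick sketch assuming $M_\odot \approx 1.989\times10^{30}$ kg and $R_\odot \approx 6.96\times10^8$ m):

```python
G = 6.674e-11            # m^3 kg^-1 s^-2
c = 2.998e8              # m s^-1
M_sun = 1.989e30         # kg
R_sun = 6.96e8           # m

M_bh = 3 * M_sun
r_s = 2 * G * M_bh / c**2        # Schwarzschild radius, ~8.9 km
g_bh = G * M_bh / r_s**2         # ~5e12 m/s^2 at r_s

M_star = 20 * M_sun
r_star = 5 * R_sun               # precursor star radius, ~3.5e9 m
g_star = G * M_star / r_star**2  # ~2e2 m/s^2 at the surface

# Black hole's pull at the old stellar radius: weaker by M_bh/M_star = 3/20
print(r_s / 1e3, g_bh, g_star, (G * M_bh / r_star**2) / g_star)
```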
{ "source": [ "https://physics.stackexchange.com/questions/130918", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57243/" ] }
131,170
On this site , change in entropy is defined as the amount of energy dispersed divided by the absolute temperature. But I want to know: What is the definition of entropy? Here , entropy is defined as average heat capacity averaged over the specific temperature. But I couldn't understand that definition of entropy: $\Delta S$ = $S_\textrm{final} - S_\textrm{initial}$. What is entropy initially (is there any dispersal of energy initially)? Please give the definition of entropy and not its change . To clarify, I'm interested in the definition of entropy in terms of temperature, not in terms of microstates, but would appreciate explanation from both perspectives.
There are two definitions of entropy, which physicists believe to be the same (modulo the dimensional Boltzmann scaling constant) and a postulate of their sameness has so far yielded agreement between what is theoretically foretold and what is experimentally observed. There are theoretical grounds, namely most of the subject of statistical mechanics, for our believing them to be the same, but ultimately their sameness is an experimental observation. (Boltzmann / Shannon): Given a thermodynamic system with a known macrostate, the entropy is the size of the document, in bits, you would need to write down to specify the system's full quantum state. Otherwise put, it is proportional to the logarithm of the number of full quantum states that could prevail and be consistent with the observed macrostate. Yet another version: it is the (negative) conditional Shannon entropy (information content) of the maximum likelihood probability distribution of the system's microstate conditioned on the knowledge of the prevailing macrostate; (Clausius / Carnot): Let a quantity $\delta Q$ of heat be input to a system at temperature $T$. Then the system's entropy change is $\frac{\delta Q}{T}$. This definition requires background, not the least what we mean by temperature ; the well-definedness of entropy ( i.e. that it is a function of state alone so that changes are independent of path between endpoint states) follows from the definition of temperature, which is made meaningful by the following steps in reasoning: (see my answer here for details ). (1) Carnot's theorem shows that all reversible heat engines working between the same two hot and cold reservoirs must work at the same efficiency, for an assertion otherwise leads to a contradiction of the postulate that heat cannot flow spontaneously from the cold to the hot reservoir.
(2) Given this universality of reversible engines, we have a way to compare reservoirs: we take a "standard reservoir" and call its temperature unity, by definition. If we have a hotter reservoir, such that a reversible heat engine operating between the two yields $T$ units of work for every 1 unit of heat it dumps to the standard reservoir, then we call its temperature $T$. If we have a colder reservoir and do the same (using the standard as the hot reservoir) and find that the engine yields $T$ units of work for every 1 dumped, we call its temperature $T^{-1}$. It follows from these definitions alone that the quantity $\frac{\delta Q}{T}$ is an exact differential because $\int_a^b \frac{d\,Q}{T}$ between positions $a$ and $b$ in phase space must be independent of path (otherwise one can violate the second law). So we have this new function of state "entropy" defined to increase by the exact differential $\mathrm{d} S = \delta Q / T$ when the system reversibly absorbs heat $\delta Q$. As stated at the outset, it is an experimental observation that these two definitions are the same; we do need a dimensional scaling constant to apply to the quantity in definition 2 to make the two match, because the quantity in definition 2 depends on what reservoir we take to be the "standard". This scaling constant is the Boltzmann constant $k$. When people postulate that heat flows and allowable system evolutions are governed by probabilistic mechanisms and that a system's evolution is its maximum likelihood one, i.e. when one studies statistical mechanics, the equations of classical thermodynamics are reproduced with the right interpretation of statistical parameters in terms of thermodynamic state variables.
For instance, by a simple maximum likelihood argument, justified by the issues discussed in my post here one can demonstrate that an ensemble of particles with allowed energy states $E_i$ of degeneracy $g_i$ at equilibrium (maximum likelihood distribution) has the probability distribution $p_i = \mathcal{Z}^{-1}\, g_i\,\exp(-\beta\,E_i)$ where $\mathcal{Z} = \sum\limits_j g_j\,\exp(-\beta\,E_j)$, where $\beta$ is a Lagrange multiplier. The Shannon entropy of this distribution is then: $$S = \frac{1}{\mathcal{Z}(\beta)}\,\sum\limits_i \left((\log\mathcal{Z}(\beta) + \beta\,E_i-\log g_i )\,g_i\,\exp(-\beta\,E_i)\right)\tag{1}$$ with heat energy per particle: $$Q = \frac{1}{\mathcal{Z}(\beta)}\,\sum\limits_i \left(E_i\,g_i\,\exp(-\beta\,E_i)\right)\tag{2}$$ and: $$\mathcal{Z}(\beta) = \sum\limits_j g_j\,\exp(-\beta\,E_j)\tag{3}$$ Now add a quantity of heat to the system so that the heat per particle rises by $\mathrm{d}Q$ and let the system settle to equilibrium again; from (2) and (3) solve for the change $\mathrm{d}\beta$ in $\beta$ needed to do this and substitute into (1) to find the entropy change arising from this heat addition. It is found that: $$\mathrm{d} S = \beta\,\mathrm{d} Q\tag{4}$$ and so we match the two definitions of entropy if we postulate that the temperature is given by $T = \beta^{-1}$ (modulo the Boltzmann constant). Lastly, it is good to note that there is still considerable room for ambiguity in definition 1 above aside from simple cases, e.g. an ensemble of quantum harmonic oscillators, where the quantum states are manifestly discrete and easy to calculate. Often we are forced to continuum approximations, and one then has freedom to define the coarse graining size, i.e.
the size of the discretizing volume in continuous phase space that distinguishes truly different microstates, or one must be content to deal with only relative entropies in truly continuous probability distribution models. Therefore, in statistical mechanical analyses one looks for results that are weakly dependent on the exact coarse graining volume used.
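Relation (4) can be checked numerically for a small discrete system. The sketch below uses arbitrary made-up levels and degeneracies, works in units where $k = 1$, and computes the Shannon entropy by counting each degenerate microstate separately:

```python
import numpy as np

E = np.array([0.0, 1.0, 2.5])        # level energies (arbitrary units)
g = np.array([1, 2, 1])              # level degeneracies

def Z(b):                            # partition function, eq. (3)
    return np.sum(g * np.exp(-b * E))

def Q(b):                            # mean energy per particle, eq. (2)
    return np.sum(E * g * np.exp(-b * E)) / Z(b)

def S(b):                            # Shannon entropy over microstates
    p = np.repeat(np.exp(-b * E) / Z(b), g)
    return -np.sum(p * np.log(p))

beta, db = 0.7, 1e-6
dS = S(beta + db) - S(beta - db)     # central finite differences
dQ = Q(beta + db) - Q(beta - db)
print(dS / dQ)                       # ~0.7, i.e. equal to beta
```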
{ "source": [ "https://physics.stackexchange.com/questions/131170", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
131,457
To find the Higgs boson, we had to build the biggest machine mankind has ever built: the LHC with a collision energy of up to 14 TeV. Inside the sun there is a huge pressure and temperature, but is the energy density high enough for Higgs bosons to be created?
You probably know that the mass of the Higgs boson is around $125$ GeV, which means the energy it takes to create a Higgs boson is around $125$ GeV and therefore that the temperature at which significant numbers of Higgs bosons will be created will be given by $kT = 125$ GeV. One GeV is $1.602 \times 10^{-10}$J, so the corresponding temperature is around $1.5 \times 10^{15}$K - note that this is an order of magnitude estimate. Anyhow, the temperature at the centre of the Sun is around $10^7$ K, so it's eight orders of magnitude too low to create significant numbers of Higgs bosons. Even a supernova only gets to a temperature of about $10^{11}$K , which is still four orders of magnitude too low.
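Converting $kT = 125$ GeV to kelvin directly (a one-line sketch using $k_B \approx 1.381\times10^{-23}$ J/K):

```python
k_B = 1.381e-23          # Boltzmann constant, J/K
GeV = 1.602e-10          # one GeV in joules

T_higgs = 125 * GeV / k_B    # ~1.5e15 K
# Compare against the solar core (~1e7 K) and a supernova (~1e11 K):
print(T_higgs, T_higgs / 1e7, T_higgs / 1e11)
```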
{ "source": [ "https://physics.stackexchange.com/questions/131457", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/50559/" ] }
131,487
A standard phrase in popular discussions of the Higgs boson is that "it gives particles mass". To what extent is this a reasonable, pop-science, level of description of the Higgs boson and its relationship to particles' masses? Is this phrasing completely misleading? If not, what would be the next level down in detail to try to explain to someone?
The Higgs field (note it is the field that is important here, not the Higgs boson itself, which is just a ripple in the Higgs field) gives particles mass in the same sense that the strong force gives the proton mass (context: $99\%$ of the mass of the proton comes not from the mass of its constituent quarks, but from the fact that roughly speaking the quarks have a large amount of kinetic energy but are bound by the strong force). If any force confines energy into a small amount of space, then that bound energy has a mass given by $E=mc^2$. This is what the Higgs field does: it binds a massless particle into a small space, and therefore by $E=mc^2$ (and the fact that the particle now has a frame of reference in which it is stationary) that particle has an effective rest mass. To get an intuitive feeling for what's going on, as an exercise you can derive $E=mc^2$ by considering a photon confined by a mirror box. The photon is bouncing back and forth exerting pressure on the mirror, and if you try to push the box it will have inertia due to the photon exerting more pressure on the front of the mirror than the back. If you work it out you will find that the mirror box has an effective inertial mass of $m=E/c^2$. The Higgs field provides a force that acts like this mirror box, thereby "giving" mass to the particle inside it.
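For the mirror-box exercise, here is a heuristic first-order sketch (assumptions: the box moves at $v \ll c$, Doppler shifts are kept only to first order in $v/c$, and the light's time is split evenly between the two directions):

```latex
% In the lab frame, the light moving with the box is blue-shifted
% and the light moving against it is red-shifted:
E_\pm = E\left(1 \pm \frac{v}{c}\right)
% A photon's momentum is its energy over c, so the net radiation
% momentum carried along with the box is
p = \frac{1}{2}\left(\frac{E_+}{c} - \frac{E_-}{c}\right)
  = \frac{E\,v}{c^{2}} = m v ,
\qquad m = \frac{E}{c^{2}} .
```

So the trapped light contributes an inertia $E/c^2$ over and above the mirrors' own mass, which is the content of $E = mc^2$ for this confined system.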
{ "source": [ "https://physics.stackexchange.com/questions/131487", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/10635/" ] }
131,738
Well, let's start off with the fact that I'm not a physicist, but I'd like some thoughts on something I came across in my hometown. This guy: Is it possible that, due to the electrical charge of magnets, this guy can create the illusion that he can float? Or is this probably a cheap trick that fools the eye? I was standing there for quite some time watching the guy and he kept moving his feet. The resistance that he appeared to have seemed to come from a magnetic force keeping him afloat. So after I passed this guy I did some physics searches on the web and the first thing that caught my eye was the electrical charge of magnets. So the question is: Is this related to the electrical charge of a magnet, or is it a cheap trick?
The "trick" is that the cane he is apparently holding is actually firmly attached to the platform. A rigid piece goes up his sleeve, then to a harness that holds his whole body up. For more about this type of magic trick device, google "broom suspension" or "aerial suspension harness". No electric or magnetic fields were abused here. Image Credit: TwentyTwoWords
{ "source": [ "https://physics.stackexchange.com/questions/131738", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57584/" ] }
132,114
In many anime, comics, movies, etc., we see a lot of superhuman beings moving and fighting at such high speeds that a regular human being cannot see that they are fighting or moving past. In particular, in the first battle between Goku and Piccolo, those two are able to fight in a confined environment (when Piccolo first appears in the comics, stage fight, if I remember correctly), and regular audiences are unable to see them when they battle. Is it physically possible for a "human" (human-size object) to move in a confined environment (let's say, $20~\text{m} \times 20~\text{m} \times 10~\text{m}$) so fast that a normal person would be unable to see it? If it is possible, how fast must this "person" be? Assume that this "human" cannot move faster than, or even close to, the speed of light. Some clarifications: First, please ignore the strength/physicality of the object, and consider it an object that can move "freely" in this confined space without causing any "side effects" (such as heat, sounds, etc.). Second, please note that the object is moving in a confined, rather small 3D space as mentioned in the question, and the observer would always be able to see the entire space. And last, a blurred image would be considered as "able to see".
Let me start by clarifying that I assume the question is whether a superhuman or any object of human size can render itself invisible through speed alone. And that the speed of said object must be $v\ll c$. From this, I assume that the object or person being viewed must spend a reasonably long amount of time within the observer's field of view such that invisibility can't be cheated through exploiting blinks or by having too short an exposure time or by hiding behind the observer most of the time. In short, no, it is not possible for something to move fast enough that a normal human would cease to see it entirely. Firstly, at any velocity, the amount of light that is redirected from the moving object to the observer is reasonably constant. At relativistic velocities, length contraction of the moving object in the observer's frame could reduce the size and, thus, visibility of the object; however, the previously mentioned speed restriction allows us to ignore any contraction of this magnitude. Additionally, the human eye-brain system processes input continuously and uses as-yet not completely understood processes for extrapolating and interpolating motion. In fact, even when presented with rapid images of the same object at different locations, the brain has the ability to assume motion and perceives these images as being continuously linked; something referred to as stroboscopic motion or illusory motion of stroboscopic images. Since the objects moving quickly still have light shining on them, the light that reflects towards the observer would still be captured and processed. However, since image processing is a complex task and due to something called persistence of vision, the images we perceive can include not only the light that recently entered your eyes, but also parts of previously perceived images. 
Therefore, at higher speeds, the image an observer gets would probably be blurry as it is an amalgamation of the moving object's position at multiple times as well as an effect of small eye movements; this is sometimes called motion blur. However, while it is not possible to disappear entirely, there are other and cooler things that are possible. Because of the processing speed of the eye-brain system and the illusory effects that can be created from the brain's extrapolation of data, it is entirely possible for an object to move fast enough that it does not appear to move at all. If Goku and Piccolo fight each other at an extremely high speed, but they make sure to periodically spend most of every, say, second in one specific position, then the images your eye captures will be mostly of them in those specific positions. As a result, they will appear unmoving to an observer (except, of course, for mysterious cuts and bruises showing up and their edges may appear slightly more blurred). Additionally, by spending an equal amount of time in two different locations (and with the help of perfect timing) it can appear as if there are two identical and slightly faded/blurry copies of the moving object. This effect works for creating large numbers of copies and can also be used to make the image appear to "jump" from place to place. Some have argued that an extremely fast moving object can blur so much as to be indistinguishable from the background. This may be true, however it is more dependent on the contrast between the background and the object than on purely its speed. Once camouflage and things of that sort are ruled out, there is simply no non-relativistic speed where a human-sized object in a confined space can become invisible to an ideal, normal human observer for an extended period of time.
Let me further add that at relativistic speeds, it's possible for the light reflecting off the moving object to be red or blue-shifted outside of the visible spectrum (although, in the case of blue-shifting, the object's own heat, ie IR emissions, would be shifted into the visible range). This could make the object invisible (and would be really cool to see happening), but as the question requires speeds "[not] even close to the speed of light", that rules out the possibility in this case. Additionally, since it is in a confined environment and the red-shifting effect only works for when the object is travelling away from the observer, the invisibility effect would not last long enough to be considered sustainably invisible.
{ "source": [ "https://physics.stackexchange.com/questions/132114", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/53021/" ] }
132,123
Let's say we have the following wavefunction of two identical particles, $A$ and $B$: $$\frac{1}{2}[(\chi(A)\psi(B)\pm\psi(A)\chi(B))+(\phi(A)\eta(B)\pm\eta(A)\phi(B))]$$ Is this properly (anti)symmetric? i.e. can it be put in the following form? $$\frac{1}{\sqrt2}(f_1(A)f_2(B)\pm f_2(A)f_1(B))$$
{ "source": [ "https://physics.stackexchange.com/questions/132123", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/19857/" ] }
132,566
I've been trying to design a list with reasons why a proper theoretical physicist should understand the methods and the difficulty of doing experimental physics . So far I've only thought of two points: Know how a theory can or cannot be verified; Be able to read papers based on experimental data; But that's pretty much what I can think of. Don't get me wrong, I think experimental physics is very hard to work on and I'm not trying to diminish it with my ridiculously short list. I truly can't think of any other reason. Can somebody help me?
As a theorist, one likes to invent new ideas of how things might work. One crucial component to theory-building is searching the connection to experiments: A theory is physically meaningless when we cannot test it, for then it cannot be falsified. A theorist should be able to come up with experimental tests for his theories . This requires a good understanding of what experimentalists are (not) capable of. The perfect example here is Einstein (isn't he always?), who came up with a number of experimentally testable predictions of his theory of general relativity (those for special relativity were quite obvious, so he didn't have to work too hard on that). The most famous of these is the prediction of the correct deflection of light, confirmed by Eddington and a few others during a solar eclipse. A notoriously bad example in this aspect is string theory. It has thus far turned out impossible to come up with a way to test string theory, and this is regarded by many as a serious problem (although it may not have to do with the theorists' lack of understanding of experimental physics).
{ "source": [ "https://physics.stackexchange.com/questions/132566", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/12958/" ] }
132,654
Upon reading my book on physics, it mentions that there are only two discovered types of electric charges. I wonder if there could be a third type of elusive charge, and what type of effects could it have upon matter or similarly?
No, there are only positive and negative charges. Or, more carefully stated, if there is another type of charge, then electromagnetism is not what we currently think it is. 1 Electromagnetism is a $\mathrm{U}(1)$-gauge theory, which relies on introducing the covariant derivative $$ D_\mu = \partial_\mu - \frac{e}{\hbar}A_\mu$$ acting upon matter fields in representations of the $\mathrm{U}(1)$ labeled by $e$, where $A_\mu$ corresponds to the four-vector potential of electrodynamics. There is no possibility for matter fields to gain any other kind of charge here, since all representations of the circle group decompose into these one-dimensional representations of charge $e$, so charge is simply an integer $e \in \mathbb{Z}$. (The $\mathbb{Z}$ and not $\mathbb{R}$ comes from the fact that $\mathrm{U}(1)$ is compact.) If there were other charges, we would need another (non-abelian, Lie) gauge group $G$ with some $\mathrm{Lie}(G)$-valued "potential" $A$ and a covariant derivative looking like $$ D_\mu = \partial_\mu - \frac{g}{\hbar}\rho(A_\mu)$$ where now $\rho$ is some (irreducible) representation of $G$ and $g \in \mathbb{R}$ is called the coupling constant. The charges lie within the representations and are usually thought of as the (root of the) eigenvalue of the quadratic Casimir operator in that representation.
Since $\mathrm{U}(1)$ has only one generator, its Casimir is simply that generator (squared), and we reconcile this with the above by observing that the representations of the circle group are indeed given by sending its generator to its $e$-multiple as per $$ \rho_e : \mathrm{U}(1) \to \mathrm{GL}(\mathbb{R}) \cong \mathbb{R}, \mathrm{i} \mapsto e\mathrm{i} \text{ with } e \in \mathbb{Z} $$ Note on QCD (where the idea of "other electric charges" probably came from): The specific occurrence of things like "colors" is not quite compatible with this language, as one usually identifies each dimension of a non-trivial representation with a color, but since irreducible representations have no proper subrepresentations, a gauge transformation will change the colors around (it won't change the quadratic Casimirs, which is why they are the proper generalization of charge, and not the colors). Nevertheless, also under this idea of charge, $\mathrm{U}(1)$ theories have only positive/negative charges, as their irreps are one-dimensional. 1 Looking at the real world, we know that electromagnetism must be a $\mathrm{U}(1)$ theory, since photons do not interact easily - they do not couple to one another on the tree level of the quantum theory, and thus two laser beams do not significantly scatter off each other. In non-Abelian theories, the force carriers (gluons) do interact on the tree level, and would thus deliver a wholly different force, more like the strong force, not long-range, and gluon beams would either not exist, or be very weird things. (Though the details would probably be tricky for arbitrary $G$, granted, and could produce other weirdness as well.)
{ "source": [ "https://physics.stackexchange.com/questions/132654", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/51988/" ] }
132,719
Every Action Has An Equal and Opposite Reaction (Newton's Third Law.) If this is the case, does gravity have an equal-opposing force? From asking around I still haven't got a very clear answer; those who I've talked to seem to believe there isn't one - that gravity is actually a singularity [a one way force] which somehow "just works", others think it differently - believing there is an opposing force of which prevents gravity from compressing masses more than it already does. So which one is the right answer? (if either!)
Yes, every gravitational force in Newtonian mechanics has an equal and opposing force, and it usually acts on other mass. More specifically, every pair of masses feels a gravitational force that's proportional to the product of their masses and inversely proportional to the square of their relative distance, but more important is the fact that both masses feel the attraction to each other. Thus, when you throw a ball of ~100 g in the air, it experiences a gravitational force of 1 N downwards, and in doing so it exerts a force of 1 N upwards on the Earth. The reason you don't observe the Earth moving is that its acceleration is so small (on the order of $10^{-25}\ \mathrm{m\,s^{-2}}$) that it gets swamped in everything else, but it does happen. Now, it's important to note that gravity is not usually the only force acting on any object at a given time. If it is, then the total force will be nonzero and the object will accelerate (as per Newton's Second Law). Conversely, if an object is not accelerating, then the net force on it is zero, and there must be additional forces that cancel out the gravitational one. For a book lying on a table, for example, the weight is cancelled by the upwards reaction force from the table. (And, of course, this gives an added reaction force downwards from the book on the table, which gets cancelled by a correspondingly larger reaction force from the floor on the table.)
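As a quick order-of-magnitude check of the figures quoted above, here is a short sketch (the Earth mass and surface gravity are standard textbook round values, not taken from the answer):

```python
# Newton's third law: the ball pulls the Earth with the same force
# the Earth exerts on the ball.
M_EARTH = 5.97e24   # kg, mass of the Earth (standard round value)
g = 9.8             # m/s^2, surface gravity

m_ball = 0.1        # kg, the ~100 g ball from the example
F = m_ball * g      # force on the ball (and, by the third law, on the Earth)

a_earth = F / M_EARTH   # Earth's acceleration toward the ball

print(f"Force on each body: {F:.2f} N")
print(f"Earth's acceleration: {a_earth:.1e} m/s^2")
```

The printed acceleration comes out around $1.6\times10^{-25}\ \mathrm{m\,s^{-2}}$, consistent with the order of magnitude quoted above.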
{ "source": [ "https://physics.stackexchange.com/questions/132719", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57824/" ] }
132,720
This will probably be considered very simple, but I am just a beginner: I'm developing a software application where temperatures need to be added and subtracted. Some temperatures are in Celsius, some in Kelvin. I know how to convert to/from Kelvin (273.15), but how should one go about adding and subtracting these? Should everything be converted to Celsius first? For example: 0°C + 0°C = 0°C 0°C + 500°C = 500°C But: 0°C + 273.15K = ? If we put everything in Kelvin, we get: 273.15K + 273.15K = 546.3K If we put everything in Celsius, we get: 0°C + 0°C = 0°C But obviously, 546.3K isn't equals to 0°C. Now, you might say I can't add temperature to temperatures (but should be adding energy or something? not sure). But the reason I'm doing this is because we need to interpolate. I have a collection of key-value-pairs, like this: 973K -> 0.0025 1073K -> 0.0042 1173K -> 0.03 1273K -> 0.03 Now I need to get the value for 828°C. So I need to interpolate, which means adding/subtracting values. I hope I'm making sense.
You may always add the numbers in front of the units, and if the units are the same, one could argue that the addition satisfies the rules of dimensional analysis. However, it still doesn't imply that it's meaningful to sum the temperatures. In other words, it doesn't mean that these sums of numbers have natural physical interpretations. If one adds them, he should add the absolute temperatures (in kelvins) because in that case, one is basically adding "energies per degree of freedom", and it makes sense to add energies. Adding numbers in front of "Celsius degrees", i.e. non-absolute temperatures, is physically meaningless, unless one is computing an average of a sort. This is a point that famously drove Richard Feynman up the wall. Read Judging books by their covers and search for "temperature". He was really mad about a textbook that wanted to force children to add numbers by asking them to calculate the "total temperature", a physically meaningless concept. It only makes sense to add figures with the units of "Celsius degrees" if these quantities are interpreted as temperature differences, not temperatures. As a unit of temperature difference, one Celsius degree is exactly the same thing as one kelvin. If you interpolate or extrapolate a function of the temperature, $f(T)$, you do it as you would do it for any other function, ignoring the information that the independent variable is the temperature. Results of the simplest extrapolation/interpolation techniques won't depend on the units of temperature you used.
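A minimal sketch of the interpolation described above, using the table from the question; the helper names (`c_to_k`, `interp`) are illustrative, not from any particular library:

```python
def c_to_k(t_c):
    """Convert a temperature (not a temperature difference!) from Celsius to kelvin."""
    return t_c + 273.15

def interp(x, xs, ys):
    """Plain linear interpolation; xs must be sorted ascending."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside table range")

temps_k = [973, 1073, 1173, 1273]
values  = [0.0025, 0.0042, 0.03, 0.03]

# Convert the query to the table's unit before interpolating:
v = interp(c_to_k(828), temps_k, values)

# Shifting the whole axis to Celsius changes nothing, because linear
# interpolation only ever uses temperature *differences*:
temps_c = [t - 273.15 for t in temps_k]
assert abs(v - interp(828, temps_c, values)) < 1e-12
print(f"{v:.5f}")
```

Either way the query point 828 °C = 1101.15 K lands between the 1073 K and 1173 K rows, and both versions return the same interpolated value.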
{ "source": [ "https://physics.stackexchange.com/questions/132720", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/58009/" ] }
132,754
My questions mostly concern the history of physics. Who found the formula for kinetic energy $E_k =\frac{1}{2}mv^{2}$ and how was this formula actually discovered? I've recently watched Leonard Susskind's lecture where he proves that if you define kinetic and potential energy in this way, then you can show that the total energy is conserved. But that makes me wonder how anyone came to define kinetic energy in that way. My guess is that someone thought along the following lines: Energy is conserved, in the sense that when you lift something up you've done work, but when you let it go back down you're basically back where you started. So it seems that my work and the work of gravity just traded off. But how do I make the concept mathematically rigorous? I suppose I need functions $U$ and $V$, so that the total energy is their sum $E=U+V$, and the time derivative is always zero, $\frac{dE}{dt}=0$. But where do I go from here? How do I leap to either a) $U=\frac{1}{2}mv^{2}$ or b) $F=-\frac{dV}{dx}$? It seems to me that if you could get to either (a) or (b), then the rest is just algebra, but I do not see how to get to either of these without being told by a physics professor.
Newton's second law

As you probably know, Newton thought that energy is linearly proportional to velocity: the Latin terms vis [force] and potentia [potence, power] were used at that time to refer to what today is called energy. The second law's original formulation reads: "Mutationem motus proportionalem esse vi motrici impressae" = "any change of motion (velocity) is proportional to the motive force impressed". This law, which nowadays is wrongly interpreted as $F = ma$ (there is no reference to mass here), simply states: $$[\Delta/\delta v](v_1-v_0) \propto Vis_m$$ and in modern terms is sometimes (illegitimately) also interpreted as impulse, sort of: $$\Delta v \propto J [/m] \rightarrow \Delta p = J$$ But mass is not mentioned at all in the second law (as the original text shows), only in the second definition, where we can see a definition of momentum as 'the measure of [quantity of] motion': Quantitas motus est mensura ejusdem (motus) orta ex velocitate et quantitate materiæ conjunctim = 'quantity of motion' (modern 'momentum') is the measure of the same (motion), originated conjointly by velocity and 'quantity of matter' (total mass). Moreover, 'motive force' (vis motrix) is used, as by all other scholars of the time, to refer to the yet unknown kinetic 'force' that made bodies move, which Galileo had called 'impeto' and Leibniz 'motive power'. The interpretation of this formula as the definition of force in modern usage is an ex post facto historical manipulation, done against the author's own will: he knew about this interpretation proposed by Hermann and refused to adopt it in the final edition.

The historical facts

It was Gottfried Leibniz, as early as 1686 (one year before the publication of the Principia), who first affirmed that kinetic energy is proportional to squared velocity, or that velocity is proportional to the square root of energy: $$ v \propto \sqrt{V_{viva}}$$
He called it, a few years later, vis viva = 'a-live/living' force, in contrast with vis mortua = 'dead' force: Cartesian momentum ([mass/weight =] size * speed: $m*|v|$). This was accompanied by a first formulation of the principle of conservation of kinetic energy, as he noticed that in many mechanical systems of several masses $m_i$, each with velocity $v_i$, $\sum_{i} m_i v_i^2$ was conserved so long as the masses did not interact. The principle represents an accurate statement of the approximate conservation of kinetic energy in situations where there is no friction, or in elastic collisions. Many physicists at that time held that the conservation of momentum, which holds even in systems with friction, as defined by the momentum $\,\!\sum_{i} m_i v_i$, was the conserved kinetic energy. The concept of PE played no role; it did not exist yet, nor did the concept of mechanical energy to which you refer (E = U + V), but Leibniz, in this first paper, uses the term potentia motrix/viva [motive power] to refer both to the energy a body acquires falling from an altitude and to the force necessary to lift it to the same altitude (mass/weight * space: $F*s$), which are considered equal. Some scholars wrongly see here a first definition of PE, but that is simply one of the axioms of Galileo. The principle to which you refer, $E_{mech} [KE + PE] = k$, is in astrodynamics called the vis viva equation in his honour. Leibniz stated the conservation of KE per se besides the conservation of all (kinds of) energy in the whole universe. We need to underline this amazing stroke of genius. His theory was strongly opposed by Newtonians and Cartesians because it seemed to be incompatible with the conservation of momentum.
In Newton there was no distinction (as shown above) between speed, motion, momentum and energy, but quantitas motus (momentum) was the prevailing concept, and it was proven to be conserved in all situations; therefore Leibniz's vis viva was considered a threat to the whole system. Only later was it acknowledged that both energy and momentum, being different entities, could be conserved (by Bošković and later (1748) by d'Alembert). We can thank Émilie du Châtelet for the modern..understanding of kinetic energy – user121330 There is no energy formula ..in the discovery of conservation of energy are Joule and... – Ben Crowell That's overlooking historical facts (Joule was not concerned with KE): soon after Leibniz's death, the quadratic relation was confirmed by experiments independently by the Italian Poleni in 1719 and the Dutch Gravesande in 1722, who dropped balls from varying heights onto soft clay and found that balls with twice the speed produced an indentation four times deeper. The latter informed Mme du Châtelet of his results and she publicized them. Two centuries later, after Joule had shown that mechanical work can be transformed into heat, Helmholtz suggested that the lost energy in inelastic collisions might have been transformed into heat. Thomas Young is thought to have been the first to substitute the terms 'vis viva/potentia motrix' with 'energy' in 1807 (from the Greek word ἐνέργεια energeia, which had been coined by Aristotle on the stem of ergon = work, therefore: energeia [= the-state-of-being-at-work]). Later (1824-1829) Coriolis introduced the current formula and the terms 'work' and 'semi-vis viva'; this concept and the consequent theory of conservation of energy were eventually formalized by Lord Kelvin, Rankine et al. in the field of thermodynamics.
The formula of kinetic energy

The question is much more complex than it appears, as there are at least four formulas involved here, and each issue is complex in its turn:

- how, when and by whom the formula for the second law of motion $F=m*a$ was introduced
- how the formula of kinetic energy $V_{viva} = [m]*v^2$ was found by Leibniz
- how, when and by whom the current Newtonian formula of kinetic energy $E_k = [m]*\frac{v^2}{2}$ was introduced
- how, when and by whom the formula for work $W = F*d$ was introduced

I did not want to make this post too long, but I'll take the suggestion from the bounty and address the issues in separate answers. Just a brief note here to make this post self-contained: the formula of KE was not derived from work, as it may seem: it's the other way round. $W = F * d$ and $F = m * a$ were by-products of the KE formula. Once the quadratic relation $E \propto v^2$ had been verified and universally accepted, any coefficient (0.2, 0.5, 2..) could be added as an irrelevant and arbitrary choice that depended only on the choice of units. The only available (and precisely measurable) source of KE at the time was gravity, and the Galilean equations were too strong a temptation, as they, too, included a [0.5] quadratic relation: it seemed a stroke of genius to make the energy of the unitary mass at unitary (uniform) acceleration coincide with space. In this way energy was simply the integration of [m] $g$ on space.

Conclusions

Tying energy to gravity, that is, to acceleration and in particular to constant acceleration, was not a wise idea; it was a gross mistake that tied and confined Newtonian mechanics in a strait-jacket, because it was in this way unable to deal with the more natural situations when KE is related to velocity and when there is just a transfer of energy: the concept of impulse was just an ad hoc, awkward attempt to deal with that.
Tying work-energy to space and not to the mere transfer of energy was an insane decision that had irrational, catastrophic practical consequences. But the consequences were even more devastating on the conceptual, theoretical level, because explaining and identifying KE with acceleration gave the illusion that the issue of motion-KE had been understood, and prevented further speculation. Leibniz invented the concept of (kinetic) energy, prefigured and discovered its real formula $E = v^2$ resisting the Siren of gravity, suggested the right way of integration, and established the universal principle of 'conservation of energy' as prevailing over/independent from 'conservation of momentum' (transcending Huygens' principle of 'conservation of KE'). He engaged in passionate controversies until his death but was opposed and overwhelmed by obtuse/ignorant Newtonian contemporaries. He was vulnerable as he could not account for the loss of energy in inelastic collisions. He lost, and Newtonian integration on space produced $\frac{1}{2}mv^2$, which is not the formula, but just one of the possible formulas of KE: the Newtonian formula. Had he won, instead of the joule, now we would use the 'leibniz' (= 1/2 J), and we would have a different, probably deeper, insight into the laws of motion and of the world. History, as we know, is written by the victors. You can find additional information on work here
{ "source": [ "https://physics.stackexchange.com/questions/132754", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/46758/" ] }
132,892
Does tire pressure measured by a meter on tire gauge change with load? (I am not interested in pressure produced by car tires onto the road). Car spec usually says "inflate to 220kPa normal load, 300kPa full load". Does this mean the measured pressure should be 300kPa only after the car was loaded, or can one inflate the tires to the recommended 300kPa while empty and then load the car with 500kg without needing to re-measure the pressure? There are basically both answers to be found when researching the non-physics forums on the internet: One explanation says that the tire will compress under bigger load, making the volume inside the tire smaller, hence pressure higher. It also states this is the reason why the spec plate in the car has two values - you should simply expect to measure higher pressure on the tires while the car is loaded. There are also explanations stating that the air does not escape from the tire, hence the amount of air inside the tire is constant and the pressure measured on the tires is constant even if the car is loaded (and based on observations, tires normally compress when the car is loaded). This would mean one needs to put more air into the tires when the car is loaded to ensure higher pressure. To a layman, both sound feasible. Is one of the explanations simplifying based on normal load assumptions? (i.e. tire pressure does not change if car is loaded with at most 1000kg, but would change if there was an enormous load put on the car?) Where is the truth?
TL;DR: The load does not significantly increase the pressure in the tire, but not inflating the tire more will increase friction. This will heat up the tire. Correct pressure ensures correct contact area - preventing wear on the tire, and keeping rolling friction low. Full answer: Going to use simple math, round numbers (no calculator): 1000 kilo car, 4 tires, 2.2 bar pressure. Contact area for each tire approximately $250 / 2.2 = 110 ~\rm{cm^2}$. With the tire 15 cm wide, the contact patch is 6 cm long. Now "load" the car with 50% more weight (500 kg). The additional contact area needed is $55~\rm{cm}^2$ per tire. If you assume that the side walls don't deform, the contact length increases to 9 cm. The change in volume from this additional flattening of the tire is quite small. Looking at the diagram below, you can compute the volume change (assuming all deformation happens in this plane) The volume of the air in the undeformed tube: $$\begin{align}V &= \pi (r_o^2 - r_i^2) w\\ V &= \rm{volume}\\ w &= \rm{width\ of\ tire}\end{align}$$ The angle subtended by the flat region: $$\theta = 2\sin^{-1}(\frac{L}{2r_o})$$ when $L<<r_o$ this approximates to $\theta = L/r_o$ The area of the flattened region is $$A_{flat}=\frac12r_o^2\theta - \frac{L}{2} r_o \cos\frac{\theta}{2}$$ Small angle approximation: $$\begin{align} A_{flat}&=\frac12r_oL(1-(1-\left(\frac{L}{2r_o}\right)^2))\\ &=\frac{L^3}{8r_o}\end{align}$$ For a constant width $W$ of the tire, the flattened volume is of course $Aw$. If we assume to first order that friction is proportional to the volume that is being distorted, you can see that a slightly flat tire (larger contact area) will significantly affect fuel consumption. How big is the effect? With the numbers I used above, the fractional volume change is only 0.03% (for $r_i = 30~\rm{cm}, r_o = 40~\rm{cm}, w = 15~\rm{cm}$). That means that the pressure will not increase due to the deformation of the tire / the additional mass. 
And that in turn means that the reason to inflate the tire more is precisely to prevent the increased contact area, which would lead to higher friction and potentially higher temperature. As @Tom pointed out, under load a tire sidewall will also deform, and this deformation will cause additional wear on the tire. This is another reason why tire pressure needs to be adjusted to the load. Note that there is a feedback loop - if the tire is underinflated and heavily loaded, it will get hot which will increase the pressure somewhat. But it is better just to start with a bit more air in it...
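The numbers above can be reproduced in a few lines, using the small-angle expression $A_{flat}=L^3/(8r_o)$ quoted in the answer and the same round figures (a sketch, not a general tyre model):

```python
import math

# Tyre geometry from the answer (all lengths in cm)
r_o, r_i, w = 40.0, 30.0, 15.0

# Air volume of the undeformed tube
V = math.pi * (r_o**2 - r_i**2) * w

L = 6.0                        # contact-patch length at 2.2 bar, from the answer
A_flat = L**3 / (8 * r_o)      # small-angle area of the flattened segment
dV = A_flat * w                # volume squeezed out by the flat patch

print(f"fractional volume change: {dV / V:.2%}")
```

The fractional change comes out at roughly 0.03%, matching the claim that the deformation has a negligible effect on the pressure.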
{ "source": [ "https://physics.stackexchange.com/questions/132892", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/58081/" ] }
133,198
This is definitely not an illusion, as many people have the same experience. I have usually lived in places miles away from train stations, which makes it unlikely to hear any train horns during the day. However, at night, train horns can occasionally be heard. I hope someone can explain the physics of this effect, like possibly that sound travels faster at low temperature.
There are two things that can be considered: one is trivial - that it is quieter at night so you are more likely to hear the horn. The second is physics: the speed of sound depends on the square root of temperature, so the refractive index is proportional to $T^{-1/2}$. At night it is quite possible to get a temperature inversion, such that air near the ground is colder than higher up. This would normally occur in still conditions and I think is more common in winter. As the refractive index decreases with height it means that sound waves propagating upwards at some angle to the horizontal will be bent back towards the ground. The sound waves at some distance from the source will be more intense than you might expect if the waves propagated isotropically. The contrast with the daytime situation would be enhanced by a more normal temperature gradient where the refractive index increases with height. EDIT: For an excellent visualisation of this effect, see these animations produced by Daniel Russell (Penn State)
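The $\sqrt{T}$ scaling of the sound speed mentioned above can be illustrated with a short sketch (331.3 m/s at 0 °C is a standard textbook value; the 10 °C temperature aloft is an arbitrary example of an inversion):

```python
import math

def sound_speed(T_celsius):
    """Speed of sound in dry air in m/s; the sqrt(T) scaling is the
    standard ideal-gas result, anchored at 331.3 m/s at 0 degrees C."""
    return 331.3 * math.sqrt((T_celsius + 273.15) / 273.15)

c_ground = sound_speed(0.0)    # cold air near the ground
c_above = sound_speed(10.0)    # warmer air higher up (inversion)
print(f"{c_ground:.1f} m/s near the ground, {c_above:.1f} m/s higher up")
# Sound bends toward the region where it travels more slowly, i.e. back
# down toward the colder ground layer -- the night-time focusing effect.
```

With the faster medium on top, upward-going rays refract back toward the ground, which is why distant horns carry so much better under an inversion.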
{ "source": [ "https://physics.stackexchange.com/questions/133198", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/26357/" ] }
133,201
When peeling a sticker off its base, the immediate reaction is that it curls; why is this? I am having trouble finding an answer to this. Could it be that the glued side expands upon contact with the air? And, on a somewhat related note, is this a similar reason for why a ribbon curls when glided over with a blade?
*When peeling a sticker off its base, the immediate reaction is that it curls; why is this? On a somewhat related note, is this a similar reason for why a ribbon curls when glided over with a blade?* The two phenomena are not different: in the second case you pull the ribbon across the blunt side of the blade of the scissors (or other tool), and at the same time you press a finger or your palm against the ribbon on the other side, creating a sharp angle of roughly 180° (shaped like a U) around which the ribbon has to bend. This bending takes the fabric beyond its elastic limit and leaves it permanently bent. In the first case you have the same phenomenon; the glue might have some special properties, but its role is negligible: usually when you remove a sticker you pull it almost horizontally, holding it with your thumb while sliding your index finger on its surface, and thus making an angle of roughly 170° (allowing for the thickness of your finger). Plastic material is made of polymers and these are deformable only to a certain extent: synthetic fibres can withstand a certain amount of stretching or bending without being permanently deformed. If the fibres are deformed too much, the polymer molecules cannot be straightened again, so the shape of the fibres is permanently changed. From wiki: Plasticity in amorphous materials Crazing In amorphous materials, the discussion of "dislocations" is inapplicable, since the entire material lacks long range order. These materials can still undergo plastic deformation. Since amorphous materials, like polymers, are not well-ordered, they contain a large amount of free volume, or wasted space. Pulling these materials in tension opens up these regions and can give materials a hazy appearance. This haziness is the result of crazing, where fibrils are formed within the material in regions of high hydrostatic stress.
The material may go from an ordered appearance to a "crazy" pattern of strain and stretch marks. Please note that ribbons for decoration are called curling ribbons because they are especially made to curl, or usually poly ribbons, as they are made from polymers, like the stickers. Sometimes they are still made of cotton or silk (which is very expensive).
{ "source": [ "https://physics.stackexchange.com/questions/133201", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/58203/" ] }
133,366
I have often heard it said that several problems in the theory of electromagnetism as described by Maxwell's equations led Einstein to his theory of Special Relativity. What exactly were these problems that Einstein had in mind, and how does Special Relativity solve them?
There was no problem with electromagnetism. The problem was that Maxwell's equations are invariant under Lorentz transformations but are not invariant under Galileo transformations, whereas the equations of classical mechanics can be easily made invariant under Galileo transformations. The question was: how to reconcile both in a universe in which Maxwell's equations had been tested much more thoroughly than the equations of classical mechanics when $v$ is of the same order as $c$ and not much smaller. Einstein basically solved the problem by deciding that electromagnetism is more fundamental in physics, and then showing that classical mechanics could be modified in such a way that it, too, became Lorentz invariant. As a side effect, he recovered classical mechanics as a natural limit for $v/c\to0$, which perfectly explained almost all observations of macroscopic dynamics available at that time (leaving Mercury's perihelion precession to be explained by general relativity ten years later).
{ "source": [ "https://physics.stackexchange.com/questions/133366", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57940/" ] }
133,376
Some people say that mass increases with speed while others say that the mass of an object is independent of its speed. I understand how some (though not many) things in physics are a matter of interpretation based on one's definitions. But I can't get my head around how both can be "true" is any sense of the word. Either mass increases or it doesn't, right? Can't we just measure it, and find out which "interpretation" is right? E.g. by heating up some particles in a box (in some sophisticated way) and measuring their weight? UPDATE: Right, so I've got two identical containers, with identical amounts of water, placed on identical weighing scales, in the same gravitational field. If one container has hotter water, will the reading on its scale be larger than the other? If the answer is yes, and $g$ is constant, does this mean that the $m$ in $W=mg$ has increased?
There is no controversy or ambiguity. It is possible to define mass in two different ways, but: (1) the choice of definition doesn't change anything about predictions of the results of experiment, and (2) the definition has been standardized for about 50 years. All relativists today use invariant mass. If you encounter a treatment of relativity that discusses variation in mass with velocity, then it's not wrong in the sense of making wrong predictions, but it's 50 years out of date. As an example, the momentum of a massive particle is given according to the invariant mass definition as $$ p=m\gamma v,$$ where $m$ is a fixed property of the particle not depending on velocity. In a book from the Roosevelt administration, you might find, for one-dimensional motion, $$ p=mv,$$ where $m=\gamma m_0$ , and $m_0$ is the invariant quantity that we today refer to just as mass. Both equations give the same result for the momentum. Although the definition of "mass" as invariant mass has been universal among professional relativists for many decades, the modern usage was very slow to filter its way into the survey textbooks used by high school and freshman physics courses. These books are written by people who aren't specialists in every field they write about, so often when the authors write about a topic outside their area of expertise, they parrot whatever treatment they learned when they were students. A survey [ Oas 2005 ] finds that from about 1970 to 2005, most "introductory and modern physics textbooks" went from using relativistic mass to using invariant mass (fig. 2). Relativistic mass is still extremely common in popularizations, however (fig. 4). Some further discussion of the history is given in [ Okun 1989 ]. Oas doesn't specifically address the question of whether relativistic mass is commonly used anymore by texts meant for an upper-division undergraduate course in special relativity. I got interested enough in this question to try to figure out the answer. 
Digging around on various universities' web sites, I found that quite a few schools are still using old books. MIT is still using French (1968), and some other schools are also still using 20th-century books like Rindler or Taylor and Wheeler. Some 21st-century books that people seem to be talking about are Helliwell, Woodhouse, Hartle, Steane, and Tsamparlis. Of these, Steane, Tsamparlis, and Helliwell come out strongly against relativistic mass. (Tsamparlis appropriates the term "relativistic mass" to mean the invariant mass, and advocates abandoning the "misleading" term "rest mass.") Woodhouse sits on the fence, using the terms "rest mass" and "inertial mass" for the invariant and frame-dependent quantities, but never defining "mass." I haven't found out yet what Hartle does. But anyway from this unscientific sample, it looks like invariant mass has almost completely taken over in books written at this level. Oas, "On the Abuse and Use of Relativistic Mass," 2005, here . Okun, "The concept of mass," 1989, here .
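The answer's point that the two conventions make identical predictions is easy to check numerically. A minimal sketch (the electron mass and the 0.6c speed are just illustrative choices):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

m0 = 9.109e-31        # electron invariant mass, kg (illustrative particle)
v = 0.6 * C           # illustrative speed

# Modern (invariant-mass) convention: p = m * gamma * v
p_modern = m0 * gamma(v) * v

# Older convention: p = m_rel * v, with m_rel = gamma * m0
p_old = (gamma(v) * m0) * v

assert math.isclose(p_modern, p_old)   # identical predictions
```

The two expressions differ only in where the factor of $\gamma$ is bookkept, which is why the choice of definition changes nothing observable.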
{ "source": [ "https://physics.stackexchange.com/questions/133376", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/29532/" ] }
133,758
I'm an undergraduate student in Chemistry currently studying quantum mechanics and I have a problem with unitary transformations. Here in my book, it is stated that every unitary operator $\hat{\mathcal{U}}$ can be written in an exponential form as follows: $$\hat{\mathcal{U}}=\mathrm{e}^{-i\alpha\hat{\mathcal{T}}}=\sum_{k=0}^{\infty}\dfrac{1}{k!}\left(-i\alpha\right)^{k}\hat{\mathcal{T}}^{k} $$ Provided that I have no knowledge of Lie groups/algebras, my questions are: Why can a unitary operator always be represented in an exponential form? What is the intuitive mathematical meaning of the exponential form/matrix? What is the relation between the operator $\hat{\mathcal{U}}$ and the operator $\hat{\mathcal{T}}$? What is its physical meaning?
There's no escaping Lie theory if you want to understand what is going on mathematically. I'll try to provide some intuitive pictures for what is going on in the footnotes, though I'm not sure if it will be what you are looking for. On any (finite-dimensional, for simplicity) vector space, the group of unitary operators is the Lie group $\mathrm{U}(N)$, which is connected. Lie groups are manifolds, i.e. things that locally look like $\mathbb{R}^n$, and as such possess tangent spaces at every point spanned by the derivatives of their coordinates — or, equivalently, by all possible directions of paths at that point. These directions form, at $g \in \mathrm{U}(N)$, the $N^2$-dimensional vector space $T_g \mathrm{U}(N)$. 1 Canonically, we take the tangent space at the identity $\mathbf{1} \in \mathrm{U}(N)$ and call it the Lie algebra $\mathfrak{g} \cong T_\mathbf{1}\mathrm{U}(N)$. Now, from tangent spaces, there is something called the exponential map to the manifold itself. It is a fact that, for compact groups, such as the unitary group, said map is surjective onto the part containing the identity. 2 It is a further fact that the unitary group is connected, meaning that it has no parts not connected to the identity, so the exponential map $\mathfrak{u}(N) \to \mathrm{U}(N)$ is surjective, and hence every unitary operator is the exponential of some Lie algebra element. 3 (The exponential map is always surjective locally, so we are in principle able to find exponential forms for other operators, too.) So, the above (and the notes) answers your first three questions: We can always represent a unitary operator like that since $\mathrm{U}(N)$ is compact and connected, the exponential of an operator means "walking in the direction specified by that operator", and while $\mathcal{U}$ lies in the Lie group, $\mathcal{T}$ lies, as its generator, in the Lie algebra. 
One also says that $\mathcal{T}$ is the infinitesimal generator of $\mathcal{U}$, since, in $\mathrm{e}^{\alpha \mathcal{T}}$, we can see it as giving only the direction of the operation, while $\alpha$ tells us how far from the identity the generated exponential will lie. The physical meaning is a difficult thing to tell generally - often, it will be that the $\mathcal{T}$ is a generator of a symmetry, and the unitary operator $\mathcal{U}$ is the finite version of that symmetry, for example, the Hamiltonian $H$ generates the time translation $U$, the angular momenta $L_i$ generate the rotations $\mathrm{SO}(3)$, and so on, and so forth — the generator is always the infinitesimal version of the exponentiated operator in the sense that $$ \mathrm{e}^{\epsilon T} = 1 + \epsilon T + \mathcal{O}(\epsilon^2)$$ so the generated operator will, for small $\epsilon$, be displaced from the identity by almost exactly $\epsilon T$. 1 Think of the circle (which is $\mathrm{U}(1)$): At every point on the circle, you can draw the tangent to it - which is $\mathbb{R}$, a 1D vector space. The length of the tangent vector specifies "how fast" the path in that direction will be traversed. 2 Think of the two-dimensional sphere (which is, sadly, not a Lie group, but illustrative for the exponential map). Take the tangent space at one point and imagine you are actually holding a sheet of paper next to a sphere. Now "crumple" the paper around the sphere. You will end up covering the whole sphere, and if the paper is large enough (it would have to be infinite to represent the tangent space), you can even wind it around the sphere multiple times, thus showing that the exponential map cannot be injective, but is easily seen to be surjective. A more precise notion of this crumpling would be to fix some measure of length on the sphere and map every vector in the algebra to a point on the sphere by walking into the direction indicated by the vector exactly as far as its length tells you. 
3 This is quite easy to understand - if there were some part of the group wholly disconnected from the rest of the group, or if our group had infinite volume (if it were non-compact), we could not hope to cover it wholly with only one sheet of paper, no matter how large.
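A quick numerical illustration of the main claim — that exponentiating $-i\alpha\mathcal{T}$ for Hermitian $\mathcal{T}$ lands on a unitary operator — using a hand-rolled power series for a $2\times2$ Pauli generator (the choice of $\sigma_x$ and $\alpha=0.7$ is arbitrary):

```python
import math

def matmul2(A, B):
    """Product of two 2x2 complex matrices (nested lists)."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger2(A):
    """Conjugate transpose of a 2x2 complex matrix."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def expm2(A, terms=40):
    """Matrix exponential of a 2x2 complex matrix via its power series."""
    I = [[1 + 0j, 0j], [0j, 1 + 0j]]
    result, power, fact = [row[:] for row in I], [row[:] for row in I], 1.0
    for k in range(1, terms):
        power = matmul2(power, A)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

alpha = 0.7                                # arbitrary real parameter
T = [[0j, 1 + 0j], [1 + 0j, 0j]]           # sigma_x: a Hermitian generator
U = expm2([[-1j * alpha * T[i][j] for j in range(2)] for i in range(2)])

# U should equal cos(alpha) I - i sin(alpha) sigma_x, and be unitary:
assert abs(U[0][0] - math.cos(alpha)) < 1e-12
UdU = matmul2(dagger2(U), U)
assert abs(UdU[0][0] - 1) < 1e-12 and abs(UdU[0][1]) < 1e-12
```

Here $\alpha$ plays exactly the role described above: it sets how far from the identity the generated unitary lies along the direction $-i\sigma_x$.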
{ "source": [ "https://physics.stackexchange.com/questions/133758", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/58442/" ] }
133,985
Several different sources online state that the average temperature of interstellar space (or the universe in general) is around 2-3K. I learned that temperature is basically the wiggling of matter, and I find it somewhat counterintuitive that the wiggling of so few particles can cause a temperature of 2-3K. Is there a (order-of-magnitude) calculation which can show that this average temperature estimation is correct, using an estimation of the average density of interstellar space (or the universe in general)?
Temperature in a gas is the average kinetic energy per particle . As an intrinsic property its value is entirely decoupled from how much stuff has the property. Whether there are 100 particles per cubic centimeter or only 1 particle per cubic meter, the temperature can be anything. The coldest parts of the ISM are about 3 K, and getting colder than this is difficult, because the entire universe is bathed in a sea of 3 K photons . But some parts of the ISM are much, much hotter. The diffuse gas filling the space between galaxies in galaxy clusters can be hundreds of millions of degrees. This just means each particle is whizzing about very fast.
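To make "temperature is average kinetic energy per particle" concrete: the RMS speed of a particle follows from $\tfrac12 m\langle v^2\rangle = \tfrac32 k_B T$, and density appears nowhere in the formula. A sketch, using a hydrogen atom and the two temperatures quoted above:

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_H = 1.67e-27          # mass of a hydrogen atom, kg

def rms_speed(T, m):
    """RMS speed from (1/2) m <v^2> = (3/2) k_B T -- no density needed."""
    return math.sqrt(3 * K_B * T / m)

v_cold = rms_speed(3, M_H)      # coldest ISM, ~3 K -> a few hundred m/s
v_hot = rms_speed(1e8, M_H)     # intracluster gas, ~1e8 K -> ~1.6e6 m/s

assert 200 < v_cold < 350
assert v_hot > 1e6
```

Whether the cubic meter around the atom holds one particle or a trillion, these speeds — and hence the temperature — are unchanged.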
{ "source": [ "https://physics.stackexchange.com/questions/133985", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/30416/" ] }
134,071
Why do we take the Sun as a reference point and study the solar system and universe relative to it, and why not relative to the Earth?
When you're trying to understand the mechanics of a system it's usually convenient to choose coordinates that reflect the symmetry of the system. The solar system is roughly centrally symmetric because the Sun is by far the largest mass in it, and the coordinates that reflect this symmetry are polar coordinates with the Sun at the centre. For example in these coordinates if the Earth was the only object apart from the Sun, the Earth's orbit would be (nearly) an ellipse. The presence of the other planets (mainly Jupiter) perturbs the Earth's orbit, but we can handle this by perturbation theory starting with the elliptical orbit and adding on the perturbations caused by the other planets. So taking the Sun as a reference point is a reflection of the symmetry of the Solar system. As noted in other answers, if we're describing the galaxy the Sun is no longer the best place to set the origin of our coordinate system, and we'd use polar coordinates centred on the centre of symmetry of the galaxy. Likewise to describe a galaxy cluster we'd choose the origin to be the centre of mass of the cluster. At the very largest scales the universe is isotropic and homogeneous, so it doesn't matter where we place the origin.
{ "source": [ "https://physics.stackexchange.com/questions/134071", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57146/" ] }
134,119
As the title says. It is common sense that sharp things cut, but how do they work at the atomic level?
For organic matter, such as bread and human skin, cutting is a straightforward process because cells/tissues/proteins/etc can be broken apart with relatively little energy. This is because organic matter is much more flexible and the molecules bind through weak intermolecular interactions such as hydrogen bonding and van der Waals forces. For inorganic matter, however, it's much more complicated. It can be studied experimentally, e.g. via nanoindentation + AFM experiments, but much of the insight we have actually comes from computer simulations. For instance, here is an image taken from a molecular dynamics study where they cut copper (blue) with different shaped blades (red): In each case the blade penetrates the right side of the block and is dragged to the left. You can see the atoms amorphise in the immediate vicinity due to the high pressure and then deform around the blade. This is a basic answer to your question. But there are some more complicated mechanisms at play. For a material to deform it must be able to generate dislocations that can then propagate through the material. Here is a much larger-scale ( $10^7$ atoms) molecular dynamics simulation of a blade being dragged (to the left) along the surface of copper. The blue regions show the dislocations: That blue ring that travels through the bulk along [10-1] is a dislocation loop. If these dislocations encounter a grain boundary then it takes more energy to move them which makes the material harder. For this reason, many materials (such as metals, which are soft) are intentionally manufactured to be grainy. There can also be some rather exotic mechanisms involved. 
Here is an image from a recent Nature paper in which a nano-tip is forced into calcite (a very hard but brittle material): What's really interesting about it is that, initially, crystal twins form (visible in Stage 1) in order to dissipate the energy - this involves layers of the crystal changing their orientation to accommodate the strain - before cracking and ultimately amorphising. In short: it's complicated but very interesting!
{ "source": [ "https://physics.stackexchange.com/questions/134119", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57948/" ] }
134,473
I read in this answer in this site that the KE a free-falling ball acquires is not originated by the attracting body but that energy was actually stored in the ball when it had been lifted to the height it dropped from. In this way, it was said, gravity is subject to the conservation of energy principle and cannot change the total energy of an object. Consider now the maneuver known as a gravitational slingshot (also gravity assist ) used by space probes such as the Voyager 2 . A space probe approaches a planet with velocity $v$, slingshots around, and ends up with velocity $v+2U$, where $U$ is the velocity of the planet. Consider the energy of the probe. Before, it was $E_i=\frac{1}{2} mv^2$ and after it is $E_f=\frac{1}{2} m(v+2U)^2$. It looks like $E_f$ is much bigger than $E_i$ - but where did the additional energy come from? Is this not a violation of the conservation of energy principle?
Cory, here's a different way of thinking about gravity assists that may help: First is my short answer for readers in a hurry: What is really going on is a giant game of pool, with fast-moving planets acting as massive cue balls that impart some of their energy when they whack into tiny spacecraft. Since you can't bounce a spacecraft directly off the surface of a planet, it instead is steered to rebound smoothly off the immense virtual trampoline that gravity creates behind the planet. This field slows down and reverses the relative backward motion of a spacecraft to give a net powerful forward thrust (or bounce) as the spacecraft loops around in a U-shaped path behind the planet. Next is my original, more story-style long answer: Imagine a planet like Venus as a giant, perfectly elastic (bouncy) rubber ball, and your spacecraft as a particularly tough steel ball. Next, drop your steel ball spacecraft from space in such a way that it will hit the side of Venus that is facing forward in its orbit around the Sun. The spacecraft will speed up as it falls towards the surface of Venus, but after it bounces — perfectly and without any loss of energy in this imaginary scenario — it will similarly slow down as the same gravity resists its departure. Just as with an elastic ball that at first speeds up when dropped and then slows down after bouncing on the floor, there is no net free "gravity energy" from the interaction. But wait a second... there is another factor! Because the spacecraft was dropped in front of the orbital path of Venus, the planet will be moving towards the satellite at tremendous speed when the bounce happens at the surface. Venus thus acts like an incredibly fast, unimaginably massive cue ball, imparting a huge boost in velocity to the spacecraft when the two hit. This is a real increase in speed and energy that has nothing to do with the transient faster-then-slower speed change due to gravity. 
And just as a cue ball slows down when it transfers impact energy to another ball, there is no free energy lunch here either: Venus slows down when it speeds up the spacecraft. It's just that its massive size makes the decrease in the orbital speed of Venus immeasurably small in comparison. By now you probably see where I'm heading with this idea: If only there were a real way to bounce a spacecraft off of a planet that is moving quickly around the Sun, you could speed it up tremendously by playing what amounts to a gigantic interplanetary game of space pool. The shots in this game of pool would be very tricky to set up, and a single shot might take years to complete. But look at the benefits! Even if you start out with a relatively slow (and thus for space travel, cheap) spacecraft launch, a good sequence of whacks by planetary (or moon!) cue balls would eventually get your spacecraft moving so fast that you could send it right out of the solar system. But of course, you can't really bounce spacecraft off of planets in a perfectly elastic and energy conserving fashion, can you? Actually... yes, you can, by using gravity! Imagine again that you have placed a relatively slow-moving spacecraft somewhere in front of the orbital path of Venus. But this time instead of aiming it towards the front of Venus, where any real spacecraft would just burn up, you aim it a bit to the side so that it will pass just behind Venus. If you aim it close enough and at just the right angle, the gravity of Venus will snatch the spacecraft around into a U-shaped path. Venus won't capture it completely, but it can change its direction of motion by some large angle that can approach 180 degrees. Now think about that. The spacecraft first moves towards the fast-approaching planet, interacts powerfully with it via gravity, and ends up moving in the opposite direction. If you look only at the start and end of the event, it looks just as if the spacecraft has bounced off of the planet! 
And energetically speaking, that is exactly what happens in such events. Instead of storing the kinetic energy of the incoming spacecraft in crudely compressed matter (the rubber ball analogy), the gravity of Venus does all the needed conversions between kinetic and potential energy for you. As an added huge benefit, the gravitational version of a rebound works in a smooth, gentle fashion that permits even delicate spacecraft to survive the process. Incidentally, it's worth noticing that the phrase "gravity assisted" is really referring only to the elastic bounce part of a larger, more interesting collision event. The real game that is afoot is planetary pool, with the planets acting as hugely powerful cue balls that if used rightly can impart huge increases in speed to spacecraft passing near them. It is a tricky game that requires patience and phenomenal precision, but it is one that space agencies around the world have by now learned to use very well indeed.
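The pool-ball picture can be made quantitative by treating the flyby as a head-on, one-dimensional, perfectly elastic collision. The masses and speeds below are rough, illustrative figures for a Venus flyby, not mission data:

```python
M = 4.87e24      # mass of Venus, kg (rough)
m = 7.2e2        # spacecraft mass, kg (illustrative)
U = 3.5e4        # planet's orbital speed, m/s (rough)
v = 1.0e4        # spacecraft speed toward the planet, m/s (illustrative)

# Head-on 1D elastic collision: planet moves at +U, craft at -v.
v_craft = ((m - M) * (-v) + 2 * M * U) / (m + M)
v_planet = ((M - m) * U - 2 * m * v) / (m + M)

# The craft rebounds at (almost exactly) v + 2U ...
assert abs(v_craft - (v + 2 * U)) < 1e-6

# ... while the planet's exact slowdown, -2m(v+U)/(M+m), is hopelessly
# small -- far below anything measurable:
delta_U = -2 * m * (v + U) / (M + m)
assert abs(delta_U) < 1e-16

# The craft's kinetic energy really did increase; the planet paid for it.
gain = 0.5 * m * (v_craft**2 - v**2)
assert gain > 0
```

So the "extra" energy in the question comes from the planet's orbital motion — conservation of energy and momentum both hold, but the planet's share of the change is unmeasurably tiny.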
{ "source": [ "https://physics.stackexchange.com/questions/134473", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/58723/" ] }
134,704
I'm trying to think about special relativity without "spoiling" it by looking up the answer; I hope someone can offer some insight - or at least tell me I'm wrong. Suppose I have an ordinary clock in front of me and I push it back with my hands. The force applied to the clock causes it to retreat away from me and after the push, it will travel away with uniform velocity. Suppose further, I can always see the clock clearly no matter how far away it is. Since the speed of light is constant, the light coming from the clock must travel a longer distance to reach my eye as it moves away. This would make time appear to slow down? If, on the other hand, the clock is moving towards me, the distance the light must travel to reach my eye becomes shorter and shorter, thus time would appear to speed up?
Analyzing one moving clock from the perspective of one stationary person will be inadequate to derive special relativity from. With just that set-up, you aren't actually using the key fact that the speed of light is the same for all observers – all you're actually using is just the fact that the speed of light is finite. With just taking into account that the speed of light is finite, all you'll arrive at is the non-relativistic Doppler effect , which is different from time dilation .
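A quick numerical comparison makes the distinction explicit: for a clock receding at half the speed of light, the finite-light-travel-time effect alone stretches the apparent tick interval by $1+\beta$, while the full relativistic result is the Doppler factor $\sqrt{(1+\beta)/(1-\beta)} = \gamma(1+\beta)$ — time dilation $\gamma$ is the extra piece:

```python
import math

beta = 0.5                                   # clock recedes at v = c/2
gamma = 1 / math.sqrt(1 - beta**2)

# Apparent tick-stretching from light-travel time alone (finite c,
# no relativity): each successive flash has farther to travel.
classical = 1 + beta                         # = 1.5

# Full special-relativistic Doppler factor for a receding clock:
relativistic = math.sqrt((1 + beta) / (1 - beta))

# Time dilation is the *extra* factor on top of the classical effect:
assert math.isclose(relativistic, classical * gamma)
assert not math.isclose(classical, gamma)    # they are different effects
```

The thought experiment in the question captures only the `classical` factor, which is why it cannot by itself lead to time dilation.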
{ "source": [ "https://physics.stackexchange.com/questions/134704", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/26071/" ] }
134,705
When you attach a bolt to something using a nut, it is clear what the roles of the nut and bolt are. The more you tighten the bolt the more secure your fastening. However, you are often also told to use a washer as well. I know this somehow prevents the bolt from loosening but from a physics/mathematics point of view, what is the role of the washer?
Some (smooth or Teflon) washers are used to reduce friction while tightening allowing for greater torque application and thus higher axial loads on the bolt. Some washers increase the friction between the parts to prevent it from loosening up Some split (or lock) washers act like a spring maintaining pressure in contact during thermal or elastic expansion and/or helps prevent unwinding of the nut by digging in the base material. Some washers have a ratcheting surface to fix the nut in a particular orientation Some washers are thick in order to re-distribute the contact pressure and soften the damage to the clamped body Some washers are there to separate dissimilar metals to avoid galvanic corrosion Some (belleville) washers reduce the stiffness of the connection in order to take up deformation better. ... the list goes on ...
{ "source": [ "https://physics.stackexchange.com/questions/134705", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/181630/" ] }
134,943
In this scene (YouTube link) from the movie The A-Team, the four members are in the tank as it's falling from the air; they fire the cannon and it slows the tank's fall for a moment before it falls again. Is this possible from a physics point of view? I am looking at this from a recoil point of view. Can the tank firing the rocket produce enough recoil for it to counter the force of gravity?
Olin Lanthrop suggested a plausible approach but there was a lot of (inaccurate) guessing in his answer. I was going to write this as a comment to his answer but it got too long. Note - in the below I round to no more than 2 significant figures - the nature of the problem doesn't support more. Let's take the famous Sherman tank as our example. A brief search tells us that it weighed 66,000 pounds (about 30,000 kg - not clear if that includes a full tank of fuel, ammunition and crew) and that its main cannon (the M3 L:40) could fire 6.7 kg rounds at a muzzle velocity over 600 m/s. From conservation of momentum we conclude that firing 80 rounds per second would keep the tank from falling. The tank had 90 rounds, so that would work for about one second. It would also melt the barrel in a heartbeat. So let's look at the other weapons on the tank. There was a .50 caliber Browning with 50 g bullets with muzzle velocity of 800 m/s and firing around 800 rounds per minute. That's an impulse of 40 N·s per round, or a mean thrust of around 500 N firing at full tilt. That would be enough to keep a child airborne - not a tank. Point all three machine guns forward - it still doesn't even carry the gunner plus the guns. This should give you a sense of the enormous power of the main cannon: those M3 rounds are not something you want to stop with your Kevlar vest. Movie physics - you don't have to get the math right. It's magic.
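The back-of-the-envelope arithmetic above is easy to reproduce (all figures are the rough ones quoted in the answer):

```python
g = 9.8                     # m/s^2
m_tank = 30_000.0           # Sherman tank, kg (rough)
weight = m_tank * g         # ~2.9e5 N to support

# Main cannon: 6.7 kg rounds at ~600 m/s muzzle velocity
impulse = 6.7 * 600.0                        # ~4,020 N*s per shot
rounds_per_second = weight / impulse
assert 70 < rounds_per_second < 80           # "about 80 rounds per second"

# .50 cal Browning: 50 g bullets, 800 m/s, ~800 rounds per minute
mean_thrust = 0.050 * 800.0 * (800.0 / 60.0)  # ~530 N of mean thrust
assert mean_thrust < weight / 100             # hopeless for a tank
```

So hovering would need roughly 73 main-gun shots per second — the full 90-round magazine gone in just over a second.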
{ "source": [ "https://physics.stackexchange.com/questions/134943", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/40164/" ] }
135,016
Why is it that when you drop paper behind a fan, it drops, and is not blown/sucked into the fan, whereas if you drop paper in front of a fan, it is blown away?
There is a YouTube video that visualizes the air flow around a propeller for various configurations. I caught a screen shot of a moment that more or less shows what is going on: As you can see, this happens at 2:07 into the clip - this happens to be for a dual rotor configuration (two counter rotating blades) but the principle is the same. Behind the rotor (above, in this picture) the air is moving slowly. Air over a wide range of area is drifting towards the rotor, where it is accelerated. I will leave it up to others to describe the mathematics behind this contraction - but I thought visualizing the flow would at least confirm your observation that it is indeed slower behind the fan, and faster in front of it. In other words - it pushes, but doesn't suck. A better image showing the flow lines around the propeller is given at this article about the mechanics of propellers As the pressure is increased, the flow velocity goes up and the flow lines end up closer together (because of conservation of mass flow). This gives the flow the asymmetry you observed. But it's still more intuitive than rigorous... AFTERTHOUGHT Hot Licks made an excellent observation in a comment that I would like to expand on. The air being drawn towards the fan is moving in the pressure differential between the atmosphere at rest, and the lower pressure right in front of the fan blades. The pressure gradient is quite small, so the air cannot flow very fast - and it has to be drawn from a wide region to supply the mass flow. After impact with the blade (or at least after "interacting" with the blade), the air has a LOT more momentum that is directed along the axis of the fan (with a bit of swirl...). This higher momentum gives the air downstream of the fan its coherence as can be seen in the diagram.
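The "conservation of mass flow" remark is just the continuity equation $\rho A v = \text{const}$: once the fan has sped the air up, the stream must contract, which is why the flow lines bunch together in front but spread diffusely behind. A toy calculation with made-up numbers:

```python
import math

rho = 1.2        # air density, kg/m^3 (treated as constant here)
A_in = 1.0       # broad, slow intake region behind the fan, m^2 (made up)
v_in = 0.5       # drift speed toward the blades, m/s (made up)

mass_flow = rho * A_in * v_in      # kg/s through every cross-section

v_out = 5.0                        # fast jet in front of the fan, m/s
A_out = mass_flow / (rho * v_out)  # area the jet must squeeze into

# Ten times the speed -> one tenth the area: streamlines bunch together
assert math.isclose(A_out, A_in * v_in / v_out)
```

This is the asymmetry you feel with the paper: slow air drawn from everywhere behind, a fast coherent jet in front.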
{ "source": [ "https://physics.stackexchange.com/questions/135016", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/37626/" ] }
135,162
An answer to the question If we could build a neutrino telescope, what would we see? contains a link to a neutrino image of the sun by the Super-Kamiokande neutrino detector. There it says that the image actually covers a large part of the sky of about 90x90 degrees. As the diameter of the sun from earth is around one half of a degree, it must be that many of the neutrinos didn't come straight at us. This seems surprising (to me), as neutrinos should hardly interact with the atmosphere. Maybe the central few pixels of the image are extremely much brighter than the others, but this image doesn't show the difference between those and the surrounding pixels? Or is something else going on?
The detector that took that image--Super Kamiokande (super-K for short)--is a water Cerenkov device. It detects neutrinos by imaging the Cerenkov cone produced by the reaction products of the neutrinos. Mostly elastic scattering off of electrons: $$ \nu + e \to \nu + e \,,$$ but also quasi-elastic reactions like $$ \nu + n \to l + p \,,$$ where the neutron comes from the oxygen and $l$ means a charged lepton corresponding to the flavor of the neutrino (for energy reasons always an electron from solar neutrinos, but they also get muons from atmospheric and accelerator neutrinos---Super-K is the far detector for T2K). Then you reconstruct the direction in which the lepton was moving (which is correlated with but not identical to the direction the neutrino was going). This indirect pointing method accounts for the very poor angular resolution of the image.
{ "source": [ "https://physics.stackexchange.com/questions/135162", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/25794/" ] }
135,180
Crossed my mind after a random rant on Wikipedia that led me to articles about chronometers and measuring position. Let's assume I were trapped in an underground laboratory with lots of equipment but without any access to the surface. Would I be able to properly determine my position (latitude, longitude and altitude), and if so, what instruments are needed? (and what's the coolest way to do it :) I thought about measuring the Coriolis effect, which could lead to a latitude measurement, and Earth's gravity map could give more hints, but it's still far too imprecise.
A precise measurement of the Coriolis force will not only give you your latitude, but will also tell you which direction is true north . A compass will tell you which direction is magnetic north, and the combination of knowing your latitude and your magnetic declination will give you your longitude. Measuring the long-term average air pressure, assuming there's a direct air path between you and the surface that doesn't involve fans, will give you a rough idea of your altitude.
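As a sketch of the latitude step: the horizontal Coriolis acceleration on an object moving at speed $v$ is $a = 2\Omega v \sin\varphi$, so a measured deflection can be inverted for the latitude $\varphi$. The numbers below (a lab at 52°N, a 10 m/s test mass) are purely illustrative:

```python
import math

OMEGA = 7.2921e-5     # Earth's rotation rate, rad/s

def latitude_deg(a, v):
    """Invert the horizontal Coriolis acceleration a = 2*Omega*v*sin(phi)
    for the latitude phi (degrees), given the test mass's speed v."""
    return math.degrees(math.asin(a / (2 * OMEGA * v)))

# Round trip with made-up numbers: lab at 52 degrees north,
# test mass moving horizontally at 10 m/s.
phi_true = 52.0
v = 10.0
a_measured = 2 * OMEGA * v * math.sin(math.radians(phi_true))  # ~1.1e-3 m/s^2

assert math.isclose(latitude_deg(a_measured, v), phi_true)
```

The ~10⁻³ m/s² scale of the deflection shows why the measurement demands precise instruments, but it is entirely doable underground.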
{ "source": [ "https://physics.stackexchange.com/questions/135180", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/59022/" ] }
135,521
When an electron is moving around in its orbital, is it actually moving around like a wave, like this video shows? (By wave-like, I mean that the 'electron' in this video is shown following a predictable wave-like path, which would mean you could precisely determine its position, which obviously you can't.) Or, instead, does it just have some probability to be in that orbital's space, and just randomly jump around from one point to another? Or if not that, how does the electron move around in its orbital?
Orbitals are solutions to time-independent quantum wave equations. That is, there is no time-dependence. There is no little ball in there moving around, the electron has a quantum characteristic and exists with neither a well defined position nor a well defined momentum.
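One way to see the time-independence concretely: for the hydrogen 1s state the radial probability density $P(r) \propto r^2 e^{-2r/a_0}$ is a fixed, static distribution. Its peak sits at the Bohr radius, but that is a statement about the probability cloud, not about a path the electron follows in time. A sketch:

```python
import math

A0 = 5.29177e-11   # Bohr radius, m

def radial_prob(r):
    """Unnormalised radial probability density r^2 |R_1s(r)|^2 for the
    hydrogen ground state -- a static, time-independent distribution."""
    return r**2 * math.exp(-2 * r / A0)

# Scan radii out to 5 Bohr radii for the most probable one.
rs = [i * A0 / 1000 for i in range(1, 5001)]
r_peak = max(rs, key=radial_prob)

# The most probable radius is the Bohr radius -- with no time anywhere
# in the calculation.
assert math.isclose(r_peak, A0, rel_tol=1e-3)
```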
{ "source": [ "https://physics.stackexchange.com/questions/135521", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/16731/" ] }
135,726
I'm taking a course on Lagrangian and Hamiltonian Dynamics, and I would like to find a good book/resource with lots of practice questions and answers on either or both topics. So far at my university library, I have found many books on both subjects, but not ones with good practice questions and answers. I have Schaum's Outline of Lagrangian Dynamics, but didn't really find a lot of practice questions. Any suggestions would be greatly appreciated!
I'll write here a list of my personal favorites plus some commonly used books. I wouldn't be surprised if your teacher chose either one of the books below as a textbook: i) Mechanics, the first volume of the Landau course on Theoretical Physics; ii) Goldstein's book "Classical Mechanics"; iii) Taylor's book "Classical Mechanics"; iv) Marion's book "Classical Dynamics of Particles and Systems"; v) Symon's book "Mechanics"; Goldstein's book may be very appropriate for a first or second course on the topic, but I don't believe it displays a very formal approach to the subject. I'd suggest it to someone who's not interested in the mathematical structure of Mechanics. Even though, good for a starter. Taylor's book has some very good exercises, but the book itself does not please me at all since it's informal, prolix and severely incomplete in most topics. Same goes for Marion's book, and even though Symon's is a little bit better, it didn't please me either. The best book in this list is definitely Landau's, but I don't find it as good as most people picture it. I didn't read the whole Landau series (not even half, actually), but until now it's the worst of them all, for me. It still carries much of the author's incredible insights and some very nice solved exercises, but (as Arnol'd pointed out) there are some mistakes and fake demonstrations in the book. Don't trust all of his "proofs" and you'll be safe. Now I'll point out some books that really helped me throughout my studies: Arnol'd's "Mathematical Methods of Classical Mechanics": This book is simply the best book you can get your hands on after acquiring familiarity with the subject (after a first course using Goldstein's or Landau's book, for example). It's thorough, the maths are just clear and not extravagant, the proofs are very simple and you can get some contact with phase space structures, Lie algebras, differential geometry, exterior algebra and perturbation methods. 
Arnol'd's way of writing is incredibly clean, as if he really wanted to write a book with no "mysteries" and "conclusions that jump out of nowhere". The exercises are not very suited for a course. Saletan's "Classical Dynamics: a Contemporary Approach": Very nice book. A little more developed mathematically than Arnol'd's, since it delves into the structure of the cotangent bundle and spends a great deal of the book talking about chaos and Hamilton-Jacobi theory. The proofs are not very elegant, but I'd choose it as a textbook for a graduate course. Some nice exercises. Fasano's "Analytical Dynamics": Also a graduate-textbook-style one. Very close to Saletan's way of writing, trying to explain to physicists the mathematical nature of Mechanics without too much rigor, but developing proofs of many theorems. Very nice chapter on angular momentum, very nice exercises (some of them, solved!). Incredibly nice introduction to Lie derivatives and canonical transformations, and very philosophically inclined chapters that try to answer "why is this this way" or "what does that mean, really?". Lanczos' "The Variational Principles of Mechanics": This book is kept close at all times. Not suited (at all) as a textbook, more like a companion throughout life. The most philosophical, inquiring and historical Mechanics book ever written. If you want to read a very beautiful account of the structure, the problems, the development and the birth of mechanical concepts I'd recommend this book without blinking. It is a physics book: calculus and stuff, but looks like it were written by someone who liked to ask deep questions of the kind "why do we use this instead of this, and why is mathematics such a perfect language for physics?". It's just amazing. Marsden's "Foundations of Mechanics": This is the bible of Mechanics. Since it's a bible, no one ever read it all or understood it all. Not to be used as a textbook ever. 
It's a book aimed for mathematicians, but the mathematical physicist will learn a lot from it, since it's quite self contained in what touches the maths: they're all developed in the first two chapters. Even though, very acidly developed. Hard to read, hard to understand, hard to grasp some proofs... In general, hard to use. Even though, I really like some parts of if... A lot. Ana Cannas' "Introduction to Symplectic and Hamiltonian Geometry" : Another mathematics book, but this is the best one (in my humble opinion). Can be found for free (in English) at www.impa.br/opencms/pt/biblioteca/pm/PM_11.pdf . Kotkin's "Collection of Problems in Classical Mechanics" : Last but not least, filling in the "with a lot of exercises" hole, Serbo & Kotkin's book is simply the key to score 101 out of 100 in any Mechanics exam. Hundreds of incredible, beautiful, well thought problems together with all (ALL!) their solutions at the end. From very simple to "hell no I'm not trying this one" problems, this book should be a reference to everyone studying the subject. Some of the problems are so nice that you can even publish notes in teaching journals about them, like I've seen once of twice before. Well, this is my humble contribution. I hope it helps you! EDIT.: I just noticed I forgot one book that really changed my life: Spivak's "Physics for Mathematicians, Volume I: Mechanics" . The physicist should not be scared about the title. This is the best book ever written about Mechanics. I actually have plans of taking vacations only to read it all. There's nothing missing, all the mathematics is rigorous and perfect, and there's not a single step that isn't clarified by the author (who said he was learning Mechanics himself whilst writing this book). 
There are moments he pauses to inquire about contact structures in symplectic manifolds, but also moments where the reason for inquiry is the fact that forces are represented by vectors; and then he goes back to Newton's time where vectors didn't exist... And tries to explain how people used to see forces and momentum at the time, in his opinion. It's just magical. He's as worried about presenting the content of the subject as to try to grasp why the definitions are the way they are, and then justify it historically. Sorry if I'm being redundant, but please read this book!
{ "source": [ "https://physics.stackexchange.com/questions/135726", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62004/" ] }
135,728
Could black holes' near-light-speed rotation cause galaxies to move like irrotational vortices?
{ "source": [ "https://physics.stackexchange.com/questions/135728", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/59226/" ] }
135,764
I wonder what technology can be obtained from such very expensive experiments/institutes as e.g. those undertaken at CERN? I understand that e.g. the discovery of the Higgs boson confirms our understanding of matter. However, what can result from this effort? Are there examples in history where such experiments directly or indirectly led to correspondingly(!) important new technology? Or is the progress that comes from developing and building such machines greater than that from the actual experimental results?
The truth is we don't know. But when you think about it, how can we know? If we knew what technology would eventually come out of experiments like this, why would we not build that technology now? Large expensive machines like the CERN supercollider help us to further understand the laws of nature. And through understanding these laws, new technologies arise. But we, the physicists, have absolutely no idea what wonderful technologies might result tomorrow because we invested so heavily in science today. It's purported that in 1850, after Faraday developed the electric generator, the British minister of finance asked him what practical value there was to electricity. Faraday could not have known that electricity would one day form the backbone of all modern society (but that didn't stop him from making a snarky remark). It's hard to predict the future; we labour in science in the hope that what we do will prove useful for some new and amazing technologies. But we don't know what technologies will result from our expensive laboratories any more than Faraday knew that electricity would allow you to make a computer.
{ "source": [ "https://physics.stackexchange.com/questions/135764", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/50509/" ] }
135,766
I'm studying rigid body dynamics lately. I came across the definition of torque, and though I've found a lot of explanations as to why there is an r there (the moment arm), all of them are mathematical (equating work and so on). None of them explained it physically, and I still couldn't figure out why the distance from the axis of rotation increases the net effect, or torque. So I thought about this and came to this line of thought: rotation can be thought of as a rigid body having all its infinitesimal masses perform circular motion about a fixed axis. There is pure rotation, hence the angular velocity is the same for every point. Thus velocity, which is omega times r, increases with the distance from the rotation axis. So if a force is applied at a greater distance, this implies more velocity at the point of application, and since the body is rigid, all the other mass connected to the point of application goes along through inter-atomic interactions, and hence more rotational effect. Is this line of thought correct? So what happens inside a body when it rotates? Do the rest of the atoms go along due to electromagnetic attraction, and if so, can someone explain exactly what happens inside the body when it rotates, and where that r comes from, from an inter-atomic point of view?
{ "source": [ "https://physics.stackexchange.com/questions/135766", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/59244/" ] }
135,780
Assuming that there is an interaction between 1 and 3 (they attract each other), what are the forces between 1 and 2? I know it is as if the force acts on a different body (1+2), but I want to know the exact forces between them.
{ "source": [ "https://physics.stackexchange.com/questions/135780", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/42768/" ] }
135,787
When solving the equation $$\boxed{ {1 \over r^2}{\partial \left( r^2 E_r \right) \over \partial r} + {1 \over r\sin\theta}{\partial \over \partial \theta} \left( E_\theta\sin\theta \right) + {1 \over r\sin\theta}{\partial E_\phi \over \partial \phi} = Const}$$ where $Const$ depends only on $r$, and $E_r$, $E_{\theta}$ and $E_{\phi}$ are three unknown functions, and we assume the boundary conditions are spherically symmetric, what is the argument for $E_{\theta}$ and $E_{\phi}$ being zero?
{ "source": [ "https://physics.stackexchange.com/questions/135787", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/31585/" ] }
136,724
As the title asks: How close can you get to lava before burning? I know that it depends on a number of factors: speed of lava flow, wind direction/strength, type(?) of lava flow (related to speed, in part, I think?). I'm guessing it also depends on the person and what they're wearing. I'd be looking for an actual distance, preferably something I could calculate if I know the above, but general rules would work as well. (Note: The general rule of "Just don't go near lava in the first place" has already been taken into account.) This question is partially inspired by movies/games that show characters near lava where there should be enough heat (without actually touching it) to simply burst their garments into flames.
The factors that most matter when you are near lava:

- The fractional solid angle of lava as subtended at the observer ("how much lava do you see")
- The temperature of the lava
- The reflectivity of the clothing you are wearing
- Any effect of air flow (wind blowing towards the lava or away from it)
- Toxic fumes...

In essence, if we treat lava as a black body radiator with an emissivity of 0.8 (just to pick a "reasonable" value), we can compute the heat flow to an observer. This is essentially a fraction of the heat flow you would have if you were completely surrounded on all sides. This means that if you have a semi-infinite plane of lava, your height as an observer will matter a great deal - if you crouch down, the plane "looks smaller" and you will experience less heat flux. When you stand up, your head will get more heat than the rest of you.

Temperature: radiated power goes as the fourth power of temperature, so this is the most important number to estimate correctly. A 10% change (say from 800 to 900 C) results in a 40% change in radiation. Google gives values from 800 (Mt St Helens) to 1100 (Hawaiian basalt), so there is a lot of variability here.

Reflectivity: assume you wear white clothes (looks better in the movie); you might reflect 80% of the incident radiation.

Air flow: if there is a bit of wind blowing to cool you down, that will help. Luckily, if you are on the edge of a lava field, the effect of the heat will be to draw cold air in and then lift it up - so you should have a cool breeze (I have never been near a lava field but I think that's a reasonable speculation).

Toxic fumes: if the above is true, the effect of toxic fumes will be mitigated by the built-in "extractor fan" formed by the heat.

Calculating: assume a height $h$ at distance $d$ from a semi-infinite plane at temperature $T$:

Heat flux per unit area of the lava (Stefan-Boltzmann law) $$F = \epsilon \sigma T^4$$ Fraction of solid angle covered (I think this approximation is valid...
there may be a factor 2 gone astray): $$f = \frac{\tan^{-1}\frac{h}{d}}{2\pi}$$ Apparent heat flux at observer (taking into account reflectivity $r$ and emissivity $\epsilon$): $$F_{obs}= \epsilon \sigma T^4 (1-r) \frac{\tan^{-1}\frac{h}{d}}{2\pi}$$ The intensity of the sun on earth's surface is about $1 kW/m^2$. Let's assume that you are OK when you are receiving five times that (just to get an order of magnitude). Then we need to solve for $h/d$ in the above (let's use hot lava - 1300 K): $$\frac{h}{d} = \tan\left(\frac{2\pi \cdot 5\cdot 10^3}{0.8 \cdot 5.6 \cdot 10^{-8} \cdot 1300^4 \cdot 0.2}\right)$$ This results in an angle of about 50 degrees. That's interesting - it suggests that if you get close to the lava but crouch down, you should be OK. But if you stand up, the fact you are "looking at" so much lava burns you. Put differently - if you are 1.80 m ("six feet") tall, then you are OK when you are at least 2 m from the edge of the lava - for all the above assumptions. Note that reflectivity does play directly into this calculation - if you don't wear a reflective face mask, the heat of 25 suns will be bearing down on you, and that may be too much... In which case you need an angle around 10 degrees - or to stand about 10 m away. Of course there are secondary effects of heat absorption etc. - but this is actually quite an interesting result.
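The estimate above is easy to check numerically. Here's a minimal sketch of the same model; the function name and default values are mine, chosen to match the assumptions in the text (emissivity 0.8, reflectivity 0.8, 1300 K, tolerable flux of five suns):

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def safe_view_angle(T=1300.0, emissivity=0.8, reflectivity=0.8,
                    tolerable_flux=5.0e3):
    """Angle atan(h/d), in radians, at which the absorbed flux equals
    tolerable_flux, using F_obs = eps*sigma*T^4*(1-r)*atan(h/d)/(2*pi).
    Only meaningful while the result stays below pi/2."""
    full_view_flux = emissivity * SIGMA * T**4 * (1.0 - reflectivity)
    return 2.0 * math.pi * tolerable_flux / full_view_flux

angle = safe_view_angle()
h = 1.80                        # observer height, metres
d = h / math.tan(angle)         # distance at which that angle is reached
print(f"{math.degrees(angle):.0f} deg, stand at least {d:.2f} m back")
```

With these inputs the formula gives an angle around 70 degrees rather than the ~50 quoted above; the result is quite sensitive to the chosen tolerable flux and reflectivity, so it should be read as an order-of-magnitude check of the same model, not a precise figure.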
{ "source": [ "https://physics.stackexchange.com/questions/136724", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/59626/" ] }
136,752
I am suddenly struck by the question of whether gravitation affects magnetism in some way. Then again, gravity is a weak force while magnetism seems to be a strong one, so would magnetism affect gravity instead? Or do they "ignore" each other, being forces which do not interact? The answer to this is related to this question: If the earth's core were to cool so that it were no longer liquid, no longer rotated, and thus produced no magnetic field, would this do anything to earth's gravity?
The electromagnetic field tensor $F_{\mu\nu}$ which encodes all the information about the electric and magnetic field, certainly contributes to the energy-stress tensor $T_{\mu\nu}$, which appears in the Einstein Field Equations: $$G_{\mu\nu}= 8\pi G T_{\mu\nu}$$ The left hand side of this equation encodes the geometry of spacetime, while the right hand side describes the 'sources' of gravity. Therefore, we can say that magnetism does have an effect on the geometry of spacetime i.e. gravity.
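For completeness, the electromagnetic field's contribution to $T_{\mu\nu}$ can be written out explicitly in terms of the field tensor (a standard result, here in SI units):

$$T_{\mu\nu} = \frac{1}{\mu_0}\left( F_{\mu\alpha} F_\nu{}^{\alpha} - \frac{1}{4}\, g_{\mu\nu} F_{\alpha\beta} F^{\alpha\beta} \right)$$

Its $T_{00}$ component is the familiar field energy density $\tfrac{1}{2}\left(\epsilon_0 E^2 + B^2/\mu_0\right)$, so electric and magnetic fields really do curve spacetime - though for something like the Earth's magnetic field, this energy is utterly negligible compared with the planet's mass-energy.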
{ "source": [ "https://physics.stackexchange.com/questions/136752", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5279/" ] }
136,754
A mosquito just wanted to bite me! Bang - and it stuck to my hand, hardly recognisable anymore. I said to my girlfriend: "Just reduced the dimension of the mosquito by one!" Therefore the question: If I squeeze a three-dimensional body really hard, will it ever become two-dimensional? Two-dimensional means that it has a thickness of 0, so a 1-atom layer is not 2D. Edit: I'd like to make this question a bit more general: Are there any real two-dimensional objects or phenomena in a 3D (or 4D) world?
From a mathematical point of view you will never make something two dimensional by squeezing it because it will always have a thickness greater than zero. The limit would be something like graphene that is a single atom thick. This is pretty thin, but it still has a non-zero thickness so it's still 3D. However in the quantum world it is possible to produce structures that behave as if they are two dimensional. Particles like electrons have a wavelength, and if you can make a sheet thinner than the wavelength of the electrons then electrons in the sheet will behave as if the sheet really is just two dimensional. Indeed this is why graphene is often described as a 2D material. It's thin enough that conduction electrons behave as if they are restricted to a two dimensional manifold. There is more on this in the question How is graphene a 2D substance? Note that this sort of system is only two dimensional for a range of energies, because if you increase the energy you reduce the particle wavelength and at some point the wavelength reduces enough that it falls below the thickness of the sheet.
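For a rough sense of scale, one can compare the de Broglie wavelength of a low-energy electron with graphene's thickness (about 0.34 nm, the interlayer spacing of graphite). This sketch assumes an idealized free electron; in a real solid the effective mass and band structure matter:

```python
import math

H = 6.626e-34      # Planck constant, J*s
M_E = 9.109e-31    # electron mass, kg
EV = 1.602e-19     # one electron-volt in joules

def de_broglie_wavelength(energy_ev):
    """Free-electron de Broglie wavelength in metres: lambda = h / sqrt(2 m E)."""
    p = math.sqrt(2.0 * M_E * energy_ev * EV)  # momentum from kinetic energy
    return H / p

lam = de_broglie_wavelength(1.0)       # a 1 eV electron
print(f"lambda = {lam * 1e9:.2f} nm")  # about 1.23 nm
graphene_thickness = 0.34e-9
print(lam > graphene_thickness)        # wavelength exceeds the sheet thickness
```

Since the wavelength is several times larger than the sheet is thick, motion perpendicular to the sheet is frozen out, which is the sense in which the electrons live in two dimensions.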
{ "source": [ "https://physics.stackexchange.com/questions/136754", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/50509/" ] }
136,860
TV documentaries invariably show the Big Bang as an exploding ball of fire expanding outwards. Did the Big Bang really explode outwards from a point like this? If not, what did happen?
The simple answer is that no, the Big Bang did not happen at a point. Instead, it happened everywhere in the universe at the same time. Consequences of this include: The universe doesn't have a centre: the Big Bang didn't happen at a point so there is no central point in the universe that it is expanding from. The universe isn't expanding into anything: because the universe isn't expanding like a ball of fire, there is no space outside the universe that it is expanding into. In the next section, I'll sketch out a rough description of how this can be, followed by a more detailed description for the more determined readers. A simplified description of the Big Bang Imagine measuring our current universe by drawing out a grid with a spacing of 1 light year. Although obviously, we can't do this, you can easily imagine putting the Earth at (0, 0), Alpha Centauri at (4.37, 0), and plotting out all the stars on this grid. The key thing is that this grid is infinite $^1$ i.e. there is no point where you can't extend the grid any further. Now wind time back to 7 billion years after the big bang, i.e. about halfway back. Our grid now has a spacing of half a light year, but it's still infinite - there is still no edge to it. The average spacing between objects in the universe has reduced by half and the average density has gone up by a factor of $2^3$ . Now wind back to 0.0000000001 seconds after the big bang. There's no special significance to that number; it's just meant to be extremely small. Our grid now has a very small spacing, but it's still infinite. No matter how close we get to the Big Bang we still have an infinite grid filling all of space. You may have heard pop science programs describing the Big Bang as happening everywhere and this is what they mean. The universe didn't shrink down to a point at the Big Bang, it's just that the spacing between any two randomly selected spacetime points shrank down to zero. 
So at the Big Bang, we have a very odd situation where the spacing between every point in the universe is zero, but the universe is still infinite. The total size of the universe is then $0 \times \infty$ , which is undefined. You probably think this doesn't make sense, and actually, most physicists agree with you. The Big Bang is a singularity , and most of us don't think singularities occur in the real universe. We expect that some quantum gravity effect will become important as we approach the Big Bang. However, at the moment we have no working theory of quantum gravity to explain exactly what happens. $^1$ we assume the universe is infinite - more on this in the next section For determined readers only To find out how the universe evolved in the past, and what will happen to it in the future, we have to solve Einstein's equations of general relativity for the whole universe. The solution we get is an object called the metric tensor that describes spacetime for the universe. But Einstein's equations are partial differential equations, and as a result, have a whole family of solutions. To get the solution corresponding to our universe we need to specify some initial conditions . The question is then what initial conditions to use. Well, if we look at the universe around us we note two things: if we average over large scales the universe looks the same in all directions, that is it is isotropic if we average over large scales the universe is the same everywhere, that is it is homogeneous You might reasonably point out that the universe doesn't look very homogeneous since it has galaxies with a high density randomly scattered around in space with a very low density. However, if we average on scales larger than the size of galaxy superclusters we do get a constant average density. 
Also, if we look back to the time the cosmic microwave background was emitted (380,000 years after the Big Bang and well before galaxies started to form) we find that the universe is homogeneous to about $1$ part in $10^5$, which is pretty homogeneous. So as the initial conditions let's specify that the universe is homogeneous and isotropic, and with these assumptions, Einstein's equation has a (relatively!) simple solution. Indeed this solution was found soon after Einstein formulated general relativity and has been independently discovered by several different people. As a result the solution glories in the name Friedmann–Lemaître–Robertson–Walker metric , though you'll usually see this shortened to FLRW metric or sometimes FRW metric (why Lemaître misses out I'm not sure). Recall the grid I described to measure out the universe in the first section of this answer, and how I described the grid shrinking as we went back in time towards the Big Bang? Well the FLRW metric makes this quantitative. If $(x, y, z)$ is some point on our grid then the current distance to that point is just given by Pythagoras' theorem: $$ d^2 = x^2 + y^2 + z^2 $$ What the FLRW metric tells us is that the distance changes with time according to the equation: $$ d^2(t) = a^2(t)(x^2 + y^2 + z^2) $$ where $a(t)$ is a function called the scale factor. We get the function for the scale factor when we solve Einstein's equations. Sadly it doesn't have a simple analytical form, but it has been calculated in answers to the previous questions What was the density of the universe when it was only the size of our solar system? and How does the Hubble parameter change with the age of the universe? . The result is plotted in the answers linked above.
The Big bang happens because if we go back to time to $t = 0$ the scale factor $a(0)$ is zero. This gives us the remarkable result that the distance to any point in the universe $(x, y, z)$ is: $$ d^2(t) = 0(x^2 + y^2 + z^2) = 0 $$ so the distance between every point in the universe is zero. The density of matter (the density of radiation behaves differently but let's gloss over that) is given by: $$ \rho(t) = \frac{\rho_0}{a^3(t)} $$ where $\rho_0$ is the density at the current time, so the density at time zero is infinitely large. At the time $t = 0$ the FLRW metric becomes singular. No one I know thinks the universe did become singular at the Big Bang. This isn't a modern opinion: the first person I know to have objected publically was Fred Hoyle , and he suggested Steady State Theory to avoid the singularity. These days it's commonly believed that some quantum gravity effect will prevent the geometry from becoming singular, though since we have no working theory of quantum gravity no one knows how this might work. So to conclude: the Big Bang is the zero time limit of the FLRW metric, and it's a time when the spacing between every point in the universe becomes zero and the density goes to infinity. It should be clear that we can't associate the Big Bang with a single spatial point because the distance between all points was zero so the Big Bang happened at all points in space. This is why it's commonly said that the Big Bang happened everywhere. In the discussion above I've several times casually referred to the universe as infinite , but what I really mean is that it can't have an edge. Remember that our going-in assumption is that the universe is homogeneous i.e. it's the same everywhere. If this is true the universe can't have an edge because points at the edge would be different from points away from the edge. A homogenous universe must either be infinite, or it must be closed i.e. have the spatial topology of a 3-sphere. 
The recent Planck results show the curvature is zero to within experimental error, so if the universe is closed the scale must be far larger than the observable universe.
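For a flat universe containing only matter and a cosmological constant, the Friedmann equation actually has a closed-form solution for $a(t)$, which makes the behaviour described above easy to verify. A minimal sketch, assuming the round parameter values $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$, $H_0 = 70$ km/s/Mpc (illustrative, not a fit to data):

```python
import math

OMEGA_M, OMEGA_L = 0.3, 0.7        # flat universe: Omega_m + Omega_L = 1
H0 = 70.0 * 1000.0 / 3.086e22      # 70 km/s/Mpc converted to 1/s

def scale_factor(t):
    """a(t) for a flat matter + Lambda universe (t in seconds):
    a(t) = (Om/OL)^(1/3) * sinh(1.5 * sqrt(OL) * H0 * t)^(2/3)"""
    x = 1.5 * math.sqrt(OMEGA_L) * H0 * t
    return (OMEGA_M / OMEGA_L) ** (1.0 / 3.0) * math.sinh(x) ** (2.0 / 3.0)

# Time at which a(t) = 1, i.e. the age of the universe in this model
t0 = 2.0 / (3.0 * math.sqrt(OMEGA_L) * H0) * math.asinh(math.sqrt(OMEGA_L / OMEGA_M))

GYR = 3.156e16                     # seconds per gigayear
print(f"age = {t0 / GYR:.1f} Gyr") # about 13.5 Gyr for these parameters
print(scale_factor(t0))            # 1 by construction
print(scale_factor(t0 * 1e-6))     # tiny: a(t) -> 0 as t -> 0
```

The matter density then scales as $\rho \propto a^{-3}$, so it diverges in the $t \to 0$ limit, exactly the singular behaviour described above.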
{ "source": [ "https://physics.stackexchange.com/questions/136860", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1325/" ] }
137,189
I, like everybody I suppose, have read the explanations why the colour of the sky is blue: ... the two most common types of matter present in the atmosphere are gaseous nitrogen and oxygen. These particles are most effective in scattering the higher frequency and shorter wavelength portions of the visible light spectrum. This scattering process involves the absorption of a light wave by an atom followed by reemission of a light wave in a variety of directions. The amount of multidirectional scattering that occurs is dependent upon the frequency of the light. ... So as white light.. from the sun passes through our atmosphere, the high frequencies become scattered by atmospheric particles while the lower frequencies are most likely to pass through the atmosphere without a significant alteration in their direction. This scattering of the higher frequencies of light illuminates the skies with light on the BIV end of the visible spectrum. Compared to blue light, violet light is most easily scattered by atmospheric particles. However, our eyes are more sensitive to light with blue frequencies. Thus, we view the skies as being blue in color. and why sunsets are red: ... the light that is not scattered is able to pass through our atmosphere and reach our eyes in a rather non-interrupted path. The lower frequencies of sunlight (ROY) tend to reach our eyes as we sight directly at the sun during midday. While sunlight consists of the entire range of frequencies of visible light, not all frequencies are equally intense. In fact, sunlight tends to be most rich with yellow light frequencies. For these reasons, the sun appears yellow during midday due to the direct passage of dominant amounts of yellow frequencies through our atmosphere and to our eyes. The appearance of the sun changes with the time of day. While it may be yellow during midday, it is often found to gradually turn color as it approaches sunset. This can be explained by light scattering. 
As the sun approaches the horizon line, sunlight must traverse a greater distance through our atmosphere; this is demonstrated in the diagram below. As the path that sunlight takes through our atmosphere increases in length, ROYGBIV encounters more and more atmospheric particles. This results in the scattering of greater and greater amounts of yellow light. During sunset hours, the light passing through our atmosphere to our eyes tends to be most concentrated with red and orange frequencies of light. For this reason, the sunsets have a reddish-orange hue. The effect of a red sunset becomes more pronounced if the atmosphere contains more and more particles. Can you explain why the colour of the sky passes from blue to orange/red skipping altogether the whole range of green frequencies? I have only heard of the legendary 'green, emerald line/ flash' that appears in particular circumstances Green flashes are enhanced by mirage, which increase refraction... is more likely to be seen in stable, clear air,... One might expect to see a blue flash, since blue light is refracted most of all, and ... is therefore the very last to disappear below the horizon, but the blue is preferentially scattered out of the line of sight, and the remaining light ends up appearing green but I have never seen it, nor do I know anybody who ever did.
The sky does not skip over the green range of frequencies. The sky is green. Remove the scattered light from the Sun and the Moon and even the starlight, if you so wish, and you'll be left with something called airglow (check out the link, it's awesome, great pics, and nice explanation). Because the link does such a good job explaining airglow, I'll skip the nitty gritty. So you might be thinking, "Jim, you half-insane ceiling fan, everybody knows that the night sky is black!" Well, you're only half right. The night sky isn't black. The link above explains the science of it, but if that's not good enough, try to remember back to a time when you might have been out in the countryside. No bright city lights, just the night sky and trees. Now when you look at the horizon, can you see the trees? Yes, they're black silhouettes against the night sky. But how could you see black against black? The night sky isn't black. It's green thanks to airglow (or, if you're near a city, orange thanks to light pollution). Stop, it's picture time. Here's an above the atmosphere view of the night sky from Wikipedia: And one from the link I posted, just in case you didn't check it out: See, don't be worried about green. The sky gets around to being green all the time.
{ "source": [ "https://physics.stackexchange.com/questions/137189", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
137,229
Earth wasn't always the only water-world in the solar system. Mars also appears to have started out wet but, as conditions changed, Mars lost its oceans. So, how has Earth managed to avoid a similar fate? Doesn't the Giant impact hypothesis explain the origin of the Earth's core (geomagnetic field) activities which help keep the planet warm?
The Earth's climate isn't quite as stable as you think. The Earth's climate has toggled back and forth between a greenhouse Earth and an icehouse Earth for the last 600 million years or so. During the icehouse Earth phases, the climate can enter an ice age, an extended period of time during which the climate oscillates between glaciations and interglacials. We are currently in the midst of an interglacial period of an ice age. On the flip side of the icehouse Earth climate, dinosaurs and tropical plants lived close to the poles when the Earth was in a greenhouse phase. In the past, there was a third climate phase, snowball Earth, which made the icehouse Earth look mild in comparison. Even during the worst glaciation, ice rarely reached closer than 40 degrees latitude of the equator. During snowball Earth phases (the last of which ended over 600 million years ago), ice reached well into the tropics, and possibly all the way to the equator. One of the open issues in paleoclimatology is explaining why the young Earth wasn't perpetually stuck in the snowball Earth phase. The Sun's luminosity has been growing since it formed. Sunlight was only 75% to 85% as intense when the Earth was young as it is now. So why wasn't the Earth permanently frozen long, long ago? Explaining why this was not the case (and geological evidence says it wasn't) is the faint young sun problem. Regarding Mars, that's fairly simple. Mars is too small. Mars's core froze long ago, its magnetic dynamo stopped operating long ago, and if Mars ever did have plate tectonics, that process stopped long ago. The end of plate tectonics stops any outgassing that would otherwise have replenished the atmosphere. That Mars is small means it has a tenuous hold on its atmosphere. The loss of a magnetic field (if it ever had one) would most likely have exaggerated the atmospheric loss, particularly if this happened when the Sun was young and had a much greater solar wind than it has now.
The combination of the above means that even if Mars was habitable long, long ago, that habitability was rather short-lived. Regarding the giant impact hypothesis, you have it exactly backwards. Look to our sister planet. Venus has a very thick atmosphere and as a result has surface temperatures higher than those on Mercury. The giant impact hypothesis offers one explanation for why Earth is not like Venus. If it wasn't for that impact, the Earth would still have a thick primordial atmosphere and we wouldn't be here. Our planet would be uninhabitable. Mars would be habitable if it was the same size as the Earth or Venus and if it had a Venus-like atmosphere.

Update: Regarding Anthropogenic Global Warming

A number of comments have taken this answer to be proof that anthropogenic global warming is not happening. To the contrary, it most certainly is happening. As an analogy, consider a farmer who takes a trip to the Grand Canyon, then Badlands National Park, and then the Channeled Scablands in eastern Washington state. The farmer can rightfully conclude that nature has destructive capabilities that can far outdo even the very worst of farming practices. He cannot however conclude that poor farming practices do not cause erosion based on the existence of those remarkable records of natural erosion. The extent to which anthropogenic global warming is happening and what that means to humanity -- that's a different question and should be asked as such. What the long term variations in the Earth's climate as described in this answer mean to humanity, well, that too is a different question.
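To put "Mars is too small" in numbers, one can compare escape velocities (a rough sketch; the gravitational parameters and radii below are standard textbook values):

```python
import math

# GM (gravitational parameter, m^3/s^2) and mean radius (m) -- textbook values
GM_earth, r_earth = 3.986e14, 6.371e6
GM_mars, r_mars = 4.283e13, 3.390e6

def v_escape(GM, r):
    # escape velocity from the surface: v = sqrt(2 GM / r)
    return math.sqrt(2.0 * GM / r)

v_e = v_escape(GM_earth, r_earth)   # ~11.2 km/s
v_m = v_escape(GM_mars, r_mars)     # ~5.0 km/s
print(v_e, v_m)
```

Gas molecules high in the atmosphere whose thermal speeds approach the escape velocity leak away, so roughly half the escape velocity means a far leakier atmosphere, especially under a strong early solar wind.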
{ "source": [ "https://physics.stackexchange.com/questions/137229", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/26665/" ] }
137,350
The solar system is non-integrable and has chaos. The sun-earth-moon three-body system might be chaotic. So, how far into the future can we predict solar eclipses and/or lunar eclipses? How about 1 million years?
On predicting planetary orbits

A number of studies have shown that the inner solar system is chaotic, with a Lyapunov time scale of about 5 million years. This 5 million year time scale means that while one can somewhat reasonably create a planetary ephemeris (a time-based catalog of where the planets were / will be) that spans from 10 million years into the past to 10 million years into the future, going beyond that by much is essentially impossible. At a hundred million years, the position of a planet on its orbit becomes complete garbage, meaning that the uncertainties in the planetary positions exceed the orbital radii. What one can do is forgo the idea of predicting position and instead ask only about parameters that determine the size, shape, and inclination of planetary orbits. This lets one look to secular chaos as opposed to dynamic chaos, which in turn lets one attempt to answer the key question: Is the solar system stable? The answer to this question is "not quite". The key culprit is Mercury, the most chaotic of all of the planets. One factor is its small size, which magnifies perturbations from other planets. Another factor is resonances with Jupiter and Venus. Both of these planets have multiple resonances with Mercury's eccentricity (Jupiter more so than Venus), and Venus also has multiple resonances with Mercury's inclination. These resonances spell doom for Mercury. Mercury is perched on the threshold of secular chaos, and is likely to be ejected from the solar system in a few billion years.

On predicting eclipses

The issue of chaos becomes even more extreme when trying to predict eclipses, particularly solar eclipses. The Sun, Jupiter, and Venus have marked effects on the long-term behavior of the Moon's orbit. Even more importantly, however, the Moon is receding from the Earth due to tidal interactions, and this rate is not constant. The current recession rate is about twice the average rate over the last several hundred million years.
Changes in the shape and interconnectivity of the oceans drastically change the rate at which the Moon recedes from the Earth. The melting of the ice covering Antarctica and Greenland would also significantly change the recession rate, as would the Earth entering another glaciation. Even a small change destroys the ability to make long term predictions of the Moon's orbit. NASA developed a pair of catalogs of solar eclipses: one covering a 5,000-year period spanning from about 4000 years ago to about 1000 years into the future; the other a 10,000-year catalog of solar eclipses spanning from about 6000 years ago to about 4000 years into the future. The accuracy of these catalogs degrades drastically before 3000 years ago and after 1000 years into the future. Beyond these inner limits, the path of the eclipse over the Earth's surface becomes markedly unreliable, as does the ability to determine whether the eclipse will be partial, total, annular, or hybrid. At the outer time limits of the longer catalog, whether an eclipse did / will occur begins to become a bit dubious. Because of the Earth's much larger shadow, predictions of lunar eclipses are a bit more reliable, but not much. The problem is that of exponential error growth, which is a characteristic of dynamically chaotic systems. Predictions of lunar eclipses more than a few tens of thousands of years into the future are more or less nonsense. The millions of years asked in the question: No. The technique of orbital averaging once again can be of aid in determining characteristics of the Moon's orbit (but not position on the orbit). This can be augmented by geological records. Various tidal rhythmites give clues as to the paleological orbit of the Moon. A few rock formations exhibit layering that recorded the number of days in a month and the number of months in a year at the time the rock formation was created.

References

Adams, Fred C., and Gregory Laughlin. "Migration and dynamical relaxation in crowded systems of giant planets." Icarus 163.2 (2003): 290-306.
Espenak and Meeus. "Five Millennium Canon of Solar Eclipses: -1999 to +3000." NASA Technical Publication TP-2006-214141 (2006).
Espenak and Meeus. "Ten Millennium Canon of Long Solar Eclipses." Eclipse Predictions by Fred Espenak and Jean Meeus (NASA's GSFC).
Laskar, Jacques. "A numerical experiment on the chaotic behaviour of the solar system." Nature 338 (1989): 237-238.
Laskar, Jacques. "Large scale chaos and marginal stability in the solar system." Celestial Mechanics and Dynamical Astronomy 64.1-2 (1996): 115-162.
Laskar, Jacques, and Monique Gastineau. "Existence of collisional trajectories of Mercury, Mars and Venus with the Earth." Nature 459.7248 (2009): 817-819.
Lithwick, Yoram, and Yanqin Wu. "Theory of Secular Chaos and Mercury's Orbit." The Astrophysical Journal 739.1 (2011): 31.
Lithwick, Yoram, and Yanqin Wu. "Secular chaos and its application to Mercury, hot Jupiters, and the organization of planetary systems." Proceedings of the National Academy of Sciences (2013): 201308261.
Naoz, Smadar, et al. "Secular dynamics in hierarchical three-body systems." Monthly Notices of the Royal Astronomical Society (2013): stt302.
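The exponential error growth behind the ~5-million-year Lyapunov time quoted above can be sketched in a few lines; the initial position uncertainty is an assumed illustrative value, not a number from the references:

```python
import math

lyapunov_time = 5.0   # Myr, approximate value for the inner solar system
delta0 = 15.0         # assumed initial position uncertainty, meters

def uncertainty(t_myr):
    # chaotic divergence: errors grow roughly like exp(t / t_Lyapunov)
    return delta0 * math.exp(t_myr / lyapunov_time)

d10 = uncertainty(10.0)    # ~110 m: an ephemeris is still meaningful
d100 = uncertainty(100.0)  # millions of kilometers: the prediction is gone
```

Even this idealized pure exponential turns a 15 m uncertainty into millions of kilometers within 100 Myr, and in real integrations the divergence is worse because perturbations feed back on one another.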
{ "source": [ "https://physics.stackexchange.com/questions/137350", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/42337/" ] }
137,504
Computers generate heat when they work. Is it a result of information processing or friction (resistance)? Are these just different ways to describe the same thing? Or does some definite part of the heat "come from each explanation"? I often read that it's a necessary byproduct of information processing. There are irreversible operations such as AND gates and the remaining information goes to heat. But so many other things generate heat as well! A light bulb, electric hotplates, gears, etc. (These probably don't process information the way the computer does, but I may be wrong from a physical perspective.) Earlier I had always assumed the computer is like this as well. It basically has small wires in the processor and the resistance could explain the heat. Maybe these are parallel explanations. The information processing aspect may say that there has to be some heat as byproduct in some way in any realization of an abstract computer, and the friction aspect could then describe how this actually happens in this concrete wires-and-transistors-type physical implementation of the abstract computer. But maybe the two explanations account for separate amounts of the heat. Or maybe one accounts for a subset of the other, again in a partially parallel explanation way. Can someone clarify?
Landauer's principle (original paper pdf | doi ) expresses a non-zero lower bound on the amount of heat that must be generated by computers. However, this entropy-necessitated heat is dwarfed by the heat generated through ordinary electrical resistance of the circuitry (the same reason light bulbs give off heat).
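For a sense of scale, here is a back-of-the-envelope comparison; the per-operation energy assumed for a real CPU is a rough ballpark figure, not a measured one:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer limit: minimum heat per irreversible bit erasure
E_landauer = k_B * T * math.log(2)   # ~2.9e-21 J

# assumed ballpark for a modern CPU: ~1e-17 J dissipated per bit operation
E_cpu = 1e-17
print(E_cpu / E_landauer)   # real hardware runs thousands of times above the limit
```

So the entropy-mandated heat is a negligible fraction of what today's chips actually dissipate through resistance.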
{ "source": [ "https://physics.stackexchange.com/questions/137504", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57797/" ] }
137,509
I'm trying to calculate the total energy of a simple two charge system through the integral for electrostatic energy of a system given in Griffiths' book: $$U = \frac{\epsilon_0}{2}\int_V E^2 dV ,$$ where the volume integral runs over all space, so the boundary term (not shown here) decays to zero. I think that this should yield the same answer as the standard formula given for point charges: $$U = \frac{1}{4\pi\varepsilon_0}\frac{Q_1Q_2}{R}.$$ But I'm having trouble evaluating the integral itself. I placed $Q_1$ on the origin of the coordinate axes and $Q_2$ on the $z$-axis a distance $R$ away from the first charge, and expanded the $E^2$ term: $$E = E_1 + E_2$$ so $$E^2 = E_1^2 + 2E_1 \centerdot E_2 + E_2^2.$$ I found that the integral of the self terms diverges when evaluated, and, after reading through Griffiths, decided to discard the self-energy terms and only retain the energy due to the exchange term. Letting $r = \sqrt{x^2+y^2+z^2}$ and $r'= \sqrt{x^2+y^2+(z-R)^2}$, the fields are $$E_1 = \frac{1}{4\pi\varepsilon_0}\frac{Q_1}{r^3}\vec{r}\quad\text{and}\quad E_2 = \frac{1}{4\pi\varepsilon_0}\frac{Q_2}{r'^3}\vec{r'},$$ so the interaction term is $$U = \epsilon_0\int_V E_1\centerdot E_2 \space dV = \frac{Q_1 Q_2}{16\pi^2\varepsilon_0}\int_V \frac{x^2 + y^2 + z^2-zR}{(x^2 + y^2 + z^2)^{\frac{3}{2}} \space (x^2+y^2+(z-R)^2)^{\frac{3}{2}}}\space dV.$$ Converting to spherical coordinates, with $r=\sqrt{x^2+y^2+z^2}$, $\theta$ the angle from the $z$-axis and $\varphi$ the azimuthal angle, and evaluating the azimuthal integral: $$U = \frac{Q_1 Q_2}{8\pi\varepsilon_0}\int_0^\infty \int_0^{\pi} \frac{r - R\cos(\theta)}{(r^2-2Rr\cos(\theta)+R^2)^{\frac{3}{2}}}\sin(\theta) \space d\theta \space dr.$$ I hit a brick wall upon trying to evaluate the integral - ordinarily I would use a substitution in the single integral case but am unsure of how to do so for a double integral when the variables are all mixed up. Am I on the right track?
I'm not sure whether this integral converges, given that the other two diverge. Does this formula apply to point charges, or only to continuous charge distributions?
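As a numerical sanity check (with arbitrarily chosen values for the charges and separation), the interaction term can be integrated with scipy, substituting $u=\cos\theta$ in the angular integral and splitting the radial integral at $r=R$, where the integrand has an integrable singularity:

```python
import numpy as np
from scipy.integrate import quad

eps0 = 8.8541878128e-12
Q1 = Q2 = 1e-9   # arbitrary test charges (1 nC each)
R = 0.1          # arbitrary separation (m)

def angular(r):
    # inner integral over u = cos(theta)
    f = lambda u: (r - R * u) / (r * r + R * R - 2.0 * R * r * u) ** 1.5
    val, _ = quad(f, -1.0, 1.0, limit=200)
    return val

# split the radial integral at r = R, where the integrand is singular
I_in, _ = quad(angular, 0.0, R)
I_out, _ = quad(angular, R, np.inf)

U_field = Q1 * Q2 / (8.0 * np.pi * eps0) * (I_in + I_out)
U_point = Q1 * Q2 / (4.0 * np.pi * eps0 * R)
print(U_field, U_point)   # the two values agree
```

So the cross term does converge and reproduces $Q_1Q_2/(4\pi\varepsilon_0 R)$; only the self-energy terms diverge.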
{ "source": [ "https://physics.stackexchange.com/questions/137509", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/59964/" ] }
137,860
I know these two phenomena but I want to know a little deep explanation. What type of fringes are obtained in these phenomena?
Feynman has come from heaven to answer your question! Listen to him: No one has ever been able to define the difference between interference and diffraction satisfactorily. It is just a question of usage, and there is no specific, important physical difference between them. The best we can do, roughly speaking, is to say that when there are only a few sources, say two interfering sources, then the result is usually called interference, but if there is a large number of them, it seems that the word diffraction is more often used.$_1$ To be more explicit, read this passage from Ajoy Ghatak: We should point out that there is not much of a difference between the phenomena of interference and diffraction; indeed, interference corresponds to the situation when we consider the superposition of waves coming out from a number of point sources, and diffraction corresponds to the situation when we consider waves coming out from an area source like a circular or rectangular aperture or even a large number of rectangular apertures (like the diffraction grating).$_2$ Credits: $_1$ Feynman Lectures on Physics $_2$ Optics - Ajoy Ghatak.
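The "few sources vs. many sources" distinction can be seen directly in the standard $N$-source intensity formula $I(\delta) \propto \left[\sin(N\delta/2)/\sin(\delta/2)\right]^2$, where $\delta$ is the phase difference between adjacent sources; a small sketch (the division by $N^2$ just normalizes the central peak to 1):

```python
import numpy as np

def n_source_intensity(N, delta):
    """Normalized intensity of N equally spaced coherent point sources;
    delta is the phase difference between adjacent sources."""
    return (np.sin(N * delta / 2.0) / np.sin(delta / 2.0)) ** 2 / N**2

delta = np.linspace(0.01, 2.0 * np.pi - 0.01, 500)
I_two = n_source_intensity(2, delta)     # the familiar cos^2 "interference" fringes
I_many = n_source_intensity(100, delta)  # sharp "diffraction grating" peaks
```

For $N=2$ this reduces to $\cos^2(\delta/2)$, the two-slit fringe pattern; for large $N$ the same formula gives the sharp principal maxima of a grating. The physics is the same superposition; only the number of sources changes.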
{ "source": [ "https://physics.stackexchange.com/questions/137860", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/60107/" ] }
138,151
What would happen if an accelerated particle (like they create in the LHC) hit a person standing in its path? Would the person die? Would the particle rip a hole? Would the particle leave such a tiny wound that it would heal right away? Something else?
A charged particle will create charge separation (ionization) along its path. This will cause harmful chemical reactions to occur in the body, including DNA damage. The effects of these chemical reactions depend on their amount. The body can heal from a low amount on its own, while a high amount will cause radiation sickness and probably death. This can be calculated, but also deduced by comparison. A single proton from the LHC has 4 TeV of energy. This is much less than the probable energy of cosmic-ray protons. According to this plot, approximately four TeV protons hit each square meter each month. Once per year, they are protons of $10^{16}$ eV, i.e. 10000 times harder than those from the LHC. Cosmic ray protons hardly ever reach Earth's surface but astronauts can be exposed to them. No damage to astronauts from individual protons has been registered. Some astronauts report rare unusual light flashes in their eyes which may be caused by particles penetrating their brains or retinas. P.S. Also see the answer about BEAMS of particles, which can definitely damage the body.
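For scale, it helps to convert these energies to everyday units (a straightforward unit conversion, nothing more):

```python
eV = 1.602176634e-19   # joules per electronvolt

E_lhc = 4e12 * eV      # a 4 TeV LHC proton: ~6.4e-7 J
E_cosmic = 1e16 * eV   # an energetic cosmic-ray proton: ~1.6e-3 J

# ~6.4e-7 J is roughly the kinetic energy of a slowly flying mosquito:
# macroscopically tiny, even though it is enormous for a single particle
print(E_lhc, E_cosmic)
```

The point is that a single particle, however "energetic" by accelerator standards, carries very little energy in macroscopic terms; the danger comes from ionization along its track, not from the raw energy deposit.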
{ "source": [ "https://physics.stackexchange.com/questions/138151", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
138,217
In section 12.11 of Jackson's Classical Electrodynamics, he evaluates an integral involved in the Green function solution to the 4-potential wave equation. Here it is: $$\int_{-\infty}^\infty dk_0 \frac{e^{-ik_0z_0}}{k_0^2-\kappa^2}$$ where $k$ and $z_0$ are real constants. Jackson considers two open contours: one above and one below the real axis. I understand that in order to use Jordan's lemma, when $z_0 < 0$ we have to close the contour in the upper half of the complex plane whereas if $z_0 > 0$ we have to close the contour in the lower half plane. What I don't understand is why it's OK to consider contours above and below the real axis when the original integral is along the real axis. As I understand it, the necessity to deal with poles like this also arises a lot in QFT, so perhaps it is well understood from that point of view.
A simple reference problem

Suppose we want to analyse the problem of a forced harmonic oscillator. Denote as $\phi(t)$ the time dependent position of the oscillator. The oscillator experiences two forces, the spring force $-k\phi(t)$ and an external force $F_{\text{ext}}(t)$. Newton's law says $$ \begin{align} F(t) &= m a(t) \\ -k \phi(t) + F_{\text{ext}}(t) &= m \ddot{\phi}(t) \\ F_{\text{ext}}(t)/m &= \ddot{\phi}(t) + (k/m) \phi(t) \\ j(t) &= \ddot{\phi}(t) + \omega_0^2 \phi(t) \tag 1 \end{align} $$ where $\omega_0$ is the free oscillation frequency and $j(t)\equiv F_{\text{ext}}/m$. We use the following Fourier transform convention: $$ \begin{align} f(t) &= \int_\omega \tilde{f}(\omega) e^{i\omega t} \frac{\mathrm d\omega}{2\pi} \\ \tilde{f}(\omega) &= \int_t f(t) e^{-i\omega t}~\mathrm dt . \end{align} $$ With this convention on Eq. $(1)$, and defining $$\omega_{\pm} \equiv \pm \omega_0,$$ we find $$ \tilde{\phi}(\omega) = \frac{\tilde{j}(\omega)}{\omega_0^2-\omega^2} = \frac{-\tilde{j}(\omega)}{(\omega-\omega_+)(\omega-\omega_-)}. \tag 2 $$ From Eq. $(2)$ we see that the Green's function is $$\tilde{G}(\omega) = \frac{-1}{(\omega-\omega_+)(\omega-\omega_-)}$$ which has poles on the real axis. If we want to compute $\phi(t)$ we do a Fourier transform $$ \phi(t) = \int_\omega \frac{-\tilde{j}(\omega)e^{i\omega t}}{(\omega-\omega_+)(\omega-\omega_-)} \frac{\mathrm d\omega}{2\pi} = \int_\omega e^{i\omega t}\tilde{j}(\omega) \tilde{G}(\omega)\frac{\mathrm d\omega}{2 \pi}. \tag{*} $$ This integral is tricky because of the poles on the axis. The solution everyone knows is to push the poles off the axis by adding an imaginary part to $\omega_{\pm}$, or by moving the contour above or below the real axis, but what does this actually mean physically? How do we choose which direction to push the poles or move the contour?

Damping to the rescue

In a real system, we always have some damping.
In our oscillator model, this could come in the form of a velocity dependent friction $F_{\text{friction}} = -\mu \dot{\phi}(t)$. Defining $2\beta = \mu/m$, the equation of motion becomes $$\ddot{\phi}(t) + 2\beta \dot{\phi}(t) + \omega_0^2\phi(t) = j(t) . \tag 3$$ Fourier transforming everything again leads to Eq. $(2)$ but now with \begin{equation} \omega_{\pm} = \pm \omega_0' + i\beta \end{equation} where \begin{equation} \omega_0' = \omega_0\sqrt{1-(\beta/\omega_0)^2}. \end{equation} Therefore, we see that adding damping moves the poles a bit toward the origin along the real axis, but also gives them a positive imaginary component. In the limit of small damping (i.e. $\beta \ll \omega_0$), we find $\omega_0' \approx \omega_0$. In other words, the frequency shift of the poles due to the damping is small. So let's ignore that and focus on the added imaginary part. Ok, suppose we want to do the integral $(*)$ in the case that $j(t)$ is a delta function at $t=0$. In that case, $\tilde{j}=1$ (I'm ignoring units) and we have $$ \phi(t) = \int_\omega \frac{-e^{i\omega t}}{(\omega-\omega_+)(\omega-\omega_-)} \frac{\mathrm d\omega}{2\pi} $$ As you noted, for $t<0$ you have to close the contour in the lower plane in order to use Jordan's lemma. There aren't any poles in the lower half plane, so we get $\phi(t<0)=0$. This makes complete sense: the driving force is a delta function at $t=0$ and there shouldn't be any response of the system before the driving happens. This means that our introduction of friction imposed a causal boundary condition on the system! For $t>0$, you close in the upper half plane where there are poles, and so you get some response out of the integral.

Damping as a tool

In many cases, you don't naturally have damping in the system. For example, the Green's function from the question, $$\int^{\infty}_{-\infty}~\mathrm dk_0 \frac{e^{-ik_0 z_0}}{k_0^2 - \kappa^2}$$ doesn't have any damping and thus the poles sit on the real axis.
So what you do is just bump the contour a bit up or down, or equivalently add $\pm i \beta$ to the poles (most people write $i \epsilon$ instead of $i \beta$), then do the integral, and finally take $\beta \rightarrow 0$. In doing this, you're solving the problem in the presence of damping (or anti-damping), and then taking the damping to zero in the end to recover the no-damping case. Choosing to push the contour up or down, or equivalently choosing the sign of $\pm i \beta$, corresponds to imposing either friction or anti-friction, causal or anti-causal boundary conditions. If you pick the "causal" boundary condition, you find that the response of the system to a delta function in time and space is an outgoing spherical wave which starts at the delta function source. This gives you the so-called "retarded Green's function". If you pick the other condition, you find that the solution for a point source is actually an incoming spherical wave which converges right onto the point of the source. This gives you the so-called "Advanced Green's function". The thing is, you can solve a problem using either Green's function. You're "allowed" to push the contour up or down (or add $+i\beta$ or $-i\beta$ to the poles) because you invented that as a trick to do the integral; it's not representing a real factor in your physical system. Of course, in problems where there is damping, the choice is made for you. When you have damping, you can't have fields at infinity; they'd be damped away by the time they interact with your sources. I hope this was helpful, and I really hope if someone finds mistakes they'll jump in and fix 'em.
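A quick numerical check of the causal choice (parameter values below are arbitrary): closing the contour as described gives the retarded Green's function $G(t) = \theta(t)\, e^{-\beta t}\sin(\omega_0' t)/\omega_0'$, and finite differences confirm that it solves the damped-oscillator equation $(3)$ away from the source and vanishes for $t<0$:

```python
import numpy as np

beta, omega0 = 0.2, 2.0   # arbitrary damping and frequency
omega0p = omega0 * np.sqrt(1.0 - (beta / omega0) ** 2)

def G(t):
    # retarded Green's function: zero before the delta-function kick at t = 0
    return np.where(t > 0, np.exp(-beta * t) * np.sin(omega0p * t) / omega0p, 0.0)

t = np.linspace(0.5, 10.0, 200)   # stay away from t = 0
h = 1e-4
Gpp = (G(t + h) - 2.0 * G(t) + G(t - h)) / h**2
Gp = (G(t + h) - G(t - h)) / (2.0 * h)

# homogeneous version of Eq. (3) away from the source: should be ~0
residual = Gpp + 2.0 * beta * Gp + omega0**2 * G(t)
```

The residual is at round-off level for $t>0$, and $G$ is identically zero for $t<0$: the causal boundary condition in action.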
{ "source": [ "https://physics.stackexchange.com/questions/138217", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57848/" ] }
138,293
What was the reason to use kilograms to measure weight (e.g. body weight, market vegetables etc.) instead of using newtons in everyday life?
The problem is that while mass is the same everywhere on earth, weight is not - it can vary as much as 0.7% from the North Pole (heavy) to the mountains of Peru (light). This is in part caused by the rotation of the earth, and in part by the fact that the earth's surface is not (quite) a sphere. When you are interested in "how much" of something there is - say, a bag of sugar - you really don't care about the local force of gravity on the bag: you want to know how many cups of coffee you can sweeten with it. Enter the kilogram. If I calibrate scales using a reference weight, they will indicate (at that location) the amount of mass present in a sample relative to the calibration (reference). So if I have a 1 kg calibration weight, it might read 9.81 N in one place, and 9.78 N in another place; but if I put the reference weight on the scales and then say "if you feel this force, call it 1 kg" - that is what I get. You can now express relative weights as a ratio to the reference. All I need to do when I move to Jamaica (would that I could…) is recalibrate my scales - and my coffee will taste just as sweet as before. Well - with Blue Mountain I might not need sugar but that's another story. So there it is. We use the kilogram because it is a more useful metric in "daily life". The only time we care about weight is when we're about to snap the cables in the elevator (too much sweetened coffee?) or have some other engineering task where we care about the actual force of gravity (as opposed to the quantity of material). So why don't we call it "mass"? Well, according to http://www.etymonline.com/index.php?term=weigh , "weight" is a very old word, The original sense was of motion, which led to that of lifting, then to that of "measure the weight of." The older sense of "lift, carry" survives in the nautical phrase weigh anchor. Before Newton, the concept of inertia didn't exist; so the distinction between mass and weight made no sense when the word was first introduced. 
And we stuck with it...
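The recalibration point is easy to see numerically; with rough surface-gravity values (approximate textbook numbers), the same mass weighs measurably different amounts:

```python
g_equator = 9.780   # m/s^2, approximate sea-level value at the equator
g_pole = 9.832      # m/s^2, approximate value at the poles

m = 1.0                       # a 1 kg bag of sugar
W_equator = m * g_equator     # weight in newtons at the equator
W_pole = m * g_pole           # weight in newtons at the pole

variation = (W_pole - W_equator) / W_equator
print(f"{variation:.2%}")     # about half a percent from rotation plus oblateness
```

Recalibrating the scale against a reference mass at the new location absorbs the local value of $g$, so the reading in kilograms stays meaningful even though the force in newtons has changed.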
{ "source": [ "https://physics.stackexchange.com/questions/138293", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2419/" ] }
139,067
Imagine an aeroplane travelling with velocity $v$ at some angle $\alpha$ from East to North. A box is dropped from the aeroplane. What would the projectile of the box be? Would it be a parabola with an initial y-component of velocity acting upwards, or would it travel as a downward curve (with an initial y-component of velocity acting downwards)? My reasons for thinking both: Parabolic projection My explanation for this would be that the box will have the same initial components of velocity as the aeroplane at the time it is dropped. Since the aeroplane has a y-component of velocity acting upwards and an x-component of velocity it would travel in a parabola, as in conventional projectile motion. Downward curve (By "downward curve" I mean a curve with negative gradient increasing in magnitude.) My explanation for this would be that as soon at it is let go the only force acting on the box would be its weight, and hence it couldn't travel upwards. As well as this, when I visualise in my head a box being dropped from an aeroplane, I can't imagine it's possible to travel in a diagonally upward direction once it's been let go. Can someone explain to me which type of motion it has and why?
It will travel along a parabola (ignoring drag from the air here), initially with upward velocity, as you describe in your first scenario. You're correct that the only force acting on the box is its weight, but this means it will have downward acceleration immediately, not necessarily downward velocity. Eventually the downward acceleration will lead to negative velocity, but initially the box travels with the same velocity as the plane. Travelling upward once it's been let go isn't strange at all. If you throw a ball into the air, it travels upward for a while before beginning to drop, and this isn't strange...
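The parabolic trajectory can be sketched with elementary kinematics; the airplane speed and climb angle below are arbitrary illustrative values:

```python
import numpy as np

g = 9.81
v, alpha = 100.0, np.radians(30.0)   # assumed airplane speed and climb angle
vx0, vy0 = v * np.cos(alpha), v * np.sin(alpha)

# position relative to the release point; the box starts with the plane's velocity
t = np.linspace(0.0, 2.0 * vy0 / g, 1001)
x = vx0 * t
y = vy0 * t - 0.5 * g * t**2

t_apex = vy0 / g              # the box keeps climbing until this time
y_apex = vy0**2 / (2.0 * g)   # ... rising this far above the release altitude
```

With these numbers the box keeps rising for about 5 s and climbs roughly 130 m above the release altitude before it starts to fall: the acceleration is downward from the instant of release, but the velocity is not.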
{ "source": [ "https://physics.stackexchange.com/questions/139067", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/8082/" ] }
139,153
In light of today's announcement of the 2014 Nobel laureates , and because of a discussion among colleagues about the physical significance of these devices, let me ask: What is the physical significance of blue LEDs, which challenges had to be overcome to create them? Why are materials with the band gap necessary for blue light apparently so rare/difficult to manufacture? I know it took decades to create blue LED after Holonyak discovered the first red ones, so there must have been some obstacles, which were maybe also important for other areas of research - otherwise I wouldn't understand why the inventors of the blue LED got a prize that the inventor of the first LED didn't. Wikipedia has something to say on the topic: Its development built on critical developments in GaN nucleation on sapphire substrates and the demonstration of p-type doping of GaN. However, I'm asking myself why this is "critical" and why this was difficult.
The Nobel website scientific background is good. Basically, when you try to make gallium nitride, you usually end up with a material that is (1) chock-full of defects, and (2) n-doped (even when you were trying to p-dope it). So blue LEDs required:

1. The invention of MOCVD technology for growing crystals (early 1970s);
2. Finding the right recipe to grow good GaN by MOCVD (i.e., use a sapphire substrate, start with a low temperature step then switch to high temperature, etc.) (mid-1980s);
3. Finding the right recipe to grow p-type GaN (what dopant to use (Mg), in what concentration, and what annealing / treating recipe to use to make the Mg dopants actually work and reduce the number of unintended n-type dopants that were canceling it out) (early 1990s);
4. Once all that was in place, finding good structures to make LEDs (e.g. if you can also grow InGaN then you can make quantum wells) (early-to-mid 1990s).

All these steps required not only painstaking trial-and-error but also lots of insightful analysis and careful measurements to diagnose the problems and discover how to fix them. :-D Sidenote: I think it's really cool and exciting that this line of materials-science research is not finished yet. As you alloy more and more indium into indium-gallium-nitride, the defects get even worse and p-doping becomes even harder. There are now lots of people working on overcoming these problems. Each year it seems that someone comes up with a materials-processing breakthrough that allows them to use a few more percentage points of indium. With enough indium, the bandgap would shift from blue to green (with MUCH more indium, it shifts all the way to infrared). So this research could potentially lead to a much more efficient green LED, and even better, the long-awaited green diode laser, which would have myriad applications e.g. in display technology. (You've seen green laser pointers, but these are complicated devices that use infrared lasers and nonlinear optics.
A green diode laser, if it existed, would be cheaper, more rugged, smaller, and way more energy efficient.) Also, if you could use more indium, InGaN-GaN becomes a promising candidate material system for tandem solar cells.
{ "source": [ "https://physics.stackexchange.com/questions/139153", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/30247/" ] }
139,398
What kind of motion would a (preferably point-like, for simplicity) body undergo if the force acting on it was proportional to the semi-derivative of displacement, i.e. $$m \frac{\mathrm{d}^2 x}{\mathrm{d}t^2}=-k\frac{\mathrm{d}^{\frac12}x}{\mathrm{d}t^{\frac12}} \, \, ?$$ It would be helpful if someone with a copy of Mathematica plotted this for various values of the constants.
If $D^n$ denotes the $n$th derivative and $D^{-n}$ the $n$th integral, then we have that, $$D^n f(t) = D^m[D^{-(m-n)}f(t)]$$ providing $m \geq \lceil{n}\rceil$. For our half derivative, we choose $n=1/2$ and $m=2$, in which case we have, $$D^{1/2}f(t) = D^2[D^{-(3/2)}f(t)]$$ There is a general formula for the $n$th integral of a function, one of my favorite results of Cauchy: $$f^{-(n)}(t) = \frac{1}{\Gamma(n)}\int_{0}^t (t-u)^{n-1}f(u) \, du$$ which is essentially a convolution $f(t) \ast t^{n-1}$. Applying it, we find, $$D^{1/2}f(t) = \frac{d^2}{dt^2} \left[ \frac{2}{\sqrt{\pi}}\int_0^t (t-u)^{1/2}f(u) \, du\right]$$ Given the differential equation, $$\frac{d^2 x(t)}{dt^2} = -\frac{k}{m} \frac{d^{1/2} x(t)}{dt^{1/2}}$$ we can substitute in our definition of $D^{1/2}x(t)$, and conclude, $$x(t) = -\frac{2k}{m\sqrt{\pi}}\int_{0}^t (t-u)^{1/2}x(u)\, du + c_1t +c_2$$ for $c_1,c_2 \in \mathbb{R}$, which is an integral equation. If we can assume $x(t)$ is supported on $[0,\infty)$ only, then the integral is a convolution $x(t) \ast \sqrt{t}$, and taking the Laplace transform, we find, $$X(s) = \left( 1+ \frac{k}{ms^{3/2}}\right)^{-1} \left( \frac{c_1}{s^2} + \frac{c_2}{s} \right) = \frac{m(c_1 + c_2 s)}{k\sqrt{s}+ms^2}$$ The solution $x(t)$ is then the inverse Laplace transform of $X(s)$. Formally, this is given by, $$x(t) = \frac{1}{2\pi i} \int_{\Gamma} e^{st} \frac{m(c_1 + c_2 s)}{k\sqrt{s}+ms^2} \, ds$$ where the contour $\Gamma$ is in the complex plane; it is a vertical line of infinite length with all poles of the integrand to its left. In practice, we close the contour with an additional contour, ensure the second integral tends to zero (e.g. by the estimation lemma), and use the residue theorem.
The integrand, which we denote $F(s)$, has three poles located at $s^3 = k^2/m^2$, or equivalently, $$s_1 = \omega^{4/3}_0, \quad s_2 = \frac{1}{2}(1+i\sqrt{3})\omega^{4/3}_0, \quad s_3 = \frac{1}{2}(i\sqrt{3}-1)\omega^{4/3}_0$$ as well as at $s_0 = 0$, where we define $\omega^2_0 := k/m$. The vertical contour should begin after $s_1$ so all poles are to the left. However, doing so analytically is somewhat tedious. I chose to use a numerical method for the evaluation of inverse Laplace transforms due to H.E. Salzer, which uses a quadrature formula. With Mathematica, I managed to reconstruct $x(t)$ partially, in the simplified case when $c_1 = c_2 = k/m = 1$. It seems, by visual inspection, that the solution resembles damped harmonic motion, such as when one introduces a damping term $\gamma \dot{x}$ in the equations of motion of a standard harmonic oscillator.
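The Cauchy-integral recipe above is easy to sanity-check symbolically: the half-derivative of $f(t)=t$ is known to be $2\sqrt{t/\pi}$, and the formula reproduces it. A minimal SymPy sketch (the symbolic setup and variable names are my own, not from the answer):

```python
import sympy as sp

t, u = sp.symbols('t u', positive=True)

# D^{1/2} f = D^2 [ I^{3/2} f ], with I^{3/2} f = (2/sqrt(pi)) * int_0^t (t-u)^{1/2} f(u) du
f = u  # test function f(u) = u
I32 = 2 / sp.sqrt(sp.pi) * sp.integrate(sp.sqrt(t - u) * f, (u, 0, t))
half_deriv = sp.simplify(sp.diff(I32, t, 2))
print(half_deriv)  # the classical half-derivative of t, namely 2*sqrt(t)/sqrt(pi)
```

Applying the operator twice should recover the ordinary first derivative, which is a quick way to convince yourself the normalization $2/\sqrt{\pi} = 1/\Gamma(3/2)$ is right.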
{ "source": [ "https://physics.stackexchange.com/questions/139398", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/58152/" ] }
139,545
I know that the gravitational interaction of antimatter is expected to be the same as that of normal matter. But my question is, has it ever been experimentally validated? I think it would not be a trivial experiment, because electromagnetic effects have to be eliminated, so neutral particles would be needed. Maybe diamagnetically trapped antihydrogen atoms could be examined as to which direction they fall?
The only experiment I know of was done by the ALPHA team at CERN. The results are published in this paper. The error bounds are huge - all the team was able to say is that the upper limit for the gravitational mass of antihydrogen is no greater than 75 times its inertial mass! However I believe an updated version of the experiment, ALPHA2, is in progress and will hopefully be able to do a bit better. Other planned experiments are AEGIS and GBAR, both also at CERN. However neither has made any measurements yet. This may seem like slow progress, but antihydrogen is extraordinarily difficult stuff to handle, as contact with any normal matter will annihilate the antihydrogen.
{ "source": [ "https://physics.stackexchange.com/questions/139545", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/32426/" ] }
140,927
You can start a fire by focusing sunlight with a magnifying glass. I searched the web to see whether you can do the same using moonlight, and found this and this - the first two Google search results. What I found is the thermodynamic argument: you cannot heat anything to a higher temperature using black-body radiation than the black body itself, and the Moon isn't hot enough. It may be true, but my gut feeling protests... The larger your aperture is, the more light you collect, and you also get a better focus because the Airy disk is smaller. So if you have a really huge lens with a really short focal length (to keep the Moon's image small), or in the extreme case you build a Dyson sphere around the Moon (leaving a small hole to let the sunlight enter) and focus all the reflected light into a point, shouldn't that be more than enough to ignite a piece of paper? I'm confused. So can you start fires using the Moon?
Moonlight has a spectral peak around $650\ \mathrm{nm}$ (the sun peaks at around $550\ \mathrm{nm}$ ). Ordinary solar cells will work just fine to convert it into electricity. The power of moonlight is about $500\,000$ times less than that of sunlight, which for a solar constant of $1000\ \mathrm{W/m^2}$ leaves us with about $2\ \mathrm{mW/m^2}$ . After accounting for optical losses and a typical solar cell efficiency of roughly $20\ \%$ , we can probably hope to extract approx. $0.1\ \mathrm{mW}$ with a fairly simple foil mirror of $1\ \mathrm{m^2}$ surface area. Accumulated over the course of a whole night with a full moon, this leaves us with around $6\ \mathrm h\times3600\ \mathrm{s/h}\times0.1\ \mathrm{mW}\approx2\ \mathrm J$ of energy. That's plenty of energy to ignite a fire using the right chemicals and a thin filament as a heater.
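The back-of-envelope numbers in the answer can be strung together explicitly. The 25 % optical-loss factor below is my own assumption, chosen only so the product lands on the quoted 0.1 mW; the other figures are the answer's:

```python
# Energy budget for harvesting moonlight with a 1 m^2 collector (figures from the answer)
solar_constant = 1000.0       # W/m^2, rough surface value for sunlight
moon_factor = 1 / 500_000     # moonlight is ~500,000 times weaker than sunlight
cell_efficiency = 0.20        # typical solar-cell efficiency
optical_losses = 0.25         # assumed loss factor for the foil mirror and optics

power = solar_constant * moon_factor * cell_efficiency * optical_losses  # watts
energy = power * 6 * 3600     # accumulated over a 6-hour night of full moon, joules
print(f"{power * 1e3:.2f} mW, {energy:.2f} J")  # 0.10 mW, 2.16 J
```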
{ "source": [ "https://physics.stackexchange.com/questions/140927", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/7743/" ] }
141,058
As far as I understand some of the readings, gravity does not exist in real terms. It's only a way of modeling the motion we observe. Einstein, for example, explained the motion without having to invoke gravity as a force. Is this true?
Physics does not answer existential problems. It gathers data and observations and models them with mathematical equations and functions, and then can explain the data with the model and predict new observations. This has been going on for centuries, and what we see if we study the history of physics is that there are regions of validity for the mathematical models: regions in the variables where the models are valid within measurement errors, and regions where they fail. The Newtonian gravitational model is valid for our everyday experiences and experiments with quantities of the order of kilograms, meters, and seconds. Our whole civilization is built and maintained on the accuracy of the Newtonian model. Thus to ask if gravity exists is a bit like asking "do I exist?", which, as far as experience goes, is a philosophical question. Now in the region where the unit of length is light years and the unit of mass is solar masses, deviations from Newtonian mechanics have been measured, validating the General Relativity model. This does not invalidate the Newtonian model in its region of validity, since for the low values of the quantities involved, the predictions of the GR model are indistinguishable from the Newtonian model within measurement error. In conclusion, if we exist, gravity exists, as gravity is a definition in the simplest mathematical model that describes mechanics in human dimensions.
{ "source": [ "https://physics.stackexchange.com/questions/141058", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/40342/" ] }
141,128
How does water extinguish fire? Heat energy from the fire is transferred to the water, isn't that how it works? How does water deprive the fire of oxygen and stop combustion? How is the specific heat of water connected to this? If we use hot water instead of cold water, does that make a difference?
To sustain a fire, you need three factors: fuel, oxygen, and heat. Take away one of the three and the fire goes out. Water removes heat. Most of this "removing heat" is the evaporation - roughly 540 calories / gram, so 7x more heat than is needed to get water from 20°C to boiling (with a tip of the hat to @Jasper for pointing out erroneous value in earlier revision of answer). So using hot water is "a bit" less efficient for cooling (per unit mass of water added), but not as bad as you might think. And warm water will create (relatively) more vapor which will actually improve its role as an asphyxiant (pushing away atmospheric oxygen). In certain kinds of fire, using water will not work well (or "at all"). That includes fires with liquid fuel - force of water can disperse the fuel into the air and thus the cooling doesn't happen where the fire happens (actually this can make things worse, since many droplets of fuel can now burst into flame away from the base), chemical fires (you might cause additional reactions, or just speed up the reaction by dissolving the components), and fires in which the fuel would react with water - for example certain kinds of metal fires (e.g. magnesium shavings, alkali metals, and the like). You also don't want to add water when there are other risks related to its use (for example high voltages present). This is why many "general purpose" extinguishers tend to be of the "deprive of oxygen" kind - foam, powder. Afterthought based on BeastRaban's answer: when water becomes vapor, it is lighter than air, with an atomic mass of 18 vs 29 for the usual oxygen/nitrogen mixture - but being generally cooler than a flame (most vapor will be around 100°C), it may slow down the rate with which fresh air is being drawn into the fire. 
As such, it is not only a coolant of the fuel (which slows down the rate of the exothermal reaction taking place), but also an asphyxiant, pushing away oxygen (or at least slowing down the rate at which it is being replenished).
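The "7x" figure can be checked directly from the two heat terms, a quick sketch using the answer's round numbers:

```python
# Heat absorbed per gram of 20 C water thrown on a fire (round values from the answer)
c_water = 1.0                     # cal/(g*K), specific heat of liquid water
latent_heat = 540.0               # cal/g, heat of vaporization near 100 C
sensible = c_water * (100 - 20)   # cal/g to warm the water up to boiling
print(latent_heat / sensible)     # 6.75, i.e. evaporation absorbs roughly 7x more heat
```

This is also why warm water is only "a bit" less efficient: the 80 cal/g of sensible heat is a small fraction of the total.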
{ "source": [ "https://physics.stackexchange.com/questions/141128", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/42836/" ] }
141,131
I am trying to get a vague understanding of the mathematical equations for the Big Bang in GR and LQG. My understanding so far is that when the universe is assumed to be homogeneous and isotropic, which it practically is, then the Einstein Field Equations may be solved and you get the FLRW metric. I think the Friedmann equations are based on the FLRW metric and they can be used to show that GR cannot deal with the big bang. I have the following questions about this: In the Friedmann equation $\frac{\ddot{a}}{a} = -\frac{4 \pi G}{3}\left(\rho+\frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}$ what exactly is $a$? From this equation I do not really see what it tells us about the big bang, mainly because of the constant. In this video at 10:10 https://www.youtube.com/watch?v=IFcQuEw0oY8 the equation given is $H^2=\frac{8\pi G}{3}\rho$ for GR where $H$ is $\left(\frac{\dot{a}}{a}\right)$, rather than the equation given above. I'm not sure what version I should be using. If it is the case that the second one is a simplification that I shouldn't really be using, does anyone know where I can find the full equations for the LQG version of the equation above (in the video it is given as $H^2=\frac{8\pi G}{3}\rho\left(1-\frac{\rho}{\rho_{c}}\right)$).
{ "source": [ "https://physics.stackexchange.com/questions/141131", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57218/" ] }
141,278
Take a really dry dish cloth and try to wipe up some liquid you spilled on the kitchen counter. It will take up only so much of the liquid. Then try it with a damp cloth (or wring out a wet one). It will take up much more of the liquid. It seems counter-intuitive. Why does a damp cloth absorb more liquid?
The sort of dishcloths generically known as J Cloths are made from a material called Viscose rayon: This material is derived from cellulose and like cellulose it interacts with water. Water breaks hydrogen bonds formed within the fibres. This makes the fibres softer, and the exposed hydroxyl groups make the surface more hydrophilic. It's the latter process that makes a damp cloth more able to soak up water than a dry cloth. The absorption of water on a fabric proceeds by wicking. This requires a low contact angle and no hydrophobic areas on the fibres where the meniscus can get pinned. Incidentally, pinning is the basis of many superhydrophobic surfaces - even a relatively hydrophilic surface can be made hydrophobic by giving it the correct microstructure. Anyhow, the contact angle of dry Viscose rayon is around 30-40°, which is fairly low but still high enough to prevent wicking and cause pinning. That's why a dry cloth is slow at absorbing water. It will absorb the water eventually, but the timescale may be many seconds or even minutes. After the Viscose has interacted with water and formed free hydroxyl groups at the surface, the contact angle falls to effectively zero. This makes wicking, and therefore water absorption, much faster. The dry cloth finds itself in a catch-22 situation. It has to interact with water to become hydrophilic, but until it becomes hydrophilic the water can't spread into the fabric for the interaction to occur. As anyone used to doing the washing up can tell you, the solution is to force the water to wet the fabric by wetting then squeezing it.
{ "source": [ "https://physics.stackexchange.com/questions/141278", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62099/" ] }
141,303
It's kind of a tricky concept, I assume: on one side you've got those neat shared vertices of SiO2; on the other (water) you don't really have shared vertices, only kind of (but they still want to align themselves). But what truly defines a glass is the existence of an alternative crystalline form that doesn't have time to form during solidification, due to high viscosity and so on. Is ice the true crystal? If not, it's a glass, right?
A crystalline substance doesn't necessarily have to be a single crystal to be deemed as such. An amorphous solid such as glass doesn't exhibit a crystalline structure even at very high levels of magnification. Glassy substances have a glass transition temperature that is lower than the melting temperature. The ice formed under ordinary circumstances is ice Ih, with a melting point of 0 °C. In other words, the ice we typically see is a crystalline form of ice. There are a number of other forms of ice; fortunately Kurt Vonnegut's ice nine (Cat's Cradle) is not one of them. Ice IX (as opposed to Vonnegut's ice nine) exhibits a tetragonal crystalline structure and only forms below liquid nitrogen temperatures. The ordinary ice Ih we typically encounter exhibits a hexagonal (think snowflakes) crystalline structure. Amorphous ice can be made, but it's rather hard to prepare. Extremely pure water is needed, and the water needs to be supercooled very rapidly to liquid nitrogen temperatures. One can supercool liquid water, but any substantial disturbances or vibrations (e.g., looking at it cross-eyed) will result in that water instantaneously freezing into ordinary, everyday crystalline ice. If you quickly cool pure water far below its nominal freezing point and somehow avoid invoking any significant disturbances, you can indeed create a glassy form of ice.
{ "source": [ "https://physics.stackexchange.com/questions/141303", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20094/" ] }
141,321
In simple words, what is the conceptual difference between the Gibbs and Boltzmann entropies? Gibbs entropy: $S = -k_B \sum p_i \ln p_i$ Boltzmann entropy: $S = k_B \ln\Omega$
The Gibbs entropy is the generalization of the Boltzmann entropy holding for all systems, while the Boltzmann entropy is only the entropy if the system is in global thermodynamical equilibrium. Both are a measure for the microstates available to a system, but the Gibbs entropy does not require the system to be in a single, well-defined macrostate. This is not hard to see: For a system that is with probability $p_i$ in a microstate, the Gibbs entropy is $$ S_G = -k_B \sum_i p_i \ln(p_i)$$ and, in equilibrium, all microstates belonging to the equilibrium macrostate are equally likely, so, for $N$ states, we obtain with $p_i = \frac{1}{N}$ \begin{align} S_G &= -k_B \sum_i \frac{1}{N} \ln\left(\frac{1}{N}\right) \\&= -k_B N \frac{1}{N} \ln\left(\frac{1}{N}\right) \\ &= k_B \ln(N)\end{align} by the properties of the logarithm, where the latter term is the Boltzmann entropy for a system with $N$ microstates.
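The reduction above is easy to verify numerically: for a uniform distribution over $N$ microstates the Gibbs formula returns exactly $\ln N$, and any non-uniform distribution gives less. A small sketch with $k_B = 1$ (the helper function is my own):

```python
import math

def gibbs_entropy(p, kB=1.0):
    """S = -kB * sum_i p_i ln p_i; terms with p_i = 0 contribute nothing."""
    return -kB * sum(pi * math.log(pi) for pi in p if pi > 0)

N = 8
uniform = [1 / N] * N
print(gibbs_entropy(uniform), math.log(N))  # equal: Boltzmann's ln(N)

# a non-uniform distribution over the same states has strictly lower entropy
skewed = [0.5, 0.3, 0.1, 0.1, 0, 0, 0, 0]
print(gibbs_entropy(skewed) < math.log(N))  # True
```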
{ "source": [ "https://physics.stackexchange.com/questions/141321", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/41000/" ] }
141,569
Most of the time when I mix different nuts in a bowl, I observe that the big Brazil nuts end up on top while the small peanuts settle near the base. Even if the big nuts are put at the bottom first and the small nuts are poured on top, after some shaking the big ones will come to the top. Shouldn't gravity pull the big, heavier nuts down so that they eventually stay at the base? But that doesn't happen! I googled it and found the term Brazil-nut effect, but couldn't find any proper explanation. What is the physical explanation for this effect?
The process you describe is called granular convection$^1$. It happens because under random motion it's easier for a small particle to fall under a big one than vice versa. Let's assume that all the particles are made of the same material so there are no density differences in play. If you agitate the particles then temporary voids will open between particles as they randomly move. These voids will have a size distribution with lots of small voids and few large ones because it takes much less energy to create a small void than a large void. Small particles can fall into small voids, but a large particle can move downwards only if a large void appears. That means it's more likely for a small particle to move downwards than a large one. Over time the result is that small particles move downwards more often than large particles and hence small particles end up at the bottom and large particles at the top. This is not the lowest energy state, because the greatest packing fraction is achieved with a mixture of particle sizes. Separating the mixture into layers of roughly similar particle size will decrease its density and hence increase its gravitational potential energy. The sorting is a kinetic effect and the sorted system is in principle metastable. $^1$ I've linked the Wikipedia article, but actually I don't think the article is particularly rigorous and you should Google for more substantive articles.
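As a toy illustration of "small voids are far more likely than large ones": if we assume, purely for illustration (this distribution is my own choice, not from the answer), that void sizes are exponentially distributed, the asymmetry between small and large particles falls out immediately:

```python
import math

d0 = 1.0  # characteristic void size; the exponential distribution is an assumed toy model

def p_fall(diameter):
    """Chance that a randomly opened void is large enough for a particle to drop into."""
    return math.exp(-diameter / d0)

small, big = 1.0, 3.0  # hypothetical particle diameters, in units of d0
print(p_fall(small) / p_fall(big))  # ~7.39: small particles get to move down far more often
```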
{ "source": [ "https://physics.stackexchange.com/questions/141569", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
141,865
My understanding is that when it comes to forming a white dwarf, it is the electron degeneracy pressure, due to the Pauli Exclusion Principle, that prevents the collapse of the white dwarf. If the gravitational force is sufficiently large, then the electrons in the white dwarf will be forced to fuse with the protons to form neutrons, and the neutron star resists collapse by neutron degeneracy pressure. If the gravitational force is even greater, then a black hole will form. How does the Pauli Exclusion Principle actually create a force? It seems to me from various things I have read that the force due to the Pauli Exclusion Principle increases as the fermions are squeezed closer together, although I am not sure why there is an increasing force and it is not simply the case that the fermions cannot be pushed into exactly the same position. Is it as if the fermions know when they are approaching each other?
How does the Pauli Exclusion Principle actually create a force? The Pauli exclusion principle doesn't really say that two fermions can't be in the same place. It's both stronger and weaker than that. It says that they can't be in the same state, i.e., if they're standing waves, two of them can't have the same standing wave pattern. But for bulk matter, for our purposes, it becomes a decent approximation to treat the exclusion principle as saying that if $n$ particles are confined to a volume $V$, they must each be confined to a space of about $V/n$. Since volume goes like length cubed, this means that their wavelengths must be $\lesssim (V/n)^{1/3}$. As $V$ shrinks, this maximum wavelength shrinks as well, and the de Broglie relation then tells us that the momentum goes up. The increased momentum shows up as a pressure, just as it would if you increased the momenta of all the molecules in a sample of air. A degenerate body like a neutron star or white dwarf is in a state where this pressure is in equilibrium with gravity.
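The confinement argument above gives both a quick order-of-magnitude number and, more robustly, the familiar $P \propto n^{5/3}$ scaling of degeneracy pressure. A crude sketch (the density value is an illustrative white-dwarf-like number of my own choosing, not from the answer):

```python
h = 6.626e-34    # Planck constant, J*s
m_e = 9.109e-31  # electron mass, kg

def degeneracy_pressure(n):
    """Crude estimate following the answer: each electron confined to volume 1/n,
    so wavelength ~ (1/n)^(1/3), momentum p ~ h/lambda, and the pressure is of
    order the kinetic energy density ~ n * p^2 / (2 m_e)."""
    wavelength = n ** (-1 / 3)
    p = h / wavelength
    return n * p ** 2 / (2 * m_e)  # pascals, to order of magnitude

n = 1e36  # electrons per m^3, roughly a white-dwarf interior
print(f"{degeneracy_pressure(n):.1e} Pa")
print(degeneracy_pressure(8 * n) / degeneracy_pressure(n))  # ~32, i.e. P scales as n^(5/3)
```

The $n^{5/3}$ scaling is the key point: squeezing the gas raises the pressure faster than for a classical ideal gas at fixed temperature, which is what lets the star find an equilibrium.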
{ "source": [ "https://physics.stackexchange.com/questions/141865", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57218/" ] }
142,159
Since Newton's law of gravitation can be obtained from Einstein's field equations as an approximation, I was wondering whether the same applies for the electromagnetic force being the exchange of photons. Is there an equation governing the force from the exchange of photons? Are there any links which would show how the Coulomb force comes out of the equations for photon exchange? I know that my question is somewhat similar to the one posted here: The exchange of photons gives rise to the electromagnetic force, but it doesn't really have an answer to my question specifically.
The classical Coulomb potential can be recovered in the non-relativistic limit of the tree-level Feynman diagram between two charged particles. Applying the Born approximation to QM scattering, we find that the scattering amplitude for a process with interaction potential $V(x)$ is $$\mathcal{A}(\lvert p \rangle \to \lvert p'\rangle) - 1 = 2\pi \delta(E_p - E_{p'})(-\mathrm{i})\int V(\vec r)\mathrm{e}^{-\mathrm{i}(\vec p - \vec p')\vec r}\mathrm{d}^3r$$ This is to be compared to the amplitude obtained from the Feynman diagram: $$ \int \mathrm{e}^{\mathrm{i}k r_0}\langle p',k \rvert S \lvert p,k \rangle \frac{\mathrm{d}^3k}{(2\pi)^3}$$ where we look at the (connected) S-matrix entry for two electrons scattering off each other, treating one with "fixed" momentum as the source of the potential, and the other scattering off that potential. Using the Feynman rules to compute the S-matrix element, we obtain in the non-relativistic limit with $m_0 \gg \lvert \vec p \rvert$ $$ \langle p',k \rvert S \lvert p,k \rangle \rvert_{conn} = -\mathrm{i}\frac{e^2}{\lvert \vec p -\vec p'\rvert^2 - \mathrm{i}\epsilon}(2m)^2\delta(E_{p,k} - E_{p',k})(2\pi)^4\delta(\vec p - \vec p')$$ Comparing with the QM scattering, we have to discard the $(2m)^2$ as they arise due to differing normalizations of momentum eigenstate in QFT compared to QM and obtain: $$ \int V(\vec r)\mathrm{e}^{-\mathrm{i}(\vec p - \vec p')\vec r}\mathrm{d}^3r = \frac{e^2}{\lvert \vec p -\vec p'\rvert^2 - \mathrm{i}\epsilon}$$ where Fourier transforming both sides, solving the integral and taking $\epsilon \to 0$ at the end will yield $$ V(r) = \frac{e^2}{4\pi r}$$ as the Coulomb potential.
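The final Fourier transform can be checked symbolically: after doing the angular integral, $\int \frac{\mathrm{d}^3q}{(2\pi)^3}\, \frac{e^{\mathrm{i}\vec q\cdot\vec r}}{q^2}$ reduces to a Dirichlet integral. A SymPy sketch of my own, with the $e^2$ and $\mathrm{i}\epsilon$ factors stripped:

```python
import sympy as sp

q, r = sp.symbols('q r', positive=True)

# The angular integration of exp(i q.r) over the solid angle gives 4*pi*sin(q r)/(q r);
# the q^2 from the measure cancels the 1/q^2 propagator, leaving the Dirichlet
# integral int_0^oo sin(q r)/q dq = pi/2.
radial = sp.integrate(sp.sin(q * r) / q, (q, 0, sp.oo))
V = 4 * sp.pi / ((2 * sp.pi) ** 3 * r) * radial
print(sp.simplify(V))  # 1/(4*pi*r): the Coulomb potential, up to the charge factor e^2
```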
{ "source": [ "https://physics.stackexchange.com/questions/142159", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57218/" ] }
142,169
The Schrödinger equation is the basis for understanding quantum mechanics, but how can one derive it? I asked my instructor, but he told me that it came from the experience of Schrödinger and his experiments. My question is, can one derive the Schrödinger equation mathematically?
Be aware that a "mathematical derivation" of a physical principle is, in general, not possible. Mathematics does not concern the real world, we always need empirical input to decide which mathematical frameworks correspond to the real world. However, the Schrödinger equation can be seen arising naturally from classical mechanics through the process of quantization. More precisely, we can motivate quantum mechanics from classical mechanics purely through Lie theory, as is discussed here , yielding the quantization prescription $$ \{\dot{},\dot{}\} \mapsto \frac{1}{\mathrm{i}\hbar}[\dot{},\dot{}]$$ for the classical Poisson bracket. Now, the classical evolution of observables on the phase space is $$ \frac{\mathrm{d}}{\mathrm{d}t} f = \{f,H\} + \partial_t f$$ and so its quantization is the operator equation $$ \frac{\mathrm{d}}{\mathrm{d}t} f = \frac{\mathrm{i}}{\hbar}[H,f] + \partial_t f$$ which is the equation of motion in the Heisenberg picture. Since the Heisenberg and Schrödinger picture are unitarily equivalent, this is a "derivation" of the Schrödinger equation from classical phase space mechanics.
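The quantized evolution equation can be checked explicitly in a two-level toy model (my own example, with $\hbar = 1$ and $H = \sigma_z/2$): the Heisenberg-picture operator $f(t) = e^{\mathrm{i}Ht} f\, e^{-\mathrm{i}Ht}$ indeed satisfies $\dot f = \mathrm{i}[H, f]$.

```python
import sympy as sp

t = sp.symbols('t', real=True)
H = sp.Matrix([[sp.Rational(1, 2), 0], [0, -sp.Rational(1, 2)]])  # H = sigma_z / 2
f0 = sp.Matrix([[0, 1], [1, 0]])                                  # f = sigma_x

U = (sp.I * H * t).exp()            # e^{iHt}; H is diagonal, so the exponential is easy
f_t = sp.simplify(U * f0 * U.H)     # Heisenberg-picture observable f(t)
lhs = sp.diff(f_t, t)               # d f / d t
rhs = sp.I * (H * f_t - f_t * H)    # i [H, f(t)], with no explicit time dependence
print(sp.simplify(lhs - rhs))       # the zero matrix: the Heisenberg equation holds
```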
{ "source": [ "https://physics.stackexchange.com/questions/142169", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/53172/" ] }
142,419
What books are recommended for an advanced undergraduate course in electrodynamics?
D.J. Griffiths's Introduction to Electrodynamics must be mentioned. To my knowledge this text is ubiquitous in junior-level E&M courses. The writing is extremely friendly and is excellent for self-study. The author frequently tells you what he is doing and provides motivation, unlike the ubiquitous graduate-level text by Jackson. Equations often use a convenient notation (you know that script $r$?) that makes them appear less complicated, yet is straightforward to expand. As for required background, I would say the only thing really required is a thorough understanding of multi-variable calculus. The physics content is self-contained, so I'd argue even freshman-level E&M knowledge isn't necessary, though it would only help a learner in thinking "like a physicist" to help solve problems. Being a junior-level undergraduate text, it is not thorough nor does it go into much depth, at least compared to graduate-level texts. You won't get a mathematically complete understanding of using Green's functions to solve boundary condition problems (e.g., Dirichlet conditions). Some results are simply stated rather than worked out due to their complexity, though the author is up front about this. (This is a community response; feel free to add additional items.)
{ "source": [ "https://physics.stackexchange.com/questions/142419", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/53172/" ] }
142,435
Jupiter's moon Io is heated through the gravitational pull of Jupiter, but when Io is heated because of this, where does that energy come from? How does conservation of energy work for this effect, and where is the energy "lost"?
Jupiter's moon Io is heated through the gravitational pull of Jupiter, but when Io is heated because of this, where does that energy come from? How does conservation of energy work for this effect, and where is the energy "lost"? TL;DR: The energy ultimately comes from Jupiter's rotation. Io is tidally locked; it has the same orbital and rotation rates. If Io were in a circular orbit, the tidal forces on Io would merely result in a "frozen tide" on Io. There would be no heating because Io's shape would not be changing. However, Io's orbit is not quite circular. This means the tidal forces vary in magnitude and direction over the span of an orbit. This stretches and squeezes Io, which in turn results in Io's heating. There's a tension between the other Galilean moons, particularly Europa, and Jupiter with regard to Io's orbit. If those other moons didn't exist, the dissipation of those tidal forces on Io would tend to circularize Io's orbit. The outer Galilean moons tend to make Io's orbit more elliptical. Whether Jupiter's tendency to make the orbit more circular or the outer moons' tendency to make it more elliptical wins depends on two things: the ellipticity of Io's orbit, and how warm Io's interior is. The degree to which Io responds to the Jovian tidal forces depends on the ratio of Io's $k_2$ Love number to its tidal dissipation quality factor $Q$. The quality factor is high when Io is cool, low when Io is hot. Io cools as its orbit becomes more nearly circular. The outer moons can then push Io into a more elliptical orbit, and that's when Io warms up. Now the Jovian influences dominate, and Io moves toward a more circular orbit. Heating and cooling a large moon takes some time, so this means there's a time lag in the response. A nice hysteresis loop sets up. These tidal effects go both ways. Io raises tides on Jupiter. How Jupiter responds to those tidal forces depends on the ratio of Jupiter's $k_2$ Love number to its tidal dissipation quality factor $Q$.
Various estimates of Jupiter's quality factor $Q$ were extremely high before humanity sent spacecraft to Jupiter. Now that we've accurately seen the Galilean moons in action for quite some time, it appears that Jupiter's $Q$ is rather low. There's a lot of dissipation in the Jovian system. The energy certainly has a place to go. As for where it comes from, that's simple. The tides Io raises on Jupiter slow Jupiter's rotation rate. This is the ultimate source of energy for the Galilean system. References: Hussmann, et al. "Implications of rotation, orbital states, energy sources, and heat transport for internal processes in icy satellites," Space Science Reviews 153.1-4 (2010): 317-348. Lainey, et al. "Strong tidal dissipation in Io and Jupiter from astrometric observations," Nature 459.7249 (2009): 957-959. Peale, "Origin and evolution of the natural satellites," Annual Review of Astronomy and Astrophysics 37.1 (1999): 533-602. Wu, "Origin of tidal dissipation in Jupiter. II. The value of Q," The Astrophysical Journal 635.1 (2005): 688. Yoder, "How tidal heating in Io drives the Galilean orbital resonance locks," Nature 279 (1979): 767-770. Note that Lainey et al. disagree markedly with Wu on the value of Jupiter's Q, 36,000 (Lainey et al.) to 10 9 (Wu).
{ "source": [ "https://physics.stackexchange.com/questions/142435", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62605/" ] }
142,461
I've just found out that a negative specific heat capacity is possible. But I have been trying to find an explanation for this with no success. Negative heat capacity would mean that when a system loses energy, its temperature increases. How is that possible in the case of a star? Mustn't there be a source of energy to increase the temperature of any system?
Consider a satellite in orbit about the Earth and moving at some velocity $v$. The orbital velocity is related to the distance from the centre of the Earth, $r$, by: $$ v = \sqrt{\frac{GM}{r}} $$ If we take energy away from the satellite then it descends into a lower orbit, so $r$ decreases and therefore its orbital velocity $v$ increases. Likewise if we add energy to the satellite it ascends into a higher orbit and $v$ decreases. This is the principle behind the negative heat capacity of stars. Replace the satellite by a hydrogen atom, and replace the Earth by a large ball of hydrogen atoms. If you take energy out then the hydrogen atoms descend into lower orbits and their velocity increases. Since we can relate velocity to temperature using the Maxwell-Boltzmann distribution, this means that as we take energy out the temperature rises, and therefore the specific heat must be negative. This is all a bit of a cheat of course, because you are ignoring the potential energy. The total energy of the system decreases as you take energy out, but the decrease is accomplished by decreasing the potential energy and increasing the kinetic energy. The virial theorem tells us that the decrease of the potential energy is twice as big as the increase in the kinetic energy, so the net change is negative.
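The virial bookkeeping in the last paragraph can be made concrete in a few lines of Python. For a circular orbit, $v=\sqrt{GM/r}$ gives $K = GMm/2r$ and $U = -GMm/r$; the satellite mass and radii below are illustrative choices:

```python
G, M, m = 6.674e-11, 5.972e24, 1000.0   # Earth's mass and an illustrative 1000 kg satellite

def orbit(r):
    """Kinetic, potential, and total energy of a circular orbit of radius r."""
    K = G * M * m / (2 * r)             # (1/2) m v^2 with v = sqrt(GM/r)
    U = -G * M * m / r
    return K, U, K + U

r_high, r_low = 7.0e6, 6.8e6            # descend from a higher to a lower orbit
K1, U1, E1 = orbit(r_high)
K2, U2, E2 = orbit(r_low)

print(f"Energy removed:  {E1 - E2:.3e} J")    # total energy decreased (E2 < E1)...
print(f"Kinetic gained:  {K2 - K1:.3e} J")    # ...yet the kinetic energy increased
print(f"Virial check: dU = {U2 - U1:.3e} J vs -2*dK = {-2 * (K2 - K1):.3e} J")
```

The last line shows the virial relation numerically: the potential energy drops by exactly twice the kinetic-energy gain, so removing energy raises the "temperature", i.e. a negative heat capacity.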
{ "source": [ "https://physics.stackexchange.com/questions/142461", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/27753/" ] }
142,878
I am currently doing a physics project on the effects of so-called 'super-speed'. I was wondering how fast you would have to run to travel vertically up a wall? That is, to negate the force of gravity. Is it even possible? Help would be appreciated!
Assuming you could get traction against the wall, you could run or walk up it at any speed. However, the problem is that in the large majority of circumstances, you cannot get traction against a vertical wall. The reason we can walk across the ground is because gravity pushes us downwards. This downward force is then opposed by an upward normal force from the ground. This normal force is what enables static friction to be present between our feet and the ground, and this friction provides the horizontal force that lets us push ourselves forward. When running or walking up a vertical surface, you still need a force from your feet to push you in the direction of intended motion. However, gravity is no longer pushing you towards the "ground" (or wall in this case). As a result, when you push off the wall with your feet, the normal force that is required to generate friction is no longer opposed by anything. Consequently, trying to run or walk on a vertical surface results in you pushing yourself off of that surface, and you fall. Now, we've all seen movies where people like Jackie Chan run up a wall for a brief bit. The way they do that is by running at the wall first. They then jump at the wall, and the wall has to apply a force to decelerate their horizontal motion to zero. That force can be used as the necessary normal force to generate the required friction. However, once the wall has eliminated all of their velocity towards it, any further steps would again push them away from the wall entirely. So in theory, if you got a really fast run at the wall, you could make it up higher before you pushed yourself off of it again. This couldn't be more than a few steps though, because the wall quickly decelerates your horizontal velocity. That means that slow steps would take too long to go very far. Additionally, fast steps would mean that you need to accelerate upwards much harder. 
And so you'd require more friction, meaning a larger normal force from the wall, meaning a faster deceleration of horizontal velocity, meaning a sooner departure from the wall. Not to mention that to go really high would require you to run at the wall so fast that you'd splat when you jumped at it to start running up (splatting is bad, just fyi). This is a no-win scenario. That said, if you replace friction by another source of a vertical force (one that doesn't result in you being pushed off the wall) then the only thing that keeps you from sauntering up it is the strength of your legs. But if they can overcome gravity (and if you can climb a ladder, then that's a yes) then you can run or walk so long as something holds you to the wall. Credit where it's due, as t.c pointed out, spoilers like from a race car would also work to provide the oh so necessary force towards the wall. Also magnetic boots, a big fan pointed outwards, a strong wind, etc. The lesson? While it's really cool when a super-fast character in a story runs up a sky-scraper, this simply isn't realistic. Sorry to spoil the fun :-(
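To put rough numbers on "a few steps", here is a deliberately crude impulse-budget sketch (my own simplification, not from the answer above): assume the wall can only push back while it is absorbing the runner's horizontal momentum $mv$, and that static friction caps the accompanying vertical impulse at $\mu$ times the normal impulse. The extra upward speed gained is then at most $\mu v$, and the extra height at most $(\mu v)^2/2g$:

```python
# Crude impulse-budget model (an illustrative simplification):
# total normal impulse available = m * v_approach (the wall stops you),
# friction-limited vertical impulse <= mu * (normal impulse),
# so extra upward speed <= mu * v_approach, converted entirely into height.

def max_wall_run_height(v_approach, mu=1.0, g=9.81):
    """Upper bound on height gained running up a wall, beyond the initial jump."""
    dv_vertical = mu * v_approach           # friction-limited vertical speed gain
    return dv_vertical**2 / (2 * g)

for v in (4.0, 8.0, 12.0):                  # assumed approach speeds, m/s
    print(f"approach {v:4.1f} m/s -> at most {max_wall_run_height(v):5.2f} m up the wall")
```

Even at a sprinter's 12 m/s and a generous friction coefficient of 1, this bound is only about 7 m, which is why movie wall-runs top out at a couple of steps.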
{ "source": [ "https://physics.stackexchange.com/questions/142878", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62787/" ] }
143,224
Since the spectra of hydrogen and antihydrogen are the same, how do astronomers know which one they're detecting? Is, perhaps, the Lamb shift in antihydrogen different?
One cannot tell by the light spectra. Hydrogen and antihydrogen would give the same lines in the spectrum. The prevalence of matter over antimatter from other evidence indicates matter is predominant in the observable universe, and here is a nice review . How do we really know that the universe is not matter-antimatter symmetric? The Moon: Neil Armstrong did not annihilate, therefore the moon is made of matter. The Sun: Solar cosmic rays are matter, not antimatter. The other Planets: We have sent probes to almost all. Their survival demonstrates that the solar system is made of matter. The Milky Way: Cosmic rays sample material from the entire galaxy. In cosmic rays, protons outnumber antiprotons $10^4$ to $1$ . The Universe at large: This is tougher. If there were antimatter galaxies then we should see gamma emissions from annihilation. Their absence is strong evidence that at least the nearby clusters of galaxies (e.g., Virgo) are matter-dominated. At larger scales there is little proof. However, there is a problem, called the "annihilation catastrophe", which probably eliminates the possibility of a matter-antimatter symmetric universe. Essentially, causality prevents the separation of large chunks of antimatter from matter fast enough to prevent their mutual annihilation in the early universe. So the Universe is most likely matter dominated. So the astronomers presume they are detecting hydrogen, based on the analysis above.
{ "source": [ "https://physics.stackexchange.com/questions/143224", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/31819/" ] }
143,576
We usually hear that the speed of light in vacuum $c$ remains the same no matter how the observer is moving. I am wondering whether it is taken as a postulate or a proven phenomenon that $c$ is constant irrespective of the observer's speed?
I am wondering whether it is taken as a postulate or a proven phenomenon that c is constant irrespective of the observer's speed? Either one. Both. Einstein took it as a postulate in his 1905 paper on special relativity. From it, he proved various things about space and time. The frame-independence of $c$ is also experimentally supported. This is what the Michelson-Morley experiment showed (although it was not interpreted correctly until much later). You can also take other postulates for special relativity, describing the symmetry properties of space and time. In this case the constancy of $c$ becomes a theorem rather than an axiom. From a modern point of view, this approach makes more sense than Einstein's 1905 axiomatization, which puts light in a special role and defines $c$ as the speed of light. Nowadays we know that light is just one of several fields, and $c$ is not the speed of light but rather a conversion factor between space and time units. The symmetry approach goes back to W.v.Ignatowsky, Phys. Zeits. 11 (1911) 972, and can be found in various other modern presentations, such as this one or my own .
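A small numerical illustration, using the relativistic velocity-addition formula $w = (u+v)/(1+uv/c^2)$ (a standard consequence of the Lorentz transformation, not derived in this answer): composing any observer speed with $c$ returns exactly $c$, since the algebra collapses to $c(u+c)/(c+u) = c$.

```python
c = 299_792_458.0   # m/s

def relativistic_add(u, v):
    """Relativistic composition of collinear velocities u and v."""
    return (u + v) / (1 + u * v / c**2)

# A light signal (speed c) measured from frames moving at various speeds
# still comes out at exactly c:
for u in (0.0, 0.5 * c, 0.99 * c):
    w = relativistic_add(u, c)
    print(f"observer at {u/c:.2f}c measures the light signal at {w/c:.10f}c")
```

For sub-light speeds the same formula never exceeds $c$ (e.g. $0.5c$ combined with $0.5c$ gives $0.8c$), which is the sense in which the constancy of $c$ follows as a theorem from the symmetry postulates.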
{ "source": [ "https://physics.stackexchange.com/questions/143576", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/28258/" ] }
143,630
Previously I thought this was a universal theorem, since one can prove it in the one-dimensional case using the variational principle. However, today I was doing a homework problem with a potential like this: $$V(r)=\begin{cases}-V_0 & (r<a)\\ 0 & (r>a)\end{cases}$$ and found that there is no bound state when $V_0a^2<\pi^2\hbar^2/8m$. So what is the condition for having at least one bound state in 3D and 2D?
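The quoted threshold can be checked numerically by diagonalizing the s-wave radial Hamiltonian on a grid. In units $\hbar=m=a=1$ the threshold is $V_0=\pi^2/8\approx 1.234$; the grid parameters below are my own choices:

```python
import numpy as np

# Finite-difference check of the s-wave radial equation (units hbar = m = a = 1):
#   -(1/2) u''(r) + V(r) u(r) = E u(r),   u(0) = 0,
# with V = -V0 for r < 1 and 0 outside.  A bound state (E < 0) should appear
# only once V0 exceeds pi^2/8 ~ 1.234.

def ground_energy(V0, r_max=40.0, N=2000):
    r = np.linspace(0, r_max, N + 2)[1:-1]        # interior points; u(0) = u(r_max) = 0
    h = r[1] - r[0]
    diag = 1.0 / h**2 + np.where(r < 1.0, -V0, 0.0)
    off = -0.5 / h**2 * np.ones(N - 1)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]               # lowest eigenvalue

threshold = np.pi**2 / 8
E_below = ground_energy(0.8 * threshold)
E_above = ground_energy(2.0 * threshold)
print(f"V0 = 0.8 * threshold: E0 = {E_below:+.4f}  (no bound state)")
print(f"V0 = 2.0 * threshold: E0 = {E_above:+.4f}  (bound state)")
```

Below the threshold the lowest eigenvalue is a (slightly positive) discretized continuum state; above it, a genuinely negative bound-state energy appears.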
The precise theorem is the following, cf. e.g. Ref. 1. Theorem 1: Given a non-positive (=attractive) potential $V\leq 0$ with negative spatial integral $$ v~:=~\int_{\mathbb{R}^n}\! d^n r~V({\bf r}) ~<~0 ,\tag{1} $$ then there exists a bound state $^1$ with energy $E<0$ for the Hamiltonian $$\begin{align} H~=~&K+V, \cr K~=~& -\frac{\hbar^2}{2m}{\bf \nabla}^2\end{align}\tag{2} $$ if the spatial dimension $\color{Red}{n\leq 2}$ is smaller than or equal to two. The theorem 1 does not hold for dimensions $n\geq3$ . E.g. it can be shown that already a spherically symmetric finite well potential does not $^2$ always have a bound state for $n\geq3$ . Proof of theorem 1: Here we essentially use the same proof as in Ref. 2, which relies on the variational method . We can for convenience use the constants $c$ , $\hbar$ and $m$ to render all physical variables dimensionless, e.g. $$\begin{align} V~\longrightarrow~& \tilde{V}~:=~\frac{V}{mc^2}, \cr {\bf r}~\longrightarrow~&\tilde{\bf r}~:=~ \frac{mc}{\hbar}{\bf r},\end{align}\tag{3} $$ and so forth. The tildes are dropped from the notation from now on. (This effectively corresponds to setting the constants $c$ , $\hbar$ and $m$ to 1.) Consider a 1-parameter family of trial wavefunctions $$\begin{align} \psi_{\varepsilon}(r)~=~&e^{-f_{\varepsilon}(r)}~\nearrow ~e^{-1}\cr &\text{for}\quad \varepsilon ~\searrow ~0^{+} , \end{align}\tag{4}$$ where $$\begin{align} f_{\varepsilon}(r)~:=~& (r+1)^{\varepsilon} ~\searrow ~1\cr &\text{for}\quad \varepsilon ~\searrow ~0^{+}\end{align} \tag{5} $$ $r$ -pointwise. Here the $\nearrow$ and $\searrow$ symbols denote increasing and decreasing limit processes, respectively. E.g. eq. (4) says in words that for each radius $r \geq 0$ , the function $\psi_{\varepsilon}(r)$ approaches monotonically the limit $e^{-1}$ from below when $\varepsilon$ approaches monotonically $0$ from above. 
It is easy to check that the wavefunction (4) is normalizable: $$\begin{align}0~\leq~~&\langle\psi_{\varepsilon}|\psi_{\varepsilon} \rangle\cr ~=~~& \int_{\mathbb{R}^n} d^nr~|\psi_{\varepsilon}(r)|^2 \cr ~\propto~~& \int_{0}^{\infty} \! dr ~r^{n-1} |\psi_{\varepsilon}(r)|^2\cr ~\leq~~& \int_{0}^{\infty} \! dr ~(r+1)^{n-1} e^{-2f_{\varepsilon}(r)} \cr ~\stackrel{f=(1+r)^{\varepsilon}}{=}&~ \frac{1}{\varepsilon} \int_{1}^{\infty}\!df~f^{\frac{n}{\varepsilon}-1} e^{-2f}\cr ~<~~&\infty,\qquad \varepsilon~> ~0.\end{align}\tag{6} $$ The kinetic energy vanishes $$\begin{align} 0~\leq~~&\langle\psi_{\varepsilon}|K|\psi_{\varepsilon} \rangle \cr ~=~~& \frac{1}{2}\int_{\mathbb{R}^n}\! d^nr~ |{\bf \nabla}\psi_{\varepsilon}(r) |^2\cr ~=~~& \frac{1}{2}\int_{\mathbb{R}^n}\! d^nr~ \left|\psi_{\varepsilon}(r)\frac{df_{\varepsilon}(r)}{dr} \right|^2 \cr ~\propto~~& \varepsilon^2\int_{0}^{\infty}\! dr~ r^{n-1} (r+1)^{2\varepsilon-2}|\psi_{\varepsilon}(r)|^2\cr ~\leq~~&\varepsilon^2 \int_{0}^{\infty} \!dr ~ (r+1)^{2\varepsilon+n-3}e^{-2f_{\varepsilon}(r)}\cr ~\stackrel{f=(1+r)^{\varepsilon}}{=}&~ \varepsilon \int_{1}^{\infty}\! df ~ f^{1+\frac{\color{Red}{n-2}}{\varepsilon}} e^{-2f}\cr ~\searrow ~~&0\quad\text{for}\quad \varepsilon ~\searrow ~0^{+},\end{align} \tag{7}$$ when $\color{Red}{n\leq 2}$ , while the potential energy $$\begin{align}0~\geq~&\langle\psi_{\varepsilon}|V|\psi_{\varepsilon} \rangle\cr ~=~& \int_{\mathbb{R}^n} \!d^nr~|\psi_{\varepsilon}(r)|^2~V({\bf r}) \cr ~\searrow ~& e^{-2}\int_{\mathbb{R}^n} \!d^nr~V({\bf r})~<~0 \cr &\text{for}\quad \varepsilon ~\searrow ~0^{+} ,\end{align}\tag{8} $$ remains non-zero due to assumption (1) and Lebesgue's monotone convergence theorem . 
Thus by choosing $ \varepsilon \searrow 0^{+}$ smaller and smaller, the negative potential energy (8) beats the positive kinetic energy (7), so that the average energy $\frac{\langle\psi_{\varepsilon}|H|\psi_{\varepsilon}\rangle}{\langle\psi_{\varepsilon}|\psi_{\varepsilon}\rangle}<0$ eventually becomes negative for the trial function $\psi_{\varepsilon}$ . A bound state $^1$ can then be deduced from the variational method . Note in particular that it is absolutely crucial for the argument in the last line of eq. (7) that the dimension $\color{Red}{n\leq 2}$ . $\Box$ Simpler proof for $\color{Red}{n<2}$ : Consider an un-normalized (but normalizable) Gaussian test/trial wavefunction $$\psi(x)~:=~e^{-\frac{x^2}{2L^2}}, \qquad L~>~0.\tag{9}$$ Normalization must scale as $$||\psi|| ~\stackrel{(9)}{\propto}~ L^{\frac{n}{2}}.\tag{10}$$ The normalized kinetic energy scale as $$0~\leq~\frac{\langle\psi| K|\psi \rangle}{||\psi||^2} ~\propto ~ L^{-2}\tag{11}$$ for dimensional reasons. Hence the un-normalized kinetic scale as $$0~\leq~\langle\psi| K|\psi \rangle ~\stackrel{(10)+(11)}{\propto} ~ L^{\color{Red}{n-2}}.\tag{12}$$ Eq. (12) means that $$\begin{align}\exists L_0>0 \forall L\geq L_0:~~0~\leq~& \langle\psi|K|\psi\rangle\cr ~ \stackrel{(12)}{\leq} ~&-\frac{v}{3}~>~0\end{align}\tag{13}$$ if $\color{Red}{n<2}$ . The un-normalized potential energy tends to a negative constant $$\begin{align}\langle\psi| V|\psi \rangle ~\searrow~&\int_{\mathbb{R}^n} \! \mathrm{d}^nx ~V(x)~=:~v~<~0\cr &\quad\text{for}\quad L~\to~ \infty.\end{align}\tag{14}$$ Eq. 
(14) means that $$\exists L_0>0 \forall L\geq L_0:~~ \langle\psi| V|\psi\rangle ~\stackrel{(14)}{\leq}~ \frac{2v}{3} ~<~ 0.\tag{15}$$ It follows that the average energy $$\begin{align}\frac{\langle\psi|H|\psi\rangle}{||\psi||^2} ~=~~&\frac{\langle\psi|K|\psi\rangle+\langle\psi|V|\psi\rangle}{||\psi||^2}\cr ~\stackrel{(13)+(15)}{\leq}&~ \frac{v}{3||\psi||^2}~<~0\end{align}\tag{16}$$ of trial function must be negative for a sufficiently big finite $L\geq L_0$ if $\color{Red}{n<2}$ . Hence the ground state energy must be negative (possibly $-\infty$ ). $\Box$ References: K. Chadan, N.N. Khuri, A. Martin and T.T. Wu, Bound States in one and two Spatial Dimensions, J.Math.Phys. 44 (2003) 406 , arXiv:math-ph/0208011 . K. Yang and M. de Llano, Simple variational proof that any two‐dimensional potential well supports at least one bound state, Am. J. Phys. 57 (1989) 85 . -- $^1$ The spectrum could be unbounded from below. $^2$ Readers familiar with the correspondence $\psi_{1D}(r)=r\psi_{3D}(r)$ between 1D problems and 3D spherically symmetric $s$ -wave problems in QM may wonder why the even bound state $\psi_{1D}(r)$ that always exists in the 1D finite well potential does not yield a corresponding bound state $\psi_{3D}(r)$ in the 3D case? Well, it turns out that the corresponding solution $\psi_{3D}(r)=\frac{\psi_{1D}(r)}{r}$ is singular at $r=0$ (where the potential is constant), and hence must be discarded.
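The dimension dependence in the simpler proof can also be illustrated numerically. For the Gaussian trial state $\psi=e^{-r^2/2L^2}$ one has $\langle K\rangle/\|\psi\|^2 = n\hbar^2/4mL^2$, and for a spherical well of radius $a$ the probability fraction inside the well has a closed form via the error function. The well parameters ($V_0=0.05$, $a=1$, units $\hbar=m=1$) are my own illustrative choices:

```python
import math

# Variational energy <H> of the Gaussian trial state psi = exp(-r^2 / 2L^2)
# with a shallow well V = -V0 for r < a (units hbar = m = 1).
# <K>/<psi|psi> = n / (4 L^2); the fraction of probability inside the well
# has a closed form via erf for n = 1 and n = 3.

def well_fraction(L, n, a=1.0):
    """Fraction of the trial state's probability inside r < a."""
    x = a / L
    if n == 1:
        return math.erf(x)
    if n == 3:
        return math.erf(x) - 2 * x * math.exp(-x**2) / math.sqrt(math.pi)
    raise ValueError("only n = 1 and n = 3 are implemented in this sketch")

def trial_energy(L, n, V0=0.05):
    return n / (4 * L**2) - V0 * well_fraction(L, n)

Ls = [0.5 + 0.15 * i for i in range(400)]     # scan Gaussian widths L up to ~60
E1_min = min(trial_energy(L, 1) for L in Ls)
E3_min = min(trial_energy(L, 3) for L in Ls)
print(f"n = 1: min <H> = {E1_min:+.5f}  (negative: a bound state exists)")
print(f"n = 3: min <H> = {E3_min:+.5f}  (stays positive for this weak well)")
```

Exactly as the proof predicts: in $n=1$ the potential term ($\sim L^{-1}$) eventually beats the kinetic term ($\sim L^{-2}$), so the minimum dips below zero for an arbitrarily weak well, while in $n=3$ the same trial family never finds a negative energy.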
{ "source": [ "https://physics.stackexchange.com/questions/143630", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/24452/" ] }