477,933
Initially, the changing electric field generates a magnetic field around a wire carrying an alternating current. The electric field and magnetic field travel outward from the wire at the speed of light. -After leaving the wire, do the fields actually self-generate each other in a continuous cycle? -Can the fields not remain fixed and travel together as a unit after leaving the wire? -Is there an experimental observation that shows the fields are actually self-generating and not traveling as an unchanging unit? Edit: -If I understand correctly from Maxwell's equations, a time-varying electric field generating a magnetic field has not been experimentally determined (in capacitors; I do not know about electromagnetic radiation). A time-varying magnetic field generating an induced electric field can be observed with a solenoid. -I am not aware whether the induced electric field has been experimentally shown to be able to produce a magnetic field in empty space. An induced electric field can drive a current in a conductor, and the current charges in turn generate a magnetic field. -An unqualified observation would be: the electric field from charges can generate a magnetic field. An induced electric field cannot generate a magnetic field. -The key question is: in EM radiation, is a time-varying electric field generating a magnetic field based upon the prediction of Maxwell's equations, but not on experimental observation? -Could the electric field component of the EM radiation be obtained from source charges and the magnetic field component from the time-varying electric field of those source charges, with both fields moving as an unchanging unit?
Physics models rarely hint at the ontological level. Throwing dice can be modelled as a deterministic process, using initial conditions and equations of motion. Or it can be modelled as a stochastic process, using assumptions about probability. Both are appropriate in different contexts. There is no proof of which is "the real" model.
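To make the point concrete, here is a small sketch of the same die modelled both ways. Everything below is illustrative: the "deterministic" rule is a made-up stand-in for real equations of motion, not actual dice dynamics.

```python
# The same "die" modelled two ways. Neither model is "the real one";
# each is useful for answering different questions.
import random

def deterministic_die(theta0, omega0):
    # Caricature of equations of motion: the outcome is a fixed function
    # of the initial conditions (a hash-like rule, purely for illustration).
    return int((theta0 * 7919 + omega0 * 104729) % 6) + 1

def stochastic_die(rng):
    # Probabilistic model: outcomes are equally likely by assumption.
    return rng.randint(1, 6)

# Deterministic model: same initial conditions give the same outcome, every time.
print(deterministic_die(0.3, 1.7) == deterministic_die(0.3, 1.7))  # True

# Stochastic model: no prediction for a single throw, but long-run
# frequencies settle near 1/6 for each face.
rng = random.Random(0)
throws = [stochastic_die(rng) for _ in range(60000)]
print(throws.count(3) / 60000)  # close to 1/6
```

Both programs "explain" dice throwing; which one is appropriate depends on what you know (the initial conditions) and what you want to predict (one throw vs long-run frequencies).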
{ "source": [ "https://physics.stackexchange.com/questions/477933", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/95057/" ] }
478,060
We all have elaborate discussions in physics about classical mechanics, as well as the interaction of particles through forces and certain laws which all particles obey. I want to ask: does a particle exert a force on itself? EDIT Thanks for the respectful answers and comments. I edited this question in order to make it more elaborate. I just want to convey that I assumed the particle to be the standard model of a point mass in classical mechanics. As I don't know why there is a minimum requirement of two particles to interact via the fundamental forces of nature, in the same manner I wanted to ask: does a particle exert a force on itself?
This is one of those terribly simple questions which is also astonishingly insightful and surprisingly a big deal in physics. I'd like to commend you for the question! The classical mechanics answer is "because we say it doesn't." One of the peculiarities about science is that it doesn't tell you the true answer, in the philosophical sense. Science provides you with models which have a historical track record of being very good at letting you predict future outcomes. Particles do not apply forces to themselves in classical mechanics because the classical models which were effective for predicting the state of systems did not have them apply forces. Now one could provide a justification in classical mechanics. Newton's laws state that every action has an equal and opposite reaction. If I push on my table with 50N of force, it pushes back on me with 50N of force in the opposite direction. If you think about it, a particle which pushes on itself with some force is then pushed back by itself in the opposite direction with an equal force. This is like you pushing your hands together really hard. You apply a lot of force, but your hands don't move anywhere because you're just pushing on yourself. Every time you push, you push back. Now it gets more interesting in quantum mechanics. Without getting into the details, in quantum mechanics, we find that particles do indeed interact with themselves. And they have to interact with their own interactions, and so on and so forth. So once we get down to more fundamental levels, we actually do see meaningful self-interactions of particles. We just don't see them in classical mechanics. Why? Well, going back to the idea of science creating models of the universe, self-interactions are messy. QM has to do all sorts of clever integration and normalization tricks to make them sane. In classical mechanics, we didn't need self-interactions to properly model how systems evolve over time, so we didn't include any of that complexity.
In QM, we found that the models without self-interaction simply weren't effective at predicting what we see. We were forced to bring in self-interaction terms to explain what we saw. In fact, these self-interactions turn out to be a real bugger. You may have heard of "quantum gravity." One of the things quantum mechanics does not explain very well is gravity. Gravity on these scales is typically too small to measure directly, so we can only infer what it should do. On the other end of the spectrum, general relativity is substantially focused on modeling how gravity works on a universal scale (where objects are big enough that measuring gravitational effects is relatively easy). In general relativity, we see the concept of gravity as distortions in spacetime, creating all sorts of wonderful visual images of objects resting on rubber sheets, distorting the fabric they rest on. Unfortunately, these distortions cause a huge problem for quantum mechanics. The normalization techniques they use to deal with all of those self-interaction terms don't work in the distorted spaces that general relativity predicts. The numbers balloon and explode off towards infinity. We predict infinite energy for all particles, and yet there's no reason to believe that is accurate. We simply cannot seem to combine the distortion of spacetime modeled by Einstein's relativity and the self-interactions of particles in quantum mechanics. So you ask a very simple question. It's well phrased. In fact, it is so well phrased that I can conclude by saying your question is one of the great open questions physics is pursuing to this very day. Entire teams of scientists are trying to tease apart this question of self-interaction, and they search for models of gravity which function correctly in the quantum realm!
{ "source": [ "https://physics.stackexchange.com/questions/478060", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/230533/" ] }
478,142
Imagine we have a paper book. If we put it in a pan and increase the pan's temperature, the book will not catch fire. If, on the other hand, the book interacts with this heat source directly, it does catch fire. What is the difference between these two situations?
Before answering your question, it is important to understand how ignition of a solid material occurs. For fuels that contain hydrogen and carbon, like paper, ignition is a gas-phase phenomenon. It is not the solid itself that ignites. Before a solid material can be ignited, it must be partially converted into a volatile (combustible) gas. This generally requires heat. It is the combustible gases at the surface of the solid that actually ignite, not the solid itself. The process of decomposing a solid to generate combustible gas is called pyrolysis. The ignitable gaseous products of pyrolysis need to be mixed with oxygen (air) in the proper ratio in order to be in what is called the flammable range. Ignition of the gas/air mixture produced by heating the paper can occur in two ways. If you continue to increase the temperature of the mixture, it may reach what is called its auto (self) ignition temperature and ignite. This would be the mechanism for the book on a heated pan. Alternatively, exposing it to a pilot ignition source, such as an external flame or arc, can also ignite the mixture. That would be your book surrounded by air and exposed to a flame. The temperature of the mixture at which this occurs is called the piloted ignition temperature, or flash ignition temperature. Generally speaking, the piloted (flash) ignition temperature is less than the auto (self) ignition temperature. Returning to paper and your book, @StudyStudy mentioned Fahrenheit 451. That (233 C) happens to be the auto (self) ignition temperature of paper, made popular by the book of the same name. The original test used to determine that temperature comes from ASTM 1929, “Standard Test Method for Determining Ignition Temperatures of Plastics”, though the test is not restricted to plastics. The piloted (flash) ignition temperature using the ASTM test is about 177 C, which is less than the auto ignition temperature. Now let's consider your book on a pan.
Since there is no flame or arc above the pan, any ignition that would occur would be auto (self) ignition. All other things being equal, as noted above, auto ignition requires the gas to be at a higher temperature than ignition involving a pilot source (flame or arc). What's more, heating occurs at the bottom surface of the book. Much of this heat is conducted away from the heated surface into the mass of the book, as well as to the surrounding air by convection. Most of the gaseous products of pyrolysis that may be produced at the bottom are prevented from mixing with air, which is essential for ignition. The surrounding air above the pan dilutes those gaseous products that do escape the bottom surface. What you are likely to get is a book with a charred bottom but no flaming ignition. If the book is surrounded by air and subjected to an external flame, the much higher flame temperature can quickly cause both pyrolysis (thermal decomposition) and ignition of the resulting vapors. Hope this helps.
{ "source": [ "https://physics.stackexchange.com/questions/478142", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/220850/" ] }
478,273
What is the meaning of matter in physics? By defining matter in terms of mass and mass in terms of matter in physics, are we not forming circular definitions? Please give a meaning of "matter" in Physics that circumvents this circularity.
What is the meaning of "matter" in physics? It doesn't matter. Sometimes matter means "particles with rest mass". Sometimes matter means "anything that contributes to the stress-energy tensor". Sometimes matter means "anything made of fermions". And so on. There's no need to have one official definition of the word "matter"; nothing about the physical theories depends on what we call things. Discussing this any further is just like worrying about whether a tomato is really a fruit or a vegetable. A cook doesn't care.
{ "source": [ "https://physics.stackexchange.com/questions/478273", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/231249/" ] }
478,538
Let's say we have an electron around an atom, and the electron drops into a lower electron shell. Is 100% of the energy difference converted to a photon? Does the atom recoil at all? Is any of the energy lost to any other means?
There is always energy loss due to recoil. Considering a single atom that is in free space, the total momentum is conserved, and the total reaction energy is constrained by the transition energy $E_0$ (ignoring the natural line width due to the energy-time uncertainty), so we get in the centre of mass frame: $$0 = \hbar k + mv $$ $$E_0 = \frac 1 2 mv^2 + \hbar \omega = \frac 1 2 mv^2 + \hbar c k$$ Leading to the equation $$E_0 = \frac {\hbar^2 k^2}{2m} + \hbar c k.$$ So $$k = - \frac{mc}{\hbar} \pm \sqrt{ \frac{m^2c^2}{\hbar^2} + \frac{2mE_0}{\hbar^2} } = \frac{mc}{\hbar} \left( \sqrt{1 + \frac{2E_0}{mc^2}} - 1 \right) \approx \frac{E_0}{\hbar c} - \frac 1 2 \frac{E_0^2}{\hbar mc^3}. $$ This corresponds to an energy correction for the photon of $$\frac{\Delta E}{E_0} = -\frac{E_0}{2 mc^2}.$$ So the relative correction is of the order of the transition energy compared to the rest energy of the recoiling mass. If the atom interacts with other atoms (and is not in free space), the process becomes more complex and energy may be transferred to the interacting objects. In a crystal lattice, for example, the possible recoil energies are determined by the phonon spectrum; here, at low temperatures, the Mößbauer effect becomes important, wherein the recoil momentum is transferred to the entire crystal, leading to virtually recoil-free emission (since the reaction mass is macroscopically large). Further, there are other ways an electron can lose its energy, e.g. in Auger processes another electron is ejected from the atom, or the energy can be transferred ("coherently") to another atom (exciting an electron there). Combination processes, such as emitting a photon and ejecting an electron, are allowed if all quantum numbers are conserved.
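As a quick numerical illustration (my own sketch, not part of the original answer), one can evaluate the exact root for $k$ above for a hypothetical transition of about 10.2 eV with a hydrogen-like recoiling mass, and check that the photon energy plus the recoil kinetic energy reproduces $E_0$:

```python
# Numeric check of the recoil correction, using the exact root for k.
# The transition energy and mass are assumed for illustration only.
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m / s
eV = 1.602176634e-19     # J

E0 = 10.2 * eV           # transition energy (~ hydrogen Lyman-alpha, assumed)
m = 1.6726e-27           # recoiling mass, kg (roughly a hydrogen atom)

# k = (mc/hbar) * (sqrt(1 + x) - 1) with x = 2 E0 / (m c^2),
# evaluated in the numerically stable form x / (sqrt(1 + x) + 1).
x = 2.0 * E0 / (m * c**2)
k = (m * c / hbar) * (x / (math.sqrt(1.0 + x) + 1.0))

E_photon = hbar * c * k               # energy carried away by the photon
E_recoil = (hbar * k) ** 2 / (2 * m)  # kinetic energy of the recoiling atom

print((E_photon + E_recoil) / E0)     # 1.0: energy is conserved
print((E_photon - E0) / E0)           # ~ -5e-9: the tiny recoil correction
```

The relative shift comes out of order $10^{-9}$, in line with the statement that the correction scales as the transition energy over the rest energy of the recoiling mass (here a few eV against roughly a GeV).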
{ "source": [ "https://physics.stackexchange.com/questions/478538", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/179357/" ] }
479,055
I was taught that when the acceleration experienced by a body is constant, that body follows a parabolic curve. This seems logical because constant acceleration means velocity that is linear and position that is quadratic. This is what I learned from projectiles: Bodies are thrown with an initial velocity near the surface of the Earth, they experience constant acceleration and the result is a parabolic curve. Now that doesn't apply to the orbit of the Earth. The gravitational force can be thought of as constant since the distance from the Earth to the Sun can be thought of as constant too, which by Newton's Second Law means the acceleration of Earth is also constant. Wouldn't that mean that the Earth should just follow a parabolic path? Is there a mathematical proof (similar to the one I mentioned about projectiles) giving the elliptical orbit as a result? My question is, in a word, why can't the Earth be treated as a projectile? And if it can then why doesn't it behave like one?
Now that doesn't apply to the orbit of the Earth. The gravitational force can be thought of as constant since the distance from the Earth to the Sun can be thought of as constant too You are correct that the strength or magnitude of the sun's gravitational field is very similar over the length of the earth's orbit, but the direction is not. In a uniform gravitational field, the direction would be the same everywhere. Over the path of the earth's orbit, the sun's gravitational field points in different directions. This significant difference from a uniform field means that the earth's orbit is quite far from a parabola.
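A small numerical sketch makes the consequence visible (my own toy model, in units where $GM = 1$, not real solar-system values): with an acceleration that always points at the Sun, the orbit stays bounded between a minimum and a maximum radius instead of running off along a parabola.

```python
# Toy two-body integration with a semi-implicit (symplectic) Euler step.
# Units with GM = 1 are assumed; all numbers are illustrative.
import math

def central_orbit(steps=200_000, dt=1e-4):
    # Start at r = 1 with slightly more than circular tangential speed.
    x, y = 1.0, 0.0
    vx, vy = 0.0, 1.1
    r_min = r_max = 1.0
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -x / r3, -y / r3   # acceleration always directed at the Sun
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
        r = math.hypot(x, y)
        r_min, r_max = min(r_min, r), max(r_max, r)
    return r_min, r_max

r_min, r_max = central_orbit()
print(r_min, r_max)  # bounded: radius oscillates between perihelion and aphelion
```

For these initial conditions the orbital energy is negative, so the Kepler solution is an ellipse with perihelion near $r = 1$ and aphelion near $r \approx 1.53$; a uniform "projectile" field acting on the same body would instead produce an unbounded parabola.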
{ "source": [ "https://physics.stackexchange.com/questions/479055", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/231589/" ] }
479,515
I have noticed that, in my country India, most of the solar panels are tilted southward at an angle of $45^{\circ}$. Even on buildings with inverted V-shaped roofs, solar panels are still oriented southward on both sides of the roof. Research: Many sites suggest that the tilt aids in self-cleaning; another site stated that the tilt depends on factors like latitude. My questions: Why are solar panels tilted southward? How is the latitude of the location of a solar panel relevant in increasing efficiency?
First, not every solar panel in India is oriented towards the south or tilted at 45°. One of the world's largest photovoltaic power stations is installed in Kamuthi (9.3°N, Southern India), with PV modules tilted at 8°. Azimuth Panels are usually oriented towards the south in the northern hemisphere because the sun is mostly in the southern part of the sky. The sun is sometimes in the northern part of the sky, e.g. during sunrise and sunset in spring and summer, but only when it is relatively low, so this doesn't have a huge influence on the total yield. Here's a sun-path diagram for New Delhi (28.6°N, Northern India). When solar panels are installed on buildings, they sometimes have to be integrated directly into the roof, so the orientation will be dictated by the architecture. Depending on whether the electricity will be used on location, stored in batteries or sold to the grid, it might be interesting to produce less electricity per year but to produce it when it is most useful, e.g. during the afternoon for air conditioning. In that case, solar panels could be turned towards the west. Tilt Finding the best tilt angle is a compromise: too low, and the panels won't be cleaned by rain; too low, and the panels won't produce much in winter; too high, and the panels won't produce much in summer (this can be desired for solar thermal collectors, because boiling water could damage the pumps); too high, and the panels and mount will have to withstand higher forces in windy conditions; too high, and the rows will shadow each other or will have to be further apart. Since PV modules are getting cheaper and cheaper, the current trend is to put the modules almost flat, and as close to each other as possible. This way, a larger capacity can be installed for a given roof size.
A 45° tilt seems to be too high in India for photovoltaic panels. It could be about right for hot water production. Finally, this angle might have been dictated by architectural choices. Here's an average irradiance vs tilt diagram for New Delhi (28.6°N), and for Kamuthi (9.3°N). In both cases, the curves are pretty flat around the maximum, so the tilt angle could be chosen to be 20° or 25° in New Delhi in order to avoid shadows. It shouldn't be much flatter than 10° in Kamuthi in order to avoid soiling. Azimuth & Tilt North India Here are contour lines for yearly insolation vs orientation in New Delhi: Unsurprisingly, the orientation with the highest yearly yield is towards the South with a tilt between 25° and 30°, for an insolation of almost $2150\ \mathrm{kWh/(m^2\,a)}$. South India Kamuthi is so close to the equator that the azimuth doesn't matter much, as long as the tilt angle is low. If the tilt angle is higher (e.g. around 60° for solar thermal collectors), it's actually better to orient the panel towards the East or West than towards the South. Sources Every diagram has been generated with Ruby + INSEL + Gnuplot. Monthly irradiance data has been downloaded from PVGIS. Hourly values have been generated with Gordon-Reddy. The Hay & Davies diffuse sky model has been used to calculate global irradiance on tilted planes.
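The flatness of the yield-vs-tilt curve can be reproduced qualitatively even with a crude clear-sky, beam-only model (my own sketch; it ignores the atmosphere and diffuse light entirely, so its numbers are not comparable to the PVGIS-based figures above):

```python
# Relative yearly beam irradiation on a south-facing panel, no atmosphere,
# no diffuse light. Purely a qualitative sketch.
import math

def yearly_insolation(lat_deg, tilt_deg, step_hours=0.25):
    lat, tilt = math.radians(lat_deg), math.radians(tilt_deg)
    total = 0.0
    for day in range(1, 366):
        # Cooper's approximation for the solar declination
        dec = math.radians(23.45) * math.sin(2.0 * math.pi * (284 + day) / 365.0)
        for i in range(int(24 / step_hours)):
            omega = math.radians(15.0 * (i * step_hours - 12.0))  # hour angle
            sun_up = (math.sin(dec) * math.sin(lat)
                      + math.cos(dec) * math.cos(lat) * math.cos(omega)) > 0.0
            if sun_up:
                # a south-facing tilt b behaves like a horizontal plane
                # at the effective latitude (lat - b)
                cos_inc = (math.sin(dec) * math.sin(lat - tilt)
                           + math.cos(dec) * math.cos(lat - tilt) * math.cos(omega))
                total += max(cos_inc, 0.0) * step_hours
    return total  # arbitrary units

best = yearly_insolation(28.6, 28.6)         # tilt equal to New Delhi's latitude
print(yearly_insolation(28.6, 20.0) / best)  # close to 1: flat near the optimum
print(yearly_insolation(28.6, 80.0) / best)  # clearly lower: much too steep
```

Even this bare-bones model shows the two points made above: the yearly optimum sits near the latitude, and backing the tilt off by several degrees (e.g. to 20° to reduce row shadowing) costs only a small fraction of the yearly yield.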
{ "source": [ "https://physics.stackexchange.com/questions/479515", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/141548/" ] }
479,739
When watching a video by Veritasium about the SI units redefinition (5:29), a claim that the volt and the unit of resistance (presumably the ohm) will change by about 1 part in 10 million caught my attention: [...] I should point out that a volt will actually change by about 1 part in 10 million, and resistance will change by a little bit less than that. And that's because back in 1990, the electrical metrologists decided to stop updating their value of, effectively, Planck's constant, and just keep the one they had in 1990. And there was a benefit to that: they didn't have to update their definitions or their instruments. [...] Well, now the electrical metrologists will have to change. But, that's a very tiny change for a very tiny number of people. Apparently, the reason is that on 20 May 2019, redefinitions of the SI base units are scheduled to come into force. The kilogram will be redefined using the Planck constant, which, presumably, means that any change in value from the previous definition (the International Prototype of the Kilogram) would affect derived units depending on it, including the volt, ohm, farad, henry, siemens, tesla and (formerly) ampere. Will the volt or ohm change, as Veritasium seemingly claims? Are any other electrical units (listed above) affected? If so, exactly how much will they have changed after the redefinition?
Late last century, electrical standards based on Josephson junctions became common. A Josephson junction together with an atomic clock can give an exquisitely precise voltage standard in terms of the Josephson constant. Unfortunately, the then-current definition of the volt relied on the definition of the SI kilogram, which introduced substantial uncertainty. So we could provide a very precise voltage standard, but because of the imprecise definition of the volt we were not sure how many volts it was. Therefore, in 1990 the community came up with the conventional volt, denoted $V_{90}$, based on a fixed value of the Josephson constant, $K_{J-90}$. This conventional unit has served as a more accurate and reproducible standard for voltage since then; however, its exact value in terms of the SI $V$ was unknown due to the aforementioned lack of precision. https://en.wikipedia.org/wiki/Conventional_electrical_unit With the SI redefinition in a few days, $K_J$ will now have an exact value, and that value is slightly different from the exact value assigned to $K_{J-90}$ by the 1990 convention. Therefore, the SI $V$ is also slightly different from the conventional $V_{90}$. Because both $K_J$ and $K_{J-90}$ are exact, the conversion between SI and conventional volts is also exact, and therefore the conventional volt is abrogated. This means that electrical metrologists will need to stop using $V_{90}$ and use $V$, which has a slightly different value but the same precision. In other words, an accurate old 1 $V$ standard was much less precise than an old 1 $V_{90}$ standard, but an accurate new 1 $V$ standard will have the same precision as the abrogated 1 $V_{90}$ standard even though the value is slightly different. So as Veritasium pointed out, it's a very tiny change for a very tiny number of people, although it is not that $V_{90}$ is changing; it is just being abrogated. And the value of $V$ is not changing; it is just gaining precision.
Here is a summary of the affected electrical units and the changes being made:

| Unit | Symbol | Definition | Related to SI | SI value (CODATA 2014) | SI value (2019) |
|---|---|---|---|---|---|
| conventional volt | $V_{90}$ | see above | $\frac{K_\text{J-90}}{K_J}\,\text{V}$ | 1.000 000 0983(61) V | 1.000 000 106 66... V |
| conventional ohm | $\Omega_{90}$ | see above | $\frac{R_K}{R_\text{K-90}}\,\Omega$ | 1.000 000 017 65(23) Ω | 1.000 000 017 79... Ω |
| conventional ampere | $A_{90}$ | $V_{90}/\Omega_{90}$ | $\frac{K_\text{J-90}}{K_J}\cdot\frac{R_\text{K-90}}{R_K}\,\text{A}$ | 1.000 000 0806(61) A | 1.000 000 088 87... A |
| conventional coulomb | $C_{90}$ | $\text{s}\cdot A_{90} = \text{s}\cdot V_{90}/\Omega_{90}$ | $\frac{K_\text{J-90}}{K_J}\cdot\frac{R_\text{K-90}}{R_K}\,\text{C}$ | 1.000 000 0806(61) C | 1.000 000 088 87... C |
| conventional watt | $W_{90}$ | $A_{90} V_{90} = V_{90}^2/\Omega_{90}$ | $\left(\frac{K_\text{J-90}}{K_J}\right)^2\cdot\frac{R_\text{K-90}}{R_K}\,\text{W}$ | 1.000 000 179(12) W | 1.000 000 195 53... W |
| conventional farad | $F_{90}$ | $C_{90}/V_{90} = \text{s}/\Omega_{90}$ | $\frac{R_\text{K-90}}{R_K}\,\text{F}$ | 0.999 999 982 35(23) F | 0.999 999 982 20... F |
| conventional henry | $H_{90}$ | $\text{s}\cdot\Omega_{90}$ | $\frac{R_K}{R_\text{K-90}}\,\text{H}$ | 1.000 000 017 65(23) H | 1.000 000 017 79... H |

From the exact value of $K_{J-90}$ in the link above and the exact values of $e$ and $h$ given here, you can calculate that $\frac{K_{J-90}}{K_J} = \frac{71207857995393}{71207850400000}$ exactly. For the volt, that works out to approximately $1\,V_{90} = (1+1.0666\times 10^{-7})\,\text{V}$, or about 100 ppb.
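The exact arithmetic in the last paragraph can be checked with rational arithmetic. This is a sketch using Python's `fractions`; the inputs are the exact 2019 SI values of $e$ and $h$ and the exact conventional value $K_{J-90} = 483\,597.9\ \text{GHz/V}$:

```python
# Exact check that K_J-90 / K_J = 71207857995393 / 71207850400000.
from fractions import Fraction

K_J90 = Fraction(4835979, 10) * 10**9   # 483 597.9 GHz/V in Hz/V, exact by convention
e = Fraction(1602176634, 10**28)        # exact 2019 SI elementary charge, C
h = Fraction(662607015, 10**42)         # exact 2019 SI Planck constant, J s
K_J = 2 * e / h                         # SI Josephson constant, Hz/V

ratio = K_J90 / K_J                     # 1 V90 expressed in SI volts
print(ratio == Fraction(71207857995393, 71207850400000))  # True
print(float(ratio - 1))                 # ~1.0666e-7, i.e. roughly 100 ppb
```

Because every input is exact, the ratio is an exact rational number, which is precisely why the conventional volt can simply be abrogated rather than re-measured.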
{ "source": [ "https://physics.stackexchange.com/questions/479739", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
480,113
So the second episode of the HBO series began to cover the risk of a steam explosion, which led to them sending three divers into the water below the reactor to drain the tanks. This occurred after the initial explosion that destroyed the reactor, and after the fire in the core had been put out. But at this point the decay heat and remaining fission reactions kept the core at more than 1200°C, causing it to melt through the concrete floors below the reactor. And below the reactor were water tanks which contained 7,000 cubic meters of water (according to the TV show; if anyone has a real figure, I'd love to hear it). When the lava of the melted core hit it, it would cause an enormous steam explosion. Finally, my question: about how large would this explosion have been? The character in the show says "2-4 megatons" (of TNT equivalent, I assume). I'm pretty sure this is absurd and impossible. But real estimates are hard to come by. Other sources vary wildly, some repeating the "megatons" idea, and others saying it would've "level[ed] 200 square kilometers". This still seems crazy. tl;dr: I know a lot of it hinges on unknowns and the dynamics of the structures and materials involved, so I can simplify it to a constrained physics question: assuming 7,000 cubic meters of water instantly flashes to steam, how much potential energy is momentarily stored in that volume of steam occupying the same volume as the water did? I don't know what to assume the temperature of the steam is. There were hundreds of tons of core material at temperatures near 1200°C, so worst case scenario you could assume all the steam reaches that temperature as the materials mix. Best case scenario, I guess we could assume the normal atmospheric boiling point (100°C)?
In my view the water isn't really the thing to focus on here. The real energy reservoir was the partially-melted core; the water wasn't dangerous because it held energy, but rather because it had the potential to act as a heat engine and convert the thermal energy in the core into work. We can therefore calculate the maximum work which could conceivably be extracted from the hot core (using exergy) and use this as an upper bound on the amount of energy that could be released in a steam explosion. The exergy calculation will tell us how much energy an ideal (reversible) process could extract from the core, and we know from the Second Law of Thermodynamics that any real process (such as the steam explosion) must extract less. Calculation Using exergy, the upper bound on the amount of work which could be extracted from the hot core is \begin{align} W_\text{max,out} &= X_1 - X_2 \\ &= m(u_1 - u_2 -T_0(s_1-s_2)+P_0(v_1-v_2)) \end{align} If we assume that the core material is an incompressible solid with essentially constant density, then \begin{align} W_\text{max,out} &= m(c (T_1 - T_2) -T_0 c \ln(T_1/T_2)) \end{align} where $T_0$ is the temperature of the surroundings, $T_2$ is the temperature after energy extraction is complete, and $T_1$ is the initial temperature. At this point you just need to choose reasonable values for the key parameters, which is not necessarily easy. I used: $T_1 = 2800\,^\circ\text{C}$ based on properties of corium; $T_2 = T_0$ as an upper bound (the most energy is extracted when the system comes to the temperature of the surroundings); $T_0 = 25\,^\circ\text{C}$ based on SATP; $c = 300\,\text{J/(kg.K)}$ based on properties of UO$_2$; $m = 1000\,\text{tonnes}$ based on the text in your question. This gives me $W_\text{max,out} = 6.23 \times 10^{11}\,\text{J}$, or 149 tonnes of TNT equivalent.
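The arithmetic above can be reproduced in a few lines, using the same assumed parameter values:

```python
# Reproducing the exergy upper bound with the values stated in the answer.
import math

m = 1.0e6             # core mass, kg (1000 tonnes, assumed in the answer)
c = 300.0             # specific heat of UO2-like corium, J/(kg K)
T0 = 298.15           # surroundings, K (25 C)
T1 = 2800.0 + 273.15  # initial core temperature, K
T2 = T0               # upper bound: core brought all the way to ambient

W_max = m * (c * (T1 - T2) - T0 * c * math.log(T1 / T2))
print(W_max)            # ~6.2e11 J
print(W_max / 4.184e9)  # ~149 tonnes of TNT equivalent
```

Note how much the $-T_0 c \ln(T_1/T_2)$ term matters: roughly a quarter of the core's raw thermal energy is unavailable even to a perfect heat engine, which is exactly the point of using exergy rather than plain heat content.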
This is several orders of magnitude lower than the "megatons" estimate provided in your question, but does agree with your gut response that "megatons" seems unreasonably high. A sanity check is useful to confirm that my result is reasonable... Sanity Check With the numbers I used, the system weighs 1 kiloton and its energy is purely thermal. If we considered instead 1 kiloton of TNT at SATP, the energy stored in the system would be purely chemical. Chemical energy reservoirs are generally more energy-dense than thermal energy reservoirs, so we'd expect the kiloton of TNT to hold far more energy than the kiloton of hot core material. This suggests that the kiloton of hot core material should hold far less than 1 kiloton of TNT equivalent, which agrees with your intuition and my calculation. Limitations One factor which could increase the maximum available work would be the fact that the core was partially melted. My calculation neglected any change in internal energy or entropy associated with the core solidifying as it was brought down to ambient conditions; in reality the phase change would increase the maximum available work. The other source of uncertainty in my answer is the mass of the core; this could probably be deduced much more precisely from technical documents. A final factor that I did not consider is chemical reactions: if the interaction of corium, water, and fresh air (brought in by an initial physical steam explosion) could trigger spontaneous chemical reactions, then the energy available could be significantly higher. Conclusion Although addressing the limitations above would likely change the final upper bound, I doubt that doing so could change the bound by the factor of ten thousand required to give a maximum available work in the megaton range.
It is also important to remember that, even if accounting for these factors increased the upper bound by a few orders of magnitude, this calculation still gives only an upper bound on the explosive work; the real energy extracted in a steam explosion would likely be much lower. I am therefore fairly confident that the megaton energy estimate is absurd, as your intuition suggested.
{ "source": [ "https://physics.stackexchange.com/questions/480113", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/232002/" ] }
480,114
I am currently reading Weinberg's Lectures on Quantum Mechanics (I am halfway through chapter 4, which covers angular momentum and spin). While I like the book quite a lot, I have noticed that Weinberg's notation is not standard and his approach is very algebraic. I am looking for other graduate-level books that can be complementary to this one. An important factor for me is that it should be a book suitable for independent reading (I am not enrolled in QM courses at the moment), or that if any parts are omitted, they are covered by Weinberg. I have thought about L&L, but I haven't transcended mortality yet, so maybe not... I also heard good things about Cohen-Tannoudji; would it be appropriate? After learning more QM I plan to learn some Quantum Field Theory (I am particularly interested in QCD; from what I know it sounds interesting). I am also interested in nuclear physics (I wish to read Walecka's book in the future). Superconductivity is also on my radar, in particular those parts that involve topology (I have read the first half of Munkres' book). Thank you! Edit: following the recommendation made by @EverydayFoolish, I added the paragraph with my topics of interest.
In my view the water isn't really the thing to focus on here. The real energy reservoir was the partially-melted core ; the water wasn't dangerous because it held energy, but rather because it had the potential to act as a heat engine and convert the thermal energy in the core into work. We can therefore calculate the maximum work which could conceivably be extracted from the hot core (using exergy) and use this as an upper bound on the amount of energy that could be released in a steam explosion. The exergy calculation will tell us how much energy an ideal (reversible) process could extract from the core, and we know from the Second Law of Thermodynamics that any real process (such as the steam explosion) must extract less. Calculation Using exergy, the upper bound on the amount of work which could be extracted from the hot core is \begin{align} W_\text{max,out} &= X_1 - X_2 \\ &= m(u_1 - u_2 -T_0(s_1-s_2)+P_0(v_1-v_2)) \end{align} If we assume that the core material is an incompressible solid with essentially constant density, then \begin{align} W_\text{max,out} &= m(c (T_1 - T_2) -T_0 c \ln(T_1/T_2)) \end{align} where $T_0$ is the temperature of the surroundings, $T_2$ is the temperature after energy extraction is complete, and $T_1$ is the initial temperature. At this point you just need to choose reasonable values for the key parameters, which is not necessarily easy. I used: $T_1 = 2800\,^\circ\text{C}$ based on properties of corium $T_2 = T_0$ as an upper bound (the most energy is extracted when the system comes to the temperature of the surroundings) $T_0 = 25\,^\circ\text{C}$ based on SATP $c = 300\,\text{J/(kg.K)}$ based on properties of UO $_2$ $m = 1000\,\text{tonnes}$ based on the text in your question. This gives me $W_\text{max,out} = 6.23 \times 10^{11}\,\text{J}$ or 149 tonnes of TNT equivalent . 
This is several orders of magnitude lower than the "megatons" estimate provided in your question, but does agree with your gut response that "megatons" seems unreasonably high. A sanity check is useful to confirm that my result is reasonable... Sanity Check With the numbers I used, the system weighs 1 kiloton and its energy is purely thermal. If we instead considered 1 kiloton of TNT at SATP, the energy stored in the system would be purely chemical. Chemical energy reservoirs are generally more energy-dense than thermal energy reservoirs, so we'd expect the kiloton of TNT to hold far more energy than the kiloton of hot core material. This suggests that the kiloton of hot core material should hold far less than 1 kiloton of TNT equivalent, which agrees with your intuition and my calculation. Limitations One factor which could increase the maximum available work is the fact that the core was partially melted. My calculation neglected any change in internal energy or entropy associated with the core solidifying as it was brought down to ambient conditions; in reality the phase change would increase the maximum available work. The other source of uncertainty in my answer is the mass of the core; this could probably be deduced much more precisely from technical documents. A final factor that I did not consider is chemical reactions: if the interaction of corium, water, and fresh air (brought in by an initial physical steam explosion) could trigger spontaneous chemical reactions, then the energy available could be significantly higher. Conclusion Although addressing the limitations above would likely change the final upper bound, I doubt that doing so could change the bound by the factor of ten thousand required to give a maximum available work in the megaton range.
It is also important to remember that, even if accounting for these factors increased the upper bound by a few orders of magnitude, this calculation still gives only an upper bound on the explosive work; the real energy extracted in a steam explosion would likely be much lower. I am therefore fairly confident that the megaton energy estimate is absurd , as your intuition suggested.
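The arithmetic above is easy to reproduce; here is a short Python sketch using the same parameter values (rough estimates, as the answer itself notes, not measured data):

```python
import math

# Parameter values taken from the answer above (rough estimates, not data)
T1 = 2800 + 273.15   # K, initial core temperature (corium)
T0 = 25 + 273.15     # K, temperature of the surroundings (SATP)
T2 = T0              # K, final temperature: full equilibration (upper bound)
c = 300.0            # J/(kg K), specific heat of UO2-like material
m = 1.0e6            # kg, core mass of 1000 tonnes

# Maximum work from an incompressible solid: W = m*(c*(T1 - T2) - T0*c*ln(T1/T2))
W_max = m * (c * (T1 - T2) - T0 * c * math.log(T1 / T2))

tnt_tonne = 4.184e9  # J per tonne of TNT equivalent
print(f"W_max = {W_max:.2e} J = {W_max / tnt_tonne:.0f} tonnes of TNT")
```

Running this reproduces the $\sim 6.2 \times 10^{11}\,\text{J}$ ($\sim$149 tonnes TNT) figure quoted above.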
{ "source": [ "https://physics.stackexchange.com/questions/480114", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
480,163
I am an undergraduate Physics student completing my first year shortly. The following question is based on the physical systems I’ve encountered so far. (We mostly did Newtonian mechanics.) In all of our analyses of physical systems (up till now) we recklessly exploited Taylor’s series, retaining terms up to the desired precision of our approximate model of reality. But what is the justification for using Taylor’s series? It implicitly assumes that the mathematical functions in our physical model are analytic . But how can we be sure about that? Sure, nature doesn’t seem to be discontinuous or have “kinks” (i.e. nonexistent derivatives) in its behaviour. That seems plausible. But still, there are non-analytic smooth functions. And there are “many” more of them than there are analytic functions. So even if nature works smoothly in its endeavours, it is essentially zero probability that it should do so analytically. So why do we use Taylor’s series at all?
I had this same problem, too. The trick with it is realizing that there's an important difference between Taylor series and Taylor approximations or polynomials , whose behavior is described by Taylor's theorem . I suspect a common mistake is that you first see Taylor polynomials and Taylor's theorem, and then you get Taylor series and that becomes the focus and suddenly you forget about the rest. But here, what we're actually doing when we "truncate" a Taylor series is that we are going back to a Taylor polynomial , since that is what a truncated Taylor series is - or alternatively, a Taylor series is the natural extension of a Taylor polynomial to infinite order. In that context, Taylor's theorem tells you exactly how it does or does not behave as an approximation and - surprise - it doesn't require anything about analyticity at all. Analyticity only comes into play when you consider the full series: in fact, what Taylor's theorem tells you is that a finite Taylor polynomial will still work as an approximation for even a non -analytic function, so long as you get suitably close to the point at which you're taking the polynomial and the function is differentiable enough for a polynomial of the given degree to exist. Specifically, Taylor's theorem tells you that, analytic or not , if you cut the Taylor series so that the highest term has degree $N$ , to form the Taylor polynomial (or truncated Taylor series) $T_N(a, x)$ , where $a$ is the expansion point, you have $$f(x) = T_N(a, x) + o(|x - a|^N),\ \ \ \ \ x \rightarrow a$$ where the last part defines the behavior of the remainder term: this is the "little-o notation" and means that the error pales in comparison to the bound $|x - a|^N$ .
As an example in elementary mathematical physics, consider the analysis of the "pathological" potential in Newtonian mechanics given by $$U(x) := \begin{cases} e^{-\frac{1}{x^2}},\ x \ne 0\\ 0,\ \mbox{otherwise} \end{cases}$$ which is smooth everywhere , but not analytic when $x = 0$ . In particular, it is so bad that not only is it not analytic, the Taylor series exists and even converges ... just to the wrong thing! : $$U(x)\ "="\ 0 + 0x + 0x^2 + 0x^3 + 0x^4 + \cdots,\ \ \ \ \mbox{near $x = 0$}$$ ... and yes, that is literally 0s on every term , so the right-hand expression equals $0$ ! (ADD - see comments: no... not THAT 0! ... uh ... Ooops... uhhh ... ) Nonetheless , while that is technically "wrong", the usual analysis methods you have for this system will still tell you the "right thing", provided you're careful : in particular, we note that $x = 0$ looks like some kind of "equilibrium" since $U'$ is zero there, but we also note that we are told - correctly! - that we should not apply the harmonic oscillator approximation because we also have that the coefficient out in front of $x^2$ is 0 as well. We are justified in both conclusions because while this Taylor series is "bad", it is still A-OK by Taylor's theorem to write the truncated series, and thus Taylor polynomial , $$U(x) \approx 0 + 0x + 0x^2,\ \ \ \ \mbox{near $x = 0$}$$ even though it "equals $0$ ", because this $U(x)$ is "so exquisitely approximated by the constant function $U^{*}(x) := 0$ " that it is $o(|x|^N)$ for every order $N > 0$ and thus, in particular, also $N = 2$ ! Hence, the harmonic analysis and conclusion of failure thereof are still 100% justified! ADD (IE+1936.6817 Ms - 2018-05-16): Per a comment added below, there is an additional wrinkle in this story which had been thinking of mentioning but didn't, yet for which, in light of that, I thought maybe I now should. 
There are actually two different kinds of ways in which the Taylor series can fail when it is taken at a point where the function is not analytic. One of these is the way I showed above - where the Taylor series converges, but it converges to the "wrong" thing in that it does not equal the function in any non-trivial interval around that point (you might be able to have it equal the function on some weird dusty/broken-up set, but not on any interval), i.e. no interval $[a - \epsilon, a + \epsilon]$ with $\epsilon \ne 0$ . Such a point is called a Cauchy point , or C-point . The other way is for the Taylor series to actually have radius of convergence 0, i.e. it does not converge in any non-trivial interval of the same form with $\epsilon \ne 0$ . This kind of point is called a Pringsheim point , or P-point . This case was not demonstrated, but even in such a case, the Taylor series is still an asymptotic series in the sense that it will at least try to start to converge if you're close enough and, moreover, the closer you are to the expansion point $a$ , the more terms you can take before it stops converging and starts to diverge again. Since in physics we are usually interested - esp. for the harmonic oscillator - in only a few low-order terms, the ultimate behavior of the series is not important and we can still take it to get, say, the harmonic approximation near a point of equilibrium even if the function is not analytic there - e.g. consider the potential $U_3(x) := U(x) + \frac{1}{2} kx^2$ with $k > 0$ , where we used the first potential we just gave above. This is not analytic at $x = 0$ either, but nonetheless, the harmonic approximation will not only work, but work exquisitely well, with the frequency $\omega := \sqrt{\frac{k}{m}}$ as usual. See: https://math.stackexchange.com/questions/620290/is-it-possible-for-a-function-to-be-smooth-everywhere-analytic-nowhere-yet-tay
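As a numerical illustration (my own addition, not part of the original argument), one can check directly that $U(x) = e^{-1/x^2}$ is $o(|x|^N)$ near $x = 0$ for any $N$, so the all-zero Taylor polynomial really is a valid approximation of every order:

```python
import math

def U(x):
    """The classic smooth-but-not-analytic-at-0 bump: exp(-1/x^2), U(0) = 0."""
    return 0.0 if x == 0 else math.exp(-1.0 / x**2)

# Taylor's theorem says the all-zero polynomial is an N-th order
# approximation iff U(x)/x^N -> 0 as x -> 0.  Check at x = 0.1:
x = 0.1
for N in (2, 4, 8):
    print(f"N = {N}: U(x)/x^N = {U(x) / x**N:.3e}")  # vanishingly small
```

At $x = 0.1$ we have $U(x) = e^{-100} \approx 10^{-43}$, which is crushed below every power $x^N$, exactly as the theorem demands.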
{ "source": [ "https://physics.stackexchange.com/questions/480163", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/231957/" ] }
480,190
I was wondering about the Roche limit and its effects on satellites. Why aren't artificial satellites ripped apart by gravitational tidal forces of the earth? I think it's due to the satellites being stronger than rocks? Is this true? Also, is the Roche limit just a line (very narrow band) around the planet or is it a range (broad cross sectional area) of distance around the planet?
The Roche limit denotes how close a body held together by its own gravity can come. Since gravity tends to be the only thing holding moon-sized objects together, you won't find natural moons closer than the Roche limit. [Strictly speaking, the Roche Limit is a function of both the primary (in the case of this question, Earth) and the secondary (satellites) bodies; there is a different Roche limit for objects with different densities, but for simplicity I'll be treating the Roche Limit as being a function just of the primary.] For instance, Saturn's rings lie inside its Roche limit, and may be the debris from a satellite that was ripped apart. The rings are made up of small particles, and each particle is held together by molecular bonds. Since they have something other than gravity holding them together, they are not ripped apart any further. Similarly, an artificial satellite is also held together by molecular bonds, not internal gravity. The molecular-bonds-will-be-ripped-apart-by-tidal-forces limit is obviously much smaller than a satellite's orbit, as we, on the surface of the Earth, are even closer, and we are not ripped apart. You would have to have an extremely dense object, such as a neutron star or black hole, for that limit to exist. Being inside the Roche limit does mean that if an astronaut were to go on a space walk without a tether, tidal forces would pull them away from the larger satellite. Outside the Roche limit, the gravity of the larger satellite would pull the astronaut back (although not before the astronaut runs out of air). If you look at the influence of the Moon's tides on Earth, you can see that the oceans are pulled towards the Moon, but the land is (relatively) stationary. The fact that tides are only a few meters shows that the Earth is well outside the Moon's Roche limit (and of course, the Earth's Roche limit is further out than the Moon's, so the Moon would reach the Earth's Roche limit long before the Earth reached the Moon's). 
If the Moon were to move towards the Earth, the tides would get higher and higher. The Moon's Roche limit is the point at which the tides would get so high that the water is ripped away from the Earth. The land would still survive slightly past that point, because the crust has some rigidity beyond mere gravitational attraction. Regarding your second question: there is a region in which the tidal forces would be larger than internal gravitational attraction, and a region in which internal gravitational attraction would be larger than tidal forces. The Roche limit is the boundary between those two regions. Everything inside the Roche limit constitutes the former region, while everything outside the Roche limit constitutes the latter.
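For concreteness, here is a rough sketch evaluating the rigid-body Roche limit, $d = R_p\,(2\rho_p/\rho_s)^{1/3}$, for the Earth–Moon system (the fluid-body limit is roughly twice as far out; the inputs are standard mean radii and densities):

```python
# Rigid-body Roche limit: d = R_p * (2 * rho_p / rho_s)**(1/3)
R_earth = 6.371e6    # m, Earth's mean radius
rho_earth = 5514.0   # kg/m^3, Earth's mean density
rho_moon = 3344.0    # kg/m^3, Moon's mean density

d = R_earth * (2.0 * rho_earth / rho_moon) ** (1.0 / 3.0)
print(f"Rigid-body Roche limit for the Moon: {d / 1e3:.0f} km")

# The Moon's actual orbital radius is about 384,400 km, far outside this.
```

This gives a limit of roughly 9,500 km, which is why the Moon, at about forty times that distance, is in no danger.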
{ "source": [ "https://physics.stackexchange.com/questions/480190", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/231868/" ] }
481,149
In my understanding, the center of the Earth is hot because of the weight of its own matter being crushed in on itself because of gravity. We can use water to collect this heat from the Earth and produce electricity with turbines. However, I'd imagine that doing this at an enormous, impossibly large scale would not cool the center of the Earth to the same temperature as the surface, since gravity is still compressing the rock together. However, since energy cannot be created or destroyed, it seems like this energy is just coming from nowhere. I doubt the Earth's matter is being slowly consumed to generate this energy, or that the sun is somehow causing the heating. I think that I have misunderstood or overlooked some important step in this process. If so, why (or why not) does the Earth's center heat up, and, if not, does geothermal energy production cool it down irreversibly?
Heating because of high pressure is mostly an issue in gases, where gravitational adiabatic compression can bring up the temperature a lot (e.g. in stellar cores). It is not really the source of geothermal heat. Earth's interior is hot because of three main contributions : "Primordial heat": energy left over from when the planet coalesced. The total binding energy of Earth is huge ( $2\cdot 10^{32}$ J) and when the planetesimals that formed Earth collided and merged they had to convert their kinetic energy into heat. This contributes 5-30 TW of energy flow today. "Differentiation heat": the original mix of Earth was likely relatively even, but heavy elements would tend to sink towards the core while lighter would float up towards the upper mantle . This releases potential energy. "Radiogenic heat": The Earth contains a certain amount of radioactive elements that decay, heating up the interior. The ones that matter now are the ones that have half-lives comparable with the age of Earth and high enough concentrations; these are $^{40}$ K, $^{232}$ Th, $^{235}$ U and $^{238}$ U. The heat flow due to this is 15-41 TW. Note that we know the total heat flow rather well, about 45 TW, but the relative strengths of the primordial and radiogenic heat are not well constrained. The energy is slowly being depleted, although at a slow rate: the thermal conductivity and size of Earth make the heat flow out rather slowly. Geothermal energy plants may cool down crustal rocks locally at a faster rate, getting less efficient over time if they take too much heat. But it has no major effect on the whole system, which is far larger.
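As a very rough back-of-envelope illustration of how slowly the reservoir drains (my own order-of-magnitude comparison, not a claim from the answer above), compare the stored energy to the present-day heat flow:

```python
# Order-of-magnitude comparison: stored energy vs. present-day heat flow
E_stored = 2e32    # J, order of Earth's gravitational binding energy (above)
P_out = 45e12      # W, total geothermal heat flow (above)

seconds_per_year = 3.156e7
t_years = E_stored / P_out / seconds_per_year
print(f"Depletion timescale: {t_years:.1e} years")  # ~1e11 years
```

A timescale of order $10^{11}$ years, vastly longer than the age of the universe, which is why the slow depletion is of no practical concern.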
{ "source": [ "https://physics.stackexchange.com/questions/481149", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/191721/" ] }
481,557
I have read this question: Electromagnetic gravity where Safesphere says in a comment: Actually, photons themselves don't bend spacetime. Intuitively, this is because photons can't emit gravitons, because, as any massless particles not experiencing time, photons can't decay by emitting anything. The latest theoretical results show that the gravitational field of a photon is not static, but a gravitational wave emanating from the events of the emission and absorption of the photon. Thus the spacetime is bent by the charged particles emitting or absorbing photons, but not by the photons themselves. If photon can bend spacetime how does it exchange graviton? Is there experimental evidence that massless particles such as photons attract massive objects? where John Rennie says: as far as I know there has been no experimental evidence that light curves spacetime. We know that if GR is correct it must do, and all the experiments we've done have (so far) confirmed the predictions made by GR, so it seems very likely that light does indeed curve spacetime. Now this cannot be right. One of them says photons do bend spacetime, since they do have stress-energy, but it is hard to measure it since the energy they carry is little compared to astronomical body's stress-energy. So they do bend spacetime, it is just that it is hard to measure it with our currently available devices. Now the other one says that photons do not bend spacetime at all. It is only the emitting charge (fermion) that bends spacetime. Which one is right? Do photons bend spacetime themselves because they do have stress-energy or do they not?
Classical electromagnetic fields carry energy and momentum and therefore cause spacetime curvature. For example, the EM field around a charged black hole is taken into account when finding the Reissner-Nordstrom and Kerr-Newman metrics. The question of whether photons cause spacetime curvature is a question about quantum gravity, and we have no accepted theory of quantum gravity. However, we have standard ways of quantizing linear perturbations to a metric, and reputable journals such as Physical Review D have published papers on graviton-mediated photon-photon scattering, such as this one from 2006. If such calculations are no longer mainstream, it is news to me. Given that photons have energy and momentum, it would surprise me if they do not induce curvature. I also note that the expansion of the "radiation-dominated" early universe was caused by what is generally described as a photon gas and not as a classical electromagnetic field. So the idea that photons bend spacetime is part of mainstream cosmology, such as the standard Lambda-CDM model. Finally, the idea of a kugelblitz makes no sense to me unless photons bend spacetime. So in Rennie v. Safesphere, I am on the Rennie side, but I look forward to Safesphere defending his position in a competing answer. Addendum: Safesphere declined to answer; in a now-removed comment, he said that knzhou’s answer explains the disagreement. I don’t agree. I disagree with knzhou that “bends spacetime” is vague. It is commonly understood by most physicists to mean “contributes to the energy-momentum tensor on the right side of the Einstein field equations”. And most physicists believe that real photons do exactly this, for the reasons that Ben Crowell and I have stated.
{ "source": [ "https://physics.stackexchange.com/questions/481557", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/132371/" ] }
481,634
Is that something we could do if we used ion or nuclear thrusters? Wouldn't people in the station reach 0.99993 times the speed of light in just 5 years accelerating at 1g, and effectively travel into the future by 83.7 years? That would be a great experiment and a very effective way to show relativity theory in action. I mean, the people inside the station would have effectively traveled into the future; how cool is that? Why hasn't it been done yet?
It is not feasible because it would cost an enormous amount of energy to accelerate the spacecraft. To prove this let's calculate with some concrete numbers. Very optimistically estimated, your spacecraft may have a mass of $m=1000\text{ kg}$ (enough for a few people and a small space capsule around them, but neglecting the mass of the fuel needed). And you said you want a speed of $v=0.99993\cdot c$ . Now you can calculate the relativistic kinetic energy of it: $$\begin{align} E_{\text k} &= \frac{mc^2}{\sqrt{1-v^2/c^2}} - mc^2 \\ &= \left(\frac{1}{\sqrt{1-v^2/c^2}}-1\right) mc^2 \\ &= \left(\frac{1}{\sqrt{1-0.99993^2}}-1\right)\cdot 1000 \text{ kg}\cdot (3\cdot 10^8\text{ m/s})^2 \\ &= (84.5-1)\cdot 1000 \text{ kg}\cdot (3\cdot 10^8\text{ m/s})^2 \\ &= 7.5 \cdot 10^{21}\text{ J} \end{align}$$ Now this is an enormous amount of energy. It is comparable to the yearly total world energy supply. (According to Wikipedia:World energy consumption the total primary energy supply for the year 2013 was $5.67 \cdot 10^{20}\text{ J}$ .)
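The kinetic-energy figure above is easy to verify numerically:

```python
import math

m = 1000.0       # kg, optimistic spacecraft mass
c = 3.0e8        # m/s, speed of light (rounded, as in the answer)
v = 0.99993 * c  # target speed

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)  # Lorentz factor
E_k = (gamma - 1.0) * m * c**2               # relativistic kinetic energy
print(f"gamma = {gamma:.1f}, E_k = {E_k:.2e} J")
```

This reproduces $\gamma \approx 84.5$ and $E_\text{k} \approx 7.5 \times 10^{21}\,\text{J}$, on the order of a decade's worth of the world's total energy supply.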
{ "source": [ "https://physics.stackexchange.com/questions/481634", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/157916/" ] }
482,014
My family members dislike the idea of having many devices communicating wirelessly in our house, arguing that the signals have negative effects on our physical health. I would like to tell them the EM signals are in fact weaker than the light from our lamps, but I could not really confirm this. Could someone tell me how strong the signals from wireless devices are compared to those from lights, and perhaps those from the Sun as well? What about the signals from radios and mobile phones? What is the scientific basis for claiming the radiation has or doesn't have effects on the human body?
Could someone tell me how strong the signals from the wireless devices are compared to those from lights, and perhaps those from the Sun as well? At the surface of the Earth, the Sun delivers approximately 1 kW/m $^2$ [ Wikipedia ]. An average 100 W incandescent light bulb is only about 2.5% efficient so it emits 2.5 W of optical power (the rest is emitted as infrared or ultraviolet as described in the case of the Sun below). Wireless routers emits about 0.1 W of power. A cell phone emits about 1 W. Let's put all that into a table, but let's make sure we include the frequency of the radiation emitted by each of these sources: +----------------+-----------+-----------+ | Source | Power (W) | Frequency | +----------------+-----------+-----------+ | Sun | 1000/m^2 | optical | | Light bulb | 2.5 | optical | | Cell phone | 1 | microwave | | WiFi router | 0.1 | microwave | | Microwave oven | 700 | microwave | +----------------+-----------+-----------+ The Sun is by far the strongest emitter in our daily lives. That's pretty obvious though if you think about the fact that looking at the Sun is painful and would destroy your eyes while looking at a WiFi router is no problem. What is the scientific basis for claiming the radiation has or doesn't have effects on the human body? There are two factors that determine whether radiation is hamful: flux and frequency. Flux means roughly the number of photons flowing through a certain area per time. Frequency means the frequency of each photon. Power $P$ is related to frequency $\omega$ and flux $\Phi$ via $$P = \Phi \hbar \omega \, .$$ However, power is not the only thing that determines harmfulness. It turns out various materials have specific frequencies where they do and do not absorb radiation. For example, glass does not absorb optical radiation, which is why you can see through it. The Sun emits power over a range of frequencies but the peak is in the optical (i.e. visible) range. 
That comes as no surprise because of course our eyes evolved to see the light that exists on Earth. Optical radiation has relatively high energy and because of that it gets readily absorbed by the outer parts of your body (except for the clear parts of the eyes). However, we're not usually exposed to enough optical radiation flux to do any harm. For example, we don't usually encounter lights strong enough to burn us. A really strong industrial laser would be a counterexample. On the other hand, the part of the solar spectrum at frequencies just above the optical, known as "ultraviolet", has enough energy to damage your body cells, causing sunburn and skin cancer. The part of the solar spectrum at frequencies below the optical, known as "infrared", is commonly called "heat". The infrared is generally too low energy to destroy body cells at the levels coming from the Sun. Incandescent light bulbs also emit a spectrum of radiation, and the story is relatively similar to the story we told for the Sun. Now, cell phones, WiFi routers, and microwave ovens all produce microwave radiation, which is in the range of 1 GHz frequency. That's about 100,000 times lower frequency than visible light. Microwave radiation penetrates your skin and goes through your body. That's why microwave ovens work; the radiation permeates the food and heats it up. Compare that to putting food right next to the heating element of a broiler, in which case the food's outside cooks very quickly before the whole thing is done. Anyway, the point is that microwave radiation penetrates your body. That might sound scary, but microwave photons are too low in energy to damage your cells the way that ultraviolet does. So even though microwaves heat you up a little bit, they don't give you cancer the way that sunlight does.
On the other hand, if you're $3\text{m}$ away from a WiFi router and we approximate the size of a head cross as $20\text{cm}$ , then the fraction of the WiFi router's power going through your head is only about $$\text{WiFi power fraction} = \frac{(20\text{cm})^2}{\underbrace{4 \pi (3\text{m})^2}_\text{surface area of sphere}} = 0.00035 \, .$$ So all together, the ratio of phone energy to WiFi router energy going through your brain is \begin{align} \frac{\text{phone power through brain}}{\text{WiFi power through brain}} &= \frac{\text{phone emitted power}}{\text{WiFi emitted power}} \times \frac{\text{phone power fraction}}{\text{WiFi power fraction}} \\ &= \frac{1\text{W}}{0.1\text{W}} \times \frac{0.5}{0.00035} \\ &\approx 14,000 \, . \end{align} So a cell phone puts about 14,000 times more power through your brain than a WiFi router. If your folks are afraid of the WiFi router, they should be terrified by cell phones.
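The geometric estimate can be reproduced in a few lines, using the same rough numbers (1 W phone, 0.1 W router, 3 m distance, 20 cm head):

```python
import math

P_phone = 1.0    # W, emitted by a cell phone
P_wifi = 0.1     # W, emitted by a WiFi router

f_phone = 0.5                                 # half the phone's power through the head
f_wifi = 0.20**2 / (4 * math.pi * 3.0**2)     # (head cross-section) / (sphere at 3 m)

ratio = (P_phone * f_phone) / (P_wifi * f_wifi)
print(f"WiFi fraction = {f_wifi:.2e}, phone/WiFi power ratio = {ratio:.0f}")
```

This reproduces the $\sim 3.5 \times 10^{-4}$ WiFi fraction and the $\sim$14,000:1 phone-to-router ratio quoted above.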
{ "source": [ "https://physics.stackexchange.com/questions/482014", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/109166/" ] }
482,287
I get that Earth's mass is very large, so its acceleration is very tiny. But wouldn't the acceleration accumulate over a period of time and become noticeable?
It seems you have the same misunderstanding that most people have before fully understanding Newtonian physics. They think: Only the moon rotates around the earth, and the earth stands still. But this is wrong. Actually the earth does accelerate towards the moon, in much the same way as the moon accelerates towards the earth. And that's why not only the moon, but also the earth rotates around their common barycenter (the $\color{red}{+}$ in the animation below), albeit with a smaller radius. (animated image from Wikipedia: Barycenter - Gallery ) Edit (in reply to question asked in comment, now moved to chat ): The attractive force points vertically down to the center of the earth. It has no horizontal component. Therefore this force adds no horizontal speed to the moon's movement. The moon has had a horizontal speed since its creation billions of years ago. The attractive force acts only vertically. Therefore the moon's path is a curve bending towards the earth, instead of just a straight line. The same applies to you when standing on the earth. The attractive force adds no horizontal speed to your movement, and since you had no horizontal speed from the beginning, it stays like this.
{ "source": [ "https://physics.stackexchange.com/questions/482287", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/232914/" ] }
482,302
Consider the following system: Newton's second law for rotational motion: \begin{equation}\tau=I\alpha \Leftrightarrow rF=\frac{1}{3}mr^{2}\alpha \Leftrightarrow \frac{d\omega}{dt}=\frac{3F}{mr}\end{equation} Considering the RHS constant, we get $\omega=\frac{3F}{mr}t.$ I'm not sure if the angular velocity should be inversely proportional to the radius (from everyday experience I know that pushing farther out requires less force). Also, what happens if the bar is not fixed and the two opposite forces act at the ends of the bar? Since their sum is $\vec{0}$ there is translational equilibrium and so the axis of rotation is at the $C.M.$, but will the action of the two forces change the angular velocity from the previous situation?
{ "source": [ "https://physics.stackexchange.com/questions/482302", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/232917/" ] }
482,520
If there is no air and your eyes are closed, does falling from the sky under gravity feel the same as floating in space? Can our bodies feel that we are accelerating without the air hitting us? If not, how are the two situations different? Also, are free fall and zero-g the same thing? When we are falling freely we are accelerating at g towards the earth, so why would it be called "zero g"?
In essence, yes. Being on a space station in orbit basically IS falling due to gravity; it's just that the astronaut and the space station keep missing the Earth due to constantly moving sideways, so they never hit or fall onto the Earth. But they basically ARE falling. Our bodies can't tell the difference, because all your body parts are accelerating and moving at the same rate; they're not in any tension in relation to each other, so it's like there's no force, none that you, the person, can feel anyway. There are some minor differences, tidal forces, but these effects are minor unless you're orbiting near a black hole etc. Tidal forces: slightly stronger gravity near the gravity source, so your feet, for example, are pulled slightly more strongly, but these effects are usually minor. Astronauts on the ISS certainly don't feel it. The term "zero-g" just means you don't feel any gravity, not that there isn't any. Of course, if you were in the void, far far far away from any gravity source, you would still be in "zero-g" because you wouldn't feel any... because there is none. "g" here refers to a thing called "gravitational acceleration on Earth" btw, which is $g=9.81\:\rm m/s^2$ . Fighter pilots go through 5g and more because they accelerate a lot... gravitation itself being irrelevant here, it's all about the felt acceleration itself. Emphasis on felt. Astronauts accelerate too, as I've said, but they, the persons, don't feel it, because they aren't squished against anything, like fighter pilots are squished into their seats.
{ "source": [ "https://physics.stackexchange.com/questions/482520", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/225252/" ] }
482,541
We know that when we drive an alternating current through a wire, it generates an electromagnetic wave which propagates outward. But if we had a supply which could generate an alternating current at 610 to 670 terahertz, would the wire then generate blue light?
It would be hard to generate such a current and harder still to get it to produce any blue light - though this is theoretically possible. The main problem is that you are probably thinking of a metal wire. Metals absorb visible light, both reflecting it and turning it into lattice vibrations. This is because the wavelength of visible light is just a few thousand atoms long in size so it is in a "sweet spot" for exciting solid crystals. In fact, the tendency for solid objects to absorb, reflect and otherwise interact with visible light is why it is "visible". In a normal radio wave, your metal wire will need to be on the order of a wavelength of the radio wave you want to produce. This is typically on the order of meters. Automobiles of the 20th century had metal wires sticking out of them, about 1 meter long, called "antennas", to catch such waves. But for blue light the wavelength is only about 5 x $10^{-7}$ meters so any useful antenna would be very tiny because an "electron density wave" in your wire would be "turning around" before it got very far. The electromagnetic spectrum is divided up not so much by "wavelength and frequency" as by the way that any given part of the spectrum interacts with matter. So, radio waves will interact via electron currents in long metal wires. But visible light interacts more with lattice vibrations and non-ionizing atomic transitions. So, "current in a wire" type emission works in frequency up to a thing called "the terahertz gap" https://en.wikipedia.org/wiki/Terahertz_gap . Above this frequency other emission techniques are usually required. Blue light is about three orders of magnitude higher in frequency than the terahertz gap.
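To make the scale mismatch concrete, here is a quick sketch computing $\lambda = c/f$ for a few frequencies (the specific example frequencies are my own illustrative choices; 650 THz sits inside the question's blue-light range):

```python
c = 3.0e8  # m/s, speed of light

# A useful antenna is roughly a wavelength in size: lambda = c / f
for name, f in [("FM radio", 100e6), ("WiFi", 2.4e9), ("blue light", 650e12)]:
    wavelength = c / f
    print(f"{name:>10}: f = {f:.1e} Hz, lambda = {wavelength:.2e} m")
```

FM radio comes out at about 3 m (hence meter-long car antennas), WiFi at about 12 cm, and blue light at about $5 \times 10^{-7}$ m, a "wire" only a few thousand atoms long.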
{ "source": [ "https://physics.stackexchange.com/questions/482541", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/210956/" ] }
483,301
This may be silly question, but why does a helium ballon rise? I know it rises because helium is less dense than air. But what about the material of the ballon. It is made up of rubber/latex which is quite denser than air. An empty ballon with no air in it falls, so why does a helium filled balloon rise?
The buoyant force* depends on the volume of the object (or at least the volume of the object submerged in the fluid) and the density of the fluid that object is in, not necessarily/directly on the density of the object. Indeed, you will usually see the buoyant force written as $$F_B=\rho_{\text{fluid}}V_{\text{sub}}g=w_{\text{disp}}$$ which just shows that the buoyant force is equal to the weight of the displaced fluid. We usually talk about more dense objects sinking and less dense objects floating because for homogeneous objects of mass $m$ we can write the volume as $V=m/\rho$ , so that when we compare the buoyant force to the object's weight (for example, wanting the object to float) we get $$m_{\text{obj}}g<F_B=\frac{\rho_{\text{fluid}}m_{\text{obj}}g}{\rho_{\text{obj}}}$$ i.e. $$\rho_{\text{obj}}<\rho_{\text{fluid}}$$ This is what we are familiar with, but keep in mind that this emerges from the buoyant force's dependency on the object's volume (not density) after we assumed that we had a homogeneous object. If our object is not homogeneous (like the balloon), then you have to be more careful. You do not just "plug in" the density of the rubber, since it is not purely the volume of the rubber material that is displacing the surrounding air. You have to differentiate between the entire balloon and the rubber material. So, the buoyant force would be given by $$F_B=\rho_{\text{fluid}}V_{\text{balloon}}g$$ whereas the weight is given by $$w_{\text{balloon}}=(m_{\text{rubber}}+m_{\text{He}})g=(\rho_{\text{rubber}}V_{\text{rubber}}+\rho_{\text{He}}V_{\text{He}})g$$ So, if we want floating, we want $$w_{\text{balloon}}<F_B$$ $$(\rho_{\text{rubber}}V_{\text{rubber}}+\rho_{\text{He}}V_{\text{He}})g<\rho_{\text{fluid}}V_{\text{balloon}}g$$ i.e. 
$$\frac{\rho_{\text{rubber}}V_{\text{rubber}}+\rho_{\text{He}}V_{\text{He}}}{V_{\text{balloon}}}<\rho_{\text{fluid}}$$ We end up with something a little more complicated, but if we treat the balloon as a single object then we get a similar result to the homogeneous case. Just define the density of the balloon as $$\rho_{\text{balloon}}=\frac{m_{\text{rubber}}+m_{\text{He}}}{V_{\text{balloon}}}$$ and so we end up with $$\rho_{\text{balloon}}<\rho_{\text{fluid}}$$ It should be noted that it's not just the fact that helium is in the balloon that causes it to rise then. You still need the volume of the balloon to be large enough to displace enough of the surrounding air. However, helium is used because its density is so low that as we add more helium to make the balloon (and hence the buoyant force) larger, we are not adding much weight, so the buoyant force can eventually overcome the balloon's weight. To qualitatively summarize this, the density of the object only matters when we look at the object's weight. The volume of the object (more specifically, the volume the object takes up in the fluid) is what matters for the buoyant force. The relation of these two forces is what determines if something sinks or floats. If your object isn't homogeneous then you should look at the overall density of the object, which is the total mass of the object divided by the volume the object takes up in the fluid. * If you want to know about where the buoyant force comes from, then Accumulation's answer is a great explanation. I did not address it here, because your question is not asking about where the buoyant force comes from. It seems like you are just interested in how comparisons of densities can determine whether something floats or sinks, so my answer focuses on this.
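The floating condition above can be checked with rough numbers; the densities below are typical textbook values and the shell mass and radius are assumed for illustration, not taken from the answer:

```python
import math

# Rough illustrative densities, kg/m^3 (assumed, not from the answer)
rho_air, rho_he = 1.225, 0.18
m_rubber = 0.003           # ~3 g rubber shell (assumed)
r = 0.15                   # balloon radius in meters (assumed)

V = (4 / 3) * math.pi * r**3          # balloon volume = displaced air volume
buoyant_mass = rho_air * V            # mass of the displaced air
total_mass = m_rubber + rho_he * V    # rubber shell + helium fill

# The condition derived above: total weight < weight of displaced fluid
floats = total_mass < buoyant_mass
```

With these numbers the displaced air is around 17 g while the balloon plus helium is only about 5.5 g, so the balloon rises despite the dense rubber.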
{ "source": [ "https://physics.stackexchange.com/questions/483301", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
483,307
I am finding a problem in understanding the sign convention associated with measuring the dip angle (Magnetic Inclination) in southern and northern hemisphere. I have tried to consult this matter on several books and found contradictory text.
{ "source": [ "https://physics.stackexchange.com/questions/483307", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/140644/" ] }
483,607
I hope you will understand me correctly because there are some things that I translated. It is known that we see the world around us thanks to photons that are reflected from the surfaces of objects, so I have the following question: If you imagine, for example, a huge gray column 200 meters from the eyes. Why are the photons reflected off this column flying straight into your eyes all the time while you're looking? I mean, is it some huge stream flying in all directions, parts of which will necessarily fall into the eyes? How does this stream not mix with others? What does that even look like? An infinite number of randomly intersecting and moving points? How do we distinguish which photons are reflected from what?
Yes - we are surrounded by a "sea of photons". An individual object that reflects light (let's assume a Lambertian reflector - something that reflects incident photons in all directions) sends some fraction of the incident photons in all directions. "Some fraction" because the surface will absorb some light (there is no such thing as 100% white). The propagation of photons follows linear laws (at normal light intensities) so that two photons, like waves, can travel on intersecting paths and continue along their way without disturbing each other. Finally it is worth calculating how many photons hit a unit area per unit time. If we assume sunlight, we know that the intensity of the light is about 1 kW/m$^2$. For the purpose of approximation, if we assume every photon had a wavelength of 500 nm, it would have an energy of $E = \frac{hc}{\lambda} = 3.97 \cdot 10^{-19}\ \mathrm{J}$. So one square meter is hit with approximately $2.5\cdot 10^{21}$ photons per second. Let's assume your grey column reflects just 20% of these and that the visible component of light is about 1/10th of the total light (for the sake of this argument I can be off by an order of magnitude... this is for illustration only). At a distance of 200 m, these photons would have spread over a sphere with a surface of $4\pi R^2 \approx 500,000\ m^2$, or $10^{14}$ photons per square meter per second. If your pupil has a diameter of 4 mm, an area of $12\ mm^2$, it will be hit by about $12\cdot 10^8$ photons per second from one square meter of grey surface illuminated by the sun from 200 m away. At that distance, the angular size of that object is about 1/200th of a radian. "Normal" vision is defined as the ability to resolve objects that are about 5 minutes of arc (there are 60 minutes to a degree and about 57 degrees to a radian). In other words, you should be able to resolve 1/(57*(60/5)), or about 1/700 of a radian. That's still lots of photons...
Finally you ask "how do we distinguish what photons are reflected from what"? For this we have to thank the lens in our eye. A photon has a particular direction, and thanks to the lens its energy ends up on a particular part of the retina (this is what we call "focusing"). Photons from different directions end up in a different place. Nerves on the back of the retina tell us where the photons landed - and even what color they were. The visual cortex (part of the brain) uses that information to make a picture of the surrounding world in our mind. It's nothing short of miraculous.
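The chain of estimates in the answer can be reproduced step by step; every number here is one of the answer's own assumptions (1 kW/m$^2$ sunlight, 500 nm photons, 20% reflectance, 1/10 visible, 200 m distance, 4 mm pupil):

```python
import math

# Photon energy E = h*c/lambda for 500 nm light
h, c = 6.626e-34, 3.0e8
lam = 500e-9
E = h * c / lam                  # ~3.97e-19 J per photon

flux = 1000 / E                  # photons per m^2 per s in ~1 kW/m^2 sunlight
reflected = flux * 0.2 * 0.1     # 20% reflectance, ~1/10 visible fraction

R = 200.0                        # distance to the column, m
per_m2 = reflected / (4 * math.pi * R**2)   # spread over a sphere of radius R

pupil = math.pi * (0.002)**2     # area of a 4 mm diameter pupil, m^2
into_eye = per_m2 * pupil        # photons/s into the eye per m^2 of surface
```

The result lands at roughly $10^9$ photons per second, matching the answer's $12\cdot 10^8$ figure.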
{ "source": [ "https://physics.stackexchange.com/questions/483607", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/233452/" ] }
483,912
When light refracts in a prism it creates a rainbow. My question is, why don’t all windows or transparent objects create this dispersion, i.e. why is the refractive index dependent on frequency in a dispersive prism, and not in a window? (My guess is that the refractive index doesn’t change as much, but I don’t really have an idea).
It does create a rainbow, but it is almost impossible to notice. When the direction of light changes at a glass-air interface, there is always dispersion: light of different wavelengths refracts at different angles, creating a rainbow. The issue is that when the light hits the second glass-air interface of a flat window, the incidence angle is opposite, the dispersion is almost perfectly compensated, and the light recombines into a white beam. In this recombined beam there is no angular difference between the colors, just a slight lateral offset - so you can only barely see a rainbow at the sharp edges of the beam, and the colors do not diverge any further. You can still notice a rainbow if you take very thick glass (~50 mm) and a very narrow, perfectly collimated beam (<0.05 mm). (A simulation with exaggerated dispersion makes the effect easy to see.) In a prism, where the incidence angles for the first and second refractions are very different, this compensation does not occur and the rainbow is much easier to see, since there is now an angular difference between the colors.
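The compensation at the second face of a parallel slab follows directly from Snell's law; a minimal sketch (the incidence angle and the two refractive indices are illustrative values, not measurements):

```python
import math

def snell(theta_i, n1, n2):
    """Refraction angle in radians from Snell's law: n1 sin(t1) = n2 sin(t2)."""
    return math.asin(n1 * math.sin(theta_i) / n2)

theta = math.radians(40)         # incidence angle (assumed)
n_red, n_blue = 1.51, 1.53       # illustrative glass indices for two colors

def slab_exit(n):
    """Exit angle after entering and leaving a slab with parallel faces."""
    inside = snell(theta, 1.0, n)    # air -> glass
    return snell(inside, n, 1.0)     # glass -> air through the parallel face

exit_red, exit_blue = slab_exit(n_red), slab_exit(n_blue)
```

Both colors leave at exactly the original incidence angle, so there is no angular dispersion from a window - only the lateral offset mentioned above. In a prism the second face is tilted, so the second refraction does not undo the first.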
{ "source": [ "https://physics.stackexchange.com/questions/483912", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/196529/" ] }
483,992
The molecular weight of petrol is much higher than that of water, but when it comes to the physical property of weight, one litre of water weighs more than one litre of petrol. How is that possible?
Because water molecules are small and pack tightly together, causing water to have a greater density than petrol.
{ "source": [ "https://physics.stackexchange.com/questions/483992", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/233614/" ] }
484,002
The TDSE (time-dependent Schrödinger equation) is given by $$i\hbar\frac{d}{dt}|\psi(t)\rangle=\hat{H}|\psi(t)\rangle$$ Where $|\psi(t)\rangle\in\mathcal{H}$ is a state in the Hilbert space $\mathcal{H}$ . Now, in the position basis, the TDSE is given by (WLOG, consider the 1-dimensional case) $$i\hbar\frac{\partial}{\partial t}\Psi(x,t)=\hat{H}\Psi(x,t)$$ Where $$\Psi(x,t)=\langle x|\psi(t)\rangle.$$ I assume that because the first equation is more fundamental, it should be possible to derive the second from it. I thus took the inner product of the first equation with the bra $\langle x|$ . The LHS rather easily reduces to the second equation by the very definition of $\Psi(x,t)$ , but I had trouble with the RHS and the Hamiltonian operator. I know that $$\langle x|\hat{H}\psi(t)\rangle=\langle \hat{H}x|\psi(t)\rangle,$$ but couldn't advance further. How would you show that $$\langle x|\hat{H}\psi(t)\rangle=\hat{H}\langle x|\psi(t)\rangle=\hat{H}\Psi(x,t)~?$$ Is this even a sensible approach?
{ "source": [ "https://physics.stackexchange.com/questions/484002", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/173028/" ] }
484,026
In the 2019 miniseries "Chernobyl", ordinary objects are depicted as being capable of becoming radioactive, such as clothes, water, stones. How exactly does something composed of a non-radioactive mass, become radioactive? I'm aware of the differences between alpha, beta, and gamma radiation, and I know how ionizing radiation works. However, it isn't clear to me how any radiation, including ionizing radiation makes something radioactive in any long-lasting sense of the word. I can imagine that ionizing radiation excites the atoms in the object, which makes the atom emit a photon until it becomes relaxed again. However, this doesn't sound like something that has a very long lasting effect? I can also imagine that radioactive particles, such as those from U-235, may stick to clothes or contaminate water. However, this too seems not that plausible, is there really that much U-235 in a nuclear reactor for dust particles to be a considerable problem in this regard? I'm not arguing that this isn't true, it simply isn't clear to me how the mechanism behind it works. I'm pretty sure this isn't clear to most non-physicists either.
There are three main effects: The first, and simplest, is particulate contamination. The uranium fuel rods were pulverized in the explosion and so dust particles contaminated with uranium and other isotopes (fission products in the fuel rods) were scattered to the wind. Don't underestimate the amount of dust and smoke released. There were several tons of highly radioactive material in the core. That makes a lot of dust. This is what produced the long-travelling dust cloud that triggered detectors in Minsk and Sweden and elsewhere. The dust can get on clothes and can be transferred by touch in the same way any contamination is spread. The problem for health is that each tiny dust particle contains trillions of radioactive atoms that are constantly decaying and emitting radiation. If you get some particles in your lungs, they will sit there radiating away into the surrounding tissues for many years. Not good. The second effect is from immediate (prompt) gamma radiation from the core. This was what produced the light effect above the reactor and why Legasov wouldn't let the helicopter pilot fly over the core. This is mainly what killed the firemen and the shift crew. Here, you have basically a beam of radiation coming directly from the dense core and the immense rate of decay occurring there. A third effect is that the intense radiation (gammas and neutrons) can affect nuclei in stable atoms and activate them. That is, it converts the stable isotopes into radioactive isotopes, which will later decay. This is well-described in the other answers.
{ "source": [ "https://physics.stackexchange.com/questions/484026", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/233638/" ] }
484,543
I can't say confidently that an atom is mostly vacuum, but I am somewhat sure of it because electrons and nucleons cover little space, and everything other than these elementary particles in an atom is vacuum. Why is everything around us rigid even if the atom is mostly vacuum? EDIT The question over which this question is marked duplicate has a completely different premise: it asks why things around us do not pass through each other, whereas this question asks about the rigidity of the materials around us.
I think the other answers which mention electrostatics capture the physics behind things being rigid correctly. However, I wanted to specifically point to your question of "why are they rigid when they're mostly vacuum?" I'd like to draw your attention to Guyed Masts : A Guyed mast is a tower whose rigidity depends on several guy-wires surrounding it. If you're treating the mast in the picture above as a rigid structure, you have to include the guy wires too. If you didn't include them, the tower would flex and collapse. And if you look at the whole structure, almost all of it is empty air between the wires. This points to why things are rigid. If the electrostatic forces between atoms are configured into a stable configuration, like how the mast with guy-wires is stable, then it can be rigid even though most of it is empty space. It's the structure which makes things rigid or not.
{ "source": [ "https://physics.stackexchange.com/questions/484543", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/230533/" ] }
484,675
This new finding by Minev et al. seems to suggest that transitions between atomic states are not instantaneous, but continuous processes wherein a superposition smoothly adjusts from favoring one state to another (if I understand it correctly). The authors also claim to be able to catch a system "mid-jump" and reverse it. Popular articles are here and *here . I am curious if this finding rules out any interpretations of QM. It seems to generally go against the Copenhagen attitude, which describes measurements as collapsing physical systems into a definite classical state. The popular articles indeed claim that the founders of QM would have been surprised by the new finding. The link with the asterisk mentions that something called "quantum trajectories theory" predicts what was observed. Is this an interpretation, or a theory? And are they implying that other interpretations/theories don't work?
No. All news stories about this result are extremely misleading. The "quantum jump" paper demonstrates an interesting and novel experimental technique. However, it says absolutely nothing about the interpretation of quantum mechanics. It agrees with all proper interpretations, including the Copenhagen interpretation. What the researchers actually did When a quantum system transitions between two states, say $|0 \rangle$ to $|1 \rangle$ , the full time-dependence of the quantum state looks like $$|\psi(t) \rangle = c_0(t) |0 \rangle + c_1(t) |1 \rangle.$$ The amplitude $c_0(t)$ to be in $|0 \rangle$ smoothly and gradually decreases, while the amplitude $c_1(t)$ to be in $|1 \rangle$ smoothly and gradually increases. You can read this off right from the Schrodinger equation, and it has been known for a hundred years. It is completely standard textbook material. The researches essentially observed this amplitude changing in the middle of a transition, in a context where nobody had done so before. The authors themselves emphasize in their paper that what they found is in complete agreement with standard quantum mechanics. Yet countless news articles are describing the paper as a refutation of "quantum jumps", which proves the Copenhagen interpretation wrong and Bohmian mechanics right. Absolutely nothing about this is true. Why all news articles got it wrong The core problem is that popsci starts from a notion of "quantum jumps", which itself is wrong. As the popular articles and books would have it, quantum mechanics is just like classical mechanics, but particles can mysteriously, randomly, and instantly teleport around. Quantum mechanics says no such thing. This story is just a crutch to help explain how quantum particles can behave differently from classical ones, and a rather poor one at that. (I try to give some better intuition here .) No physicist actually believes that quantum jumps in this sense are a thing. 
The experiment indeed shows this picture is wrong, but so do thousands of existing experiments. The reason that even good popsci outlets used this crutch is two-fold. First off, the founders of quantum mechanics really did have a notion of quantum jumps. However, they were talking about something different: the fact that there is no quantum state "in between" $|0 \rangle$ and $|1 \rangle$ (which, e.g. could be atomic energy levels) such as $|1/2 \rangle$ . The interpolating states are just superpositions of $|0 \rangle$ and $|1 \rangle$ . This is standard textbook material: the states are discrete, but the time evolution is continuous because the coefficients $c_0(t)/c_1(t)$ can vary continuously. But the distinction is rarely made in popsci. (To be fair, there was an incredibly short period in the tumultuous beginning of " old quantum theory " where some people did think of quantum transitions as discontinuous. However, that view has been irrelevant for a century. Not every early quote from the founders of QM should be taken seriously; we know better now.) Second off, the original press release from the research group had the same language about quantum jumps. Now, I understand what they were trying to do. They wanted to give their paper, about a rather technical aspect of experimental measurement, a compelling narrative. And they didn't say anything technically wrong in their press release. But they should've known that their framing was basically begging to be misinterpreted to make their work look more revolutionary than it actually is. Interpretations of quantum mechanics There's a very naive interpretation of quantum mechanics, which I'll call "dumb Copenhagen". In dumb Copenhagen, everything evolves nicely by the Schrodinger equation, but when any atomic-scale system interacts with any larger system, its state instantly "collapses". 
This experiment indeed contradicts dumb Copenhagen, but it's far from the first to; physicists have known that dumb Copenhagen doesn't work for 50 years. (To be fair, it is used as a crutch in introductory textbooks to avoid having to say too much about the measurement process.) We know the process of measurement is intimately tied to decoherence, which is perfectly continuous. Copenhagen and, say, many worlds just differ on how to treat branches of a superposition that have completely decohered. Another issue is that proponents of Bohmian mechanics seem to latch onto every new experimental result and call it a proof that their interpretation alone is right, even when it's perfectly compatible with standard QM. To physicists, Bohmian mechanics is a series of ugly and complicated hacks, about ten times as bad as the ether, which is why it took last place in a poll of researchers working in quantum foundations. But many others really like it. For instance, philosophers who prefer realist interpretations of quantum mechanics love it because it lets them say that quantum mechanics is "really" classical mechanics underneath (which actually isn't true even in Bohmian mechanics), and hence avoid grappling with the implications of QM proper. (I rant about this a little more here .) Quantum mechanics is one of the most robust and successful frameworks we have ever devised. If you hear any news article saying that something fundamental about our understanding of it has changed, there is a 99.9% chance it's wrong. Don't believe everything you read!
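The smooth evolution of $c_0(t)$ and $c_1(t)$ described above can be sketched for the simplest resonantly driven two-level system. With $\hbar = 1$ and an assumed coupling strength $\Omega$, the exact textbook result is $|c_1(t)|^2 = \sin^2(\Omega t/2)$ - nowhere does the probability "jump":

```python
import math

Omega = 1.0  # coupling strength (assumed; hbar = 1 units)

def p1(t):
    """Probability of being in |1>, starting from |0>: sin^2(Omega*t/2)."""
    return math.sin(Omega * t / 2)**2

# Sample the transition probability mid-transition: it rises continuously
samples = [p1(0.1 * k) for k in range(32)]
monotone_start = all(samples[k] <= samples[k + 1] for k in range(15))
```

The probability passes continuously through every value between 0 and 1 before reaching 1 at $t = \pi/\Omega$, which is exactly the smooth amplitude change the experiment observed.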
{ "source": [ "https://physics.stackexchange.com/questions/484675", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/177858/" ] }
485,008
Sorry for the primitive question but when we inflate a rubber balloon and tie the end, its volume increases until its inner pressure equals atmospheric pressure. But after that equality is obtained why does the air goes out when we pop the balloon? If there is pressure equality what causes the air flow?
For an inflated and tied balloon, the inner and outer pressures aren't equal. The inner pressure is higher by an amount $2 \gamma |H|$, where $\gamma$ is the inflated balloon's surface tension and $H$ is its mean curvature (which is $-1/R$ for a sphere). This is called the Young-Laplace equation. After the balloon is untied and deflates, the pressures equalize and the surface tension becomes negligible.
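Plugging numbers into the answer's formula gives a sense of scale; both the effective surface tension and the radius below are assumed illustrative values (real stretched rubber is more complicated than a simple constant-$\gamma$ surface):

```python
# Young-Laplace excess pressure for a sphere: dP = 2*gamma*|H| = 2*gamma/R
gamma = 30.0   # effective surface tension of stretched rubber, N/m (assumed)
R = 0.10       # balloon radius, m (assumed)

dP = 2 * gamma / R   # excess pressure inside the balloon, in pascals
```

With these numbers the interior sits a few hundred pascals above atmospheric - small compared with the ~100 kPa ambient pressure, which is why the imbalance is easy to overlook, but it is exactly what drives the air out when the balloon pops.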
{ "source": [ "https://physics.stackexchange.com/questions/485008", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/234121/" ] }
485,037
The Celsius unit is arbitrarily defined, based on the boiling and freezing point of water. Is it a coincidence, then, that the SI unit of temperature Kelvin, which is used in all natural equations, has the same length as the Celsius unit?
Kelvin history The kelvin unit was designed so that a change of $1\ \text{K}$ corresponds to a change of $1\ ^\circ\text{C}$ . This makes sense because people were working in Celsius at the time. Kelvin just realized that the Celsius scale couldn't go down arbitrarily negative. It stopped at $-273.15\ ^\circ\text{C}$ . The idea was to then make a new scale, the Kelvin scale which has the same gradations as the Celsius scale (for compatibility with the existing scale) but with the property that $0\ \text{K}$ corresponds to this special $-273.15\ ^\circ\text{C}$ temperature. In other words, it is not a coincidence but rather the kelvin was historically defined so that the two scales had the same gradation. There is a bit of confusion regarding the triple point of water ( $273.16\ \text{K}$ , or $0.01\ ^\circ\text{C}$ ) and the freezing point of water at standard pressure ( $273.15\ \text{K}$ or $0.00\ ^\circ\text{C}$ ). Let me clarify. The Celsius, or Centigrade, scale was historically defined as follows. $0\ ^\circ\text{C}$ was defined to be the temperature (measured by, for example, a mercury thermometer) at which water (at standard atmospheric pressure: $101\,325\ \text{Pa}$ ) freezes. $100\ ^\circ\text{C}$ was chosen to be the temperature (at standard pressure) at which water boiled. Thus one degree Celsius is a gradation of temperature (as measured by a mercury thermometer, for example) equivalent $\frac{1}{100}$ of the temperature difference between the freezing and boiling points of water at standard pressure. As early as the $17^{\text{th}}$ century scientists began to understand that the Celsius scale didn't go infinitely negative. In fact, the value where the Celsius scale would stop could be calculated and measured and it was found to occur at around $-273\ ^\circ\text{C}$ . It seems to me that further refinement of laboratory experiments found the temperature to be $-273.15\ ^\circ\text{C}$ . 
That is if you started at the freezing point of water $(0\ ^\circ\text{C})$ , and went down by $273.15$ of the gradations described above, you would hit absolute zero. Ok, we still haven't rigorously defined the kelvin. In 1967 people wanted to give good definitions to the units. The freezing point of water was a bad physical reference point because it depended on the water being at atmospheric pressure. But pressure varies with the weather and elevation on Earth so different labs might calibrate their thermometers differently by this metric. However, the temperature of the triple point of water is unambiguous (at least regarding pressure) because it only occurs when the pressure is at the right value. The triple point of water occurs at $0.01\ ^\circ\text{C}$ . Thus, in 1967 it was resolved to define the kelvin as $\frac{1}{273.16}$ of the temperature of the triple point of water. This sets 1) $0\ \text{K}$ to be absolute zero as desired, 2) ensures the gradations of Kelvin were referred to a decent physical reference quantity and 3) has the effect that gradations of the Kelvin scale are the exact same as gradations of the Celsius scale. I will leave the answer here for now. See A Peruzzi 2018 J. Phys.: Conf. Ser. 1065 12011: On the redefinition of the kelvin for details on the redefinition of the kelvin which went into effect last month.
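The upshot of the history above is a fixed offset between the two scales with identical gradations; a trivial sketch:

```python
def c_to_k(t_celsius):
    """Celsius -> kelvin: identical gradations, zero shifted by 273.15."""
    return t_celsius + 273.15

# Freezing point of water at standard pressure
freeze = c_to_k(0.0)      # 273.15 K
# Triple point of water, the pre-2019 defining reference for the kelvin
triple = c_to_k(0.01)     # 273.16 K
```

Because only the zero point differs, a temperature *difference* has the same numerical value in kelvins as in degrees Celsius.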
{ "source": [ "https://physics.stackexchange.com/questions/485037", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/141461/" ] }
485,091
If a rocket in space fires its thrusters, it is propelled forwards as per the laws of motion. This can be measured by its position relative to other bodies in the universe. Hypothetically if there was a universe that was completely empty except for the rocket and it then fired its thrusters, surely the same forces would apply (even if its movement could not be measured). Just because we can’t measure an event, is that the same thing as saying that it never happened? Is it correct to say that the rocket didn’t move?
Within the context of Newtonian mechanics, there's a simple answer: velocities are not absolute, but differences in velocities are. So you can state that acceleration occurs unambiguously. In special relativity, this is a bit more complicated because of relativistic velocity addition, but all observers can unambiguously compute a "proper" acceleration for every object, which is the acceleration in that object's momentary rest frame. In fact, the same logic still works in general relativity; acceleration is unambiguous even in a universe without matter. However, in certain philosophical stances inspired by general relativity, the question is trickier because one might take a hardline Machian position, where motion should only be defined in relation to other matter. But in this case you can still answer the question because there is motion relative to the exhaust.
{ "source": [ "https://physics.stackexchange.com/questions/485091", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/234141/" ] }
485,919
What is the difference between these two? $\langle x|x\rangle$ and $|x\rangle\langle x|$ Are they the same? If they're the same, why are they used in these two different forms?
They are not the same. One is the outer product and one is the inner product. In the finite-dimensional real-number case for example, if $$|x\rangle = \begin{bmatrix}1\\2\\3\end{bmatrix},$$ then $$|x\rangle\langle x| = \begin{bmatrix}1\\2\\3\end{bmatrix} \begin{bmatrix}1&2&3\end{bmatrix} = \begin{bmatrix}1&2&3\\2&4&6\\3&6&9\end{bmatrix},$$ while $$\langle x | x\rangle = \begin{bmatrix}1&2&3\end{bmatrix} \begin{bmatrix}1\\2\\3\end{bmatrix} = 1 + 4 + 9 = 14.$$
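If it helps, the same example can be checked numerically with NumPy, whose `np.outer` and `np.inner` functions compute exactly these two products:

```python
import numpy as np

x = np.array([1, 2, 3])

outer = np.outer(x, x)   # |x><x| : a 3x3 matrix (an operator)
inner = np.inner(x, x)   # <x|x>  : a single number

print(outer)
# [[1 2 3]
#  [2 4 6]
#  [3 6 9]]
print(inner)   # 14
```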
{ "source": [ "https://physics.stackexchange.com/questions/485919", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/232303/" ] }
486,300
I recently read an interesting article that states that a human being can perceive a flash of as few as 5 or so photons , and the human eye itself can perceive even a single photon. The brain will filter this out, however. I wanted to calculate how far away you'd have to be standing from our sun for not a single one of its photons to be hitting your pupil over a given second. The first thing I did was assume that the sun emits $10^{45}$ photons per second, because, well, that's the only number I could find through internet research. The next step is to assume that the average angle between photons emitted from the sun is pretty much the same, and is equal to $3.6 × 10^{-43}$ degrees. The next step is to assume that the average human pupil diameter is 0.005 meters, and then draw a triangle like so: The length of the white line through the center of the triangle equals the distance at which two photons from the sun would be further apart than your pupil is wide, meaning not even one photon should hit your eye. I broke the triangle into two pieces and solved for the white line by using the law of sines, and my final result is ridiculous. $3.97887×10^{41} $ meters is the length of the white line. For reference, that's over $10^{14}$ times the diameter of the observable universe. My conclusion says that no matter how far you get from the sun within our observable universe, not only should some of the photons be hitting your pupil, but it should be more than enough for you to visually perceive. But if I was right, I'd probably see a lot more stars from very far away every night when I looked up at the sky. Why is my calculation inconsistent with what I see?
The problem with your derivation is that you distributed the photons over a 360° circle, so the photons only spread out in a two-dimensional circle. This means that the intensity of light drops off at a rate proportional to $1/r$ instead of $1/r^2$ (where $r$ is the distance from the center of the sun) like it does in a three-dimensional universe. So, starting with $N$ photons emitted per second, the intensity of photons at a distance $r$ from the sun is given by $$I = \frac{N}{4\pi r^2}.$$ This comes from spreading out the photons over the surface of a sphere surrounding the sun. The number of photons seen by your eye per second is just the intensity multiplied by the area of the iris of your eye: $$n = IA_\text{eye} = \frac{N}{4\pi r^2}A_\text{eye}.$$ You are looking for the distance beyond which you would see less than one photon per second: $$n = \frac{N}{4\pi r^2}A_\text{eye} \lt 1$$ Solving for $r$ gives $$r > \sqrt\frac{NA_\text{eye}}{4\pi}$$ Plugging in your numbers gives $$r > \sqrt{\frac{(10^{45})\pi(0.005\,\textrm{m}/2)^2}{4\pi}} = 4\cdot10^{19} \,\textrm{m} \approx 4000\,\textrm{light-years}$$ This distance is still well within our own galaxy.
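Plugging the numbers in, with the caveat that $10^{45}$ photons per second is the rough figure taken from the question:

```python
import math

N = 1e45            # photons emitted per second (rough figure from the question)
d_pupil = 0.005     # pupil diameter in metres
A_eye = math.pi * (d_pupil / 2) ** 2

# Distance beyond which fewer than one photon per second enters the pupil:
r = math.sqrt(N * A_eye / (4 * math.pi))

print(f"{r:.2e} m")              # about 4e19 m
print(f"{r / 9.461e15:.0f} ly")  # about 4000 light-years
```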
{ "source": [ "https://physics.stackexchange.com/questions/486300", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/234714/" ] }
486,790
Is a single radon-daughter atom in air a solid? The Wikipedia article on radon says: Unlike the gaseous radon itself, radon daughters are solids and stick to surfaces, such as dust particles in the air. If such contaminated dust is inhaled, these particles can also cause lung cancer. The statement made me wonder about the right terminology. Is a single radon-daughter atom in air, like 218 Po, a solid or a gas? I would think it is a gas because it resembles a vapor atom, or a sublimated atom from a solid. And maybe it is a 'potential' solid?
When the Wikipedia article says that radon daughters are "solids", the authors actually mean, " If you get a bunch of radon daughter atoms together , then they would form a solid." The state of matter is a property of a large number of atoms, so a single atom in isolation doesn't strictly have a well-defined state. That said, states of matter are primarily a function of the interactions between atoms. Atoms that weakly interact* with themselves and their environment are likely to be gases, while atoms that strongly interact* with other atoms are likely to be liquids or solids. So Wikipedia appears to be using the state of matter as a shorthand for the strength of interactions. Essentially, radon daughters, unlike radon (which is a noble gas), stick to each other and to the walls, which is the same property that makes large collections of radon daughters solids. *"weakly" and "strongly" don't refer to the fundamental weak and strong nuclear interactions here, of course, but to the general idea of having a small or large coupling constant in whatever interaction you're examining.
{ "source": [ "https://physics.stackexchange.com/questions/486790", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/71627/" ] }
487,011
Given a planet that orbits a star, and a moon that orbits that planet, is it possible to define a maximum orbital radius of that moon, beyond which the moon would no longer orbit the planet, but the star instead? I initially (naively) thought this point would be where the star's gravity outweighed that of the planet: $$d_\text{max} = d_\mathrm p - d_\mathrm px$$ $$x = \frac{1}{\sqrt{\frac{m_\mathrm p}{m_\mathrm s}}+1}$$ Where: $d_\text{max} = $ maximum orbital radius of the moon (around the planet), $d_\mathrm p =$ orbital radius of the planet (around the sun), $m_\mathrm p =$ mass of the planet, $m_\mathrm s = $ mass of the star. But I quickly realised this assumption was wrong (unless my shoddy maths is wrong, which is very possible), because this gives a value of $258\,772\ \mathrm{km}$ using values of the Sun, Moon, and Earth. $125\,627\ \mathrm{km}$ closer to the Earth than the Moon's actual orbital radius (values from Wikipedia). Is there a maximum orbital distance? How can it be calculated?
The concept you're looking for is that of a planet's Hill sphere. If a planet of mass $m$ is in a roughly circular orbit of radius $a$ about a star of mass $M$ , then the radius of this "sphere" is given by $$ r_H = a \sqrt[3]{\frac{m}{3M}}. $$ For the Sun-Earth system, this yields $r_H \approx 0.01 \text{ AU}$ , or about 1.5 million kilometers. The calculation given in the Wikipedia article shows how to derive this in terms of rotating reference frames. But for a qualitative explanation of why your reasoning didn't work, you have to remember that the moon and the planet are not stationary; both of them are accelerating towards the star. This means that it's not the entire weight of the moon that matters, but rather the tidal force on the moon as measured in the planet's frame. This effect, along with the fact that the centripetal force needed for the star to "steal" the planet is a bit less when the moon is between the star and the planet, leads to the expression given above. As pointed out by @uhoh in the comments, the L1 and L2 Earth-Sun Lagrange points are precisely this distance from the Earth. These are precisely the points where the gravitational forces of the Earth and the Sun combine in such a way that an object can orbit the Sun with the same period as the Earth, but at a different radius. In a rotating reference frame, this means that the influences of the Earth, the Sun, and the centrifugal force are precisely canceling out; any closer to Earth than that, and the Earth's forces dominate. Thus, the L1 and L2 Lagrange points are on the boundary of the Hill sphere.
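A quick numerical check of the formula with rough Sun–Earth values (the masses and orbital radius below are approximate textbook numbers of my own, not taken from the answer):

```python
a = 1.496e11        # Earth's orbital radius in metres (1 AU)
m_earth = 5.97e24   # kg
m_sun = 1.989e30    # kg

# Hill-sphere radius: r_H = a * (m / (3 M))^(1/3)
r_H = a * (m_earth / (3 * m_sun)) ** (1 / 3)

print(f"{r_H:.2e} m")   # about 1.5e9 m, i.e. ~1.5 million km (~0.01 AU)
```

The Moon's orbital radius of about $3.8\times10^8$ m sits comfortably inside this sphere, which is why it stays bound to the Earth.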
{ "source": [ "https://physics.stackexchange.com/questions/487011", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/235029/" ] }
487,180
Can someone please explain, intuitively (without any formula, I understand the formulas), why the equivalent capacitance of capacitors in series is less than any individual capacitor's capacitance? Let's take a simple case. Say we have 2 capacitors with capacitance 2 (ignoring units), and we place them in series. A voltage $V$ develops across both, and a charge $+Q$ accumulates on one of their plates. Using the capacitance formulas, the equivalent capacitance is $1/2$ the original. Indeed we get $Q/2V$ , where $Q/V$ is the original capacitance. But why? Aren't we in total accumulating a charge of $2Q$ over a potential difference of $2V$ ? Why just $1Q$ ? (Again, I'm speaking intuitively.)
Can someone please explain, intuitively (without any formula, I understand the formulas), why the equivalent capacitance of capacitors in series is less than any individual capacitor's capacitance? I assume you know that the larger the capacitor plates are, the greater the capacitance, all other things being equal. Also I assume you know that the greater the separation of the plates (the thicker the dielectric between the plates), the less the capacitance, all other things being equal. Given these assumptions, consider the diagrams below. The top left diagram shows two capacitors in parallel. It is equivalent to the diagram to the top right. If two or more capacitors are connected in parallel, the overall effect is that of a single (equivalent) capacitor having a total plate area equal to the sum of the plate areas of the individual capacitors. Thus for parallel capacitors the equivalent capacitance is the sum of the capacitances. The bottom middle diagram shows two capacitors in series. It is equivalent to the diagram to the bottom right. If two or more capacitors are connected in series, the overall effect is that of a single (equivalent) capacitor having the sum total of the plate spacings of the individual capacitors. Thus for series capacitors the equivalent capacitance is less than the individual capacitances. If the capacitors are the same and equal $C$ , the equivalent capacitance is $C/2$ . For reference the diagram includes the relevant equations for capacitance based on the physical parameters ( $A$ , $d$ , $e$ ) and electrical parameters ( $Q$ , $V$ ). This is starting to make sense. But do you mind elaborating a teeny bit more on why the total charge for the series case is $Q$ not $2Q$ ? The total charge on the equivalent series capacitance is $Q/2$ and not $Q$ . There is less charge on the two capacitors in series across a voltage source than if just one of the capacitors were connected to the same voltage source.
This can be shown by either considering charge on each capacitor due to the voltage on each capacitor, or by considering the charge on the equivalent series capacitance. The bottom left diagram shows one capacitor of capacitance $C$ connected to a voltage $V$ . The charge on the capacitor is $Q=CV$ after it is fully charged as shown. The bottom middle diagram shows two capacitors of the same capacitance $C$ in series across the same voltage source. The voltage across each is $V/2$ . Since $Q=CV$ this means the charge on each will be $Q=C\frac{V}{2}$ . However, as pointed out by @Kaz, the conductor and plates between the two capacitors don’t contribute to charge separation. To put it another way, the net charge on the plates and conductor between the capacitors is zero. This results in the charge on the equivalent capacitance equal to $Q=C\frac{V}{2}$ as shown on the bottom right diagram. The same conclusion can be reached by considering that the equivalent capacitance of two equal capacitors in series is one half the capacitance of each, or $C_{equiv}=\frac{C}{2}$ . Consequently the charge on the equivalent series capacitance is the same as the charge on each of the series capacitors, or $\frac{C}{2}V$ as shown on the bottom right diagram. Hope this helps
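The bookkeeping above can be condensed into two one-liners; a sketch (units ignored, as in the question):

```python
def series(*caps):
    """Capacitors in series: reciprocals add, so the result is always
    smaller than the smallest individual capacitance."""
    return 1 / sum(1 / c for c in caps)

def parallel(*caps):
    """Capacitors in parallel: plate areas, and hence capacitances, add."""
    return sum(caps)

C, V = 2.0, 10.0
print(series(C, C))       # 1.0 -> C/2, half of each capacitor
print(parallel(C, C))     # 4.0 -> 2C
print(C * V)              # 20.0 -> charge on a single capacitor across V
print(series(C, C) * V)   # 10.0 -> charge drawn with two in series: half
```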
{ "source": [ "https://physics.stackexchange.com/questions/487180", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/233507/" ] }
487,583
I've been surfing the web for quite a while, finding the answers I would need, but couldn't find a convincing one. First of all I need to remind you that this is a very long/continuous question, so please kindly take your time. I'll provide some illustrations to make things easier and more pleasant to read. Assume that I'm pushing a box against the table with a force of $80\ \mathrm N$ ; in accordance with Newton's 3rd Law, the table will exert a force equal but opposite on the box. This is fairly simple to understand. Now, here's the confusing part: assume that somehow I could exert the $80\ \mathrm N$ on just that single uppermost molecule of the box. (Neglect the possibility that it'll penetrate the box or any such thing.) If that was the case, how does the box actually "exert" a force on the table, or rather, how does the force of my hand exert a force on the table via the box? Here are my assumed possibilities: Possibility 1A In this case the force exerted on that molecule "pushes" the molecule below it and so on, until the very last molecule of the box "pushes" the table's molecule and thus exerts a force on it. The diagram above gives a pretty clear idea of my assumption. But , if this was the case then this would happen: If I'm going to push that object at that particular point, where that section of molecules of the box isn't "directly" in contact with the surface of the weighing scale, then it won't "read" my pushing force, which obviously doesn't make any sense, and I've tried this experiment a few days back and clearly the scale reads it. Possibility 1B My next assumption would be that the pushed molecules somehow push the molecules next to them, and so on with the other side, and therefore the scale reads my "push". At first this seems pretty logical, but after further thought, I then questioned myself: if the molecules could affect other molecules, don't they sum up?
In other words, if that single molecule that was directly "in contact" with the source of the $80\ \mathrm N$ (let's say my hypothetical microscopic finger) could cause other molecules in that object to experience the same force, this means that every molecule in that object experiences $80\ \mathrm N$ of downward pushing force, and the weighing scale would read an astonishing force of $720\ \mathrm N\ (80\times9)$ , which is simply impossible as it'd break the fundamental laws of physics. Possibility 2 The assumptions below are based on my logic which, frankly, I doubt: a force is divided equally amongst the individual molecules, meaning that an object with less mass, let's say 5 molecules, would experience more "individual" force than a "10 molecule" object, as the main force is divided less, and thus higher acceleration. Now moving to the 2nd possibility, which for me is slightly more sensible. Here, I assume that the force divides equally among the molecules, so even in the weighing-scale scenario, the sum exerted would always be equal to my push, which is $80\ \mathrm N$ . Unfortunately, this assumption has its weakness also, and this doesn't go along with my intuition. Here's my explanation. Let's change the situation a bit: imagine that my goal is to penetrate through the first layer of molecules. Using my hypothetical "molecular" nail, I exert a force of $45\ \mathrm N$ on that box; if my assumption holds true then the force would divide equally among the number of molecules in that object, which is $5\ \mathrm N$ for each.
This is counter-intuitive because the force needed to penetrate/break that particular molecule varies as the number of molecules increases/decreases: if there were 15 molecules, then the force exerted on each molecule, which includes the one I would want to break, would be $3\ \mathrm N$ , which basically means the more molecules in an object, the more force is needed to break the bond of that particular molecule only (not all of the bonds) . Here's a scenario I visualized: Imagine a driller drilling a hole of $5\ \mathrm{cm}$ in depth through the wall; it doesn't matter how thick or wide the wall is, the amount of force needed to drill a $5\ \mathrm{cm}$ hole stays the same, or simply, poking through a piece of A4 paper is just as easy as a wider one (A3). Note that "breaking" in this case is not physically breaking the molecules into pieces but rather breaking their bonds. I just wanted to make my explanation easy and concise to understand, so I prefer less intricate phrases. Main Question I made some assumptions already but each of them seems to be quite contradictory. Am I missing something here? Or is there something new I need to learn? I'm currently in high school so there definitely are a lot of things beyond my knowledge. I need to admit there are a lot of my explanations that are doubtful, even to me personally, and I'll not be surprised if there are a few misconceptions here and there, but I'll be very glad to be corrected. Kindly take your time to answer. Any answer would be greatly appreciated!
All the answers here seem to be correct but excessively technical. I think there are more intuitive ways to think about it so I will give it a try. The box is a solid. Solids are not only arrangements of atoms floating together, they are related by forces. These forces (which as explained by Hotlab are electromagnetic in nature) act just like the forces of a spring. In our simplistic model, you should imagine each atom to be connected by springs to its neighbours (the details are much more complex). If one atom gets away from its neighbours then the spring pulls it back; if it gets too close then the spring pushes the atoms away to a more relaxed state. So for the sake of clarity we are going to assume that our model consists of a rectangular grid of identical atoms, each connected by springs to its upper, lower, left and right atoms and only those. No atom is connected to the atom in the lower left, for example, and no atom is connected to more than those 4 atoms. Simply said, each atom is connected with springs to the atoms of its von Neumann neighborhood , like in this image: Let's name the atom that you are going to push $C$ (for "central") and let's call its neighbour to the left $L$ , the one to the right $R$ and the atom below it $D$ (for down). And let's ignore for a moment the rest of the ensemble. So, think about it. Right now nothing is moving, everything is in equilibrium, all the springs are in their relaxed state (neither expanded nor contracted). Now you start to push $C$ downwards. As you push $C$ it starts to move downwards (because according to Newton's II Law of motion that force has to generate an acceleration). As $C$ moves down it starts to compress the $C-D$ spring and thus a force on the spring starts to arise that wants to expand it; this force resists more and more of your downward initial force, so that $C$ starts to slow down (as your force on it is counteracted more and more by the spring's need to expand).
Meanwhile, as the $C$ atom was going down, the $C-L$ and $C-R$ springs are being expanded and thus a force arises also on them; the difference now is that those forces want to contract both the springs (since they are longer than their relaxed length). The spring $C-L$ pulls on $C$ to the left and upwards and the spring $C-R$ pulls to the right and upwards. So we have 4 forces acting on $C$ right now: your push from above, the upwards reaction of the $C-D$ spring, the left-upwards reaction of the $C-L$ spring and the right-upwards reaction of the $C-R$ spring. As $C$ continues to move, all these forces are going to change (except for your constant push from above), until it reaches a state of equilibrium where all the spring reactions are as strong as needed to stop you from continuing to move $C$ ; they reach a point where they exactly counteract your force pushing on $C$ . You can tell that this makes sense if you look at this diagram: I've colored in black the arrows representing the forces acting on the atom $C$ . As you can see the net force equals zero; at this moment $C$ stops moving and the system reaches equilibrium (your force is counteracted by the others). You can see that there is a component of the force of the $C-R$ spring to the right and one of the $C-L$ spring to the left, since the system is horizontally mirror-symmetric with respect to $C$ . This means that the net force has no horizontal components, and $C-R$ is pulling to the right exactly as hard as $C-L$ is pulling to the left. What about the vertical component of the net force? As you can see, all three spring reactions point upwards, so they sum up to the same value you are pushing downwards. I'm not going to calculate exactly how they sum, but clearly (because of the same symmetry argument) the upward contribution of $C-L$ is the same as the upward contribution of $C-R$ ; together with the upward contribution of the $C-D$ spring they can oppose perfect resistance to your downward push.
But the system would not remain in this state for long. This would be the end if $R$ , $L$ and $D$ were fixed (nailed to the background). But they are free, thus they are going to move according to the forces they also experience. These forces experienced by the neighbouring atoms I've color-coded in yellow; they are depicted as arrows inside their corresponding atom. Those forces are exerted by the springs as they want to expand (in the case of $C-D$ ) or contract (in the case of $C-L$ and $C-R$ ). The thing is that these atoms are not fixed but are free to move. So under these forces (the yellow arrows) they are going to start to move from their original positions. Now it is not just $C$ that has moved and thus expanded or contracted 3 neighbouring springs; now we have 3 atoms moving and 9 springs exerting forces in response. I'm just not going to draw all of that. Also in the next step there are going to be 6 atoms relocating and 16 springs exerting different forces. As you can see the evolution of this system explodes in terms of complexity. This means that the task of calculating each force and the new positions at each step gets larger and larger, and it is just crazy to ask someone to accomplish it. These are only 20 atoms but real solids have trillions of them; they are not always as ordered as in this lattice either, they are 3D instead of 2D, the actual electromagnetic forces involved do not act strictly like springs but a bit differently, there could be different types of atoms and molecules with different spring strengths (the chemical bonds) across the solid, the von Neumann neighbourhood could be a simplistic approximation since atoms could be linked to their second most distant neighbours or diagonally, ... But in principle this model should be quite accurate in macroscopic terms.
In physics, when we reach a point where there's an explosion (an unchained increase) in the number of calculations needed to understand the phenomenon (when even simulating it in a computer would take billions of years for a real solid) we tend to avoid this kind of microscopic-interactions view and start pondering what the overall behaviour looks like at the macroscopic scale. For this we either use statistical mechanics (which tells us about the average nature of the forces and the average reaction of each broad region of the grid) or continuum mechanics (where we start with the assumption that there are no atoms, no springs, but a continuous, elastic, infinitely divisible material, and use differential calculus to explain the entire system as a solid object without parts). Look at my crude simulation of the evolution of this system after several more steps using only the microscopic approach of calculating each force on each atom: The force (introduced by yourself) is not multiplied across the lattice, it only gets more and more redistributed. You can think of it also as a Gothic cathedral. The entire mechanical system of a Gothic cathedral is made in such a way that a huge load on the top (force exerted by gravity), like the weight of the central tower, is redistributed over a larger area on the ground across these "mechanical channels" called flying buttresses . The force is the same but now it is spread so that the pressure doesn't collapse the ceiling of the cathedral. Our case is similar, only that when viewed in detail (microscopic detail), your solid redistributes the force to the entire lattice dynamically; it takes some time for that force to be redistributed because each spring has to communicate the interaction through moving parts across the solid until equilibrium is reached and all the reaction forces of the causal chain you have generated counteract your force.
Again, when this state of equilibrium between forces is reached there is no net force (the sum of all forces cancels out), and if there's no net force then finally there is no movement. The final state is that the solid gets compressed as if your force was more or less distributed between all the atoms of the top layer (even if you are pressing on only one of them), since the springs of the top layer will all have forces pulling downwards, or at least some component of your push on $C$ will be transferred to all the atoms in that top layer. The solid would look like a bunch of horizontal layers that are vertically compressing the springs between them. Like this: But if the solid is not so solid (the springs are more elastic, less reactive to expansions and contractions, less rigid), you can see that the force will get distributed in such a way that the "solid" would deform. Your concentrated pressure would not be distributed fairly in the top layer (even if it will always be distributed over the entire lattice). The end result (when things stop moving) would look like this: It all depends on the strength of the springs; the cohesive force of the solid. The absolutely rigid scenario is impossible, but since electromagnetic "springs" (chemical bonds) are extremely non-elastic (they react strongly to any attempt to compress or extend them), the solid behaves a lot like that (it gets compressed uniformly from above). In the elastic case you have materials like jello that you can press at a point, and the entire thing will deform as in the previous image while you maintain that force. But jello is at the other end of the "solidity" spectrum. So as you can see you can't push an atom independently of the others in a solid, because it will push and pull its neighbours until the entire lattice has redistributed your initial force and every atom has been dragged along by that single atom by means of its spring connections to the others.
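To make the "redistribution, not multiplication" point concrete, here is a minimal 1D version of this spring picture: a vertical chain of five atoms standing on a fixed floor, pushed on the top atom only. The stiffness, damping and step size are arbitrary choices of mine, added so that the toy settles into the static state described above:

```python
k = 100.0      # spring stiffness (arbitrary units)
gamma = 2.0    # damping, so the chain settles instead of ringing forever
F = 80.0       # external downward push applied to the top atom only
n = 5          # atoms in the chain; the bottom one rests on a fixed floor
dt = 0.01

x = [0.0] * n  # displacement of each atom from its relaxed position
v = [0.0] * n  # velocities (unit masses)

for _ in range(20000):
    for i in range(n):
        below = x[i + 1] if i + 1 < n else 0.0   # floor under the last atom
        f = -k * (x[i] - below)                  # spring to the atom below
        if i > 0:
            f += -k * (x[i] - x[i - 1])          # spring to the atom above
        else:
            f += -F                              # your push on the top atom
        f -= gamma * v[i]
        v[i] += f * dt
        x[i] += v[i] * dt

floor_force = k * x[-1]       # compression of the bottom spring times k
print(round(floor_force, 2))  # about -80.0: the floor feels the full push
```

However the push enters, once everything stops moving the bottom spring carries exactly the applied $80\ \mathrm N$ (the minus sign just means "downwards"), never $5\times80\ \mathrm N$ : the force is transmitted through the chain, not summed over its atoms.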
You can even buy or build a toy model of this system (in 3D it is even more realistic) and play with it to grasp the idea of how solids behave under distributed or concentrated pressures. It is great to play with this microscopic model of solid matter in your hands. You can understand all the aspects I mentioned of how this system works and get to strengthen this understanding deep inside your brain. SOUND WAVES: AN INTERESTING ASPECT I've mentioned the fact that analyzing the entire lattice microscopically, calculating each force and the relative movement of each atom is just madness and that there are models inside statistical mechanics and continuum mechanics that can explain this. But I haven't done any calculation nor approach in that sense. Let's do it now, at least vaguely. We can focus for a moment our attention on the column of atoms just below the $C$ atom, ignoring the rest of the system. This is also a solid: a vertical rod with only one atom of width. Let's see how your force propagates downwards using this animation I extracted from "The Mechanical Universe" series . We could totally calculate each and every interaction for each instant in time by simply using Newton's Laws of Motion and Hooke's Law (which describes the specific nature of forces exerted by springs). But this is, as I said, impractical when the number of atoms and springs is large. But! Only watching a few of these atoms you can get the sensation that there is a macroscopic (a wide context understanding) behaviour for the system. It looks like the perturbation is been propagated; it looks like a wave! So we can avoid calculating billions of interactions because the reality is that this is just a wave propagating downwards (more like a pulse but still a wave). We have equations that perfectly and simply describe how waves behave, so this has to be used. In particular this wave is a longitudinal wave . What about the other atoms in the lattice? 
Well, let's focus for a moment on the atoms in the same row as $C$ and only on the ones on the right-hand side. We are moving $C$ downwards so the interactions would look like this animation: Again this looks a lot like a wave propagating (since the force actually has to be distributed in a finite amount of time). But the difference is that in this case the wave is not longitudinal but transverse . But there is something to note: in the previous animation atoms move up and down only (as if each of them were fixed to a vertical rod along which it can slide). In our system this is not a limitation, and since $R$ is not only pushed downwards by the displaced $C$ but is also pulled to the left, the actual wave is a combination of longitudinal and transverse oscillations. The same complex waves that we see in the oceans: Look at those atoms and how they oscillate in circles (not only back and forth and not only up and down but with a combination of both motions). Also, your solid is not only this layer nor the previous column of atoms, it is both, and each part of the lattice will suffer the propagation of these complex waves in different forms depending on the distance from $C$ and the orientation. Because of symmetry, this wave is not only propagating to the right of $C$ but also to the left of $C$ . And also remember, yours is not a force applied with oscillating intensity but just a pulse, a single wave front. When the wave front has propagated through the entire solid, the situation ends (our springs damp any further oscillations, and we reach the equilibrium/static state). These pressure waves propagating across the entire solid are in fact sound waves. Incredible, right? Sound waves are redistributing the forces of the solid after your action just like a Gothic cathedral. Sounds even poetic to me.
So, if the springs are more rigid, then they transmit the interaction quickly (since they react strongly to any relative change between the atoms), while in the case of more elastic springs we have slower waves. This is actually the reason why sound waves propagate faster in stiffer objects. The elasticity of these springs is related to the chemical properties of the atoms of your solid. For example, in lead sound waves propagate at $v=1210 \;\mathrm m/\mathrm s$ , while in the stiffer aluminium block sound waves reach $v=6320 \;\mathrm m/\mathrm s$ , more than 6 km each second! Obviously we are totally unable to notice this effect when we push a solid object; the dynamical evolution of the atomic grid is so extremely fast that we are actually always seeing the static result: we push objects, and they move as a coherent monolithic entity when in reality we are applying the force to a single part of them. Not only do extreme speeds make this an invisible phenomenon but also, since we are macroscopic creatures, we would never really see the displacement of the atoms as the wave passes. That's why we generally speak about rigid solids in terms of general mechanical laws of motion, ignoring the fact that this behaviour emerges from trillions of minuscule Newtonian mechanical interactions. HEAT: ANOTHER INTERESTING ASPECT Finally I want to point to this simulation of a solid block of just a few atoms colliding with the floor. Look at how I lied a little about the fact that we reach a static final situation: after compression, all these springs keep interacting with each other (all the waves keep bouncing inside the solid, reflecting and interfering with themselves in a complex way). The solid never ceases to change shape (in minuscule amounts). These interactions become background noise vibrations, and these vibrations are what we perceive, as macroscopic beings, as the temperature of the object. There is no damping.
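As a rough cross-check of the sound-speed figures quoted earlier: for a thin rod the longitudinal speed is $v=\sqrt{E/\rho}$ , with $E$ the Young's modulus (the macroscopic version of the spring stiffness) and $\rho$ the density. The material values below are approximate textbook numbers of my own, not from the answer:

```python
import math

materials = {
    #            Young's modulus E (Pa), density rho (kg/m^3)
    "lead":      (16e9, 11340),
    "aluminium": (69e9, 2700),
}

for name, (E, rho) in materials.items():
    v = math.sqrt(E / rho)   # thin-rod longitudinal sound speed
    print(f"{name}: {v:.0f} m/s")
```

This gives roughly 1190 m/s for lead and 5060 m/s for aluminium; the 6320 m/s quoted for aluminium is the bulk longitudinal speed, which involves the shear modulus as well, but the stiffer-means-faster trend is exactly the one described above.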
What's interesting in the animation is that the atoms were not vibrating randomly before the impact of the object. With our atom-spring lattice model we can show that a solid object moving with a certain kinetic energy will indeed heat up a little when colliding with another: part of the energy is kept as the overall kinetic energy of the block as it bounces upwards again, but a fair amount of the original energy is now stored as random movement of the molecules of the solid. This is the reason why objects don't reach the same altitude after bouncing on the floor. All of this is explained just by this simple model! Just as a bonus, this is the second bounce: you can see that now it is just one atom that takes the force in the collision (instead of the entire bottom layer of atoms, as in the previous animation). This is similar to the experiment of your question. Look at how the wave propagates so quickly that it is almost invisible in both GIFs; it is just a few frames. In the first one it is more visible: the wave traverses the solid from bottom to top in less than half a second. ADDENDUM: EXAMPLE FOR A SIMPLE NETWORK CALCULATION Since you are so particularly interested in the actual force distribution and how it works, I'm going to expand here on the details of how an actual calculation can be made for a network of interconnected masses attached by springs. For that we first need to understand the nature of the forces involved. Since they are springs, we can use Hooke's Law: $F=-k(L-L_0)$ This tells us that the force exerted by a spring is proportional to its stretching or contraction. $L_0$ is the length of the spring in its relaxed state, and $L$ is the length of the spring in general, so $L-L_0$ is the change of length from the relaxed state. $k$ is the stiffness coefficient of the spring.
And the minus (-) sign is there because for an expansion ( $L-L_0>0$ ) the force has to point in the direction of contraction, and for a contraction ( $L-L_0<0$ ) the force has to point in the direction of expansion. Now let's imagine our simple model: four atoms, connected by springs in a configuration identical to that of our $C$ , $R$ , $L$ and $D$ atoms. The distance between adjacent atoms is 1 angstrom (a tenth of a nanometer). This distance will also be the relaxed length of each of our springs, which means that in this configuration they are under no tension at all. So we have $L_0 = 1 \;angstrom$ for all the springs. Now suppose that I fix the positions of the $R$ , $L$ and $D$ atoms, holding them in place while we change the position of the $C$ atom. All the springs are then going to change in length depending on where I put $C$ , and thus all the springs are going to exert a force on $C$ (a force that wasn't there before in the relaxed situation). To give some concrete numbers, I will move $C$ downwards by 0.5 angstroms (half the way to $D$ 's position). Now the length of the $C-D$ spring has decreased to 0.5 angstroms, and thus a force should appear in the upwards direction (since the contraction happened in the downward direction and Hooke's law has that "-" sign in front of everything). So the force exerted by this spring on $C$ is going to be $F_D=-k(L-L_0)=-k(0.5-1)=k/2$ . But the lengths of the $C-R$ and $C-L$ springs have also changed. The new lengths can be calculated using the Pythagorean theorem, since each spring's length is the hypotenuse of a right triangle with base 1 angstrom and height 0.5 angstroms: As you can see, the lengths of the $C-R$ and $C-L$ springs are now both equal to $L=\sqrt{0.5^2+1^2}=1.118\; angstroms$ .
From basic trigonometry we know that the angle at which these springs are inclined with respect to the horizontal is the inverse tangent of the slope, and the slope is the ratio between height and base. So, the force of the $C-R$ spring is going to be $F_R=-k(L-L_0)=-k(1.118-1)=-0.118k$ , which is negative because the force points opposite to the direction of expansion (which is taken as positive), and the force of the $C-L$ spring is going to be $F_L=-k(L-L_0)=-k(1.118-1)=-0.118k$ , which again is the same (note that since the system is mirror-symmetric we could have avoided this calculation by just saying "they both have to be the same because of symmetry"). The only difference between them is that the positive direction of expansion is defined differently for each: the $C-R$ spring expands towards the left end and the $C-L$ spring expands towards the right end, so one force points to the right and the other to the left, both inclined with respect to the horizontal at $\alpha = 26.57^\circ$ . So let's fix one final parameter of our model. Let's say that $k = 132.106\; N/angstrom$ . This means that the springs in our model react with $132.106\; N$ of force for each angstrom we expand or contract them. Since we have contracted the $C-D$ spring by half an angstrom, the magnitude of the force (regardless of sign) is $|F_D|=k/2 = 66.05\; N$ . For the forces of the $C-R$ and $C-L$ springs we have $|F_R|=|F_L|=0.118k=15.59 \; N$ each. Since we now know the value of each force applied on $C$ in this particular position by the three springs, and since we also know how those forces are oriented (one points upwards, another points to the upper left at an angle of $26.57^\circ$ , and the last one points to the upper right with the same inclination of $26.57^\circ$ ), we can compute the net force applied on $C$ . We only need to decompose the forces into their horizontal and vertical components.
This can be done with simple trigonometry like so: Finally, we can compute the horizontal component of the net force as the sum of the horizontal components of all the forces, and likewise for the vertical component. Having both the vertical and the horizontal total contributions, we can finally obtain the actual value of the net force and its direction: all the horizontal contributions of the different forces cancel each other out perfectly in this configuration, and only the vertical contributions add up. So the final answer here is that if $C$ moves to this particular position, it will be subjected to a lifting force of $80\; N$ . Why $80\;N$ ? Because I chose the value of $k$ and the value of the displacement of $C$ such that this would be the result in our model. This system is not in equilibrium, since the net force on $C$ is not zero. That means that if I let $C$ go from this position, it will start to move upwards. While it changes position, the springs are going to change lengths and the net force may change. If the movement is attenuated (by some added friction or heating of the springs), then ultimately, after some oscillations, the entire system will return to the initial T-shaped configuration (since in that situation, as we saw, there is no net force, and thus no change). But! If instead of letting $C$ go you were pushing it downwards with $80\;N$ , then the total net force would be balanced, because you would be cancelling these spring forces by pressing on this particular atom with that particular force. So your original question is actually this problem in reverse. You push downwards with $80\;N$ of force, and with this reasoning it has been shown that after 0.5 angstroms (if and only if the stiffness of the springs is $k=132.106\; N/angstrom$ ) the entire system is at equilibrium: your applied force is exactly balanced by the others, so nothing moves after that.
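The whole decomposition above can be checked with a few lines of code. This is a minimal sketch of the 4-atom model, with the positions, $k$ and $L_0$ taken from the text; the vector form of Hooke's law replaces the explicit trigonometry:

```python
import math

k = 132.106   # spring stiffness, N per angstrom (value chosen in the text)
L0 = 1.0      # relaxed spring length, angstrom

# Fixed neighbours R, L, D (angstrom), with the original position of C at the origin
neighbours = [(1.0, 0.0), (-1.0, 0.0), (0.0, -1.0)]
C = (0.0, -0.5)  # C displaced 0.5 angstrom downwards

def spring_force(p, q):
    """Force on the atom at p from a spring connecting p to q (Hooke's law)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    L = math.hypot(dx, dy)
    f = k * (L - L0)  # positive when stretched: pulls p towards q
    return (f * dx / L, f * dy / L)

fx = sum(spring_force(C, n)[0] for n in neighbours)
fy = sum(spring_force(C, n)[1] for n in neighbours)
print(f"net force on C: ({fx:.3f}, {fy:.3f}) N")  # horizontal parts cancel, ~80 N upwards
```

The same function works unchanged for any number of neighbours, which is exactly why this kind of calculation scales (in principle) to a full lattice.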
The reality (as someone pointed out) is that, because of inertia, after passing the 0.5-angstrom mark your $C$ atom would keep moving towards $D$ . But as it does so, the total force on $C$ changes to an upwards force, and thus the $C$ atom would in fact oscillate around the 0.5-angstrom position forever. If there is some damping, it will come to rest in that Y-shaped configuration. This is the end result of you pushing the $C$ atom with a constant force in this 4-atom system. But what would happen if I released the other atoms of the system (instead of keeping them fixed)? Then the calculation becomes much more tedious (not more complicated, since you would only have to apply the same reasoning and basic trigonometry, but to many more forces). The result of this calculation is that everything would bend a little as you push it, and the entire ensemble would move downwards as you keep pushing. So here you have an example of what I was telling you: the force applied to one atom can move the entire object as if it were one monolithic structure; the minuscule bendings of the solid are imperceptible due to the extreme strength of the atomic bonds (those springs are truly stiff). The dynamical evolution is also imperceptible, since it happens through microscopic variations of the positions of single atoms and molecules, and because it happens at the speed of sound! So the end result is that there is no macroscopically noticeable difference between pushing a single atom of a solid and pushing the entire solid. I should also note that if you pushed a single atom with $80\;N$ of force you would probably break all the springs connected to it (the bonds are not held together by such strong forces), so in real life you would only strip that atom from the solid. But being able to push that entire force onto the surface of just one single atom is beyond any everyday experience. Also, the atom of your finger in contact with that atom would be stripped from your finger.
In general you push with larger contact surfaces; the force is distributed evenly across that contact boundary, so the subsequent interaction can be regarded as in our models (the springs never snap). The qualitative result is the same for any network of atoms. But the specific calculations, as I mentioned earlier, are totally unfeasible if you want to know the actions and reactions on each atom and spring, at each instant, in an ensemble of a billion atoms. Don't ask me to do that, because it would just be an unscientific approach to the problem. ONE FINAL CLARIFICATION You seem to be worried (at least in the chat) about how forces can be redistributed like this. I think you might have a misconception here. There are conservation laws for energy and momentum in mechanics (and for many other quantities), but conservation of force is not a law of nature and has never been regarded as one. If a force disappears somewhere, it is not replaced by any other force. We can create forces and destroy them at will. Don't confuse this with Newton's third law, which is in fact a cryptic form of conservation of momentum, not of force.
{ "source": [ "https://physics.stackexchange.com/questions/487583", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
488,527
This is not a duplicate; none of the answers gives a clear answer, and most of the answers contradict one another. There are so many questions about this and so many answers, but none of them states clearly whether the electron's change of orbitals as per QM has a time component or is measurable (takes time or not), is instantaneous, is limited by the speed of light or not, or whether there is no jump at all. I have read this question: Quantum jump of an electron How do electrons jump orbitals? where Kyle Oman says: So the answer to how an electron "jumps" between orbitals is actually the same as how it moves around within a single orbital; it just "does". The difference is that to change orbitals, some property of the electron (one of the ones described by (n,l,m,s)) has to change. This is always accompanied by emission or absorption of a photon (even a spin flip involves a (very low energy) photon). and where DarenW says: A long time before the absorption, which for an atom is a few femtoseconds or so, this mix is 100% of the 2s state, and a few femtoseconds or so after the absorption, it's 100% the 3p state. Between, during the absorption process, it's a mix of many orbitals with wildly changing coefficients. Does an electron move from one excitation state to another, or jump? where annav says: A probability density distribution can be a function of time, depending on the boundary conditions of the problem. There is no "instantaneous" physically, as everything is bounded by the velocity of light. It is the specific example that is missing in your question. If there is time involved in the measurement the probability density may have a time dependence. and where akhmeteli says: I would say an electron moves from one state to another over some time period, which is not less than the so called natural line width. the type of movement in electron jump between levels?
where John Forkosh says: Note that the electron is never measured in some intermediate-energy state. It's always measured either low-energy or high-energy, nothing in-between. But the probability of measuring low-or-high slowly and continuously varies from one to the other. So you can't say there's some particular time at which a "jump" occurs. There is no "jump". How fast does an electron jump between orbitals? where annav says: If you look at the spectral lines emitted by transiting electrons from one energy level to another, you will see that the lines have a width . This width in principle should be intrinsic and calculable if all the possible potentials that would influence it can be included in the solution of the quantum mechanical state. Experimentally the energy width can be transformed to a time interval using the Heisenberg Uncertainty of ΔEΔt>h/2π So an order of magnitude for the time taken for the transition can be estimated. H atom's excited state lasts on average $10^{-8}$ secs, is there a time gap (of max 2*$10^{-8}$ secs) betwn. two consec. photon absorpt.-emiss. pairs? So it is very confusing, because some of them say it is instantaneous and there is no jump at all; some say it is calculable; some say it has to do with probabilities, and the electron is in a mixed state (superposition), but when measured it is in a single stable state; and some say it has to do with the speed of light, since no information can travel faster, so electrons cannot change orbitals faster than $c$ . Now I would like to clarify this. Question: Do electrons change orbitals as per QM instantaneously? Is this change limited by the speed of light or not?
Do electrons change orbitals as per QM instantaneously? In every reasonable interpretation of this question, the answer is no . But there are historical and sociological reasons why a lot of people say the answer is yes. Consider an electron in a hydrogen atom which falls from the $2p$ state to the $1s$ state. The quantum state of the electron over time will be (assuming one can just trace out the environment without issue) $$|\psi(t) \rangle = c_1(t) |2p \rangle + c_2(t) | 1s \rangle.$$ Over time, $c_1(t)$ smoothly decreases from one to zero, while $c_2(t)$ smoothly increases from zero to one. So everything happens continuously, and there are no jumps. (Meanwhile, the expected number of photons in the electromagnetic field also smoothly increases from zero to one, via continuous superpositions of zero-photon and one-photon states.) The reason some people might call this an instantaneous jump goes back to the very origins of quantum mechanics. In these archaic times, ancient physicists thought of the $|2 p \rangle$ and $|1 s \rangle$ states as classical orbits of different radii, rather than the atomic orbitals we know of today. If you take this naive view, then the electron really has to teleport from one radius to the other. It should be emphasized that, even though people won't stop passing on this misinformation , this view is completely wrong . It has been known to be wrong since the advent of the Schrodinger equation almost $100$ years ago. The wavefunction $\psi(\mathbf{r}, t)$ evolves perfectly continuously in time during this process, and there is no point when one can say a jump has "instantly" occurred. One reason one might think that jumps occur even while systems aren't being measured is that, if you have an experimental apparatus that can only answer the question "is the state $|2p \rangle$ or $|1s \rangle$ ?", then you can obviously only get one or the other.
But this doesn't mean that the system must teleport from one to the other, any more than only saying yes or no to a kid constantly asking "are we there yet?" means your car teleports. Another, less defensible reason, is that people are just passing it on because it's a well-known example of "quantum spookiness" and a totem of how unintuitive quantum mechanics is. Which it would be, if it were actually true. I think needlessly mysterious explanations like this hurt the public understanding of quantum mechanics more than they help. Is this change limited by the speed of light or not? In the context of nonrelativistic quantum mechanics, nothing is limited by the speed of light because the theory doesn't know about relativity. It's easy to take the Schrodinger equation and set up a solution with a particle moving faster than light. However, the results will not be trustworthy. Within nonrelativistic quantum mechanics, there's nothing that prevents $c_1(t)$ from going from one to zero arbitrarily fast. In practice, this will be hard to realize because of the energy-time uncertainty principle: if you would like to force the system to settle into the $|1 s \rangle$ state within time $\Delta t$ , the overall energy has an uncertainty $\hbar/\Delta t$ , which becomes large. I don't think speed-of-light limitations are relevant for common atomic emission processes.
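The smooth crossover of $c_1(t)$ and $c_2(t)$ described above can be illustrated with a toy numerical sketch. This is not the full Wigner-Weisskopf calculation; it simply assumes an exponential decay of the excited-state population with an arbitrary lifetime $\tau$, with the ground-state amplitude fixed by normalisation:

```python
import math

tau = 1.0  # assumed excited-state lifetime, arbitrary units

def amplitudes(t):
    """Toy amplitudes for |2p> and |1s> at time t (exponential-decay model)."""
    c1 = math.exp(-t / (2 * tau))  # excited-state amplitude: |c1|^2 = e^(-t/tau)
    c2 = math.sqrt(1.0 - c1 * c1)  # ground-state amplitude, from normalisation
    return c1, c2

for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    c1, c2 = amplitudes(t)
    print(f"t = {t}: |c1|^2 = {c1 * c1:.3f}, |c2|^2 = {c2 * c2:.3f}")
```

At no moment is there a discontinuity: the two populations trade off smoothly, which is the point of the answer.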
{ "source": [ "https://physics.stackexchange.com/questions/488527", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/132371/" ] }
488,562
Recently I had the following misconceptions: Static friction always opposes the motion of a body. The force of friction cannot initiate motion in a body. Now I have come to know that my understanding was wrong, that friction can indeed cause motion in bodies, and that static friction does not always oppose the motion of a body. But I found this quite bizarre: how can a force which we have always been taught opposes motion contradict points 1. and 2.?
Friction opposes relative motion between two bodies. Note that this means friction can create motion relative to something else, e.g. you. For example, drop an item on a moving belt. Friction opposes and reduces the relative motion of the item and the belt until they move together. But now the item has started moving relative to you.
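A minimal kinematic sketch of the belt example (the belt speed and friction coefficient are assumed values, just for illustration): while the item slides, kinetic friction accelerates it at $a=\mu g$ until its speed matches the belt's.

```python
g = 9.81      # m/s^2, gravitational acceleration
mu = 0.4      # assumed kinetic friction coefficient between item and belt
v_belt = 2.0  # m/s, assumed belt speed in the ground frame

a = mu * g                  # acceleration of the item while it slides
t_match = v_belt / a        # time until the item moves with the belt
slip = v_belt**2 / (2 * a)  # distance the belt slides under the item meanwhile

print(f"item reaches belt speed after {t_match:.2f} s, slipping {slip:.2f} m")
```

Here friction is precisely the force that sets the item in motion relative to the ground, which is the point of the answer.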
{ "source": [ "https://physics.stackexchange.com/questions/488562", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
488,955
The book The Ideas Of Particle Physics contains a brief treatment of quantum gravity, in which the claim is asserted that if one attempts to construct a model of gravity along the same lines as QED, the result is non-renormalizable and the reason why can be traced to the fact that in this case the force-carrier (the graviton) is "charged" in the sense that it contains energy and therefore couples to energy which includes other gravitons. This is in contrast to the photons in QED, which are uncharged and therefore do not couple with each other. Is this assertion accurate, or is it instead an oversimplification of something more complex?
I think this is a misleading oversimplification. Gluons carry color charge and couple to themselves, yet QCD is renormalizable. Similarly, W bosons carry weak isospin and couple to themselves, yet electroweak theory is renormalizable. In general, non-abelian gauge theories are renormalizable despite the fact that their force-carriers couple to each other. The problem with gravity is that its coupling constant $G$ is not dimensionless (in units where $\hbar$ and $c$ are 1). Consequently, any perturbation expansion in $G$ will involve higher and higher powers of the Riemann curvature tensor. Rather than there being a finite number of possible “counterterms” during renormalization, as in renormalizable theories, there are an infinite number of them.
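The statement about the coupling constant can be checked mechanically. In SI units $[G]=\mathrm{m^3\,kg^{-1}\,s^{-2}}$; in natural units ($\hbar=c=1$) length and time each carry energy dimension $-1$ and mass carries $+1$, so a quick bookkeeping sketch gives the energy dimension of $G$:

```python
# Energy dimension of each SI base unit in natural units (hbar = c = 1):
# length -> -1, time -> -1, mass -> +1
dim = {"m": -1, "s": -1, "kg": +1}

# Newton's constant: [G] = m^3 kg^-1 s^-2
G_exponents = {"m": 3, "kg": -1, "s": -2}

energy_dim = sum(dim[unit] * power for unit, power in G_exponents.items())
print(energy_dim)  # -2: G ~ (energy)^-2, i.e. G ~ 1/M_Planck^2
```

A coupling with negative energy dimension is the classic signal of a non-renormalizable perturbation expansion, in contrast to the dimensionless gauge couplings of QCD and electroweak theory.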
{ "source": [ "https://physics.stackexchange.com/questions/488955", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/40292/" ] }
488,993
In a zinc/copper Daniell cell, correct me if I am wrong: Zinc has 2 valence electrons, so it wants to get rid of them. To do so it sends them to the copper, which needs 2 to complete its valence shell. There needs to be a wire between the zinc and the copper for this reaction to happen. So technically the plates are not charged; it's just the charges flowing out that create the electric field. TLDR: Are the plates of a battery more like a capacitor with excess charges on the plates? Or do they simply throw electrons in and out near their terminals, while the individual plates of zinc and copper stay neutral? My confusion is this: I understand that the zinc wants to get rid of electrons and the copper wants more electrons, but: The zinc and copper atoms are "neutral"; it's only the deficit of electrons on the conductor near the positive terminal and the excess of electrons near the negative terminal that, for me, would make an electric field. Or maybe it's the "wanting to get rid of" and the "wanting to get more" electrons that create an electric field; if it's indeed that, please confirm! Thanks!
I think that most confusion about batteries comes from ignoring the electrolyte. For example: Zinc has 2 valence electrons. So it wants to get rid of them. To do so it sends them to the copper which needs 2 to complete its valence shell. The actual reaction is between the metallic zinc and the dissolved zinc. Zinc wants to get rid of two electrons and it does so by becoming an ion and going into solution. This reaction is energetically favorable and can occur even if there is a small electric field opposing it at the surface of the electrode. However, the reaction products near the electrode surface, zinc ions and electrons, are highly charged and quickly produce a strongly opposing field which overcomes the energetic favorability and halts the reaction. For the reaction to proceed the reaction products must be removed from the region near the electrode surface. The electrons can be removed from the electrode surface by transport through the wire, and the ions can be removed from the surface by transport through the fluid. The transport of electrons requires the complementary reaction at the copper electrode, and the transport of the ions requires a complementary transport of the solute ion in the electrolyte. Understanding the electrolyte is essential for understanding batteries, and is the usual neglected piece. There needs to be a wire between the zinc and the copper for this reaction to happen. The purpose of the wire is not to make the reaction happen. The reaction is energetically favorable, so it briefly happens regardless. The purpose of the wire is to remove the reaction products so it can continue to happen. Note that in this process not all of the excess electrons on the plate are normally removed. As soon as a few are removed the reaction proceeds and replenishes those few. The reaction thus proceeds at the rate that the products are removed from the immediate vicinity of the electrode surface. 
In abnormal situations, like a short circuit, a substantial fraction of the excess charges at the surface can be depleted and the current is limited by the reaction kinetics. This manifests as an “internal resistance” for the cell.
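That last point can be put into one line of circuit language: the terminal voltage drops below the EMF in proportion to the current drawn, $V = \mathcal{E} - I r$. A minimal sketch with the standard Daniell-cell EMF of about 1.1 V and an assumed internal resistance (the value of $r$ here is hypothetical):

```python
emf = 1.1  # V, approximate EMF of a Daniell cell
r = 0.5    # ohm, assumed internal resistance for illustration

def terminal_voltage(i):
    """Terminal voltage when the cell delivers current i (amperes)."""
    return emf - i * r

for i in (0.0, 0.5, 1.0, 2.0):
    print(f"I = {i:.1f} A -> V = {terminal_voltage(i):.2f} V")

# In a short circuit the terminal voltage collapses and the current is
# limited by the internal resistance alone:
i_short = emf / r
print(f"short-circuit current ~ {i_short:.1f} A")
```

In a real cell $r$ is set by the reaction kinetics and ion transport described above, and it generally varies with current, so the constant-$r$ model is only a first approximation.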
{ "source": [ "https://physics.stackexchange.com/questions/488993", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/235756/" ] }
489,214
Let's imagine there are two, isolated, stationary worlds in space (called A and B), very far apart from each other. I live on World A, and some aliens live on World B. I want to learn about the aliens on World B by talking to them in person. My lifespan is a quadrillion years, so I'm not worried about dying while traveling to them. However, I would like to see the alien civilization as close to its infancy as possible. In other words, I would rather see alien cavemen than alien astronauts. If I travel too slowly, I give their civilization too much time to develop into astronauts—no good. If I travel fast enough (close to the speed of light), time passes faster for World B than for me and my spaceship, due to time dilation (correct me if I'm wrong). Thus, I'm worried that if I travel too fast, time might pass so quickly for World B that they develop into astronauts before I arrive. Am I right to worry about this? If so, what's the optimal speed to ensure that I arrive earliest in their civilization's development? If my reasoning is wrong and traveling faster is always better, then why?
Suppose that A and B are at rest relative to each other (which you have) and in their mutual rest frame are separated by 100 light years. That means that no signal can travel from A to B (or vice-versa) in less than 100 years. Signals include optical or radio signals, which travel at the speed of light, and also material projectiles like spacecraft, which are slower. So, if you leave in your spacecraft when you receive, at A, a signal that says "what to expect on planet B now that it's the year 2019," the earliest you can arrive at B is their year 2219. The message you got was old, and it takes time for you to arrive. Time dilation has the effect of compressing the time in your trip. On your way from A to B, you'll receive 200 years worth of their news broadcasts: the 100 years' worth that were already in transit to you when you left, and the (at least) 100 years' worth that are emitted while you are en route. But if you travel with a relativistic factor $\gamma=(1-v^2/c^2)^{-1/2}=100$ , you'll only have about a year to study all of that news.
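The numbers in this answer are easy to reproduce. A small sketch using the 100-light-year separation and $\gamma=100$ from the text (with distance in light years and time in years, so $c=1$):

```python
import math

d = 100.0      # separation in light years, in the planets' rest frame
gamma = 100.0  # Lorentz factor of the traveller

beta = math.sqrt(1.0 - 1.0 / gamma**2)  # v/c of the traveller
t_planet = d / beta                     # trip duration in the planets' frame, years
t_traveller = t_planet / gamma          # proper time experienced on board, years

print(f"v/c = {beta:.6f}")
print(f"planet-frame trip time: {t_planet:.3f} years")
print(f"traveller's trip time:  {t_traveller:.3f} years")
```

The trip can never take less than 100 years in the planets' frame, but the traveller's own clock compresses it to about a year, which is why 200 years of broadcasts arrive in roughly one year of proper time.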
{ "source": [ "https://physics.stackexchange.com/questions/489214", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/235875/" ] }
489,215
There are many different specific definitions of energy within different physics fields: In thermodynamics we have at least U (system internal energy), F (Helmholtz Free Energy), G (Gibbs Free Energy), H (Enthalpy). Or we have the relativistic stress-energy-momentum tensor, etc. Various forms or configurations of energy seem to be included/excluded in these different definitions. I'm looking for the most appropriate "type" of energy to use in quantifying the potential to disrupt (disorder) an ordered system. That is, some amount of "available" energy that can cause increase of entropy of the system. I do not want to get more specific than what I've said about what kind of ordered system it is. Just a configuration of matter and energy that has less than that amount of matter/energy's maximum possible entropy so has the potential to be disordered. Disordering energy/matter-with-energy could come from outside the system I suppose, or be energy/mass within the system. So I guess we're talking about a thermodynamically open system, but thermodynamics tends to scope out energy that's locked up in/as the mass of matter. For the most general case of describing energy available to disorder any ordered system, what type of energy definition should I use and why?
{ "source": [ "https://physics.stackexchange.com/questions/489215", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/235879/" ] }
489,291
I often hear the story of how Einstein came up to the conclusion that time would slow down the faster you move, because the speed of light has to remain the same. My question is, how did Einstein know that measuring the speed of light wouldn't be affected by the speed at which you are moving. Was this common knowledge already before Einstein published his paper on special relativity? If not, what led him to that conclusion?
Besides Michelson and Morley experimental results, Einstein also considered the theoretical aspects. It can be derived from Maxwell's equations that the speed at which electromagnetic waves travel is: $c=\left(\epsilon_{0}\mu_{0}\right)^{-1/2}$ . Since light is an electromagnetic wave, that means that the speed of light is equal to the speed of the electromagnetic waves. $\epsilon_{0}$ and $\mu_{0}$ are properties of the vacuum and are constants, so $c$ will also be a constant. Thus from Maxwell's theory of electromagnetism alone we can already see that the speed of light in vacuum should be constant. On the other hand, Galilean invariance tells us that the laws of motion have the same form in all inertial frames. There is no special inertial frame (as far as Newton's laws are concerned). Another key element here is Galilean transformation , which was the tool used for transforming from one inertial frame to another. It can be easily seen that considering the first two elements to be valid: Maxwell's theory of electromagnetism - speed of light is constant Galilean invariance - the laws of motion have the same form in all inertial frames means that we can no longer apply the Galilean transformation, because otherwise we will get a contradiction. Thus at least one of these three "key elements" must be wrong. Maxwell's theory of electromagnetism - speed of light is constant Galilean invariance - the laws of motion have the same form in all inertial frames Galilean transformation It turned out that the last one (Galilean transformation) was wrong. Einstein considered the first two correct and built the special theory of relativity. The correct transformation from one inertial frame to another, in the assumption of the validity of the Maxwell's theory and Galilean invariance, turns out to be Lorentz transformation . It is nice to check that the Lorentz transformation does indeed reduce to the Galilean transformation in the $v\ll c$ limit. 
That's why, in a sense, the Galilean transformation is not wrong, but rather incomplete, or a particular case. We can say that the Galilean transformation needed to be generalized, and this was accomplished by introducing the invariance of the speed of light and maintaining the Galilean invariance. How do Maxwell's equations predict that the speed of light is constant? Maxwell's equations in differential form: $$\tag{1}\nabla\cdot \mathbf{E}=\frac{\rho}{\epsilon_{0}}\label{1}$$ $$\tag{2}\nabla\cdot \mathbf{B}=0\label{2}$$ $$\tag{3}\nabla\times\mathbf{E}=-\frac{\partial \mathbf{B}}{\partial t}\label{3}$$ $$\tag{4}\nabla\times \mathbf{B}=\mu_{0}\mathbf{J}+\mu_{0}\epsilon_{0}\frac{\partial \mathbf{E}}{\partial t}\label{4}$$ We can try to derive a wave equation in vacuum. Since we are considering the vacuum, we do not have charge densities, so equation ( $\ref{1}$ ) becomes: $$\tag{5}\nabla\cdot \mathbf{E}=0\label{5}$$ In vacuum we do not have current densities either, so equation ( $\ref{4}$ ) becomes: $$\tag{6}\nabla\times \mathbf{B}=\mu_{0}\epsilon_{0}\frac{\partial \mathbf{E}}{\partial t}\label{6}$$ Now if we apply the curl to equation ( $\ref{3}$ ), we get: $$\tag{7}\nabla\times\left(\nabla\times\mathbf{E}\right)=-\frac{\partial}{\partial t}\left(\nabla\times\mathbf{B}\right)\label{7}$$ We can use a vector identity to evaluate the LHS of equation ( $\ref{7}$ ): $$\tag{8}\nabla\times\left(\nabla\times\mathbf{E}\right)=\nabla\left(\underbrace{\nabla\cdot\mathbf{E}}_{=0}\right)-\nabla^2\mathbf{E}\label{8}$$ $$\tag{9}\nabla\times\left(\nabla\times\mathbf{E}\right)=-\nabla^2\mathbf{E}\label{9}$$ For the RHS of equation ( $\ref{7}$ ), we can replace $\nabla\times\mathbf{B}$ with the expression we have from equation ( $\ref{6}$ ): $$\tag{10}-\frac{\partial}{\partial t}\left(\nabla\times\mathbf{B}\right)=-\frac{\partial}{\partial t}\left(\mu_{0}\epsilon_{0}\frac{\partial \mathbf{E}}{\partial t}\right)=-\mu_{0}\epsilon_{0}\frac{\partial^2 \mathbf{E}}{\partial t^2}\label{10}$$ Putting
all together:

$$\tag{11}-\nabla^2\mathbf{E}=-\mu_{0}\epsilon_{0}\frac{\partial^2 \mathbf{E}}{\partial t^2}\label{11}$$
$$\tag{12}\nabla^2\mathbf{E}-\mu_{0}\epsilon_{0}\frac{\partial^2 \mathbf{E}}{\partial t^2}=0\label{12}$$

The general form of a wave equation is:

$$\tag{13}\nabla^2\mathbf{\Psi}-\frac{1}{v^2}\frac{\partial^2 \mathbf{\Psi}}{\partial t^2}=0\label{13}$$

where $v$ is the velocity of the wave. Equation ($\ref{12}$) describes an electromagnetic wave moving with velocity $v=\frac{1}{\sqrt{\epsilon_{0}\mu_{0}}}$. Since light is an electromagnetic wave, light also propagates at this speed in vacuum. And since both $\epsilon_{0}$ and $\mu_{0}$ are constant, $\frac{1}{\sqrt{\epsilon_{0}\mu_{0}}}$ is also a constant. Hence light moves at a constant speed in vacuum.
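A quick numerical sanity check (my addition, using CODATA values for the two constants) confirms that $\left(\epsilon_{0}\mu_{0}\right)^{-1/2}$ indeed comes out to the speed of light:

```python
import math

# CODATA 2018 values (assumed here); units: F/m and N/A^2
epsilon_0 = 8.8541878128e-12
mu_0 = 1.25663706212e-6

# c = (epsilon_0 * mu_0)^(-1/2), as derived from Maxwell's equations
c = 1.0 / math.sqrt(epsilon_0 * mu_0)
print(f"c = {c:.6e} m/s")  # very close to the defined value 299792458 m/s
```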
{ "source": [ "https://physics.stackexchange.com/questions/489291", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/235906/" ] }
489,461
When I saw this, I did not understand how that shape is formed. To idealize, take a vertical smooth plane and aim at the wall with a thin water tube. The outer layer then forms a "parabolic shape" with the stagnation point as its focus; I found this by tracing that shape. How could you explain this observation? Also, could you provide the equation in terms of the velocity of flow, the angle of contact with the wall and the gravitational constant?
Each of the water particles gets pushed to the side by the other particles as the water hits the wall. If we neglect the viscosity of the water, each of these particles follows a projectile parabola, but under a different initial launch angle. If we assume the jet hits the wall horizontally, the water particles are thrown with the same (maximum) initial velocity in every direction. The shape that you observed is then given by the envelope of all possible parabolas. For all parabolas $$y(x) = x \tan \beta - \frac{g\,x^2}{2\,{v_0}^2 \cos^2\beta} + h_0$$ with initial launch angles $\beta$, the envelope is $$y_\mathrm{H} (x) = \frac{{v_0}^2}{2\,g} - \frac{g\,x^2}{2\,{v_0}^2} + h_0.$$ So it does indeed form a parabola. Edit: The envelope can be derived as follows. If we define the family of curves implicitly by $$F(x,y,\tan(\beta))=y - x \tan \beta + \frac{g\,x^2}{2\,{v_0}^2 \cos^2\beta}=y - x \tan \beta + \frac{g\,x^2(1+\tan^2\beta)}{2\,{v_0}^2 }=0$$ the envelope of the family is given by $$F = 0~~\mathsf{and}~~{\partial F \over \partial \tan\beta} = 0$$ We have $${\partial F \over \partial \tan\beta}=-x+\frac{gx^2\tan\beta}{v_0^2}=0 ~~ \Leftrightarrow ~~ \tan\beta=\frac{v_0^2}{gx}$$ Substituting that into $F$ we get $$F=y-\frac{v_0^2}{g}+\frac{g(x^2+v_0^4/g^2)}{2v_0^2}=0 ~~\Leftrightarrow ~~ y_\mathrm{H} (x) = \frac{{v_0}^2}{2\,g} - \frac{g\,x^2}{2\,{v_0}^2}$$
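The envelope result is easy to verify numerically. The sketch below (my own addition, with assumed values for $v_0$ and $g$, and $h_0=0$) checks that at each horizontal position $x$, the highest point reachable over all launch angles coincides with $y_\mathrm{H}(x)$; the family is written in terms of $t=\tan\beta$ so the angle grid behaves well near vertical launches:

```python
g, v0 = 9.81, 10.0  # assumed example values (SI units)

def family(x, t):
    # y(x) for launch angle beta, with t = tan(beta) and h0 = 0
    return x * t - g * x**2 * (1 + t**2) / (2 * v0**2)

def envelope(x):
    # the claimed envelope y_H(x) with h0 = 0
    return v0**2 / (2 * g) - g * x**2 / (2 * v0**2)

# the envelope should be the pointwise supremum of the family
for x in (1.0, 3.0, 5.0, 8.0):
    best = max(family(x, i * 0.01) for i in range(2001))  # t from 0 to 20
    assert abs(best - envelope(x)) < 1e-3
```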
{ "source": [ "https://physics.stackexchange.com/questions/489461", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/187997/" ] }
489,689
Consider a circuit which consists of a battery and one resistor. $V = 10$ volts, $R = 5$ ohms, so $I = 2$ amperes, and $P = 20$ watts. If we double the voltage and the resistance, the current will be the same and the power will be equal to 40 watts, hence the resistor will be hotter than in the first case. Now here is the silly question. The current in the two cases is constant and equal. The speed at which charges move through the wire is constant. So why is more heat generated while the speed of the charges is constant? I know that the potential is doubled, but potential is potential, and we can't make use of potential energy unless it is converted to kinetic energy. How can the resistor make use of this (potential) energy? What is the mechanism by which the resistor turns potential energy into heat?
Your initial circuit is like this: So you get 2A flowing and a power of 20W dissipated in the resistor. Then you double the voltage and the resistance: The two batteries add up to single source of 20V. The two resistors add up to a total resistance of 10 $\Omega$ . So, as you correctly state, the current is the same as before (2A). Therefore the power is now 40W But this power is shared between the two resistors: 20W each, exactly as before. You might also note that the potential at the point between the resistors is 10V, so each resistor has 10V across it, exactly as before. Really all you've done is doubled up the circuit so that you have twice of what you had before.
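The arithmetic above can be spelled out in a few lines (values taken from the question):

```python
# original circuit: one battery, one resistor
V, R = 10.0, 5.0
I = V / R               # Ohm's law
P = I**2 * R            # power dissipated
assert (I, P) == (2.0, 20.0)

# doubled circuit: 20 V across two 5-ohm resistors in series
V2, R2 = 2 * V, 2 * R
I2 = V2 / R2            # same 2 A as before
P_total = I2**2 * R2    # 40 W in total...
P_each = I2**2 * R      # ...but still 20 W in each resistor
assert (I2, P_total, P_each) == (2.0, 40.0, 20.0)
```

So the doubled circuit is just two copies of the original: total power doubles, but each resistor dissipates exactly what it did before.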
{ "source": [ "https://physics.stackexchange.com/questions/489689", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/233951/" ] }
490,288
I am not asking why an intrinsic property, like spin, can have more than a single value. I understand particles (electrons) can come to existence with either up or down spin. I am asking why it can change while the particle exists. Electrons are defined in the SM as elementary particles, and their intrinsic properties include both EM charge and spin. The electron is a subatomic particle, symbol e− or β−, whose electric charge is negative one elementary charge. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. The EM charge of the electron is defined as -1e, and the spin as 1/2. Electrons have an electric charge of −1.602×10^−19 coulombs,[66] which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge. The electron has an intrinsic angular momentum or spin of 1/2.[66] This property is usually stated by referring to the electron as a spin-1/2 particle. https://en.wikipedia.org/wiki/Electron In quantum mechanics and particle physics, spin is an intrinsic form of angular momentum carried by elementary particles, composite particles (hadrons), and atomic nuclei.[1][2] Although the direction of its spin can be changed, an elementary particle cannot be made to spin faster or slower. In addition to their other properties, all quantum mechanical particles possess an intrinsic spin (though this value may be equal to zero). https://en.wikipedia.org/wiki/Spin_(physics) The spin transition is an example of a transition between two electronic states in molecular chemistry. The ability of an electron to transit from a stable to another stable (or metastable) electronic state in a reversible and detectable fashion makes these molecular systems appealing in the field of molecular electronics. 
https://en.wikipedia.org/wiki/Spin_transition So basically an electron can change its spin from up to down or vice versa, though it is an intrinsic property. An electron's EM charge cannot change. In science and engineering, an intrinsic property is a property of a specified subject that exists itself or within the subject. So both EM charge and spin are intrinsic properties of electrons. Electrons come into existence with a certain EM charge and spin. Still, the EM charge is unchanged as long as the electron exists, but the spin can change. I do understand that electrons can have intrinsic properties that can have either a single value or a set of values. I do understand that some electrons come into existence with EM charge and spin up. Some electrons come into existence with EM charge and spin down. What I do not understand is how spin can change while the electron still exists, whereas EM charge cannot, though both are intrinsic properties. Do we know that when an electron undergoes a spin flip (spin transition), the electron that originally had spin up is the same quantum system that after the spin transition has spin down? Can it be that the electron with spin up ceases to exist (vacuum fluctuation), and then another electron comes into existence with spin down? Why do we say that the electron that had spin up (which is an intrinsic property) is the same quantum system as the electron that later on (after the spin flip) has spin down? After the big bang, at the baryon asymmetry, some electrons came into existence with spin up and some with spin down. Do we call these the same electrons? Is spin the only intrinsic property of the electron that can change (like helicity)? Question: How can an intrinsic property of an electron change (spin flip)? Are there any intrinsic properties (of elementary particles) that do have multiple values available but still can't change?
It doesn't matter. Suppose two electrons approach each other, exchange a photon, and leave with different spins. Are these "the same electrons" as before? This question doesn't have a well-defined answer. You started with some state of the electron quantum field and now have a different one; whether some parts of it are the "same" as before are really up to how you define the word "same". Absolutely nothing within the theory itself cares about this distinction. When people talk about physics to other people, they use words in order to communicate effectively. If you took a hardline stance where any change whatsoever produced a "different" electron, then it would be very difficult to talk about low-energy physics. For example, you couldn't say that one atom transferred an electron to another, because it wouldn't be the "same" electron anymore. But if you said that electron identity was always persistent, it would be difficult to talk about very high-energy physics, where electrons are freely created and destroyed. So the word "same" may be used differently in different contexts, but it doesn't actually matter. The word is a tool to describe the theory, not the theory itself. As a general comment: you've asked a lot of questions about how words are used in physics, where you take various quotes from across this site out of context and point out that they use words slightly differently. While I appreciate that you're doing this carefully, it's not effective by itself -- it's better to learn the mathematical theory that these words are about . Mathematics is just another language, but it's a very precise one, and that precision is just what you need when studying something as difficult as quantum mechanics. Another question, which I think you implied in your (many) questions, is: under what circumstances are excitations related by changes in intrinsic properties called the same particle? Spin up and spin down electrons are related by rotations in physical space. 
But protons and neutrons can be thought of as excitations of the "nucleon" field, which are related by rotations in "isospin space". That is, a proton is just an "isospin up nucleon" and the neutron is "isospin down", and the two can interconvert by emitting leptons. So why do we give them different names? Again, at the level of the theory, there's no actual difference. You can package up the proton and neutron fields into a nucleon field, which is as simple as defining $\Psi(x) = (p(x), n(x))$ , but the physical content of the theory doesn't change. Whether we think of $\Psi$ as describing one kind of particle or two depends on the context. It may be useful to work in terms of $\Psi$ when doing high-energy hadron physics, but it's useful to work in terms of $p$ and $n$ when doing nuclear physics, where the difference between them is important. It always comes down to what is useful in the particular problem you're studying, which can be influenced by which symmetries are broken, what perturbations apply, what is approximately conserved by the dynamics, and so on. It's just a name, anyway.
{ "source": [ "https://physics.stackexchange.com/questions/490288", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/132371/" ] }
490,290
Suppose there were a black-box device into which saline water flows, and whose output is salt and desalinated water, with the desalinated water colder than the incoming saline water. Is this technically possible?
{ "source": [ "https://physics.stackexchange.com/questions/490290", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/233141/" ] }
490,426
Aluminium being such a good conductor, how is it possible that it helps me keep my food warm? After all, it should conduct the heat that is inside to the outside and have no effect (maybe even cool the food faster by increasing the surface area). Then why is it that we wrap our food with aluminium foil? How does it keep my food warm?
Being a shiny surface, the aluminium sheet reflects radiant heat and reduces the heat loss by radiation by as much as $90\%$. Being impermeable, the sheet stops the movement of hot air from the vicinity of the surface of the food into the surroundings by convection currents. This also has the effect of reducing the rate at which water evaporates from the surface of the food, evaporation requiring an input of heat from the food. However, as you point out, aluminium is a good conductor of heat and so does not reduce heat loss by this mechanism, although it does trap a layer of air between the sheet and the food. This does reduce the loss of heat by conduction, as air is a bad conductor of heat. You may have seen these heat-loss-reducing properties exploited in the "space blankets" handed out at the end of a marathon.
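To put a rough number on the radiation claim: the sketch below is my own estimate, with assumed temperatures and emissivities, treating the food as a small grey body radiating into a large room.

```python
sigma = 5.67e-8                # Stefan-Boltzmann constant, W/(m^2 K^4)
T_food, T_room = 333.0, 293.0  # ~60 C food, ~20 C surroundings (assumed)

def net_radiated(emissivity):
    # net radiative heat loss per unit area for a grey body in a large room
    return emissivity * sigma * (T_food**4 - T_room**4)

loss_uncovered = net_radiated(0.9)   # typical matte food surface (assumed)
loss_foil = net_radiated(0.05)       # shiny aluminium (assumed)
print(f"reduction: {1 - loss_foil / loss_uncovered:.0%}")
```

With these assumed emissivities the radiative loss drops by roughly 90-95%, consistent with the figure quoted above.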
{ "source": [ "https://physics.stackexchange.com/questions/490426", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
490,574
If I stand in front of a train and throw a penny at it, the penny will bounce back at me. For the penny to reverse its direction, at some point its velocity must go to zero. This is the point at which it hits the train. Two objects in contact have the same velocity, so the train must come to a stop for the penny to change its direction. I assume I'm getting some principles wrong. Is it because I assumed a perfectly rigid body, when in practice the train actually deforms ever so slightly?
If you assume rigid bodies (no deformation) then actually the coin never needs to have zero velocity, since it can instantaneously change direction. In reality, though, the centre of mass must pass through zero velocity (in your frame of reference). However, this does not mean that the train will stop! It will slightly decrease its speed due to the change of the coin's momentum, but this will be unmeasurably small. What will happen is that the point of contact will stop (kinda, see below), and the amount of time it is stopped for (or close to it) will be tiny because of how light the coin is (you can play the same game with a fly). This will leave a small dent in the front of the train, but actually most of that dent will bounce back due to the flexibility of the metal. For a coin there will likely be a remaining dent, but for a fly there will be no lasting damage to the metal and the only effect of the initial dent will have been a small sound (and a flat fly). Now, let us look at the problem on a smaller scale. For rigid bodies we assume that matter is continuous and that touching really means touching. However, reality is different: matter is made of atoms, and touching means electrostatic repulsion. This means that the train will apply force to the coin (and vice versa) when the atoms are not touching! The atoms of the train never have to actually stop! Otherwise, the same basic arguments still hold, and for a sufficiently fast train a coin will leave a dent. However, if we are talking about trains and coins on atomic levels then our model has become too complicated... Exercise: you could try to calculate the impulse (integrated force) or even the peak force on the coin due to the metal front of the train. You will have to estimate the speed of the train, the mass of the coin, and the depth of the indent in the train front. I'm guessing even for a coin the peak force will be pretty high!
{ "source": [ "https://physics.stackexchange.com/questions/490574", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/236443/" ] }
490,796
When the centripetal force on an orbiting body disappears (e.g. if the body is a ball and the force was exerted by a string and the string rips, or, more unrealistically, if the body is the earth and the sun suddenly disappeared), the body continues in linear motion. How is angular momentum conserved here?
If the Sun were to magically disappear then the Earth would fly off at a tangent to its orbit. The trajectory would look like this: The green dot shows the position of the Earth at the instant the Sun disappears. The distance from the Sun, $d$ , is the Earth's orbital distance and the velocity $v$ is the Earth's orbital velocity. When the Sun disappears the Earth heads off in a straight line at constant velocity as shown by the horizontal dashed line, so after some time $t$ it has moved a distance $x = vt$ as I've marked on the diagram. The question is now how the angular momentum can be conserved. The answer is that angular momentum is given by the vector equation: $$ \mathbf L = \mathbf r \times m\mathbf v $$ where $\mathbf r$ is the position vector, $\mathbf v$ is the velocity vector and $\times$ is the cross product. We are going to end up with the vector $\mathbf L$ pointing out of the page and the magnitude of $L$ is given by: $$ |\mathbf L| = m\,|\mathbf r|\,|\mathbf v|\,\sin\theta \tag{1} $$ but looking at our diagram we see that: $$ \sin\theta = \frac{d}{|\mathbf r|} $$ and if we substitute this into our equation (1) for the angular momentum we get: $$ |\mathbf L| = m\,|\mathbf r|\,|\mathbf v|\,\frac{d}{|\mathbf r|} = m\,|\mathbf v|\,d \tag{2} $$ And this equation tells us that the angular momentum is constant i.e. it depends only on the constant velocity $\mathbf v$ and the original orbital distance $d$ . Although it initially seems odd an object doesn't have to be moving in a circle to have a constant angular momentum. In fact for any system the angular momentum is always constant unless some external torque is acting on the system.
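Equation (2) can also be confirmed numerically. The following sketch (my addition, with roughly Earth-like values chosen arbitrarily) checks that the planar angular momentum $L_z = m(x v_y - y v_x)$ of the straight-line trajectory stays fixed at magnitude $m\,v\,d$:

```python
m, v, d = 5.97e24, 3.0e4, 1.5e11  # roughly Earth-like values (assumed)

# straight-line motion after the Sun vanishes: position (v*t, d), velocity (v, 0)
for t in (0.0, 1.0e6, 1.0e7, 1.0e8):
    x, y = v * t, d
    vx, vy = v, 0.0
    L_z = m * (x * vy - y * vx)   # z-component of r x m*v
    assert abs(abs(L_z) - m * v * d) <= 1e-6 * m * v * d
```

The sign of $L_z$ just encodes the orientation of the motion; its magnitude never changes, exactly as equation (2) says.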
{ "source": [ "https://physics.stackexchange.com/questions/490796", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/236518/" ] }
491,058
When I forget to water my plants and their soil becomes very dry, during the next watering I can see that the soil becomes hydrophobic. I can even see pockets of air between the repelled blob of water and the soil. On the contrary, when the soil is moist, it very quickly absorbs the water. This goes against the intuition that diffusion, in order to equalize the soil-water concentration, would create the opposite effect. What is the physical explanation of this phenomenon? I tried to search on Google, but the explanations are of low quality and very "high level", without real physics involved.
This effect appears like a paradox, as dry soil makes a very bad water conductor. Two effects prevent water from infiltrating: Air in the soil pores cannot escape: dry soil includes lots of air bubbles in small to large pores. If you expect water to get in, how do you think the air could get out? Often it gets stuck, and no water can infiltrate anymore. This effect also leads to dangerous flash floods even during droughts, when strong rain events hit super-dry soil. Sticky capillary forces: Water forms droplets and becomes sticky due to surface tension when in contact with lots of air. The effect is even stronger in smaller pores. So it is very hard for water to create a flowing stream. Under these circumstances, water cannot simply diffuse, because the random walk is hindered by air bubbles and capillary forces. As soon as the soil is saturated, water and solutes can actually diffuse. As a consequence, water flows faster in wet soil, since it can develop a continuous stream. The dashed line in the graph below expresses this fact by showing the hydraulic conductivity (i.e. how fast water infiltrates) as a function of soil moisture (from dry to wet). --- EDIT: The actual behaviour of water infiltration is a superposition of the two effects described above. Very dry soil is often compacted and therefore exhibits small pores. Small pores cannot absorb huge amounts of water quickly enough, because (1) the only path for the air to escape is upwards, which is exactly where the incoming rainfall is sustaining a water cover on the surface, and (2) water is hesitant to fill the pores, since the capillary counterforces are inversely related to the pore radius. One might argue that air bubbles can still rise in water, but in a small-pore domain, bubble formation and rising are slow and inefficient. Eventually, this tragedy can lead to flash floods.
{ "source": [ "https://physics.stackexchange.com/questions/491058", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/46147/" ] }
491,589
A friend of mine told me that if you were to stand beside a plate of metal that is millions of degrees hot, inside a 100% vacuum, you would not feel its heat. Is this true? I understand the reasoning that there is no air, thus no convection, and unless you're touching it, there's no conduction either. I'm more so asking about thermal radiation emitted by it.
I'm more so asking about thermal radiation emitted by it. Here's a quantitative estimate. Suppose that the hot plate remained intact long enough to do the experiment. For a rough estimate, we can treat the hot metal plate as a blackbody. According to Wien's displacement law , the electromagnetic radiation emitted by a blackbody at temperature $T$ is strongest at the wavelength $$ \lambda = \frac{b}{T} \quad b\approx 2.9\times 10^{-3}\ \mathrm{m\cdot K}. \tag{1} $$ The total power emitted per unit area is given by the Stefan-Boltzmann law $$ \frac{P}{A}= \sigma T^4 \quad \sigma\approx 5.7\times 10^{-8}\ \mathrm{\frac{W}{m^2\cdot K^4}}. \tag{2} $$ For $T=10^6\ \mathrm K$ , these estimates give $$ \lambda\approx 2.9\times 10^{-9}\ \mathrm m $$ and $$ \frac{P}{A}\approx 5.7\times 10^{16}\ \mathrm{\frac{W}{m^2}}. $$ This wavelength is in the X-ray range, and this power level is more than a trillion times the power a person on earth would receive from the sun if there were no clouds and no air. Would you feel it? I'm not sure. Probably only very briefly.
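Both estimates are easy to reproduce with the same rounded constants used above:

```python
b = 2.9e-3      # Wien's displacement constant, m*K (rounded, as in the answer)
sigma = 5.7e-8  # Stefan-Boltzmann constant, W/(m^2 K^4) (rounded)
T = 1.0e6       # plate temperature, K

peak_wavelength = b / T        # Wien's law: ~2.9e-9 m, i.e. X-rays
power_per_area = sigma * T**4  # Stefan-Boltzmann law: ~5.7e16 W/m^2
print(peak_wavelength, power_per_area)
```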
{ "source": [ "https://physics.stackexchange.com/questions/491589", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/120768/" ] }
492,035
Someone told me that it is not inertia, but I think it is inertia, because it will rotate forever. In my understanding, inertia is the constant motion of an object without external force. Am I wrong?
Is it inertia that a rotating object will rotate forever without external force? Someone told me that this is not inertia [...] Well, sort of - it’s somewhat correct to say it is inertia, and somewhat correct to say it isn’t. One has to be precise with language! But there is some truth to what you were told. “Inertia” generally refers to the tendency of objects to continue moving in a straight line with a fixed velocity unless an external force is applied to them. It is basically a single word that encapsulates Newton’s first law of motion. It is a very fundamental law of nature, and at some level, no one really knows why it’s true . The different parts of the rotating object are definitely not moving in a straight line, and it’s not the case that no forces are acting on them. So there is more than just inertia at play. What is happening with a rotating rigid body is that each part of the body “wants” to maintain its fixed velocity according to the law of inertia, but the rigidity of the body is preventing it from doing so (since the pieces of the body have different velocity vectors so with fixed velocities they would all fly off in different directions). At the microscopic level, each piece of the body is applying forces to the adjacent pieces. Those forces are causing those adjacent pieces to change their velocity, according to Newton’s second law of motion. The end result of this highly complicated process is surprisingly simple: the body rotates. But the underlying cause is more than just inertia. Now, I said it’s also somewhat correct to say that it is inertia that’s making bodies keep rotating. 
This is because there is also a rotational analogue of inertia that in informal speech among physicists might still be referred to as “inertia” (although calling it rotational inertia is more appropriate, and it will also commonly be described under the terms “moment of inertia” or “conservation of angular momentum”, or even more fancy terms like “rotational symmetry of space + Noether’s theorem”, although each of these terms describes something a bit more complicated than just rotational inertia). This rotational inertia is the tendency of rotating rigid bodies to continue rotating at a fixed angular velocity in their center of mass frame, unless a torque is applied to them. Rotational inertia differs from ordinary “linear” inertia in that it is a derived principle: it can be derived mathematically from Newton’s laws of motion, so in that sense it has (in my opinion) a slightly less fundamental status among the laws of physics. Rigid bodies don’t “want” to keep rotating in the same fundamental sense that particles “want” to keep moving in a straight line with a fixed velocity - they do end up rotating but it’s because of a process we understand well and can analyze mathematically (starting from Newton’s laws), rather than some mysterious natural phenomenon we observe experimentally and accept as an axiom without being able to say much more about why it’s true.
{ "source": [ "https://physics.stackexchange.com/questions/492035", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/176092/" ] }
492,112
The electric field $\bf{E}$ represents how much force would act on a particle at a certain position per unit charge. However, if we actually place a particle in that position, the electric field will have a singularity there (because of the $\frac{1}{r^2}$ in Coulomb's law). Isn't this kind of a paradox? In my eyes, this makes the concept of electric field useless, because it cannot be used to calculate the force on a particle.
It's true that a point particle with finite charge is problematic in electromagnetism because of the infinite field and associated energy near such a particle. However, we don't need that concept in order to make a defining statement about the electric field. Rather, we can use $$ {\bf E} = \lim_{r \rightarrow 0} \frac{\bf f}{q} $$ where $\bf f$ is the force on a charged sphere of radius $r$ with a finite charge density $\rho$ independent of $r$ , and $q = (4/3) \pi r^3 \rho$ is the charge on the sphere. This charge $q$ will tend to zero as the radius does, and it does so sufficiently quickly that no infinities arise and everything is ok.
{ "source": [ "https://physics.stackexchange.com/questions/492112", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/146744/" ] }
492,353
Everyone is familiar with the whirring sound of jet engines when seeing an aircraft taking off from a nearby airport. It is distinctly very loud on the ground and one can hear it even when the airplane is miles away. Although one can hear a 'white noise' like sound when inside an airplane, the engines don't sound very loud in spite of being just meters away from them. I understand that the cabin is well insulated from the outside, but I would expect to hear a similar whirring sound of the engines. So what is the phenomenon that makes jet engines sound louder on earth compared to inside the aircraft cabin?
Sound is a pressure & velocity wave in fluid medium, i.e. air. Air molecules wiggle back and forth and bump into other air molecules so they wiggle too so you have a whole chain of wiggling air molecules. The jet engine moves air molecules A LOT, hence it's extremely loud. As the sound moves away from the jet engine the energy disperses over a larger and larger area and so the sound pressure level drops. The pressure drops by half every time you double the distance. That's 6 dB per doubling of distance or 20 dB per decade. If it's 120 dB at 10 meters, it's still 100 dB at 100m, 80 dB at 1km and 60 dB at 10km. That's why you can easily hear it on the ground. There is no easy way for sound to get into the cabin, because the cabin is air tight and fully sealed. The air molecules outside can wiggle like crazy but the air molecules inside don't care. It's still fairly loud in the cabin but that's due to mechanical sound transmission through the wings and the fuselage. The vibration of the jet engine wiggles the wings which will wiggle the fuselage which will wiggle the panels which will wiggle the air molecules inside the cabin, which will wiggle your ear drum. Planes are carefully designed to minimize the transmission but the amount of energy from the jet engine is enormous, so even if you eliminate 99.999% of the energy, it's still quite loud and getting to 99.9999% or 99.99999% is difficult and very expensive.
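The quoted fall-off is just free-field spherical spreading, $L(r) = L_0 - 20\log_{10}(r/r_0)$; a quick check (my addition, with the answer's assumed reference level of 120 dB at 10 m) reproduces the numbers above:

```python
import math

def spl(level_ref_db, r_ref, r):
    # free-field spherical spreading: -6 dB per doubling, -20 dB per decade
    return level_ref_db - 20.0 * math.log10(r / r_ref)

for r, expected in ((100.0, 100.0), (1000.0, 80.0), (10000.0, 60.0)):
    assert abs(spl(120.0, 10.0, r) - expected) < 1e-9
```

Real propagation over kilometres also involves atmospheric absorption and ground effects, so this is only the geometric part of the attenuation.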
{ "source": [ "https://physics.stackexchange.com/questions/492353", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/197242/" ] }
492,357
An aircraft is flying horizontally in a circle of radius $b$ with constant speed $u$ at an altitude $h$ . A radar tracking unit is located at $C$ . Write expressions for the components of the velocity of the aircraft in the spherical coordinates of the radar station for a given position $\beta.$ At some point in the solution to this problem they state that $\theta=\beta/2$ . I don't see how that is evident from the figure. Can anyone show me how this is deduced? I also wonder why $$\sin\phi=\frac{h}{R}\implies\dot{\phi}=-\frac{h\dot{R}}{rR}.$$ If I differentiate both sides I get $$\dot{\phi}\cos{\phi}=-\frac{h\dot{R}}{R^2}\implies\dot{\phi}=-\frac{h\dot{R}}{R^2\cos{\phi}}.$$ What am I missing?
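For what it's worth, both identities can be checked numerically under the usual reading of the figure: if (as the $\theta=\beta/2$ claim suggests) the radar sits on the circle traced by the aircraft's ground track, the tangent-chord angle theorem gives $\theta=\beta/2$; and if $r$ denotes the horizontal distance, then $\cos\phi=r/R$, so the book's $\dot\phi$ and the one obtained by direct differentiation are the same expression. The specific values below are arbitrary test inputs:

```python
import math

b, h, beta = 5.0, 2.0, 0.7   # circle radius, altitude, position angle (arbitrary)

# Tangent-chord angle: center at the origin, radar's ground point P = (b, 0),
# tangent to the circle at P is the y direction. The chord from P to the
# aircraft makes angle beta/2 with that tangent.
ax, ay = b * math.cos(beta), b * math.sin(beta)
chord = (ax - b, ay)
theta = math.atan2(-chord[0], chord[1])   # angle from +y, opening toward -x
assert abs(theta - beta / 2) < 1e-12

# cos(phi) = r/R, hence R**2 * cos(phi) = r*R: the two forms of phi_dot agree.
r = 3.0                       # horizontal distance (arbitrary)
R = math.hypot(r, h)
phi = math.asin(h / R)
assert abs(R**2 * math.cos(phi) - r * R) < 1e-12
print("both identities hold")
```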
{ "source": [ "https://physics.stackexchange.com/questions/492357", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/177310/" ] }
492,711
It's been mentioned elsewhere on this site that one cannot define a position operator for the one-photon sector of the quantized electromagnetic field, if one requires the position operator have certain formal properties. This is a theorem that holds only for massless particles of helicity $|\lambda| \geq 1$ , in particular it does not apply to massless scalars. A lot of people, particularly mathematical physicists or older quantum field theory textbooks, seem to interpret this to mean that we should never speak of the position of anything in relativistic quantum field theory. But it still seems possible to say something about where a photon is. For example, if I have an ideal cavity and excite the lowest mode with one photon, I know that the photon is in that cavity. Furthermore, I can localize the photon arbitrarily well using smaller and smaller cavities. When an optics experiment is done using a laser beam, it is perfectly meaningful to talk about photons being in the beam. We can also speak of a photon being emitted by an atom, in which case it is obviously localized near the atom when the emission occurs. Furthermore, in the usual analysis of the double slit experiment one has, at least implicitly, a wavefunction for the photon, which successfully recovers the high school result. When one talks about scattering experiments, such as in photon-photon scattering, one has to talk about localized wavepackets in order to describe a real beam. Furthermore, unlike the massive case, where the Compton wavelength provides a characteristic length, there is no characteristic length for photons, suggesting that beams can be made arbitrarily narrow in principle: the complaint that you would start causing pair production below the Compton wavelength doesn't apply. In other words, while the theorem is airtight, it doesn't seem to impose any practical limitations on things we would actually like to do experimentally. 
But you can find very strange-sounding descriptions of what this theorem is telling us online. For example, on PhysicsForums you can read many obviously wrong statements (e.g. here, here, and here) such as: The photon has no rest frame. Computing an expectation of position for such an object is nonsense. One good reason is that photons are massless and move at the speed of light and have no rest frame! Then also they are bosons, so you can't tell which are which. These are wrong because they also apply to massless scalars, for which there does exist a (Newton-Wigner) position operator. It also just doesn't make sense -- if you can't measure the position of something if you're not in its rest frame, then how can I catch a ball? In relativistic quantum (field) theory there is no concept of single photons. You cannot define "position" for an electromagnetic field or of photons, which are certain states of this field (namely single-photon Fock states). Nobody thinking about classical electromagnetic waves would ever come to the idea to ask what the position of a field might be. This is wrong because the one-particle sector of a quantum field theory is perfectly well-defined, and it is perfectly valid to define operators acting on it alone. It can be shown that in the context of relativistic quantum theory the position operator leads to violations of causality. This is rather vague because quantum field theory is causal, so it's unclear how "the position operator" overturns that. It could just be that PhysicsForums is an exceptionally low-quality site, but I think the real problem is that interpreting this theorem is actually quite tricky. What nontrivial physical consequences does the nonexistence of a formal photon position operator have?
We could spend forever playing whac-a-mole with all of the confusing/confused statements that continue popping up on this subject, on PhysicsForums and elsewhere. Instead of doing that, I'll offer a general perspective that, for me at least, has been refreshingly clarifying. I'll start by reviewing a general no-go result, which applies to all relativistic QFTs, not just to photons. Then I'll explain how the analogous question for electrons would be answered, and finally I'll extend the answer to photons. The reason for doing this in that order will probably be clear in hindsight. A general no-go result First, here's a review of the fundamental no-go result for relativistic QFT in flat spacetime: In QFT, observables are associated with regions of spacetime (or just space, in the Schrödinger picture). This association is part of the definition of any given QFT. In relativistic QFT, the Reeh-Schlieder theorem implies that an observable localized in a bounded region of spacetime cannot annihilate the vacuum state. Intuitively, this is because the vacuum state is entangled with respect to location. Particles are defined relative to the vacuum state. By definition, the vacuum state has zero particles, so the Reeh-Schlieder theorem implies that an observable representing the number of particles in a given bounded region of spacetime cannot exist: if an observable is localized in a bounded region of spacetime, then it can't always register zero particles in the vacuum state. That's the no-go result, and it's very general. It's not restricted to massless particles or to particles of helicity $\geq 1$ . For example, it also applies to electrons. The no-go result says that we can't satisfy both requirements: in relativistic QFT, we can't have a detector that is both perfectly reliable, localized in a strictly bounded region. But here's the important question: how close can we get to satisfying both of these requirements? 
Warm-up: electrons First consider the QFT of non-interacting electrons, with Lagrangian $L\sim \overline\psi(i\gamma\partial+m)\psi$ . The question is about photons, and I'll get to that, but let's start with electrons because then we can use the electron mass $m$ to define a length scale $\hbar/mc$ to which other quantities can be compared. To construct observables that count electrons, we can use the creation/annihilation operators. We know from QFT $101$ how to construct creation/annihilation operators from the Dirac field operators $\psi(x)$ , and we know that this relationship is non-local (and non-localizable) because of the function $\omega(\vec p) = (\vec p^2+m^2)^{1/2}$ in the integrand, as promised by Reeh-Schlieder. However, for electrons with sufficiently low momentum, this function might as well be $\omega\approx m$ . If we replace $\omega\to m$ in the integrand, then the relationship between the creation/annihilation operators becomes local. Making this replacement changes the model from relativistic to non-relativistic, so the Reeh-Schlieder theorem no longer applies. That's why we can have electron-counting observables that satisfy both of the above requirements in the non-relativistic approximation. Said another way: Observables associated with mutually spacelike regions are required to commute with each other (the microcausality requirement). The length scale $\hbar/mc$ is the scale over which commutators of our quasi-local detector-observables fall off with increasing spacelike separation. Since the non-zero tails of those commutators fall off exponentially with characteristic length $\hbar/mc$ , we won't notice them in experiments that have low energy/low resolution compared to $\hbar/mc$ . Instead of compromising strict localization, we can compromise strict reliability instead: we can construct observables that are localized in a strictly bounded region and that almost annihilate the vacuum state. 
Such an observable represents a detector that is slightly noisy. The noise is again negligible for low-resolution detectors — that is, for detector-observables whose localization region is much larger than the scale $\hbar/mc$ . This is why non-relativistic few-particle quantum mechanics works — for electrons. Photons Now consider the QFT of the electromagnetic field by itself, which I'll call QEM. All of the observables in this model can be expressed in terms of the electric and magnetic field operators, and again we know from QFT $101$ how to construct creation/annihilation operators that define what "photon" means in this model: they are the positive/negative frequency parts of the field operators. This relationship is manifestly non-local. We can see this from the explicit expression, but we can also anticipate it more generally: the definition of positive/negative frequency involves the infinite past/future, and thanks to the time-slice principle , this implies access to arbitrarily large spacelike regions. In QEM, there is no characteristic scale analogous to $\hbar/mc$ , because $m=0$ . The ideas used above for electrons still work, except that the deviations from localization and/or reliability don't fall off exponentially with any characteristic scale. They fall off like a power of the distance instead. As far as this question is concerned, that's really the only difference between the electron case and the photon case. That's enough of a difference to prevent us from constructing a model for photons that is analogous to non-relativistic quantum mechanics for electrons, but it's not enough of a difference to prevent photon-detection observables from being both localized and reliable for most practical purposes. The larger we allow its localization region to be, the more reliable (less noisy) a photon detector can be. 
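To attach a number to the scale $\hbar/mc$ used above (the constants are standard values; the exponential-versus-power-law contrast is schematic, not a computation from the field theory itself):

```python
import math

hbar = 1.054_571_8e-34    # J*s
m_e  = 9.109_383_7e-31    # electron mass, kg
c    = 2.997_924_58e8     # m/s

compton = hbar / (m_e * c)   # reduced Compton wavelength of the electron
print(f"hbar/mc for the electron: {compton:.3e} m")   # ~3.86e-13 m

# Schematic contrast: an exponential tail vs a power-law tail, a few
# characteristic lengths out. The exponential becomes negligible fast;
# the power law does not.
for n in (1, 5, 10):
    print(f"{n:>2} scales out: exp tail {math.exp(-n):.2e}, cubic tail {n**-3.0:.2e}")
```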
Our definition of how-good-is-good-enough needs to be based on something else besides QEM itself, because QEM doesn't have any characteristic length-scale of its own. That's not an obstacle to having relatively well-localized photon-observables in practice, because there's more to the real world than QEM. Position operators What is a position operator? Nothing that I said above refers to such a thing. Instead, everything I said above was expressed in terms of observables that represent particle detectors (or counters). I did that because the starting point was relativistic QFT, and QFT is expressed in terms of observables that are localized in bounded regions. Actually, non-relativistic QM can also be expressed that way. Start with the traditional formulation in terms of the position operator $X$ . (I'll consider only one dimension for simplicity.) This single operator $X$ is really just a convenient way of packaging-and-labeling a bunch of mutually-commuting projection operators, namely the operators $P(R)$ that project a wavefunction $\Psi(x)$ onto the part with $x\in R$ , cutting off the parts with $x\notin R$ . In fancy language, the commutative von Neumann algebra generated by $X$ is the same as the commutative von Neumann algebra generated by all of the $P(R)$ s, so aside from how things are labeled with "eigenvalues," they both represent the same observable as far as Born's rule is concerned. If we look at how non-relativistic QM is derived from its relativistic roots, we see that the $P(R)$ s are localized within the region $R$ by QFT's definition of "localized" — at least insofar as the non-relativistic approximation is valid. In this sense, non-relativistic single-particle QM is, like QFT, expressed in terms of observables associated with bounded regions of space. The traditional formulation of single-particle QM obscures this. 
Here's the point: when we talk about a position operator for an electron in a non-relativistic model, we're implicitly talking about the projection operators $P(R)$ , which are associated with bounded regions of space. The position operator $X$ is a neat way of packaging all of those projection operators and labeling them with a convenient spatial coordinate, so that we can use concise statistics like means and standard deviations, but you can't have $X$ without also having the projection operators $P(R)$ , because the existence of the former implies the existence of the latter (through the spectral theorem, or through the von-Neumann-algebra fanciness that I mentioned above). So... can a photon have a position operator? If by position operator we mean something like the projection operators $P(R)$ , which are both (1) localized in a strictly bounded region and (2) strictly reliable as "detectors" of things in that region, then the answer is no. A photon can't have a position operator for the same reason that a photon can't have a non-relativistic approximation: for a photon, there is no characteristic length scale analogous to $\hbar/mc$ to which the size of a localization region can be compared, without referring to something other than the electromagnetic field itself. What we can do is use the usual photon creation/annihilation operators to construct photon-detecting/counting observables that are not strictly localized in any bounded region but whose "tails" are negligible compared to anything else that we care about (outside of QEM), if the quasi-localization region is large enough. What is a physical consequence? What is a physical consequence of the non-existence of a strict position operator? Real localized detectors are necessarily noisy. The more localized they are, the noisier they must be. 
Reeh-Schlieder guarantees this, both for electrons and for photons, the main difference being that for electrons, the effect decreases exponentially as the size of the localization region is increased. For photons, it decreases only like a power of the size.
{ "source": [ "https://physics.stackexchange.com/questions/492711", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/83398/" ] }
492,887
I have regularly heard that the Michelson-Morley experiment demonstrates that the speed of light is constant in all reference frames. By doing some research I have found that it actually demonstrated that the luminiferous aether probably didn't exist and that the speed of light didn't vary depending on which direction the planet was travelling in. I don't see how it demonstrated that motion towards a light source, for instance, doesn't affect the observer's speed relative to the light, as there were no moving parts in the experiment. The other sources I've looked at which say that the Michelson-Morley experiment proved nothing of the sort, like this one: Is the second postulate of Einstein's special relativity an axiom? and this one: How can we show that the speed of light is really constant in all reference frames? tend to say that Maxwell's equations were actually more significant to Einstein as they predict that light moves at a constant velocity, and this velocity has to be relative to something (or in relativity's case, everything). That something was thought to be the aether, but in the absence of that why could it not be relative to whatever emitted it? It seems like a more obvious immediate conclusion to come to than the idea that it's the same relative to everyone and all the counterintuitive results that ensue. Another idea is that the speed of light is the universal speed limit and therefore must have a fixed value just to work under Galilean relativity. But then that argument goes in circles: "Why can't you go faster than the speed of light?" "Because otherwise your mass becomes infinite." "Why does your mass become infinite?" "Because of Einstein's special relativity." But this is based on the original fact that you can't go faster than the speed of light, so there's no argument I can find which completely answers why the speed of light has to be constant, other than that it has been regularly tested since. 
So my questions are: Is there something I'm missing about the Michelson-Morley experiment or Maxwell's equations which explains my objections and definitively shows that the speed of light is constant and it is impossible to go faster than it? If not, is there any other specific example, ideally which would have been there for Einstein, which I can use to explain to people with no knowledge of relativity why it is the case?
For a basic treatment of the Michelson-Morley experiment please see 1. It's not important to know the technical details of the experiment to answer your questions though. The only relevant thing is the result, so let me put it in basic terms since you seem to struggle with the "physics slang": While the total velocity of a ball thrown from a truck is the sum of the velocity of the ball relative to the truck and the velocity of the truck relative to the observer, the velocity of a light beam emitted from the truck is not. Rather, the velocity of the light beam seems completely independent of the velocity of the truck. Michelson and Morley didn't have a truck; they had the earth orbiting the sun. Please make it clear to yourself that this experimental fact can be explained by stating that the speed of light is constant. If I say to you the speed of light is constant in every frame of reference, then the above result isn't surprising at all to you. But you want more. You want me to prove to you that the speed of light is universally constant. I cannot. There will never be an experiment that shows that this axiom is universally true. How should one ever construct such an experiment; how should one, for example, test the theory in the Andromeda galaxy? It's impossible, but it doesn't matter: why not just stick with the axiom, as long as we can explain everything we see around us with it? As you already said, there's an interesting connection between the invariance of the speed of light and Maxwell's equations. One can indeed prove that the speed of light has to be constant, otherwise Maxwell's theory can't be true for all inertial frames. But this is no proof that can convince you either, since accepting Maxwell's equations is no different to accepting the invariance of the speed of light. Furthermore, the basis of Einstein's theory is not the invariance of the speed of light, but the invariance of the speed of action. 
Which cannot be concluded from Maxwell's theory, even though it's a reasonable guess. Physical theories are not provable. But as long as they comply with reality, we accept them as truths. Addendum: I recommend this short lecture for laymen by R. Feynman on the topic. Feynman and I present a very similar line of reasoning.
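The truck example can be made quantitative with the relativistic composition rule for collinear velocities, which replaces the naive sum $u+v$ :

```python
C = 299_792_458.0   # speed of light, m/s

def compose(u, v):
    """Relativistic composition of collinear velocities.
    For u*v << c^2 this is indistinguishable from the Galilean u + v."""
    return (u + v) / (1 + u * v / C**2)

truck = 30.0                      # m/s
ball = compose(truck, 10.0)       # ball thrown forward from the truck
light = compose(truck, C)         # light emitted forward from the truck
print(ball)                       # ~40 m/s, as everyday intuition says
print(light - C)                  # ~0: the beam still travels at c
```

The ball case differs from the Galilean answer only at the fifteenth decimal place, which is why nobody noticed before light was examined closely.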
{ "source": [ "https://physics.stackexchange.com/questions/492887", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/191560/" ] }
492,888
Tidal friction is commonly spoken of as the reason for the slowing of Earth's rotation. What if the moon didn't exist? We would still have ocean currents and wind due to the Coriolis effect and the resulting friction certainly must be significant. Is it possible to quantify the contribution of each source of friction?
{ "source": [ "https://physics.stackexchange.com/questions/492888", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/234892/" ] }
492,907
When was any one liter of Long Island Sound water last in the Atlantic Ocean? The Sound for this model is: area 1,268 mi²; mean depth 63 feet; length 110 miles; eastern opening into the Atlantic 9 miles wide; an average tide of 6 feet, twice a day. As a sailor and swimmer in the Sound, I picture it as a relatively closed polluted body refreshed by the Atlantic. But I started wondering just how often the water near me, 50 miles from Plum Gut, turns over. Tidal currents are 1 knot, or about a mile an hour. In other words, each day the water at my location flows west for 6 hours as the tide rises, then back east for 6 hours as it drains, at the rate of one mile an hour. The simplest model in my mind pictures a "plug" of Atlantic Ocean water traveling 6 miles west into the Sound before draining out - but of course, there's turbulence and mixing - but how much mixing? And how much mixing at any one place lengthwise along the body? My hope is that someone here commands a better understanding of how to model this.
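One deliberately crude starting point is the tidal-prism model: treat each tide as exchanging a 6-foot-thick layer of fully mixed water with the Atlantic. Both assumptions (complete mixing, and no return of the ebb water on the next flood) are wrong in the optimistic direction, so this gives a lower bound on the flushing time:

```python
MI2 = 2.589_988e6    # m^2 per square mile
FT  = 0.3048         # m per foot

area   = 1268 * MI2          # surface area
volume = area * 63 * FT      # mean depth 63 ft
prism  = area * 6 * FT       # volume exchanged per 6 ft tide

cycles = volume / prism      # tidal cycles to exchange one full volume
days   = cycles / 2          # two tides per day
print(f"{cycles:.1f} cycles, about {days:.0f} days")
```

Note that the area cancels, so the estimate reduces to mean depth over tidal range, 63/6 = 10.5 cycles. Real flushing, with incomplete mixing and return flow through Plum Gut, is typically much slower than this.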
{ "source": [ "https://physics.stackexchange.com/questions/492907", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/237385/" ] }
494,286
I know that this question has been asked here before, but I am a little bit confused with all the answers. So when we move, we apply force on the ground in the backward direction. So, is the ground applying a force on us in the forward direction by virtue of friction or by virtue of normal reaction? Some answers seem to suggest that it is the former, while some say that friction is the reaction force. I used to think that normal reaction is the force that only exists so that solid objects don't pass through each other.
Both are required for walking $^*$ . You need friction to accelerate when you want to start walking, stop walking, change speeds while walking, etc. This is because you need a horizontal force to change your horizontal speed. This force is friction. It arises due to interactions between your feet and the ground you walk on. Therefore, by Newton's third law, the ground is pushed on by friction in the opposite direction of your horizontal acceleration. However, don't discount the normal force. It is a vertical force (on level ground). Therefore this force is what keeps you from accelerating downward into the ground due to gravity. It also is one of the factors in determining how strong the previously mentioned friction force can be. A larger normal force typically means a larger possible friction force before sliding between your feet and the ground occurs. Therefore, without the normal force you wouldn't be able to walk either. $^*$ Of course other forces like gravity, internal forces in your body, etc. are also important for walking. The physics of walking can get pretty complex. However you just asked about these two forces (friction and normal force), so I will just focus on those two.
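The friction limit can be made concrete: on level ground the normal force is $N=mg$ , so the largest horizontal acceleration friction can supply is $\mu g$ , independent of the walker's mass. The value $\mu=0.7$ below is an assumed, illustrative coefficient for a shoe on dry pavement, not a measured one:

```python
g  = 9.81   # m/s^2
mu = 0.7    # assumed static friction coefficient, shoe on dry pavement

# Friction is at most mu*N = mu*m*g; dividing out the mass m gives a
# mass-independent acceleration limit before the foot slips.
a_max = mu * g
print(f"max horizontal acceleration before slipping: {a_max:.1f} m/s^2")
```

This is also why starting or stopping quickly on ice (small $\mu$ ) is so hard: the available horizontal force shrinks with the coefficient.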
{ "source": [ "https://physics.stackexchange.com/questions/494286", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/236411/" ] }
494,955
A bucket is rotating in a uniform circular motion. When the bucket is at the topmost point in the loop, the water does not fall. At this point both the tension and gravitational force is acting on the water, then it should be pulled with an even greater force. Then why does the water not fall? The same question can be asked about a motorcyclist performing a loop de loop. Why does he not fall on the topmost point when both gravity and normal reaction are acting downwards?
It is a common misconception that objects have to move in the direction of the force. This is false; the acceleration points in the direction of the force. This means the change in velocity points in the direction of the force. It is not the velocity that points in the direction of the force. At the top of the circle the water is definitely pushed down by both gravity and the normal force. However, the velocity of the water at the top of the circle is horizontal. Therefore, the velocity picks up a downward component. This doesn't remove the horizontal component though. The velocity just starts to point down as well as horizontal, and the circle continues. Note that this is also true for the bucket, so the water stays in the bucket. A similar system you can think of that you are probably familiar with is projectile motion. At the top of the trajectory the force points down, the velocity is horizontal, and the projectile continues on its parabolic path with both horizontal and vertical velocity. The difference between the projectile and the bucket is that the net force is constant for the projectile. The horizontal component of the velocity never changes. For the bucket the net force is always changing so that the motion is circular. The vertical and horizontal components of the velocity are always changing around the circle. The projectile is falling, but the water isn't purely falling. It's also being pushed by the normal force provided by the bucket.
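The bucket case also has a simple quantitative condition: at the top of the loop, gravity must not exceed the required centripetal force, $mg \le mv^2/r$ , giving a minimum speed $v_{min}=\sqrt{gr}$ . The 1 m radius below is just an illustrative arm-swing value:

```python
import math

g = 9.81  # m/s^2

def v_min_at_top(r):
    """Slowest speed at the top of a vertical circle of radius r for which
    gravity alone does not exceed the needed centripetal force m*v^2/r.
    Any slower and the water leaves the bucket at the top."""
    return math.sqrt(g * r)

print(f"r = 1 m swing: v_min = {v_min_at_top(1.0):.2f} m/s")   # ~3.1 m/s
```

Above this speed the bucket must push on the water (the normal force is positive), which is exactly the regime described in the answer.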
{ "source": [ "https://physics.stackexchange.com/questions/494955", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/236411/" ] }
495,576
Coulomb gave the law for the force between two static charges while considering them to be points in space. But the differential form of Gauss' Law talks about charge densities, a thing possible only if charges are smeared out in space. Even Feynman addresses the problem in his lectures when he says that on solving for the electrostatic energy in the field of a point charge we get infinity as the limit. So do we now know whether charges are point-like or smeared out?
It's not a trivial matter to define this question in such a way that it has a definite answer, and you certainly can't get a good answer within classical physics. Even Feynman addresses the problem in his lectures when he says that on solving for the electrostatic energy in the field of a point charge we get infinity as the limit. Yes, this is a nice way of approaching the issue. Now consider that classical electromagnetism is inherently a relativistic theory, so $E=mc^2$ applies. For a particle with mass $m$ , charge $q$ , and radius $r$ , we would expect that the inertia $m$ of the particle can't be greater than $\sim E/c^2$ , where $E$ is the energy in the electric field. This results in $r\gtrsim r_0=ke^2/mc^2$ , where $r_0$ is called the classical electron radius, although it doesn't just apply to electrons. For an electron, $r_0$ is on the order of $10^{-15}$ meters. Particle physics experiments became good enough decades ago to search for internal structure in the electron at this scale, and it doesn't exist, in the sense that the electron cannot be a composite particle such as a proton at this scale. This would suggest that an electron is a point particle. However, classical electromagnetism becomes an inconsistent theory if you consider point particles with $r\lesssim r_0$ . You can try to get around this by modeling an electron as a rigid sphere or something, with some charge density, say a constant one. This was explored extensively ca. 1900, and it didn't work. When Einstein published the theory of special relativity, he clarified why this idea had been failing. It was failing because relativity doesn't allow rigid objects. (In such an object, the speed of sound would be infinite, but relativity doesn't allow signaling faster than $c$ .) What this proves is that if we want to describe the charge and electric field of an electron at scales below $r_0$ , we need some other theory of nature than classical E&M. That theory is quantum mechanics. 
In nonrigorous language, quantum mechanics describes the scene at this scale in terms of rapid, random quantum fluctuations, with particle-antiparticle pairs springing into existence and then reannihilating.
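As a quick sanity check on the scale quoted above, the classical electron radius $r_0=ke^2/mc^2$ can be evaluated directly. This is an illustrative script (not part of the original answer) using standard CODATA values for the constants:

```python
# Order-of-magnitude check of the classical electron radius r0 = k e^2 / (m c^2).
k = 8.9875517923e9      # Coulomb constant, N m^2 / C^2
e = 1.602176634e-19     # elementary charge, C
m = 9.1093837015e-31    # electron mass, kg
c = 2.99792458e8        # speed of light, m/s

r0 = k * e**2 / (m * c**2)
print(f"classical electron radius: {r0:.3e} m")  # ~2.818e-15 m
```

The result, about $2.8\times 10^{-15}$ m, is indeed on the order of $10^{-15}$ meters as stated.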
{ "source": [ "https://physics.stackexchange.com/questions/495576", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/234071/" ] }
495,732
My parents bought this "air conditioner", but I am very skeptical that this can cool a room, or even cool anything. I doubt that it even has a cooling element; I suspect that it is just a fan + humidifier. But even if this device had a cooling element, it still couldn't cool a room: if air is cooled, the resulting heat can't just vanish, it has to go somewhere, because of the 1st law of thermodynamics (energy conservation). In a normal full-sized air conditioner, the air is cooled and the resulting hot air is blown outside. But in this mini "air conditioner", the heat can't go outside, it can only stay in the room, keeping the room at the same temperature. Am I missing something, or is this a scam as I suspected? In response to a comment: I'm interested in using this cooler in Germany, where the relative humidity is typically 70%.
I doubt that it even has a cooling element, I suspect that it is just a fan + humidifier. The fan+humidifier is the cooling element for this unit. It uses purely evaporative cooling to reduce the temperature of the system. It can do this because the phase change between liquid and vapour requires energy. By just passing a convective current of relatively dry air over a liquid water reservoir, heat is taken from the air to evaporate the water. This results in the humidified air being at a lower temperature than before it entered the humidifier. In this case, the heat doesn't just vanish. The heat lost is stored in the latent heat of vaporization of the water. If the vapour in the room were to begin condensation, the heat in the room would start to increase. Basically, you're just using the humidity as a sort of thermal battery. You're able to store some of the heat in the room in the form of increased relative humidity, instead of having it go towards increasing temperature directly. The energy doesn't leave the system; it's just taken a different form as internal energy of the phase. You can only remove so much heat this way, and the rate of heat removal decreases as the room's relative humidity approaches 100%. If you want to use that for constant cooling, you will need some way to remove the moist air and replace it with dry air (one that doesn't involve a dehumidifier that puts heat back into the room).
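To put rough numbers on the evaporative-cooling mechanism described above, here is a back-of-the-envelope energy balance. All figures (room size, target cooling, material constants) are illustrative assumptions, not measurements of the product in question:

```python
# Rough energy balance: mass of water that must evaporate to cool the air
# in a small room by a few kelvin, using the latent heat of vaporization.
L_vap = 2.45e6      # latent heat of vaporization of water near 25 C, J/kg
c_p_air = 1005.0    # specific heat of air, J/(kg K)
rho_air = 1.2       # air density, kg/m^3
room_volume = 30.0  # m^3 (a small bedroom, assumed)

air_mass = rho_air * room_volume             # ~36 kg of air
delta_T = 3.0                                # desired cooling, K
heat_removed = air_mass * c_p_air * delta_T  # J taken out of the air
water_needed = heat_removed / L_vap          # kg of water evaporated
print(f"water to evaporate: {water_needed*1000:.0f} g")  # roughly 44 g
```

A few tens of grams of water suffice for a one-off temperature drop, but note this ignores walls, furniture, and the rising relative humidity that throttles further evaporation.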
{ "source": [ "https://physics.stackexchange.com/questions/495732", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/237633/" ] }
495,733
Question - "As shown in figure, a body of mass 1 kg is shifted from A to D on inclined planes by applying a force slowly such that the block is always in contact with the plane surfaces. Neglecting the jerks experienced at C and B, what is the total work done by the force?" Given: $\mu_{AB}$ = 0.1, $\mu_{BC}$ = 0.2, $\mu_{CD}$ = 0.4. My approach was to simply calculate the frictional forces by using $\mu mg\cos\theta$ and multiplying them by their respective distances covered in each part. After that, I calculated the gain in potential energy. But when I check the solutions to the problem, it is stated that the work done by friction is $\mu mgl$ in each case. Shouldn't the frictional force be $\mu R$, where we then substitute the normal reaction $R$ as $mg\cos\theta$?
{ "source": [ "https://physics.stackexchange.com/questions/495733", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/155406/" ] }
496,441
I understand how a prism works and how a single raindrop can scatter white light into a rainbow, but it seems to me that in normal atmospheric conditions, we should not be able to see rainbows. When multiple raindrops are side-by-side, their emitted spectra will overlap. An observer at X will see light re-mixed from various originating raindrops. The volume of rain producing a rainbow typically has an angular diameter at least as wide as the rainbow itself, does it not? So why can we still see separate colours? EDIT: To emphasise the thing I am confused about, here is a rainbow produced from a single raindrop... ...here are the rainbows produced by two raindrops, some significant distance apart... ...so shouldn't many raindrops produce something like this? I will accept an answer which focuses on this many-raindrops problem, I will not accept an answer which goes into unnecessary detail as to how a single raindrop produces a rainbow.
This isn't quite how rainbows work. The standard explanation is that light bounces around inside each droplet, getting reflected once and exiting at an angle: Image source However, the real picture is a little bit more complicated. When sunlight hits a water droplet, the rays will refract when they come in, (partially) reflect back when they hit the back of the droplet, and then (partially) refract on their way out. For each droplet, though, there are a bunch of rays hitting the droplet at different locations, and each of them will bounce around differently and exit at a different angle, so that the end result looks like this: Because there is a reflection inside the droplet, the light is mostly sent backwards, and because there are two steps where refraction happens, the angles are a bit wonky. But here's the important thing: the angle at which the light exits increases, has a maximum, and then decreases again, a fact which is clearly visible by following the dots as they go down from the negative- $x$ axis, stop, and then go back up again. This means that if the relative angle between the Sun, the droplet, and your head is smaller than a certain maximal angle $\theta_\mathrm{max}$ , usually equal to about $\theta_\mathrm{max}\approx 42°$ , then the droplet will appear bright to you (and, since this isn't an individual droplet but a misty conglomerate, the mist will have a diffuse glow), and if the angle is larger than that, then there will be no extra light going towards your eyes from those droplets. In other words, then, this process will produce a disk that's bright, centered at the anti-solar point (i.e. where your eyes receive the on-axis reflections in the diagram above) and with angular radius $\theta_\mathrm{max}\approx 42°$ , and this is precisely what's observed, particularly when the rainbow happens against a darker background: Image source Notice, in particular, that the inside of the (primary) rainbow is much brighter than the outside.
Moreover, notice that the brightness of this disk increases as you go from the center to the edge: this is because the rays cluster at the turning point at $\theta_\mathrm{max}$ (notice in the ray diagram that there are many more dots in that region than there are near the axis). This clustering means that, for each color, the disk of light has a particularly bright edge, called a caustic . So what's with the colors? Although your diagram's geometry is off, as you correctly note, the standard diagram (the first figure in this answer) is kind of misleading, because it kind of implies that for every red ray that hits your eyes, there will be another droplet at another angle sending a yellow ray (or green, blue, orange, indigo, and so on) on the same path ─ and that is indeed correct! This is what happens inside this disk of light. The thing with this process, though, is that the maximal angle of aperture of the cone of light that's reflected by each droplet depends very sensitively on the refractive index of the water that makes up the droplet, and this refractive index also depends on the wavelength of the light, so that the size of the disk increases with the wavelength, with the red disk being the largest, then the orange, yellow, green, blue, indigo and violet being successively smaller. This means that, at the edge of the disk produced by the red light, where it is the brightest, there is no light of other colours to compete with it, so the light looks red there. A bit closer in, at the edge of the orange disk, there is no light of yellow, green, or blue colors, since those disks are smaller ─ and, also, the light from the red disk is fainter, because it's not at the maximal-brightness edge and the orange disk does have its maximum shine there. Thus, at that location, the orange light wins out, and the light looks overall orange.
And so on down the line: for each color in the spectrum, the edge of the disk is brighter than the larger disks, and the smaller disks don't contribute at all, so the edge of each disk shines with its respective color. For further reading on the creation of rainbows see e.g. this excellent previous Q&A . And finally, to address the subquestion: why aren't the different colours blurred together once they reach the retina? Basically, because in the human eye the retina is not exposed directly to the air $-$ the human eye is a fairly sophisticated optical re-imaging system, which uses a lens at the front of the eye to focus the incoming light onto the retina: If this lens was not present (say, if the retina was where the dashed gray line is, and the lens had no effect) then you would indeed have light of different colors hitting every cell of the retina, and the retina would report a big jumbled uniformly-coloured mess to the brain. Luckily, of course, the lens is present, and the effect of the lens is to re-focus the light, so that (at least, when the eye is focused at infinity) light coming in collimated from different angles will be focused at different lateral positions in the retina. Since the different colors are coming in at different angles, collimated from the rainbow which is effectively at infinity, this means that all the red light will be focused onto certain retina cells, and the blue light will be focused onto different retina cells at a different location, and so on. It's extremely important to note that this has nothing to do with the fact that what you're seeing is a rainbow, and this re-imaging scheme coming from the focusing by the lens at the front of the eye (and the potential blurring problem we'd have if the lens wasn't present) is universal to seeing any objects at all, colored or not, rainbows or not. For more details of how the eye works, see your favourite optics textbook.
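The turning-point behaviour described above is easy to reproduce numerically: sweep the incidence angle $i$, get the refraction angle $r$ from Snell's law, and take the maximum of the exit angle $4r-2i$ measured from the antisolar point. This is an illustrative sketch; the refractive indices are typical textbook values for red and violet light in water:

```python
# Numerical sketch of the one-internal-reflection rainbow geometry: the exit
# angle 4r - 2i has a maximum over incidence angles, which is the rainbow angle.
import math

def rainbow_angle(n):
    """Maximum angle from the antisolar point for one internal reflection."""
    best = 0.0
    for k in range(1, 9000):
        i = math.radians(k * 0.01)        # incidence angle, swept over 0..90 deg
        r = math.asin(math.sin(i) / n)    # refraction angle from Snell's law
        best = max(best, math.degrees(4 * r - 2 * i))
    return best

print(f"red    (n=1.331): {rainbow_angle(1.331):.1f} deg")  # ~42.4
print(f"violet (n=1.344): {rainbow_angle(1.344):.1f} deg")  # ~40.5
```

The red disk comes out about 2° larger than the violet one, which is exactly the wavelength-dependent disk-size ordering the answer relies on.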
{ "source": [ "https://physics.stackexchange.com/questions/496441", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/9070/" ] }
496,447
Is this a valid implication/alternative explanation of universal red-shift? I thought I'd ask this when I read this question about light clocks. I had speculated this way: If light speed determines the rate of time, could the speed of light be varying over time and we be unaware of it? We measure light speed by itself. But if light speed was varying over a scale of minutes or hours, we'd see variation in the Sun's spectrum over time, either red shift or blue shift, and we don't. If it was varying over a period of years, we'd see variation in the spectra of stars, and we don't. If it was varying over a period of billions of years, we'd see variation in the spectra of distant galaxies, and we do. Is this a possible implication of universal red-shift? Or even an alternative to the expansion explanation?
{ "source": [ "https://physics.stackexchange.com/questions/496447", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/238893/" ] }
496,678
If I understand correctly, there is a small probability of finding the same electron anywhere in the universe. Suppose that an anti-electron collides with an electron, annihilating it and producing two photons. Assuming the speed of light limit is correct, right after the collision, the probability that the electron is found at a distance of $d$ from the collision must still be non-zero for a time of at least $d/c$ . But wouldn't this mean that it's possible for the annihilated electron to still be present somewhere else and collide with another anti-electron, meaning that the same electron was annihilated twice?
No, you have to apply the superposition principle consistently. Schematically, the initial state is $$|\text{electron here} \rangle + |\text{electron there} \rangle$$ where I've dropped normalization constants, and a $+$ denotes quantum superposition. Now suppose a lot of positrons come through, so electron states get annihilated, $$|\text{electron here} \rangle \mapsto |\text{some gamma rays here} \rangle,$$ $$|\text{electron there} \rangle \mapsto |\text{some gamma rays there} \rangle.$$ What you're essentially claiming is that the final state is $$|\text{some gamma rays here } \textbf{and} \text{ some gamma rays there}\rangle$$ but if you just apply linearity, the final state is actually $$|\text{some gamma rays here} \rangle + |\text{some gamma rays there} \rangle.$$ This reasoning implies you can't get double the gamma rays, no matter how severe the speed of light delay or any other delays are. Instead the superposition of electron positions can at best turn into a superposition of gamma ray positions. The mistake you made is essentially forgetting that the electromagnetic field behaves quantum mechanically too. (It's a forgivable mistake, which was made by many in the early days of quantum mechanics, leading to precisely the same kinds of paradoxes.)
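The linearity argument above can be made concrete with a toy model: represent each schematic ket as a labelled amplitude and annihilation as a linear map on basis states. The labels and the two-state setup are illustrative, not a real Hilbert-space calculation:

```python
# Toy linearity check: a linear map sends a superposition of electron positions
# to a superposition of gamma-ray positions -- never to a single state with
# "gammas here AND gammas there".
import math

def annihilate(state):
    """Linear map taking each electron basis state to a gamma-ray basis state."""
    rules = {"electron here": "gammas here", "electron there": "gammas there"}
    return {rules[ket]: amp for ket, amp in state.items()}

psi = {"electron here": 1 / math.sqrt(2), "electron there": 1 / math.sqrt(2)}
print(annihilate(psi))
# {'gammas here': 0.7071..., 'gammas there': 0.7071...}
```

The output has the same two amplitudes redistributed over gamma-ray states, so the total "amount of electron" annihilated is still one, consistent with the answer's conclusion.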
{ "source": [ "https://physics.stackexchange.com/questions/496678", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
497,560
This question has been puzzling me lately. I'm sure you've seen demonstrations of metal containers imploding when evacuated. Here, for example, are two videos of vacuum collapse: experiment 1 , experiment 2 . However, when the same experiment is conducted with a material as fragile as glass, nothing implodes or shatters. Two videos of the same experiment conducted with glass: experiment 3 , experiment 4 . Nothing is special about the quality of the glass used in experiments 3 and 4. The glass is also not very thick. Yet, evacuating the glass, almost to 100% vacuum, doesn't so much as put a scratch on it, but the metal containers implode with great force. What is the reason for this difference? My guesses are: The total surface area of the glass used in experiments 3 and 4 are much smaller compared to the surface area of the metal in experiments 1 and 2. Greater surface area equates to greater force absorbed by the entire structure , even though the force per unit area remains the same. Ductile deformation (metal) is fundamentally different from brittle fracture (glass), involving different mechanisms of atomic displacement. (Blaise Pascal famously conducted several experiments with vacuum in a glass test tube in the 17th century.) Please share your thoughts on this. Thank you! Edit: I don't mean to say that glass never implodes/shatters in such experiments, obviously.
For a cylindrical pressure vessel loaded in compression (that is, vacuum inside), failure occurs by buckling instability in which a random and small inward perturbation of the stressed wall grows without bound at and beyond a certain critical load value. This is analogous to buckling instability in a thin column loaded in compression. The characteristic which resists buckling instability is not the yield strength but the stiffness of the wall, which depends on its thickness and on its elastic modulus. The thicker the wall and the higher the modulus, the more resistant to buckling the cylinder will be. The elastic modulus of common glass is about $48 \times 10^6$ psi compared to that of steel at $29 \times 10^6$ psi. The glass cylinder will hence be more resistant to implosion than a steel cylinder of identical size and wall thickness. Note also that putting a scratch or microcrack in glass renders it weak in tension. In compression, however, the applied stress tends to press microcracks shut, so scratching a glass vacuum vessel does not cause the sort of catastrophic failure you expect to get when the glass is in tension.
{ "source": [ "https://physics.stackexchange.com/questions/497560", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/239399/" ] }
497,904
I am learning the basics of Thermodynamics. Everywhere I read about the first law, it states "conservation of energy", and talks about how change in internal energy equals heat and work transfer. I am aware of work transfer being considered positive and negative depending on the point of view we want to set. That is okay. But it confuses me to see the word "conservation". If we take a very simple or at least very common real process like putting a plastic bottle completely filled with liquid water into a freezer (or whatever environment that is constantly under 273 K) and wait for thermal equilibrium to happen, the bottle will have expanded because water will have frozen increasing its volume and pushing the bottle's limits. In this case : The system (the bottle) has lost or given away a whatever amount of heat and it will also have generated a work transfer (to make the bottle expand). It doesn't matter if we consider that work positive once or negative twice, in both cases energy has left the system in the form of work. The total amount of internal energy of the system has clearly decreased . So apparently there isn't really any "conservation" happening. I do not intend to hate on thermodynamics, it's actually beautiful, I just want to understand the semantics. I read other similar questions like this one, but in none did I find a clear answer.
“Conserved” doesn’t mean “never changes”. It means “this stuff is real, and the only way you have less or more is if some is taken away or added”. You can then follow those additions and subtractions. Since your cold bottle has less energy, the conservation law says that energy has not disappeared, it’s just gone somewhere. You can find it. You can figure out how it got there.
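For the bottle example in the question, the bookkeeping can be done explicitly: the energy the water loses on cooling and freezing shows up as heat absorbed by the freezer. The numbers below are illustrative assumptions (bottle size, starting temperature), with standard material constants:

```python
# Energy bookkeeping for water cooled from room temperature and then frozen:
# the heat leaving the bottle equals the heat delivered to the freezer.
m = 0.5            # kg of water in the bottle (assumed)
c_water = 4186.0   # specific heat of liquid water, J/(kg K)
L_fusion = 3.34e5  # latent heat of fusion of water, J/kg
dT = 20.0          # cooled from 20 C down to 0 C (assumed)

heat_out = m * (c_water * dT + L_fusion)  # energy leaving the bottle, J
print(f"heat transferred to the freezer: {heat_out / 1000:.0f} kJ")  # ~209 kJ
```

None of those ~209 kJ vanish; the freezer's refrigeration cycle then pumps them out into the kitchen.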
{ "source": [ "https://physics.stackexchange.com/questions/497904", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/215458/" ] }
498,588
When an object is experiencing free fall, it has a constant acceleration and hence an increasing velocity (neglecting friction). Thus its momentum is increasing. But according to law of conservation of momentum, shouldn't there be a corresponding decrease in momentum somewhere else ? Where is it ?
Linear momentum is conserved only in systems with net external force equal to zero. A body falling toward the Earth experiences the Earth's gravitational force, so its linear momentum increases. But if you include the Earth in your system, then momentum is definitely conserved, as the Earth gains an equal amount of momentum in the upward direction. Individually, though, momentum is not conserved for either body, since there is an external force of gravity on each.
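To see why the Earth's share of the momentum goes unnoticed, a quick estimate helps. The falling mass and fall time below are illustrative assumptions:

```python
# Momentum balance for a body in free fall plus the Earth: equal and opposite
# momenta, but the Earth's enormous mass makes its recoil speed negligible.
m_body = 1.0       # kg, falling body (assumed)
M_earth = 5.97e24  # kg, mass of the Earth
g = 9.81           # m/s^2
t = 1.0            # s of free fall (assumed)

p = m_body * g * t      # momentum gained by the body, kg m/s
v_earth = p / M_earth   # upward speed the Earth picks up
print(f"body momentum: {p:.2f} kg m/s, Earth recoil: {v_earth:.1e} m/s")
```

The Earth's recoil speed is on the order of $10^{-24}$ m/s, far below anything measurable, which is why the "missing" opposite momentum is so easy to overlook.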
{ "source": [ "https://physics.stackexchange.com/questions/498588", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/143993/" ] }
498,706
Anyone who has removed a sticker, knows that often they must be pulled off slowly, otherwise they tear. Why is this?
The glue that holds them on flows like a very viscous liquid on long time scales but is stiff on short time scales. If you pull on it suddenly, it acts stiff and holds fast, causing the paper sticker to tear. If you pull slowly, the glue flows and pulls apart before the paper has a chance to tear. This sort of behavior is called viscoelasticity . Where does this property come from? In a goopy glue of the sort used on paper stickers and the like, the glue is actually a very viscous liquid which readily wets things like paper, plastics, wood, glass, and so forth. If you grab a blob of this stuff and pull it slowly apart, its molecular chains drag and slip past each other and the blob will slowly elongate as the chains slip. However, if you apply a large tensile load suddenly , there's no opportunity for the chains to begin slipping and the glue behaves instead like a chunk of stiff plastic.
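The rate dependence described above has a standard minimal description, the Maxwell model (a spring and a dashpot in series). The sketch below integrates that model for a fast and a slow pull to the same total strain; it is illustrative only, and all parameter values are arbitrary choices, not properties of any real adhesive:

```python
# Maxwell-model sketch of viscoelasticity: d(sigma)/dt = E*rate - (E/eta)*sigma.
# The same total strain applied quickly produces a much larger peak stress
# (sticker tears) than applied slowly (glue flows and lets go).
def peak_stress(strain_rate, total_strain, E=1.0e6, eta=1.0e5):
    """Forward-Euler integration of the Maxwell element up to the target strain."""
    sigma, t, dt = 0.0, 0.0, 1e-4
    duration = total_strain / strain_rate
    while t < duration:
        sigma += (E * strain_rate - (E / eta) * sigma) * dt
        t += dt
    return sigma

fast = peak_stress(strain_rate=10.0, total_strain=0.5)
slow = peak_stress(strain_rate=0.1, total_strain=0.5)
print(f"fast pull: {fast:.0f} Pa, slow pull: {slow:.0f} Pa")
```

With these parameters the fast pull reaches a peak stress tens of times higher than the slow one, mirroring why a yanked sticker tears while a slowly peeled one comes off clean.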
{ "source": [ "https://physics.stackexchange.com/questions/498706", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57925/" ] }
499,053
An object cannot escape the event horizon by catapulting it outside the black hole. However, what if instead of relying on escape velocity, the object was tethered to a ship orbiting well outside the event horizon? The object needs not to be pulled out at speeds higher than $c$ , but rather can be pulled slowly. Assume the gravity gradient between the ship's orbit and the tether's other end is manageable, the mass of the object is small enough to be pulled without too much burden on the ship's engines, and the tether is strong enough. (Or it's just a loose end). Would such an object be pulled-out of the event horizon?
In General Relativity, no amount of force, exerted through a tether or in any other way, can extract an object from the interior of a black hole. There are no “tricks” to get around this fact, any more than there are tricks to make a perpetual motion machine possible. All future-directed timelike worldlines within the interior lead to the singularity, not just ones for freely falling objects. This is a consequence of the black hole’s geometry . The gravity gradient is irrelevant . The mass of the object is irrelevant . The strength of the tether is irrelevant . All that matters is the spacetime geometry and the possible worldlines that it allows.
{ "source": [ "https://physics.stackexchange.com/questions/499053", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/179223/" ] }
499,186
Nature mostly prefers simultaneous events: Acceleration is produced without any delay on applying force. Angular acceleration is produced without any delay on applying torque. A bulb glows simultaneously as we close the circuit. Heat is transferred from one place to another as we allow it without any delay. I don't think there are any such events in physics, except the transmission of waves, which takes time. Why is this so? EDIT I really appreciate the answer given by Aaron Stevens, but there is something still unclear to me. Calculus as well as Newtonian mechanics start with approximations, but those approximations are really beneficial to us as they simplify the complex mathematical equations involved in several problems. As Aaron Stevens wrote in his answer, on the microscopic scale there is a delay between force and acceleration. I think that in our standard model we assume no delay, which is still an approximation. We give answers to problems based on our standard models, which are built on numerous approximations and assumptions. Similarly, in the next example he said that there is nothing rigid on the microscopic scale. I again say that we assume that everything is rigid. It is my gentle request that answers be given based on the assumptions we made while building our standard model.
None of the processes you describe are instantaneous. Acceleration is produced without any delay on applying force. Angular acceleration is produced without any delay on applying torque. If you are looking at the microscopic scale, it takes time for fields to change in order for forces to be produced. For example, E&M changes propagate at the speed of light. If you are looking at macroscopic bodies, there is no such thing as a rigid body. "Information" about the presence of a force propagates through the body at the speed of sound in the body. This propagation also takes time. A cool example of this is shown in this video. A bulb glows simultaneously as we close the circuit. No, it actually takes time for the current to build up in a circuit. This happens very quickly to us, but it is not instantaneous. Heat is transferred from one place to another as we allow it without any delay. Heat transfer is probably the slowest process you have listed here. Think about cooking on a stove top, or preheating your oven for baking. It takes time to transfer heat that depends on the thermal diffusivity of the objects in question. If you are arguing that the heat transfer starts instantaneously, then that still is not correct. You would have to define some sort of energy threshold that determines when you say the heat transfer has officially started, and this threshold will always be obtained in some finite amount of time. I don't think there are any such events except transmission of waves which takes time in physics. Even neglecting the above cases, there are plenty of processes in physics that take a finite amount of time. An object hitting the ground after falling from a table. Two galaxies colliding. The charging of a capacitor in an RC circuit. The list goes on and on.
Any process can be considered to be "instantaneous" if you are looking on slow enough time scales, and when operating on these time scales it is perfectly reasonable to say that certain processes are instantaneous. However, don't confuse approximation with reality . If you look fast enough, you will always find delays.
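To put rough numbers on these delays, here is a small Python sketch. The 1 m rod length is arbitrary, and the ~5900 m/s speed of sound in steel is an assumed approximate textbook value, not a measurement:

```python
# Rough propagation delays across a 1 m steel rod -- nothing is instantaneous.
# The speed of sound in steel (~5900 m/s) is an assumed approximate value.
c = 3.0e8          # speed of light in vacuum, m/s
v_steel = 5.9e3    # approximate speed of sound in steel, m/s
length = 1.0       # rod length, m

delay_em = length / c          # delay for an E&M field change to cross the rod
delay_push = length / v_steel  # delay for a mechanical push to reach the far end

print(f"E&M delay:  {delay_em:.2e} s")
print(f"push delay: {delay_push:.2e} s")
```

Both delays are far too short to notice in everyday life, which is why these processes look instantaneous on human time scales.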
{ "source": [ "https://physics.stackexchange.com/questions/499186", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/230533/" ] }
499,520
I've always thought that there is nothing in the universe that cannot be compressed or deformed under enough force, but my friend insists that elementary particles are exempt from this. My thought is that if two such objects collided, since there is no compression there is no distance across which the collision occurs, which means that it would be instantaneous. The impulse in a collision is equal to the force multiplied by the time, so for a fixed impulse, as the time approaches zero the momentary force would approach infinity, and it seems absurd to be able to produce infinite force from a finite energy.
Under special relativity nothing can be incompressible: consider any object of nonzero size and finite mass in its rest frame; when you apply a force to it on one side it will start moving. If it were completely incompressible, the other end would start moving simultaneously. Since the ends are spatially separated, there is a frame in which the other end would start to move before the application of the force, which is in contradiction with special relativity. Elementary particles are generally assumed to be point-shaped, hence of zero size, so they would be trivially exempt.
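The frame-dependence of simultaneity in this answer can be made concrete with a short Python sketch of the Lorentz transformation. The rod length and boost speed below are arbitrary illustrative choices (units with $c = 1$):

```python
import math

# If both ends of a rigid rod started moving simultaneously (t = 0) at x = 0
# and x = L in its rest frame, a boosted observer would see the far end move
# *before* the force was applied. Values here are illustrative only.
c = 1.0       # work in units where c = 1
L = 1.0       # rod length
v = 0.5       # boost velocity, as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - v**2 / c**2)

def t_prime(t, x):
    """Lorentz-transformed time coordinate in the boosted frame."""
    return gamma * (t - v * x / c**2)

t_near = t_prime(0.0, 0.0)  # near end starts moving (where the force acts)
t_far = t_prime(0.0, L)     # far end starts moving

print(t_near, t_far)  # t_far < 0: the far end moves before the cause
```

Since an effect preceding its cause is forbidden, the far end must in fact lag, i.e. the rod must compress.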
{ "source": [ "https://physics.stackexchange.com/questions/499520", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/240297/" ] }
500,562
I've always thought that it was because the frictional force on the tire was increased due to the bulging of the tires increasing the surface area in contact with the road. However, a colleague of mine reminded me that frictional force is independent of surface area. Why then is fuel economy decreased when tire pressure is low?
Lower pressure increases surface contact and increases static friction, and static friction does not involve heat loss, so that is good. But rolling friction is not good and does involve heat loss. Rolling friction heating is due to the inelastic deformation the rubber of the tire experiences when it is in contact with the road. See this article on rolling resistance from Wikipedia: https://en.wikipedia.org/wiki/Rolling_resistance When the rubber is in contact with the road for each revolution, it is compressed, it then expands when it leaves the surface. The compression and expansion is not perfectly elastic, thus there is heat loss in the form of friction. The lower the tire pressure, the more rubber that is in contact with the road for each revolution, and the greater the friction heat loss. These increased heat losses add up to lower fuel economy. Hope this helps.
{ "source": [ "https://physics.stackexchange.com/questions/500562", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/98596/" ] }
500,652
So, this goes to something so fundamental, I can barely express it. The Schrödinger's Cat thought experiment ultimately asserts that, until the box is opened, the cat is both dead AND alive. Now, this is obviously ludicrous. The cat either died or lived at some point; someone opening the box and observing it had zero influence on it. Saying the cat was both alive and dead till the box was opened seems to be some kind of hardware defect in some people's thinking. I mean, with all respect, I don't know how I can be polite about it. We humans aren't THAT important. Things happen whether we see them or not. I mean, do I really even need to state that? The question, then: Is Schrödinger's Cat meant to be taken at all physically?
Before reading this answer (and to those who are downvoting), I am addressing if the cat is both alive and dead. I don't think the question is asking for a complete explanation of the Schrodinger's cat experiment, nor is it asking how this links to all of the deeper mysteries of quantum mechanics and how we should think of them. Therefore, while there is much to be gained in thinking of many different interpretations, I will not be addressing them here. Schrodinger's cat is not both dead and alive any more than an electron simultaneously exists at every point in space. You are using a pop-sci explanation of Schrodinger's cat that indeed falls apart when you dig deeper. $^*$ The key point is that a system cannot be in multiple states at once. Schrodinger's cat (or if you hate this example, think "quantum system") is always in a single state. Typically the example says that there is an equal probability of us "measuring" the cat to be either alive or dead once we open the box. Therefore, the cat is in a state that is a superposition of our "life states" $|\text{alive}\rangle$ and $|\text{dead}\rangle$ : $$|\text{cat}\rangle=\frac{1}{\sqrt{2}}\left(|\text{alive}\rangle+|\text{dead}\rangle\right)$$ This state tells us that there is a probability of $0.5$ of observing the cat as alive and a probability of $0.5$ of observing the cat as dead. This is because $$|\langle\text{alive}|\text{cat}\rangle|^2=0.5$$ $$|\langle\text{dead}|\text{cat}\rangle|^2=0.5$$ Once we open the box (perform a "life state" measurement of the system), the state of the cat collapses to one of the life states (eigenstates of the "life measurement operator"). So we observe the cat as either alive or dead. It is important to understand that before we open the box the cat is not both alive and dead. The system cannot be in multiple states at once. It is in a single state, and this state is described as a superposition of life states. 
Once we open the box the cat is in a new single state which is one of the two life states. We cannot determine which state the cat ends up in though, only the probabilities it will end up in a certain state. Of course Schrodinger's cat is crazy to think about because we are trying to apply QM formalism to the macroscopic world, but this is precisely how quantum systems work. We can express the state $|\psi\rangle$ of a quantum system as a superposition of eigenstates $|a_i\rangle$ of a Hermitian operator $A$: $$|\psi\rangle=\sum_ic_i|a_i\rangle$$ We do not say that the system is in every state $|a_i\rangle$ at once. It is in a single state (the superposition) that tells us the probability $|c_i|^2$ of the system being in one of the states $|a_i\rangle$ after making a measurement of the physical quantity associated with operator $A$. $^*$ I will use the Copenhagen interpretation of QM for my answer, since it is the most widely used interpretation to teach introductory QM. This is just one way to view this thought experiment, and it certainly is not a complete explanation. There are other interpretations that get to deeper meanings, more practical understanding of measurements, etc. For that I'll refer you to the other answers, but I am not claiming this is the only way to view this scenario or QM in general. This question is not asking for a full explanation of the Schrodinger's cat experiment with a look into the deeper meaning of QM, so I am not going to get into all of that. The main point of this answer does not depend on the QM interpretation anyway.
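For concreteness, the amplitudes above can be checked with a tiny Python sketch (a toy two-component model of the state vector, not a simulation of a real measurement):

```python
import math

# The cat state as a 2-component vector in the {|alive>, |dead>} basis,
# illustrating the amplitudes quoted in the answer above.
alive = (1.0, 0.0)
dead = (0.0, 1.0)
cat = (1.0 / math.sqrt(2.0), 1.0 / math.sqrt(2.0))

def inner(a, b):
    """Inner product <a|b> for real-valued amplitudes."""
    return sum(x * y for x, y in zip(a, b))

p_alive = abs(inner(alive, cat))**2  # |<alive|cat>|^2
p_dead = abs(inner(dead, cat))**2    # |<dead|cat>|^2

print(p_alive, p_dead)  # both 0.5, and they sum to 1
```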
{ "source": [ "https://physics.stackexchange.com/questions/500652", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/187375/" ] }
500,833
I've often wondered: if a gas cylinder is connected to a hob (say, or a boiler or whatever) and someone turns the knob to allow gas to flow, why is it that when a flame is held to light the gas, it isn't possible for the newly lit gas (i.e. ignited gas) to travel back through the pipe to the source, hence igniting the gas in the cylinder? Is it because the gas is held at pressure, so that the gas escapes the pipes with pressure and somehow burns "outward", rather than returning down the pipe?
A gas flame is essentially a (chemical) reaction front, a (thin) layer in which a hydrocarbon (e.g. methane) is oxidised according to: $$\text{CH}_4(g) + 2\text{O}_2(g) \to \text{CO}_2(g) + 2\text{H}_2\text{O}(g)$$ This oxidation reaction is colloquially known as burning or combustion. It's obvious from the equation that combustion needs requisite amounts of oxygen, $\text{O}_2$, coming from the air, which contains about $20$ percent of it. However, in order for the reaction to be viable, the ratio of fuel to oxygen, here: $$\frac{\text{CH}_4}{\text{O}_2}$$ must fall within certain limits. Too much $\text{CH}_4$ and combustion is not possible. Similarly, too much $\text{O}_2$ and the reaction does not proceed. The ratio is optimal in the reaction front, but not outside of it. Inside the pipe there's not enough oxygen to sustain combustion. There is also such a thing as a reverse (or inverse) flame. In this video, oxygen burns inside a propane atmosphere. (The original answer included two photos: above, oxygen burning in propane; below, for comparison, propane burning in air.)
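To put a rough number on the "too much CH4" side, here is a small Python sketch. The 21% oxygen content of air and the ~5-15% flammability limits of methane in air are approximate textbook values I am assuming, not derived here:

```python
# Stoichiometric methane fraction in air, from CH4 + 2 O2 -> CO2 + 2 H2O.
# The 21% oxygen content of air and the ~5-15% flammability limits of
# methane are approximate textbook values.
o2_in_air = 0.21     # volume fraction of O2 in air
o2_per_ch4 = 2.0     # moles of O2 per mole of CH4, from the equation above

air_per_ch4 = o2_per_ch4 / o2_in_air      # moles of air needed per mole of CH4
ch4_fraction = 1.0 / (1.0 + air_per_ch4)  # stoichiometric CH4 fraction in the mix

print(f"stoichiometric CH4 fraction: {ch4_fraction:.1%}")
# Inside the pipe the CH4 fraction is essentially 100% -- far above the
# upper flammability limit (~15%), so the flame cannot burn back.
```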
{ "source": [ "https://physics.stackexchange.com/questions/500833", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/25644/" ] }
500,894
Pretty self explanatory. I’m wondering if the strong nuclear force could be overcome by a strong enough magnet?
Protons and neutrons are in orbitals within the nucleus which have angular momentum , so the statement of "stationary charges" is true only to first order. The magnetic fields in laboratory experiments are not strong enough to induce a proton or a neutron to exit the nucleus. In astronomical observations, neutron stars and magnetars are studied and there the magnetic fields are strong enough to change the shape of an atom and affect the nucleus of atoms. For nuclei in the iron region of the nuclear chart it is found that fields in the order of magnitude of $10^{17}G$ significantly affect bulk properties like masses and radii. It is possible that if stronger astrophysical fields exist, the nucleus may break apart due to the magnetic field. This is studied in astrophysics as the "Coulomb breakup" of the nucleus, for example here .
{ "source": [ "https://physics.stackexchange.com/questions/500894", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/213489/" ] }
500,901
Instantaneous velocity is defined as the limit of average velocity as the time interval ∆t becomes infinitesimally small. Average velocity is defined as the change in position divided by the time interval during which the displacement occurs. When the time interval is infinitesimally small, there shouldn't be any considerable change in position. Thus the instantaneous velocity should be 0.
$$v_\text{average}=\frac{\Delta s}{\Delta t}$$ $$v_\text{instantaneous}=\lim_{\Delta t\to0}\frac{\Delta s}{\Delta t}$$ If the time interval gets infinitesimally small $\Delta t\to 0$ , then you are dividing by something very, very tiny - so the number should become very big: $$\frac{\cdots}{\Delta t}\to \infty \quad\text{ when } \quad\Delta t\to0$$ If the change in position gets infinitesimally small $\Delta s\to 0$ , then the numerator is something very, very tiny - so the number should become very small: $$\frac{\Delta s}{\cdots}\to 0 \quad\text{ when } \quad\Delta s\to0$$ Now, what if both happen at the same time, $\frac{\Delta s}{\Delta t}$ ? What if, as in your case, $\Delta s$ is tied to $\Delta t$ so that when one becomes very small, the other one does as well? Then how do you know which of them affects the number the most? The denominator or the numerator? Does the number become very large or very tiny? $$\frac{\Delta s}{\Delta t}\to\text{ ?}\quad\text{ when } \quad\Delta t\to0$$ You seem to be assuming that the tiny change in position $\Delta s$ is the one that dominates, so the result should go towards $0$ - but why wouldn't you assume the tiny time interval $\Delta t$ to dominate instead, so the result goes towards infinity $\infty$ ? The answer is that anything can happen, depending on the values. It depends on the exact relationship between them. If the result goes towards an infinitely large number, we say that it is diverging . If it stabilises at some number, we say that it is converging . In physics you will often see it converging, since you will often deal with values that are interdependent and that "balance off" at some resulting number. In the case of velocity, the result does indeed converge towards some value, which we then choose to call the instantaneous velocity .
This is what calculus is all about: the mathematical discipline of going towards - converging towards - a limit and then figuring out what that limit is.
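A quick numerical sketch makes the convergence visible; the function $s(t)=t^2$ is just an illustrative choice:

```python
# Numerical illustration of the limit: for s(t) = t**2, the average velocity
# over [t0, t0 + dt] converges to 2*t0 as dt shrinks -- not to 0 or infinity.
def s(t):
    return t**2

t0 = 3.0
for dt in (1.0, 0.1, 0.01, 0.001):
    v_avg = (s(t0 + dt) - s(t0)) / dt
    print(dt, v_avg)  # approaches the instantaneous velocity 2*t0 = 6.0
```

Both $\Delta s$ and $\Delta t$ shrink together, but their ratio settles at $6.0$: the limit converges.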
{ "source": [ "https://physics.stackexchange.com/questions/500901", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/239917/" ] }
500,916
On page 91 of Many-Particle Physics by Mahan, why can $S(+\infty,t) C(t)S(t,t')C'(t')S(t',-\infty)$ in the numerator be written as $C(t)C'(t')S(\infty,-\infty)$ ? And why in the first place is the wavefunction at positive infinity presumed to be the same as at negative infinity?
{ "source": [ "https://physics.stackexchange.com/questions/500916", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/142280/" ] }
500,927
In scanning electron microscopy images, carbon nanotubes looks quite different from the schematic hexagonal structured tubes which usually describes them. How come they are all bent and "furry"?
{ "source": [ "https://physics.stackexchange.com/questions/500927", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/166913/" ] }
502,429
I understand that the speed of sound is inversely proportional to the density of the medium as shown here and as answered for this question. The problem now is that the speed of sound in air actually decreases with altitude although the density of the air decreases. This is shown here and here . I understand that the speed of sound also depends on the elasticity, but I'm not sure how this can change for air. So what is actually happening? How can the speed of sound decrease although the density has also decreased?
Wikipedia gives a pretty much straightforward answer. In an ideal gas, the speed of sound depends only on the temperature: $$ v = \sqrt{\frac{\gamma \cdot k \cdot T}{m}} $$ So it does not depend on the altitude or density directly; it just follows the air temperature. (The original answer included a graph of the speed of sound versus altitude showing this.)
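Plugging in approximate numbers for air (an assumed $\gamma = 1.4$ and mean molecular mass of $28.97$ u, both standard approximate values) reproduces the familiar sea-level and high-altitude sound speeds:

```python
import math

# Speed of sound in an ideal gas, v = sqrt(gamma * k * T / m), evaluated at
# sea-level and roughly tropopause temperatures. Air properties here
# (gamma = 1.4, mean molecular mass 28.97 u) are standard approximate values.
k = 1.380649e-23   # Boltzmann constant, J/K
u = 1.66054e-27    # atomic mass unit, kg
gamma = 1.4        # adiabatic index of air
m = 28.97 * u      # mean mass of an air molecule, kg

def speed_of_sound(T):
    return math.sqrt(gamma * k * T / m)

v_sea = speed_of_sound(288.0)   # ~15 C, typical sea-level temperature
v_trop = speed_of_sound(217.0)  # roughly tropopause temperature

print(f"{v_sea:.0f} m/s at 288 K, {v_trop:.0f} m/s at 217 K")
```

The speed drops with altitude only because the temperature drops, even though the density drops too.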
{ "source": [ "https://physics.stackexchange.com/questions/502429", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/27753/" ] }
503,017
Since sound travels as longitudinal waves, sound waves should only be able to propagate in a medium through compressions and rarefactions. However, water, as a liquid, is generally treated as an incompressible fluid. Since compression is essential to sound propagation, how do phenomena such as whale calls and underwater speakers work?
Water is compressible (nothing can be completely incompressible). Treating water as incompressible is just a (usually very good) approximation. Therefore, longitudinal waves are possible. Wikipedia reports the bulk modulus to be about $2.2\ \mathrm{GPa}$ . This puts the speed of sound in water at about $$v=\sqrt{\frac{\beta}{\rho}}=\sqrt{\frac{2.2\ \mathrm{GPa}}{1000\ \mathrm{kg/m^3}}}\approx1500\ \mathrm{m/s}$$
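That estimate is easy to check numerically; this Python sketch just evaluates the formula above with the quoted values:

```python
import math

# Back-of-envelope speed of sound in water, v = sqrt(beta / rho),
# using the bulk modulus and density quoted in the answer.
beta = 2.2e9   # bulk modulus of water, Pa
rho = 1000.0   # density of water, kg/m^3

v = math.sqrt(beta / rho)
print(f"speed of sound in water: {v:.0f} m/s")  # ~1500 m/s
```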
{ "source": [ "https://physics.stackexchange.com/questions/503017", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/242511/" ] }
503,488
When an object moves in a circle, there's an acceleration towards the center of the circle, the centripetal acceleration, which also gives us the centripetal force (since $F = ma$ is the equation for a force, and the acceleration of an object is therefore caused by a force). But according to Newton's third law, for every action there is an equal and opposite reaction, which would mean that because of the centripetal force there's an equal force outwards, which I would say is the centrifugal force. But this is obviously not true, since that would mean that the net acceleration on the object moving in the circle would be 0. So my question is, what is actually this reaction force that's created by the centripetal force, and where does the centrifugal force come from? I do know that the centrifugal force can be viewed as an inertial force in a certain reference frame, but is there any way to describe it in another way? I can imagine that the centripetal force may come from friction with the road if you're in a car, and if the reaction force is the force into the ground it makes sense, except for the centrifugal force.
This is a common misinterpretation of Newton's third law, often stated as "to every action, there's an equal and opposite reaction." As you surmise, "action" and "reaction" refer to forces. However, they refer to forces acting on different things . Otherwise, nothing could accelerate, ever: if every force were always canceled out by an equal and opposite force, no force could ever do anything. Instead, forces occur between objects--say car and road, to take your example. The road exerts an inward force on the car, which, you're right, is the centripetal force. The equal and opposite force is exerted by the car, on the road. The two forces are acting on different things, so they do not cancel. This second force (the force exerted by the car on the road) is sometimes referred to as the "reactive centrifugal force," which is confusing, because it's different from the more common meaning of centrifugal force.
{ "source": [ "https://physics.stackexchange.com/questions/503488", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/196529/" ] }
503,492
If I have a situation like this, where my blue thing is the wall and the yellow thing is the ground. If both were smooth then they will exert a contact force only normal to them, which I have depicted as $N_1$ and $N_3$ . Now let's calculate the torques: $\tau_1 = W \times 5\cos 60^\circ = W \times \frac{5}{2}$ and $\tau_2 = N_3 \times 10\sin 60^\circ = 5\sqrt{3}\,N_3$ . I have taken the pivot point to be the point of contact with the ground. By the right-hand rule $\tau_1$ will go into the page and $\tau_2$ will come out of the page. Now, for rotational equilibrium $$ \tau_1 - \tau_2 = 0 $$ $$ 5\sqrt{3}\,N_3 - \frac{5}{2}W = 0 $$ $$ \sqrt{3}\,N_3 = W/2 $$ $$ N_3 = \frac{W}{2\sqrt{3}} $$ Well, that just means that if $N_3$ is $\frac{1}{2\sqrt{3}}$ of $W$ then our ladder wouldn't rotate. So much is clear this far. But $$ \sum F_x \neq 0 $$ because we have just $N_2$ as a horizontal force and nothing else to compensate for it. Okay, nothing is compensating it, but all it can do is cause a horizontal motion, and the ladder is pivoted at the bottom, and we have found that if $N_3 = \frac{W}{2\sqrt{3}}$ then there would be no rotation at all about the bottom point. But I have read that if both wall and ground were frictionless then the ladder would slip. What is slipping? Is it a translational motion or a rotation? As far as I can see, slipping is a kind of rotation. How will the ladder slip if we have managed rotational equilibrium? Thank you. Any help will be much appreciated.
{ "source": [ "https://physics.stackexchange.com/questions/503492", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
504,183
We observe that protons are positively charged, and that neutrons are strongly attracted to them, much as we would expect of oppositely charged particles. We then describe that attraction as non-electromagnetic "strong force" attraction. Why posit an ersatz force as responsible, rather than describing neutrons as negatively charged based on their behavior? I keep running up against circular and tautological reasoning from the laity in explanation of this (i.e. "We know they aren't charged because we attribute their attraction to a different force, and we ascribe this behavior to a different force because we know they aren't charged"). I'm looking for an empirically-based (vs. purely theoretical/mathematical) explanation. Can someone help?
Free neutrons in flight are not deflected by electric fields. Objects which are not deflected by electric fields are electrically neutral. The energy of the strong proton-neutron interaction varies with distance in a different way than the energy in an electrical interaction. In an interaction between two electrical charges, the potential energy varies with distance like $1/r$ . In the strong interaction, the energy varies like $e^{-r/r_0}/r$ , where the range parameter $r_0$ is related to the mass of the pion. This structure means that the strong interaction effectively shuts off at distances much larger than $r_0$ , and explains why strongly-bound nuclei are more compact than electrically-bound atoms.
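The difference in range is easy to see numerically. In this Python sketch the units are arbitrary, and the range parameter $r_0 \approx 1.4$ fm (set by the pion mass) is an assumed approximate value:

```python
import math

# Comparing the distance dependence of the two interaction energies above:
# Coulomb ~ 1/r versus Yukawa ~ exp(-r/r0)/r. Overall strengths are set
# to 1 (arbitrary units); r0 ~ 1.4 fm is an approximate value.
r0 = 1.4  # range parameter, fm

def coulomb(r):
    return 1.0 / r

def yukawa(r):
    return math.exp(-r / r0) / r

# At r = r0 the two differ by a factor of e; by r = 10*r0 the Yukawa
# interaction has effectively shut off while Coulomb has not.
ratio_near = yukawa(r0) / coulomb(r0)
ratio_far = yukawa(10 * r0) / coulomb(10 * r0)
print(ratio_near, ratio_far)
```

This exponential suppression is why strongly-bound nuclei are so much more compact than electrically-bound atoms.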
{ "source": [ "https://physics.stackexchange.com/questions/504183", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/208438/" ] }
504,476
Is "spacetime" the same thing as the mathematical 4th dimension? We often say that time is the fourth dimension, but I am wondering whether that means time is like a fourth geometrical axis, or whether it's something different from a geometrical axis that is merely represented graphically as one, even though time has no geometrical feature. If they differ, can you tell me in what way they differ, so that a layman can understand?
Yes, time can be treated as a fourth axis - that idea was developed by a German mathematician called Hermann Minkowski not long after Einstein published his theory of special relativity (Minkowski was one of Einstein's teachers for a time). Representing time as a fourth axis - along with the usual three spatial axes - is now standard in textbooks and scientific papers. I saw a quote from Einstein implying that he didn't like it at first, something along the lines of 'Now that mathematicians have got hold of relativity I'm not sure I understand it myself any more.' The Minkowski Institute has a website where you can read English translations of his papers. Minkowski's spacetime is in some ways analogous to 3D space. For example, in 3D space there is no predefined value of 'up', so you can pick any direction you like to orient your Z axis, for example. Likewise in Minkowski's space there's no predefined direction for the T axis - if two observers are moving relative to each other then their respective T axes diverge, with the divergence increasing with their relative speed. You can use the concept of 4D spacetime to get a feel for things like time dilation and length contraction in a way that's analogous to measurements in ordinary space. For example, if you use the normal 'Z equals up' orientation you might tell me that a certain flagpole is a hundred feet high and a foot wide. If I have my Z axis tilted away from yours I will say that the height of the flagpole is less than a hundred feet, but it's a lot wider than a foot. Similar things happen in spacetime, where a diverging direction for the T axis would mean that observers measure different elapsed times. However, you can't take the analogy too far, as the geometry of Minkowski space (i.e. the rule for calculating distances etc.) isn't the same as the geometry of Euclidean space, which is what we were all used to before we were introduced to relativity.
In that respect you can't really think of time as something you can treat exactly like the three spatial dimensions. That said, it turns out that mathematically you can represent 'flat' spacetime as Euclidean if you make your fourth dimension $iT$ (i.e. $T$ multiplied by the square root of $-1$). I read somewhere that representing spacetime that way used to be more popular.
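One way to see the difference in geometry is numerically: under a Lorentz boost the Minkowski combination $c^2t^2 - x^2$ is unchanged, while the Euclidean combination $c^2t^2 + x^2$ is not. A small Python sketch with arbitrary illustrative numbers (units where $c = 1$):

```python
import math

# The "distance rule" of Minkowski geometry has one sign flipped relative
# to Euclidean geometry, and that flipped-sign interval is what a Lorentz
# boost preserves. Event coordinates and boost speed are illustrative.
c = 1.0
v = 0.6
gamma = 1.0 / math.sqrt(1.0 - v**2)

t, x = 2.0, 1.0  # some event, in units where c = 1

# Lorentz boost to a frame moving at speed v
t2 = gamma * (t - v * x)
x2 = gamma * (x - v * t)

interval = (c * t)**2 - x**2    # Minkowski "distance" from the origin
interval2 = (c * t2)**2 - x2**2
euclid = (c * t)**2 + x**2      # Euclidean distance, for comparison
euclid2 = (c * t2)**2 + x2**2

print(interval, interval2)  # equal: the Minkowski interval is invariant
print(euclid, euclid2)      # unequal: Euclidean distance is frame-dependent
```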
{ "source": [ "https://physics.stackexchange.com/questions/504476", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/243133/" ] }
505,045
This article claims that "nucleons in a dense nucleus exceed 25 percent of the speed of light". How do you measure or infer the speed of nucleons in the nucleus? Note added later : I'm looking here for experimental techniques, that are as direct as possible. Using the uncertainty principle is of course valid, but it's not really what I'm looking for here, for two reasons: it provides a rough estimate only. arguably, it feels to me more like a theoretical prediction, as opposed to a measurement.
If you shoot an electron or a proton at a nucleus at moderate energies (a few hundred $\mathrm{MeV}$ to a few $\mathrm{GeV}$ ) it will usually either bounce off the whole nucleus or break up the nucleus. But every once in a while (and this gets rarer and rarer the harder you throw it in) it will actually bounce off of a single nucleon. At the right energies this can happen without exciting the target nucleon (or the beam nucleon if you are using a proton beam), and still knock that nucleon right out of the parent nucleus without doing too much mischief on the way out. These events are termed "quasi-elastic scattering" (not to be confused with the use of the same term in neutrino scattering). The "quasi" is because there is some interaction between the beam particle and nucleus on the way in and out and some interaction between the scattered particle and the remnant nucleus on the way out. But this is fairly modest and can be computed in simulation. It is the "elastic" part which we concentrate on. Elastic collisions between two objects are fully constrained by energy and momentum conservation. We know the initial energy and momentum of the beam particle, so if we measure the energy and momentum of the scattered particles we can deduce the energy and momentum of the target particle before it was hit. Then all that remains is to computationally remove the effects of the final-state interactions. Well, and we have to allow for the fact that a nuclear proton is slightly different from a free proton, and for that we rely on phenomenological models. This is exactly the kind of data we took in my dissertation experiment (our purpose was a little more subtle than just making the momentum measurement, but we got that for free). We took events like $A(e,e'p)$ using a $5$ - $6 \,\mathrm{GeV}$ electron beam on protium, deuterium, carbon, and iron targets. This figure appeared in my dissertation.
It shows the processed result for squared momentum transfer of $3.3 \,\mathrm{GeV}^2$ on a carbon target. In the right-hand panel we plot the magnitude of the initial momentum of the struck particle in units of $\mathrm{GeV}/c$ . You can see that the protons are mildly relativistic (recall that their mass is nearly $1 \,\mathrm{GeV}/c^2$ ). In the left-hand panel we plot the binding energy of the struck proton. We see the shell structure of the nucleus. The s-shell is responsible for the smaller, more-tightly bound energy hump and the non-zero density at the center of the momentum graph. The p-shell is responsible for the less tightly bound energy hump and the two-lobed structure of the momentum graph.
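The reconstruction step can be sketched schematically in Python. All 3-momenta below are made-up illustrative numbers (in $\mathrm{GeV}/c$), not data from the experiment, and final-state interactions are ignored:

```python
# Schematic of the kinematic reconstruction in A(e, e'p) quasi-elastic
# scattering: momentum conservation gives the struck proton's initial
# momentum from the beam and measured final-state momenta.
# All 3-momenta (GeV/c) are made-up illustrative numbers.

def vec_add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def vec_sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

p_beam = (0.0, 0.0, 5.0)           # incoming electron along z
p_electron_out = (0.8, 0.0, 3.0)   # measured scattered electron
p_proton_out = (-0.9, 0.1, 2.1)    # measured knocked-out proton

# p_initial = (p_e' + p_p') - p_e   (neglecting final-state interactions)
p_initial = vec_sub(vec_add(p_electron_out, p_proton_out), p_beam)
magnitude = sum(component**2 for component in p_initial) ** 0.5

print(p_initial, magnitude)  # a few hundred MeV/c, as in the figure
```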
{ "source": [ "https://physics.stackexchange.com/questions/505045", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/34003/" ] }
505,593
When an electron transitions from a higher energy state to a lower energy state (energy difference $\Delta E$ ), it produces a massless photon with frequency $\nu$ , where $ \Delta E= h \nu$ ($h$ is the Planck constant). We also know the energy–mass relation $ E=mc^2$ . Why not create some other kind of particle, in this case a particle with a mass $m$ that we could calculate from the energy difference of the two states of the electron? Is there some critical energy difference $\Delta E_c$ such that below $\Delta E_c$ a photon is always created, while above $\Delta E_c$ a particle with mass is created instead?
There are a few reasons why the particle produced needs to be a photon. Aside from conserving energy, we also need to conserve momentum, charge and spin, for example. So you would need to ask what other particle, instead of a photon, could be emitted while satisfying all those conservation requirements. If you just consider energy and spin conservation, the total amount of energy available in electron transitions in an atom is small, and not enough to make any of the other massive bosons. To use your terminology, the maximum energy difference in electron transitions, $\Delta E$ , is way below the energy $\Delta E_c$ you would need to create any of the other known massive particles that satisfy the other conservation requirements.
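To see the scale mismatch numerically, compare a typical atomic transition energy with the rest energy of the electron, the lightest massive particle one might hope to create (a rough Python sketch using standard constants; illustrative only):

```python
# Rough numbers: Planck's constant and a typical atomic transition,
# compared with the electron's rest energy (all in eV).
H_EV = 4.135667696e-15        # Planck constant, eV*s
LYMAN_ALPHA_EV = 10.2         # hydrogen 2p -> 1s transition energy, eV
ELECTRON_REST_EV = 0.511e6    # electron rest energy m_e c^2, eV

nu = LYMAN_ALPHA_EV / H_EV    # photon frequency from dE = h * nu
print(f"photon frequency ~ {nu:.2e} Hz")
print(f"ratio dE / (m_e c^2) ~ {LYMAN_ALPHA_EV / ELECTRON_REST_EV:.1e}")
```

The transition energy is roughly five orders of magnitude below even the electron's rest energy, before any of the other conservation laws are considered.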
{ "source": [ "https://physics.stackexchange.com/questions/505593", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/129312/" ] }
505,596
My question is about the ozone layer. Is it possible that sending rockets out to space can damage the ozone layer?
{ "source": [ "https://physics.stackexchange.com/questions/505596", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/243616/" ] }
505,662
My understanding is the early universe was a very "hot" (i.e. energy dense) environment. It was even hot enough for black holes to form from photons. My second point of understanding is that black holes can lose mass due to Hawking radiation, which amounts to:

Physical insight into the process may be gained by imagining that particle–antiparticle radiation is emitted from just beyond the event horizon. This radiation does not come directly from the black hole itself, but rather is a result of virtual particles being "boosted" by the black hole's gravitation into becoming real particles. As the particle–antiparticle pair was produced by the black hole's gravitational energy, the escape of one of the particles lowers the mass of the black hole.

An alternative view of the process is that vacuum fluctuations cause a particle–antiparticle pair to appear close to the event horizon of a black hole. One of the pair falls into the black hole while the other escapes. In order to preserve total energy, the particle that fell into the black hole must have had a negative energy (with respect to an observer far away from the black hole). This causes the black hole to lose mass, and, to an outside observer, it would appear that the black hole has just emitted a particle. In another model, the process is a quantum tunnelling effect, whereby particle–antiparticle pairs will form from the vacuum, and one will tunnel outside the event horizon.

So I simulated a scenario with two types of particles that are created in a 50/50 ratio from Hawking radiation, and always annihilate each other if possible.

Edit: In this simulation both particles are created, but one gets sucked into the black hole. The other stays outside. So the charge should be conserved.
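As an aside on scale: the temperature of the Hawking radiation invoked here follows the standard formula $T = \hbar c^3 / (8 \pi G M k_B)$, so lighter black holes radiate much more strongly. A quick Python check with SI constants (not part of the simulation below):

```python
import math

# Hawking temperature T = hbar c^3 / (8 pi G M k_B): lighter black holes
# are hotter and evaporate faster (constants in SI units).
HBAR, C, G, K_B = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23
M_SUN = 1.989e30  # kg

def hawking_temperature(mass_kg):
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

print(f"solar-mass black hole: {hawking_temperature(M_SUN):.2e} K")
print(f"1e12 kg primordial black hole: {hawking_temperature(1e12):.2e} K")
```

A solar-mass hole radiates at tens of nanokelvin, while a small primordial hole is hot enough to emit massive particles, which is why the emission rate in the simulation below is the relevant knob.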
The simulation (written in R) is here:

```r
# Run the simulation for 1 million steps and initialize output matrix
n_steps = 1e6
res = matrix(ncol = 2, nrow = n_steps)

# Initiate number of particles to zero
n0 = n1 = 0

for(i in 1:n_steps){
  # Generate a new particle with 50/50 chance of matter/antimatter
  x = sample(0:1, 1)

  # If "x" is a matter particle then...
  if(x == 0){
    # If an antimatter particle exists, then annihilate it with the new matter particle.
    # Otherwise increase the number of matter particles by one
    if(n1 > 0){
      n1 = n1 - 1
    }else{
      n0 = n0 + 1
    }
  }

  # If "x" is an antimatter particle then...
  if(x == 1){
    # If a matter particle exists, then annihilate it with the new antimatter particle.
    # Otherwise increase the number of antimatter particles by one
    if(n0 > 0){
      n0 = n0 - 1
    }else{
      n1 = n1 + 1
    }
  }

  # Save the results and plot them if "i" is a multiple of 1000
  res[i, ] = c(n0, n1)
  if(i %% 1000 == 0){
    plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
    lines(res[1:i, 2], col = "Red", lwd = 3)
  }
}
```

Here is a snapshot of the results, where the black line is the number of "type 0" particles and the red line is the number of "type 1" particles:

Obviously this is a simplified 1d model where any generated anti-matter is immediately annihilated by a corresponding particle of matter, etc. However, I do not see why the qualitative result of a dominant particle "species" would not be expected to hold in general. So what is the basis for expecting equal amounts of matter and antimatter? How is it in conflict with this simple simulation?

EDIT: As requested in the comments I modified the simulation to allow different initial number of particles and the probability of generating each particle.
# Run the simulation for 1 million steps and initialize output matrix n_steps = 250e3 res = matrix(ncol = 2, nrow = n_steps) # Initial number of each type of particle and probability of generating type 0 n0 = 0 n1 = 0 p0 = 0.51 for(i in 1:n_steps){ # Generate a new particle with 50/50 chance of matter/antimatter x = sample(0:1, 1, prob = c(p0, 1 - p0)) # If "x" is a matter particle then... if(x == 0){ # If an antimatter particle exists, then annihilate it with the new matter particle. # Otherwise increase the number of matter particles by one if(n1 > 0){ n1 = n1 - 1 }else{ n0 = n0 + 1 } } # If "x" is an antimatter particle then... if(x == 1){ # If a matter particle exists, then annihilate it with the new antimatter particle. # Otherwise increase the number of antimatter particles by one if(n0 > 0){ n0 = n0 - 1 }else{ n1 = n1 + 1 } } # Save the results and plot them if "i" is a multiple of 1000 res[i, ] = c(n0, n1) if(i %% 1e4 == 0){ plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid()) lines(res[1:i, 2], col = "Red", lwd = 3) } } Some examples: n0 = 1000, n1 = 0, p = 0.5 n0 = 0, n1 = 0, p = 0.51 n0 = 1000, n1 = 1000, p = 0.5 EDIT 2: Thanks all for your answers and comments. I learned the name for the process of generating matter from black holes is "black hole baryogenesis". However, in the papers I checked on this topic (eg Nagatani 1998 , Majumdar et al 1994 ) do not seem to be talking about the same thing I am. I am saying that via the dynamics of symmetric generation and annihilation of matter-antimatter along with symmetric baryogenesis via hawking radiation you will always get an imbalance over time that will tend to grow due to a positive feedback. Ie, the Sakharov conditions such as CP-violation are not actually required to get an asymmetry. If you accept pair-production, annihilation, and hawking radiation exists, then you should by default expect one dominant species of particle to dominate over the other at all times. 
That is the only stable state (besides an energy-only universe). Approximately equal matter/antimatter is quite obviously very unstable, because the two annihilate each other, so it makes no sense to expect that. It is possible that in some more complicated model (including more than one type of particle pair, distance between particles, forces, etc.) this tendency towards asymmetry would somehow be canceled out. But I cannot think of any reason why that would be; it should be up to the people who expect matter-antimatter symmetry to come up with a mechanism to explain it (which would be an odd thing to spend your time on, since that is decidedly not what we observe in our universe).

Regarding some specific issues people had:

1) Concerns about negative charge accumulating in the black holes and positive charge accumulating in the regular space

While in the simulation there is only one particle type, in practice this would be happening in parallel for electron-positron and proton-antiproton pairs at (as far as I know) equal rates. So I would not expect any kind of charge imbalance. You can imagine particle pairs in the simulation are half electron-positrons and half proton-antiprotons.

2) There were not enough black holes in the early universe to explain the asymmetry

I tried and failed to get an exact quote for this so I could figure out what assumptions were made, but I doubt they included the positive feedback shown by the simulation in their analysis. Also, I wondered if they considered the possibility of kugelblitz black holes forming in an energy-only universe. Finally, the tendency towards a dominant species is ongoing all the time; it need not have happened in the early universe anyway.
3) If this process is ongoing in a universe that looks like ours today (where it may take a long time for a particle to travel from one black hole to another), we would expect some black holes to locally generate antimatter-dominated regions and others to generate matter-dominated regions. Eventually some of these regions should come into contact with each other, leading to an observable mass annihilation of particles.

I agree this would be the default expectation, but if you start from a highly matter-dominated state it would be very unlikely for enough antimatter to be generated to locally annihilate all the matter, and even then there is only a 50% chance the next phase is antimatter. Putting numbers on stuff like this would require a more complex model that I don't wish to attempt here.

4) Asymmetry is not actually considered surprising by physicists.

Well, it says this on Wikipedia: Neither the standard model of particle physics, nor the theory of general relativity provides a known explanation for why this should be so, and it is a natural assumption that the universe be neutral with all conserved charges. [...] As remarked in a 2012 research paper, "The origin of matter remains one of the great mysteries in physics."

5) This process is somehow an exotic "alternative" theory to the standard.

This process was deduced by accepting standard physics/cosmology to be correct. It is a straightforward consequence of the interplay between pair production/annihilation and Hawking radiation. It may seem counterintuitive to people used to thinking about what we would expect on average from a model, when actually we want to think about how the individual instances behave. If the simulation is run multiple times and we add up all the "particles", the result will be ~50/50 matter/antimatter. However, we observe one particular universe, not an average of all possible universes.
In each particular instance there is always a dominating species of particle, which we end up calling "matter". So, after reading the answers/comments I think the answer to my question is probably that physicists were thinking of what they would expect on average when they should have been thinking about what would happen in specific instances. But I'm not familiar enough with the literature to say.

Edit 3: After talking with Chris in the chat I decided to make the rate of annihilation dependent on the number of particles in the universe. I did this by setting the probability of annihilation to exp(-100/n_part), where n_part is the number of particles. This was pretty arbitrary; I chose it to have decent coverage over the whole typical range for 250k steps. It looks like this:

Here is the code (I also added some parallelization, sorry for the increased complexity):

```r
require(doParallel)

# Number of simulations to run and threads to use in parallel
n_sim = 100
n_cores = 30

# Initial number of each type of particle and probability
n0 = 0
n1 = 0
p0 = 0.5

registerDoParallel(cores = n_cores)
out = foreach(sim = 1:n_sim) %dopar% {
  # Run the simulation for 250k steps and initialize output matrix
  n_steps = 250e3
  res = matrix(ncol = 2, nrow = n_steps)

  for(i in 1:n_steps){
    # Generate a new particle with 50/50 chance of matter/antimatter
    x = sample(0:1, 1, prob = c(p0, 1 - p0))

    # Probability of annihilation grows with the current number of particles
    n_part = sum(res[i - 1, ]) + 1
    p_ann = exp(-100/n_part)
    flag = sample(0:1, 1, prob = c(1 - p_ann, p_ann))

    # If "x" is a matter particle then...
    if(x == 0){
      # If an antimatter particle exists, then annihilate it with the new matter particle.
      # Otherwise increase the number of matter particles by one
      if(n1 > 0 & flag){
        n1 = n1 - 1
      }else{
        n0 = n0 + 1
      }
    }

    # If "x" is an antimatter particle then...
    if(x == 1){
      # If a matter particle exists, then annihilate it with the new antimatter particle.
      # Otherwise increase the number of antimatter particles by one
      if(n0 > 0 & flag){
        n0 = n0 - 1
      }else{
        n1 = n1 + 1
      }
    }

    # Save the results and report progress if "i" is a multiple of 10000
    res[i, ] = c(n0, n1)
    if(i %% 1e4 == 0 && sim %in% seq(1, n_sim, by = n_cores)){
      # plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
      # lines(res[1:i, 2], col = "Red", lwd = 3)
      print(paste0(sim, ": ", i))
    }
  }
  return(res)
}
```

Here is an example of 25 results:

And a histogram of the percent of particles that were in the minor class by the end of each simulation:

So the results still agree with the simpler model in that such systems will tend to have a dominant species of particle.

Edit 4: After further helpful conversation with Chris, he suggested that annihilation of more than one particle pair per step was the crucial added factor. Specifically, the number of removed particle pairs should be a sample from the Poisson distribution with a mean proportional to the product of the particle counts, ie rpois(1, m*n0*n1), where m is small enough that annihilations are very rare until a large number of matter and antimatter particles exist. Here is the code (which is quite different from earlier):

```r
require(doParallel)

# Number of simulations to run and threads to use in parallel
n_sim = 100
n_cores = 30

# Initial number of each type of particle and probability
n0 = 0
n1 = 0
p0 = 0.5
m = 10^-4

# Run the simulation for 250k steps
n_steps = 250e3

registerDoParallel(cores = n_cores)
out = foreach(sim = 1:n_sim) %dopar% {
  # Initialize output matrix
  res = matrix(ncol = 3, nrow = n_steps)

  for(i in 1:n_steps){
    # Generate a new particle with 50/50 chance of matter/antimatter
    x = sample(0:1, 1, prob = c(p0, 1 - p0))

    # If "x" is a matter particle then...
    if(x == 0){
      n0 = n0 + 1
    }

    # If "x" is an antimatter particle then...
    if(x == 1){
      n1 = n1 + 1
    }

    # Delete a number of particle pairs proportional to the product n0*n1
    n_del = rpois(1, m*n0*n1)
    n0 = max(0, n0 - n_del)
    n1 = max(0, n1 - n_del)

    # Save the results and report progress if "i" is a multiple of 10000
    res[i, 1:2] = c(n0, n1)
    res[i, 3] = min(res[i, 1:2])/sum(res[i, 1:2])
    if(i %% 1e4 == 0 && sim %in% seq(1, n_sim, by = n_cores)){
      # plot(res[1:i, 1], ylim = range(res[1:i, ]), type = "l", lwd = 3, panel.first = grid())
      # lines(res[1:i, 2], col = "Red", lwd = 3)
      print(paste0(sim, ": ", i))
    }
  }
  return(res)
}
```

And here are the results for various values of "m" (which controls how often annihilation occurs). This plot shows the average proportion of minor particles at each step (using 100 simulations per value of m) as the blue line; the green line is the median, and the bands are +/- 1 sd from the mean:

The first plot has the same behavior as my earlier simulations, and you can see that as m gets smaller (annihilation becomes rarer for a given number of particles) the system tends to stay in a more symmetric state (50/50 matter/antimatter), at least for more steps. So a key assumption made by physicists seems to be that the annihilation rate in the early universe was very low, so that enough particles could accumulate until they became common enough that neither is likely to ever get totally "wiped out".

EDIT 5: I ran one of those Poisson simulations for 8 million steps with m = 10^-6 and you can see that it just takes longer for the dominance to play out (it looks slightly different because the 1 sigma fill wouldn't plot with so many data points):

So from that I conclude the very low annihilation rates just delay how long it takes, rather than resulting in a fundamentally different outcome.

Edit 6: The same thing happens with m = 10^-7 and 28 million steps. The aggregate chart looks the same as the above m = 10^-6 with 8 million steps. So here are some individual examples.
You can see a clear trend towards a dominating species, just as in the original model:

Edit 7: To wrap this up... I think the answer to the question ("why do physicists think this?") is clear from my conversation with Chris here. Chris does not seem interested in making that into an answer, but I will accept it if someone writes something similar.
Congratulations on finding a method for baryogenesis that works! Indeed, it's true that if you have a bunch of black holes, then by random chance you'll get an imbalance. And this imbalance will remain even after the black holes evaporate, because the result of the evaporation doesn't depend on the overall baryon number that went into the black hole. Black holes can break conservation laws like that. The only conservation laws they can't break are the ones where you can measure the conserved quantity from outside. For example, charge is still conserved because you can keep track of the charge of the black hole by measuring its electric field. In the Standard Model, baryon number has no such associated field. Also, you need to assume that enough black holes form to make your mechanism work. In the standard models, this doesn't happen, despite the high temperatures. If you start with a standard Big Bang, the universe expands too fast for black holes to form. However, in physics, finding a mechanism that solves a problem isn't the end -- it's the beginning. We aren't all sitting around scratching our heads for any mechanism to achieve baryogenesis. There are actually at least ten known, conceptually distinct ways to do it (including yours), fleshed out in hundreds of concrete models. The problem is that all of them require speculative new physics, additions to the core models that we have already experimentally verified. Nobody can declare that a specific one of these models is true, in the absence of any independent evidence. It's kind of like we're all sitting around trying to find the six-digit password for a safe. If you walk by and say "well, obviously it could be 927583", without any further evidence, that's technically true. But you have not cracked the safe. The problem of baryogenesis isn't analogous to coming up with any six-digit number, that's easy. The problem is that we don't know which one is relevant, which mechanism actually exists in our universe. 
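The size of that by-chance imbalance is easy to estimate: after N uncorrelated +1/-1 changes to the net baryon number, the typical leftover asymmetry is of order sqrt(N). A quick Monte Carlo sketch in Python (illustrative only, not part of the argument above):

```python
import random
import statistics

# After n random +1/-1 emissions, the typical absolute imbalance
# scales like sqrt(n) (mean |S_n| is sqrt(2n/pi) for a simple random walk).
def net_imbalance(n_emissions, rng):
    return abs(sum(rng.choice((-1, 1)) for _ in range(n_emissions)))

rng = random.Random(0)
n = 10_000
trials = [net_imbalance(n, rng) for _ in range(200)]
print(statistics.mean(trials), n ** 0.5)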
What physicists investigating these questions actually do involves trying to link these models to things we can measure, or coming up with simple models that explain multiple puzzles at once. For example, one way to test a model with primordial black holes is to compute which ones are heavy enough to survive until the present day, in which case you can go looking for them. Or, if they were created by some new physics, you could look for that new physics. Yet another strand is to note that if enough primordial black holes are still around today, they could be the dark matter, so you could try to get both baryogenesis and dark matter right simultaneously. All of this involves a lot of reading, math, and simulation.
{ "source": [ "https://physics.stackexchange.com/questions/505662", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/108778/" ] }
506,254
Is there some method for solving differential equations that can be applied to Maxwell's equations to always get a solution for the electromagnetic field, even if only numerical, regardless of the specifics of the problem? Let's say you want to design a series of steps that you can hand to a student, and they will be able to obtain E and B for any problem. The instructions don't have to be simple or understandable to someone without the proper background, but is it possible?
You need to be more precise about exactly what problem you're solving and what the inputs are. But if you're considering the general problem of what electromagnetic fields are produced by a given configuration of electric charge and current over spacetime, then the general solution is given by Jefimenko's equations .
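On the numerical side of the question: a widely used general-purpose recipe is the finite-difference time-domain (FDTD, or Yee) method, which marches Maxwell's curl equations forward on a staggered grid. A minimal 1D vacuum sketch in Python (illustrative, not a production solver; the grid edges here simply act as reflecting boundaries):

```python
import math

# Minimal 1D FDTD (Yee) update in vacuum, in normalized units with
# Courant number 1. Ez and Hy live on staggered grids; a smooth
# Gaussian pulse is injected as a soft source and propagates outward.
N, STEPS = 200, 150
ez = [0.0] * N
hy = [0.0] * N

for t in range(STEPS):
    for i in range(N - 1):
        hy[i] += ez[i + 1] - ez[i]          # update H from curl of E
    ez[N // 4] += math.exp(-((t - 30) ** 2) / 100.0)  # soft Gaussian source
    for i in range(1, N):
        ez[i] += hy[i] - hy[i - 1]          # update E from curl of H

print(max(abs(v) for v in ez))
```

Real solvers add absorbing boundaries, materials, and 2D/3D grids, but the update loop above is the core of the method; the point is that it applies to essentially any source and geometry you can discretize.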
{ "source": [ "https://physics.stackexchange.com/questions/506254", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/233340/" ] }
506,267
A comment to this answer to another question states I would imagine that for any linear non-unitary time-evolution operator, I can find a unitary one that will yield the same expectation values for every [physical state], which makes non-unitary time-evolution with manual normalization equal to unitary time evolution with standard normalization. Is this correct?
{ "source": [ "https://physics.stackexchange.com/questions/506267", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/92058/" ] }
506,285
The speed of light is absolute, but time is relative. So would a light-year for us on earth be a different distance from a light-year on a different uniformly moving object? Why or why not?
The distance light travels in a given period is the same for every observer. That's the whole point of relativity. You can figure out for yourself almost all the effects predicted by relativity if you start with that assumption and think through its consequences. Indeed, the reason why times have to be relative is to allow observers who are moving relative to each other to agree on the speed of light. Take the classic set-up where you are on a railway carriage and I am on the platform of the station you are passing. As you pass me I flash a laser along the platform. After what seems to me to be 100 nanoseconds, the light has traveled to a certain point 100 feet along my platform (a foot is almost exactly a light-nanosecond, hence a shorter version of a light-year). To you, however, that point is not 100 feet away, as you have been travelling relative to the platform, so it is some other distance. If we both agree that the speed of light is the same, i.e. the ratio of the distance it has covered to the time it has taken, when we each think it has covered a different distance, then we must disagree about the elapsed time, i.e. time is relative.
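To put a number on the disagreement: the ratio between the elapsed times the two observers assign is the Lorentz factor gamma = 1/sqrt(1 - v^2/c^2). A small Python sketch (the full mapping between the observers' coordinates is the Lorentz transformation; this shows only the time-dilation factor):

```python
# Time dilation: a platform interval dt corresponds to dt / gamma
# of the moving rider's proper time, where beta = v / c.
def gamma(beta):
    return 1.0 / (1.0 - beta * beta) ** 0.5

for beta in (0.1, 0.5, 0.9):
    print(f"v = {beta}c: platform 100 ns -> {100 / gamma(beta):.1f} ns proper time")
```

At everyday speeds gamma is indistinguishable from 1, which is why the disagreement about elapsed time is never noticed outside precision experiments.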
{ "source": [ "https://physics.stackexchange.com/questions/506285", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/81812/" ] }
506,695
In Schrodinger's Cat thought experiment, why doesn't the cat itself qualify as an observer? Reading through the replies there seem to be two suggestions for what can take the role of observer: any "large" body any "living" thing (or should that be "conscious"?)
The point is, it has made you think about the issue. Whereas we all might agree a hydrogen atom is not an observer and a human is an observer, the case of a cat is not so clear. The point of the thought experiment is to expose problems with the Copenhagen interpretation - which it does very successfully.
{ "source": [ "https://physics.stackexchange.com/questions/506695", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/244054/" ] }
506,714
Error-correction in quantum computing is designed to get around decoherence "washing out" the answer to a computation. But wouldn't the introduction of error-correction procedures or apparatus merely increase the production of entropy and produce more decoherence?
{ "source": [ "https://physics.stackexchange.com/questions/506714", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/20391/" ] }
506,716
As far as I know, the number of protons is less than or equal to the number of neutrons in any atomic nucleus. But is there any possibility that there exists a nucleus where the number of protons exceeds the number of neutrons (apart, of course, from the trivial case of hydrogen)? Actually I wanted this to be a discussion involving binding energy. It's my mistake: I did not word my query correctly. As many of you have pointed out, protium does in fact have more protons than neutrons. But protium has only 1 proton, so binding energy is not involved. For an atomic nucleus to be stable, the repulsion between protons must be outweighed by the binding energy. But is there any stable atomic nucleus whose $n/p$ ratio is less than 1?
What you are looking for is isotopes with neutron–proton ratio N / Z less than 1. You can find these isotopes, for example, in this list from Wikipedia. As you can see, you are looking for members of the table with N less than Z . In this table you are looking for isotopes that are roughly above the gray zone (also known as the band or belt of stability ). The colors indicate how stable the isotopes are: gray isotopes are stable, white isotopes have a half-life of less than a day, and other colors are somewhere in between. According to the table there are only three isotopes with fewer neutrons than protons and a half-life of more than a day: hydrogen-1 and helium-3, which are stable, and beryllium-7, with a half-life of around 53 days.
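As a quick sanity check of the three cases named above (a trivial Python sketch; the (N, Z) values are standard nuclide data):

```python
# The three nuclides with N < Z and a half-life over a day, from the answer above.
nuclides = {"H-1": (0, 1), "He-3": (1, 2), "Be-7": (3, 4)}  # name -> (N, Z)

for name, (n, z) in nuclides.items():
    print(f"{name}: N/Z = {n}/{z} = {n / z:.2f}")
```

All three sit below the N = Z line, which is why they lie above the belt of stability in the chart.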
{ "source": [ "https://physics.stackexchange.com/questions/506716", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/243860/" ] }