Dataset schema: source_id (int64, values 1 to 4.64M), question (string, lengths 0 to 28.4k), response (string, lengths 0 to 28.8k), metadata (dict).
644,012
I used an electronic device for a certain time. Because the device was powered, its temperature, measured with an NTC thermistor, increased. After a certain time I switched off the power to the device but kept measuring the temperature with the NTC. When plotting the evolution of temperature with time, I realized that the temperature increase followed an $A(1-Be^{-Cx})$ law while the decrease followed a $De^{-Ex}$ law. Is there a fundamental reason for this kind of analytic evolution of the temperature?
This behavior is basically described by Newton's law of cooling with heat generation, using the equation: $$MC\frac{dT}{dt}=G-k(T-T_{\infty})$$ where $T$ is the temperature, $t$ is time, $M$ is the mass, $C$ is the heat capacity, $G$ is the heating rate, $k$ is the Newton cooling coefficient (convective heat transfer coefficient times surface area), and $T_{\infty}$ is the surrounding room temperature. For the heating portion of the cycle, the initial temperature is $T_{\infty}$, and the solution of the equation for the temperature increase is $$(T-T_{\infty})=\frac{G}{k}(1-e^{-\frac{kt}{MC}})$$ For the cooling portion, $G$ is zero, and the starting temperature $T_0$ is the final temperature from the heating portion. The solution for the temperature decrease in this portion is $$(T-T_{\infty})=(T_0-T_{\infty})e^{-\frac{kt}{MC}}$$ Both of these variations match the functional form of what you have observed. The analysis also shows that the coefficients $C$ and $E$ in your measurements should be roughly equal to one another.
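As a quick numerical sketch of the two solutions above (the values for $G$, $k$, $M$ and $C$ are illustrative, not taken from any real device; only the curve shapes and the shared time constant $MC/k$ matter):

```python
import numpy as np

# Illustrative parameters (not from any real device)
M, C = 0.05, 900.0        # mass (kg), heat capacity (J/(kg K))
G, k = 2.0, 0.5           # heating rate (W), Newton cooling coefficient (W/K)
T_inf = 20.0              # room temperature (deg C)
tau = M * C / k           # common time constant MC/k for heating and cooling

t = np.linspace(0, 5 * tau, 200)

# Heating: T - T_inf = (G/k)(1 - exp(-t/tau))  ->  the A(1 - B e^{-Cx}) form
T_heat = T_inf + (G / k) * (1 - np.exp(-t / tau))

# Cooling from the final heating temperature: T - T_inf = (T0 - T_inf) exp(-t/tau)
T0 = T_heat[-1]
T_cool = T_inf + (T0 - T_inf) * np.exp(-t / tau)

print(f"time constant MC/k = {tau:.0f} s (same for heating and cooling)")
print(f"steady-state temperature rise G/k = {G/k:.1f} K")
```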
{ "source": [ "https://physics.stackexchange.com/questions/644012", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/230627/" ] }
644,270
Wolfgang Pauli once said (regarding the neutrino): "I have done a terrible thing. I have postulated a particle that cannot be detected." Why did he figure it couldn't be detected? Was this because he thought it was massless? According to Wikipedia, "neutrinos were long believed to be massless". If so, why did they think it was massless? I thought the particle was hypothesised in order to maintain the conservation of momentum in a beta decay. If it was massless, this wouldn't have any effect, right?
I thought the particle was hypothesised in order to maintain the conservation of momentum in a beta decay. If it was massless, this would have no effect, right? This is where you are confused. Having no mass does not mean having no momentum. I think you are probably thinking of momentum as Newtonian mechanics would express it: $p=mv$. However, Einstein came up with a relativistic expression linking energy and momentum: $$E^2 = m^2c^4+p^2c^2$$ where $m$ is the rest mass. Now even if the rest mass is zero, the particle has energy (like photons do) and you get: $$p = \frac E c$$ So massless neutrinos would still have momentum.
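A small numeric illustration of the point above, working in MeV units (the 1 MeV total energy is just an example value):

```python
import math

E = 1.0          # total energy in MeV (example value)
m_e = 0.511      # electron rest mass in MeV/c^2

# Massless particle: E^2 = p^2 c^2  ->  p = E/c
p_massless = E
# Massive particle: E^2 = m^2 c^4 + p^2 c^2  ->  p = sqrt(E^2 - m^2 c^4)/c
p_electron = math.sqrt(E**2 - m_e**2)

print(f"massless particle of energy 1 MeV: p = {p_massless:.3f} MeV/c")
print(f"electron of the same total energy: p = {p_electron:.3f} MeV/c")
```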
{ "source": [ "https://physics.stackexchange.com/questions/644270", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/248862/" ] }
644,630
I have read a bit about this and am still wondering why we actually care about the direction of angular momentum, because the fact that the vector representing angular momentum is perpendicular to the momentum vector and the position vector doesn't really seem to have any deeper meaning. Is it defined as a vector to keep it consistent with the definition of linear momentum, or what is the deeper explanation behind this?
This answer elaborates on the answer-in-a-comment by Chiral Anomaly. (Incidentally, Stack Exchange specifically requests: "Avoid answering questions in comments.") The precursor to the concept of angular momentum was Kepler's law of areas. As we know, to define an area - using vectors as elements - you need two vectors. As pointed out by Stack Exchange contributor Chiral Anomaly, if the motion is in a space with 4 spatial dimensions, or 5, or any higher number of dimensions, then the only way to specify angular momentum at all is with two vectors. A space with three spatial dimensions has a property unique to space-with-three-spatial-dimensions: every plane has a single perpendicular direction (unique up to sign). So: the convention of using a single vector to represent angular momentum is a hack, a hack that only works in a space with three spatial dimensions. The convention: the direction of the angular momentum vector expresses the plane of rotation, and the magnitude of the vector represents the magnitude of the angular momentum. There is a problem though: the direction of the rotation is ambiguous. This problem would not be there with a notation using two vectors, but when you use a single vector to represent angular momentum there is not enough capacity to represent all the information that needs to be represented. Because of that ambiguity there is an extra rule: the right hand rule. The right hand rule exists because the single-vector notation for angular momentum is a hack. The single vector doesn't have the capacity to represent all the information that needs to be represented.
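A minimal sketch (NumPy, with an arbitrary example $\vec r$ and $\vec p$) of the two representations discussed above: the antisymmetric two-vector object $L_{ij} = r_i p_j - r_j p_i$, which makes sense in any number of dimensions, and the 3D-only single vector $\vec L = \vec r \times \vec p$, whose components are just the three independent entries of $L_{ij}$:

```python
import numpy as np

r = np.array([1.0, 2.0, 0.5])   # example position vector
p = np.array([0.3, -1.0, 2.0])  # example momentum vector

# Two-vector (antisymmetric tensor) representation: works in any dimension
L_tensor = np.outer(r, p) - np.outer(p, r)

# Single-vector representation: only possible in 3 spatial dimensions
L_vector = np.cross(r, p)

# The three independent tensor components are exactly the cross-product components
print(L_tensor[1, 2], L_vector[0])   # L_x
print(L_tensor[2, 0], L_vector[1])   # L_y
print(L_tensor[0, 1], L_vector[2])   # L_z
```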
{ "source": [ "https://physics.stackexchange.com/questions/644630", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/304015/" ] }
645,055
Just like it says in the question title. I have heard that photons are force carriers of electromagnetism. Is it not true that when a golf club imparts force on to a golf ball, the fundamental force involved is the electromagnetic force? If yes, then would it be true that during this interaction there is some kind of photon transfer from / between golf club and golf ball?
The blank assertion that "photons are the force carrier of the electromagnetic force" is sort of true, but a bit misleading if you don't have the technical knowledge to unpack what it is trying to say. When the golf club pushes the ball, it is correct that the forces are largely electromagnetic. The situation also involves the Pauli exclusion principle (the fact that two electrons can't occupy the same state of motion and spin) and in consequence the whole interaction is quite complicated, but for present purposes let's just consider electromagnetic interaction between a pair of charged things such as electrons. When we say that "photons" are involved in this kind of electromagnetic repulsion, the word "photons" is very much in inverted commas. These are not real photons, not like the ones you see with your eye or which travel along in light beams etc. Rather, it is a way of talking about how the underlying physics of quantum fields and their interactions works. The interaction between charged objects can be expressed as an integral over all the ways in which one object (e.g. an electron) can interact with the electromagnetic field which in turn interacts with the other object (e.g. another electron). These interactions can themselves be expressed a number of ways, but a particularly nice way is to assert that an electron emits something called a "virtual photon". This virtual photon is quite like a real photon, but not completely like, the main difference being that it does not propagate like an ordinary wave but more like an exponentially decaying excitation, and it should not be considered as a thing which could in any sense go on its way to the rest of the world and interact with anything else . Rather, it is a way of talking about part of the interaction between the particular two electrons under consideration. A good image here is that of a diagram where two electrons come in, and two go out, and in the middle various virtual photons are exchanged in a network of interactions: but notice, none of those virtual photons come in or out as overall input or overall output to the network. When you understand quantum field theory, you know that this aspect of the diagram tells you that these "photons" are not entities with any independent existence of their own; they are just a convenient way to discuss, and calculate accurately, the interaction between two electrons via the electromagnetic field.
{ "source": [ "https://physics.stackexchange.com/questions/645055", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/269418/" ] }
645,226
When I leave my room I walk down three flights of stairs releasing about 7kJ of potential energy. Where does it go? Is it all getting dispersed into heat and sound? Is that heat being generated at the point of impact between my feet and the ground, or is it within my muscles? Related question, how much energy do I consume by walking? Obviously there's the work I'm doing against air resistance, but I feel like that doesn't account for all the energy I use when walking.
The heat is predominantly generated in your muscles. More direct conversion of potential energy to heat is when a person is sliding down a pole to get to a lower floor quickly. With sufficient friction, the descent is at a constant velocity instead of accelerating. In muscle, some structures slide along each other. Muscle contraction is those structures being made to move relative to each other, using molecular motors that act somewhat like a hand-over-hand method. As we know, muscles can also extend in a controlled manner. If you are bending down to the ground you allow your muscles to extend while maintaining tension, so that your motion is controlled. During that controlled extending: potential energy converts to heat in the muscles. This conversion of potential energy is on top of the baseline heat generation because the muscle is active . When you stand up your muscles are working against gravity, actively contracting. The energy source for that contraction is, ultimately, the food you have eaten. In the muscles, the conversion of chemical energy is not 100% efficient. A percentage is transformed to actual power output, a percentage becomes heat straightaway. When you are allowing your muscles to extend in a controlled manner your muscles are active , so some heat is generated just because the muscle is active. When you are walking downstairs the total heat generated in the muscles is the sum of two contributions: heat that is generated anyway because the muscle is active, and heat generated because the process of a muscle being extended against muscle tension is work being done on the muscle, and that leads to heat generation in the muscle. (That is, that heat is not generated in the muscle when a completely relaxed muscle is extended by an external force.) In walking we use our leg muscles actively to smooth out the motion; the leg muscles are used actively to provide some level of elastic suspension . By comparison, kangaroos are known to have Achilles tendons that are optimized to store elastic energy. The jumping form of travelling that kangaroos can do is quite energy-efficient. The power needed for the next jump is mostly from elastic energy stored in the tendon on coming down. Human walking doesn't have that level of efficiency. Muscle power is used actively both when the centre of mass of the body comes down and when the centre of mass of the body comes back up again. So there is the generation of heat from that power output.
{ "source": [ "https://physics.stackexchange.com/questions/645226", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/260328/" ] }
645,387
It seems to me that this time is finite, although it seems infinitely small, but if it is finite, is it also identical for any perfectly elastic collision? What should I know about this time?
You can't deform a real material without losing some energy to heat (this is known as internal friction or mechanical hysteresis ). In the ideal case of a perfectly elastic collision, if you're allowing this internal friction to exist, then zero deformation can occur, implying that the idealized materials are perfectly rigid—that is, that their elastic moduli are infinite. This in turn requires a contact time of zero, which is typical for introductory physics treatment of kinematics and collisions. Alternatively, if you posit that the internal friction is zero, then you can have a nonzero contact time in which the materials squish together, storing strain energy, and then rebound. In fact, for very compliant materials, the contact time could be quite long. This problem is treated in the field of impact mechanics. Note, however, that compliant does not mean soft ; no permanent deformation can occur (this is the soft–hard dichotomy), only recoverable deformation (this is the compliant–stiff dichotomy). Nonrecoverable deformation would preclude an elastic collision. Does this make sense? As examples, rubber and steel balls both bounce quite well off a relatively stiff surface, which could be surprising because their stiffnesses differ by about six orders of magnitude. Elastomers such as rubber are compliant (with Young's modulus under 1 MPa, for example) but not soft (from a strain point of view; they are soft from an applied-force point of view). Thus, they provide a mostly-elastic collision (with a relatively long contact time) because they don't permanently deform much (compare with Silly Putty, for example) and don't waste a lot of deformation through friction. In contrast, the bounciness of steel balls arises from their relatively high stiffness and strength—which preclude lattice flexing and dislocation movement that would lead to hysteretic and plastic deformation losses—and the corresponding contact time is relatively short.
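To make the "compliant materials can have long contact times" point quantitative, here is a sketch that integrates the Hertzian contact law for an elastic sphere bouncing off a rigid flat. This is an assumption-laden illustration: fully elastic, frictionless Hertz contact is assumed, and the material numbers for "rubber" and "steel" are only representative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def contact_time(E, nu, rho, R, v0):
    """Contact duration for an elastic sphere hitting a rigid flat (Hertz contact)."""
    E_star = E / (1.0 - nu**2)            # combined contact modulus (rigid flat)
    m = rho * 4.0 / 3.0 * np.pi * R**3    # sphere mass
    k = 4.0 / 3.0 * E_star * np.sqrt(R)   # Hertz force law: F = k * delta**1.5

    def rhs(t, y):                         # y = [indentation delta, its rate]
        delta, ddot = y
        return [ddot, -k * max(delta, 0.0)**1.5 / m]

    def separated(t, y):                   # contact ends when delta returns to zero
        return y[0]
    separated.terminal = True
    separated.direction = -1

    sol = solve_ivp(rhs, [0.0, 0.1], [0.0, v0], events=separated,
                    max_step=1e-6, rtol=1e-8)
    return sol.t_events[0][0]

R, v0 = 0.01, 1.0                                    # 1 cm ball, 1 m/s impact
t_rubber = contact_time(1e6, 0.49, 1100.0, R, v0)    # ~1 MPa elastomer (illustrative)
t_steel = contact_time(200e9, 0.30, 7800.0, R, v0)   # ~200 GPa steel (illustrative)
print(f"rubber: {t_rubber*1e3:.2f} ms   steel: {t_steel*1e6:.0f} us")
```

The point of the comparison is the scaling: both collisions are (ideally) elastic, but the compliant ball stays in contact for milliseconds while the stiff one rebounds in tens of microseconds.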
{ "source": [ "https://physics.stackexchange.com/questions/645387", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/198107/" ] }
645,501
A proton is stable because of the strong force between quarks, which is not present in an electron. So what's the reason for the electron's stability?
As far as we know, electrons are fundamental particles and have no internal structure or components. Also, an electron cannot decay into other particles (unless it has a very high kinetic energy) because there is no lighter charged lepton for it to decay into. It can, however, annihilate with a positron to produce gamma rays.
{ "source": [ "https://physics.stackexchange.com/questions/645501", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/301275/" ] }
645,854
Many questions have been asked here about why the Earth has a magnetic field, e.g., What is the source of Earth's magnetic field? How does Earth's interior dynamo work? How can an electrically neutral planetary core be geodynamo? Why does the Earth even have a magnetic field? At the risk of oversimplifying a bit, the answer is the dynamo theory . Convection in an electrically conductive, rotating fluid – in this case, the molten metal in the planet's core – creates electric currents that, in turn, generate a magnetic field. Why doesn't the same thing happen in the oceans? A large ocean like the Pacific would appear to have all of the general properties required for a dynamo. It is made of conductive saltwater; it has significant bulk flows (indeed, ocean currents are much faster than convective currents in the core); and it rotates with the planet. Is the higher resistivity the key difference? If so, would a saltier ocean be able to generate a magnetic field?
The earth's oceans do in fact generate a measurable magnetic field $^{[1][2]}$. As you have already pointed out, the motion of charged particles generates magnetic fields, so it makes sense that the earth's oceans would do the same. In fact, the oceans make a contribution (albeit a small one) to the Earth's overall magnetic field. The moving salts within the oceans carry electrical charge, which means there are electrical currents; and since the oceans move in cycles - the motion of the tides and so on, as you pointed out - the oceans contribute to the total magnetic field of the earth. In the image below, we see how this magnetic field is distributed about the northern hemisphere, with the United States and Canada in the center of the sphere, and how its strength varies at different points. The European Space Agency in 2013 launched three satellites, a system called Swarm, which was designed to study the earth's magnetic field in detail and was also used to map the magnetic field emanating from the oceans. As can be seen, the ocean-generated magnetic field is on average $(1 \ \text{to} \ 2)\times 10^{-9}$ Tesla at sea level. This field drops to roughly $10^{-9}$ Tesla at a height of a few hundred kilometers, or average satellite height. This means that this magnetic field is about $20,000 \times$ smaller than the Earth's magnetic field ( $\approx 40\mu$ Tesla) caused by the motion of charged particles in the Earth's core. References: Analysis of Ocean Tide-Induced Magnetic Fields, AGU Journals, 08 November 2019. Ocean Tides and Magnetic Fields, a short video by NASA, with links therein to other interesting magnetic effects of earth's oceans.
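A one-line check of the ratio quoted above:

```python
B_core = 40e-6    # Earth's main field at the surface, ~40 microtesla
B_ocean = 2e-9    # ocean-generated field at sea level, ~1-2 nanotesla

print(f"main field / ocean field ~ {B_core / B_ocean:,.0f}x")   # ~20,000x
```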
{ "source": [ "https://physics.stackexchange.com/questions/645854", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/194757/" ] }
645,863
Why can a parallel plate capacitor not be charged with unlimited charge? Since a zero-resistance wire has no electric field inside it, and two parallel metal plates carrying equal and opposite charges produce a field only between the plates, it seems it is not the capacitor's own field that opposes further charging. Then what force stops charge from accumulating on the capacitor once its voltage equals the battery's voltage, other than Kirchhoff's voltage law (KVL)? Also, what if the two plates of the capacitor have different areas? How can I calculate its capacitance then?
{ "source": [ "https://physics.stackexchange.com/questions/645863", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/304565/" ] }
646,114
I have now read on the Wikipedia pages for unbihexium , unbinilium , and copernicium that these elements will not behave similarly to their forebears because of “relativistic effects”. When I read about rutherfordium , it too brings up the relativistic effects, but only to say that it compared well with its predecessors, despite some calculations indicating it would behave differently, due to relativistic effects. The dubnium page on Wikipedia says that dubnium breaks periodic trends, because of relativistic effects. The Wikipedia page on seaborgium doesn't even mention relativistic effects, only stating that it behaves as the heavier homologue to tungsten . Bohrium's Wikipedia page says it's a heavier homologue to rhenium . So, what are these relativistic effects and why do they only take effect in superheavy nuclei? When I think of relativistic effects, I think speeds at or above $.9 c$ or near incredibly powerful gravitational forces. So, I fail to see how it comes into play here. Is it because the electrons have to travel at higher speeds due to larger orbits?
When quantum mechanics was initially being developed, it was done so without taking into account Einstein's special theory of relativity. This meant that the chemical properties of elements were understood from a purely quantum mechanical description, i.e., by solving the Schrödinger equation. The more accurate models developed since then, which do use special relativity, were found to be more consistent with experiment than the ones without special relativity. So when they quote "relativistic effects" they are referring to chemical properties of elements that were determined using special relativity. Is it because the electrons have to travel at higher speeds due to larger orbits? Changes to the chemical properties of elements due to relativistic effects are more pronounced for the heavier elements in the periodic table because in these elements, electrons have speeds worthy of relativistic corrections. These corrections show properties that are more consistent with reality than those where a non-relativistic treatment is given. A very good example of this would be the consideration of the color of the element gold, Au. Physicist Arnold Sommerfeld calculated that, for an electron in a hydrogenic atom, its speed is given by $$v \approx (Zc)\alpha$$ where $Z$ is the atomic number, $c$ is the speed of light, and $$\alpha\approx\frac{1}{137}$$ is a (dimensionless) number called the fine structure constant or Sommerfeld's constant. For Au, since $Z= 79$, its innermost ( $\bf \small 1s$ ) electrons would be moving $^1$ at about $0.58c$. This means that relativistic effects will be pretty noticeable for gold $^2$, and these effects actually contribute to gold's color. Interestingly, we also note from the above equation that if $Z\gt 137$ then $v\gt c$, which would violate one of the postulates of special relativity, namely that no object can have a velocity greater than that of light. But it is also well known that no element can have atomic number $Z\gt 137$ (what would happen is that with such a strong electric field due to the nucleus, there is enough energy for pair production $e^++e^-$, which quenches the field). $^1$ Electrons are not "moving around" a nucleus; they are instead probability clouds surrounding the nucleus. So "most likely distances of electrons" would be a better term. $^2$ In the example of the element gold, which has the electron configuration $$\bf \small 1s^2 \ 2s^2\ 2p^6\ 3s^2\ 3p^6\ 4s^2\ 3d^{10}\ 4p^6\ 5s^2\ 4d^{10}\ 5p^6\ 6s^1\ 4f^{14}\ 5d^{10}$$ relativistic effects will increase the $\bf \small 5d$ orbital distance from the nucleus, and also decrease the $\bf \small 6s$ orbital distance from the nucleus.
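A quick sketch of the $v \approx Z\alpha c$ estimate above for a few elements, together with the corresponding Lorentz factor (this is the hydrogenic approximation for the innermost electron, so the numbers are only rough):

```python
import math

alpha = 1 / 137.036                       # fine structure constant
elements = {"C": 6, "Fe": 26, "Ag": 47, "Au": 79, "Og": 118}

for symbol, Z in elements.items():
    beta = Z * alpha                      # v/c of the innermost electron (hydrogenic estimate)
    gamma = 1 / math.sqrt(1 - beta**2)    # associated Lorentz factor
    print(f"{symbol:3s} Z={Z:3d}  v/c ~ {beta:.2f}  gamma ~ {gamma:.2f}")
```

For gold this gives $v/c \approx 0.58$ and $\gamma \approx 1.22$, which is why relativistic corrections to the core orbitals are far from negligible.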
{ "source": [ "https://physics.stackexchange.com/questions/646114", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/75297/" ] }
646,332
If the only observer for the Schrödinger's cat experiment was a camera filming the box from the outside while the box was opened automatically without direct human intervention, and the only observation performed was through watching the recorded video ten years later, will the wave function collapse into one of the two states when the video is watched for the first time, or at the moment of filming it? Also, following the many-worlds interpretation, will the universe "branch out" at the moment of watching the video for the first time? The question is not limited to the Schrödinger's cat experiment, but applies to any other experiment where a wave function is supposed to collapse, e.g. the double-slit experiment.
The collapse of the wave function happens whenever the quantum system initially described by the wave function becomes entangled with the environment — the part of the Universe that wasn't tracked by the wave function. This may be a human, but it could just as easily be a video camera. If the initial wave function described the system being watched and the camera, then the collapse happens whenever the state of both becomes entangled with something else, whose being a living thing is once again irrelevant. Technically, the collapse just means that the initial subsystem can no longer be described by a wave function of its own, because the subsystem has additional correlations with the environment.
{ "source": [ "https://physics.stackexchange.com/questions/646332", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/304800/" ] }
646,528
I just saw in the dynamic periodic table that He is liquid at $-273.15\ ^\circ \rm C$ . Is that true? How is that even possible? Can someone explain?
The thermodynamic phase of a material is never a function of temperature only. The correct statement is that helium remains in a liquid state at whatever small temperature achievable in a laboratory at normal pressure . It is well known that $^4$ He freezes into a crystalline solid at about 25 bar. Such peculiar behavior (helium is the only element remaining liquid at normal pressure close to $0$ K) is partly due to its weak interatomic attraction (it is a closed shell noble gas) and partly to its low mass, which makes quantum effects dominant. A signal of the latter is the well-known transition to a superfluid phase at about $2$ K. A theoretical explanation of the avoided freezing at normal pressure could be done at different levels of sophistication. No classical argument can be used, since a classical system would stop moving at zero temperature. A hand-waving argument is related to the zero-point motion of the system, which is large for light particles. A theoretically more robust approach is based on the density functional theory of freezing (DFT). The qualitative explanation of the freezing process based on DFT is that the difference of free energy between the solid and the liquid is controlled by the competition between the contribution of the change of density (favoring the liquid), and the contribution of the correlations of the liquid phase in reciprocal space, in particular at wavelengths close to the first reciprocal vector of the crystalline structure (favoring the solid). It turns out that the latter contribution is particularly weak in liquid $^4$ He.
{ "source": [ "https://physics.stackexchange.com/questions/646528", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/298093/" ] }
646,726
I do know that there are at least two kinds of yellow light: light of a single wavelength of ~580 nm, and a combination of green light and red light. (Technically, there could be more combinations that look yellow.) And the following two figures are making me confused. Are red and green light reflected from a banana? What wavelengths of white light does a banana reflect?
The reflectance of solid and liquid substances usually has a broad spectrum, and bananas are no exception here. Only gases have a line spectrum. Here is the reflectance spectrum of ripe (i.e. yellow) and unripe (i.e. green) bananas. (image from " Food chemistry - Prediction of banana color and firmness using a novel wavelengths selection method of hyperspectral imaging ") You see, ripe bananas reflect light beginning from green (~ 520 nm) through yellow, orange, red, and extending to infrared.
{ "source": [ "https://physics.stackexchange.com/questions/646726", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/304554/" ] }
646,737
So the inner tracker system of a particle detector (say CMS ) can detect charged particles because they ionize particles in the detector. These inner tracker systems are made of silicon pixels and silicon strips. As pointed out by the quantum diaries blog , this is the same technology as is used in a digital camera. Only, a digital camera can see neutral particles, crucially photons, otherwise it's not a very good camera. It does this via indirect ionization, from Compton scattering (edit; for future readers, this very confident statement is incorrect, see https://physics.stackexchange.com/a/646780/147600 ). By contrast, photons are not visible in the inner tracker of a particle collider, see here. I'm sure that if an inner tracker could be built to detect photons it would be, because the inner tracker greatly improves the accuracy of the vertex finding. So there is some reason that isn't possible. We know the inner tracker is being hit by some very hard photons, so it's not a question of the photons being less ionizing than those detected by a camera. Perhaps whatever a camera does to make photons detectable is not radiation hard, and so cannot be used here. It seems unlikely that it is too bulky; the inner tracker is measured in cm, and a phone camera is measured in mm. It could have too long a deadtime between successive hits. Alternatively, whatever a camera does to detect photons has a high stopping power (it's very opaque), and it would shield the rest of the detector from radiation. But these ideas are just my speculation. It appears the answer to this is so obvious that nobody bothers to put it in their review/report/paper, which makes it a little embarrassing to ask, but why can't inner trackers see photons? If you had a citation for the cause I'd be very grateful to have that too.
{ "source": [ "https://physics.stackexchange.com/questions/646737", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/147600/" ] }
647,020
I understand that there is no need for a fourth-dimensional space to bend into, but why do physicists seem to be against the idea? Is this simply because there is no proof of a fourth dimension, or is there some sort of evidence against a fourth dimension? Wouldn't it be simpler to assume there is a larger dimension that spacetime is embedded in? It seems to me that it could simplify a lot of things if we assume there is a fourth dimension tangential to the three dimensions of space, so it feels like physicists must have some good reason to be so against the idea. Also, of course, there is the simple intuition that if something is on a curved surface (like a ball or saddle) there must be "something" inside the "ball" or between the sides of the "saddle." Can a physicist explain why not being embedded in a higher 4D space is simpler or more accurately describes observations?
You can always embed a (spacetime) manifold in a sufficiently high-dimensional space ( if you have a $d$ dimensional manifold it can be embedded in a space of $2d$ dimensions ). But that doesn't specify which space it is - it could be any sufficiently high dimensional space. So assuming it is embedded doesn't tell you anything at all. Hence it is simpler to not invoke any embedding in the first place.
{ "source": [ "https://physics.stackexchange.com/questions/647020", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/299922/" ] }
647,022
In example 5.5 from Kleppner Kolenkow (2nd edition), the general case of finding the escape velocity of a mass $m$ projected from the earth at an angle $\alpha$ with the vertical, neglecting air resistance and Earth's rotation, is presented. Their analysis goes as follows: The force on $m$ , neglecting air resistance, is $$\mathbf{F} = -mg\dfrac{R_e^2}{r^2}\mathbf{\hat{r}}$$ where $\mathbf{\hat{r}}$ is a unit vector directed radially outward from earth's center, $R_e$ is the radius of the Earth and $r$ is the distance of $m$ from the center of the Earth. We don't know the trajectory of the particle without solving the problem in detail, but for any element of the path the displacement $d\mathbf{r}$ can be written as $$d\mathbf{r} = dr \mathbf{\hat{r}} + rd\theta \boldsymbol{\hat{\theta}}$$ where $\boldsymbol{\hat{\theta}}$ is a unit vector perpendicular to $\mathbf{\hat{r}}$ , see the picture below for a sketch of the images they present. Because $\mathbf{\hat{r}} \cdot \boldsymbol{\hat{\theta}} = 0$ we have $$\mathbf{F} \cdot d\mathbf{r} = -mg \dfrac{R_e^2}{r^2}\mathbf{\hat{r}} \cdot (dr \mathbf{\hat{r}} + rd\theta \boldsymbol{\hat{\theta}}) = \\ -mg\dfrac{R_e^2}{r^2}dr.$$ The work-energy theorem becomes $$\dfrac{m}{2}(v^2-v_0^2) = -mgR_e^2 \int_{R_e}^r \dfrac{dr}{r^2} = \\ -mg R_e^2(\dfrac{1}{r} - \dfrac{1}{R_e})$$ Here is where I get lost: They say the escape velocity is the minimum value of $v_0$ for which $v=0$ when $r \to \infty$ . We find $$v_{\text{escape}} = \sqrt{2gR_e} = 1.1 \times 10^4 \text{m/s}$$ which is the same result as in the example when $\alpha=0$ , that is when the mass is projected straight up (presented earlier in the book). They write, "In the absence of air friction, the escape velocity is independent of the launch direction, a result that may not be intuitively obvious". Indeed I find this very unintuitive and have one gripe with the analysis presented. I don't understand how we can assume that $r$ even will go to infinity in the first place. As far as I can see, what have been shown is that if $r \to \infty$ , then $v_0 = \sqrt{2gR_e}$ will make $v=0$ . What I don't see have been shown, however, is that setting $v_0 = \sqrt{2gR_e}$ will make $r \to \infty$ , which I think really is what escape velocity should be. (This is what was done in the case of a pure vertical projection of the mass, it was shown that if $v_0 = \sqrt{2gR_e}$ then $r_{\text{max}} \to \infty$ but this has not been done in the more general case presented here I think.) Question Based on this similar question the curvature of the Earth seems to make it so that even if you fire the mass horizontally it won't crash down or simply stay at a constant height above the earth, but this would have to be shown in the analysis right, it is not merely enough to assume that $r$ can go to infinity?
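A quick numerical check of the escape-speed value quoted above (a sketch assuming $g = 9.81\ \mathrm{m/s^2}$ and $R_e = 6.371\times 10^{6}\ \mathrm{m}$):

```python
import math

g = 9.81          # surface gravitational acceleration, m/s^2
R_e = 6.371e6     # Earth's radius, m

v_escape = math.sqrt(2 * g * R_e)
print(f"v_escape = {v_escape:.3e} m/s")   # ~1.1e4 m/s, as quoted in the text
```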
{ "source": [ "https://physics.stackexchange.com/questions/647022", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/240385/" ] }
647,329
The second law of thermodynamics states that the entropy in an isolated system cannot decrease This seems intuitive when considering a low entropy system transitioning to a higher entropy state, but very counterintuitive when considering a system that is currently at the greatest possible entropy because the system can only transition to another maximum entropy state by first passing through a lower entropy state. Consider, for example, the system shown in "Entropy: Why the 2nd Law of Thermodynamics is a fundamental law of physics" by Eugene Khutoryansky. This system starts with $500$ balls in the left container and intuitively we can understand that these balls will spread evenly between the two containers, but what happens when the balls are distributed evenly: $250$ in the left container and $250$ in the right container? Does the second law of thermodynamics prohibit any ball from moving to another container because that would shift the system into a lower entropy configuration? EDIT: I believe (although answers seem to indicate that this believe is incorrect) that the state in between has lower entropy because $$\Omega_1 = \binom{1000}{500} > \binom{1000}{501} = \Omega_2$$
The second law of thermodynamics does not prohibit a ball from moving to the other container on the grounds that this would shift the system into a lower-entropy configuration. The question originates from a widespread misconception: there is no such thing as the entropy of a single configuration in statistical mechanics. Entropy is a property of the macrostate. Therefore, it is a collective property of all the microscopic configurations consistent with the macroscopic variables uniquely identifying the equilibrium state. The physical system visits all the accessible microstates as a consequence of its microscopic dynamics. Among these states, there are states with an unbalanced number of particles in the two containers. People refer to such states as fluctuations around the average equally distributed case. It is an effect of the macroscopic size of thermodynamic systems that the overwhelming majority of the microscopic states do not show large fluctuations.
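A short sketch for the 1000-ball example in the question, using exact binomial counting: the perfectly even split is itself quite improbable, while the microstates within a small relative fluctuation of it account for almost all of the total, which is why individual balls moving back and forth pose no problem for the second law.

```python
from math import comb

N = 1000                       # total number of balls
total = 2 ** N                 # number of microstates (each ball is left or right)

p_even = comb(N, N // 2) / total
p_small_fluct = sum(comb(N, k) for k in range(475, 526)) / total   # 475..525 balls on the left

print(f"P(exactly 500 on the left)      = {p_even:.4f}")        # ~0.025
print(f"P(between 475 and 525 on left)  = {p_small_fluct:.4f}") # ~0.89
```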
{ "source": [ "https://physics.stackexchange.com/questions/647329", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/215323/" ] }
647,487
The reason given in most places for why one cannot escape from an event horizon is the fact that the escape velocity at the event horizon is equal to the speed of light, and nothing can go faster than the speed of light. But you don't really need to reach the escape velocity to get away from a massive object like a planet. For example, a rocket leaving earth doesn't have escape velocity at launch, but it still can get away from earth since it has propulsion. So, if a rocket is just inside the event horizon of a black hole, it doesn't need to have the escape velocity to get out, and it should at least be able to come out of the event horizon through propulsion. Also, if the black hole is sufficiently large, the gravitational force near the event horizon will be weaker, so a normal rocket should be able to get out easily. Is this really theoretically possible? If the problem were just that the escape velocity is too high, I don't see any reason why a rocket cannot get out. This is a similar question, but my question is not about a ship with an Alcubierre drive.
It is often said that the escape velocity at the event horizon is the speed of light, but while this is true in a sense it is not very useful. The problem is that the speed is an observer dependent quantity. An observer far from the black hole would say the escape velocity at the event horizon was zero, which is obviously nonsensical and proves only that speed is not a useful quantity to describe the motion near an event horizon. There is more on this in the question Does light really travel more slowly near a massive body? though this may be excessively technical. A better way to understand what is going on is to ask how powerful a rocket motor would you need to hover at a fixed distance from the black hole. For example to hover at the Earth's surface your rocket motor needs to be able to generate an acceleration of $g$ i.e. a force $mg$ where $m$ is the mass of the rocket. If your rocket motor is more powerful than this you will accelerate upwards away from the Earth and if it is less powerful you will fall downwards towards the Earth. In Newtonian gravity the acceleration required to hover at a distance $r$ from a mass $M$ is given by the well known equation for Newtonian gravity: $$ a = \frac{GM}{r^2} \tag{1} $$ The event horizon is at $r = 2GM/c^2$ so if Newtonian gravity applied we could substitute this into equation (1) to give: $$ a = \frac{c^4}{2GM} \tag{2} $$ which is a large number, but some future physicist might be able to build a rocket that powerful. The problem is that when we move to general relativity equation (1) is no longer valid. The GR equivalent is derived in twistor59's answer to What is the weight equation through general relativity? The details are a little involved, but in GR the equation becomes: $$ a = \frac{GM}{r^2} \frac{1}{\sqrt{1-\frac{2GM}{c^2r}}} \tag{3} $$ If you now substitute $r = 2GM/c^2$ into this equation you find that the acceleration required is infinite i.e. no matter how powerful a rocket motor you build you cannot hover at the event horizon. Once at the horizon you are doomed to fall in. And this explains why you cannot start at the event horizon and move away from it slowly using your rocket motor. You would need an infinitely powerful rocket!
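A small numerical sketch of equation (3) above for a one-solar-mass black hole, showing how the proper acceleration needed to hover blows up as $r$ approaches the horizon $r_s = 2GM/c^2$ (the constants are standard SI values):

```python
import math

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
M = 1.989e30     # one solar mass, kg

r_s = 2 * G * M / c**2                       # Schwarzschild radius, ~2.95 km

def hover_acceleration(r):
    """Proper acceleration needed to hover at radius r, equation (3)."""
    return G * M / r**2 / math.sqrt(1 - r_s / r)

for factor in (10, 2, 1.1, 1.01, 1.001):
    r = factor * r_s
    print(f"r = {factor:>6} r_s : a = {hover_acceleration(r):.3e} m/s^2")
```

The acceleration is already enormous far from the horizon, but the key point is the divergence: as `factor` approaches 1 the required acceleration grows without bound.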
{ "source": [ "https://physics.stackexchange.com/questions/647487", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/79464/" ] }
647,804
Why doesn't energy exhaust itself over time? For example, isn't kinetic energy required for an object to move, but won't the kinetic energy decrease as the object moves since it uses that energy?
Kinetic energy is a property of moving objects, not a thing that moving objects consume. A moving object can no more exhaust its kinetic energy by moving than a red object can exhaust its redness by being red.
{ "source": [ "https://physics.stackexchange.com/questions/647804", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/297402/" ] }
648,041
If every object is $99\%$ empty space, how is reflection possible? Why doesn't light just pass through? Also, light travels in a straight line, doesn't it? The wave nature doesn't say anything about its motion. Also, does light reflect after striking an electron, or an atom, or what?
Have you ever seen grid antennas? In fact, such a grid is also a mirror, designed to reflect the waves into its focal point. Why can it reflect the waves if it is mostly empty space? The reason is that the holes are small compared with the wavelength: a wave cannot pass through openings smaller than about half its wavelength, so the mostly-empty grid reflects it.
{ "source": [ "https://physics.stackexchange.com/questions/648041", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/303573/" ] }
648,140
If air is a bad heat conductor, how does fire heat up a room? Could someone help me, as I really don't get this?
There are three mechanisms at play: conduction, convection and radiation. Radiation is the most immediate. Your environment irradiates you with black body radiation at room temperature (assuming that you are in a room at 20 °C / 293 K). As soon as your fire burns, it emits black body radiation at a temperature of roughly 600 °C (about 900 K), which is mostly in the infrared. The power emitted by a black body is proportional to $T^4$ (in kelvin), according to the Stefan-Boltzmann law, and this radiation is quite intense. Also, the air will be heated by your fire, and hot air will reach you by convection. The least important effect is conduction through the air.
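A back-of-the-envelope sketch of the $T^4$ point, comparing the black-body power per unit area radiated at the fire's temperature with that radiated at room temperature (idealized black-body surfaces are assumed):

```python
sigma = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
T_fire = 900.0         # fire temperature, K (~600 deg C)
T_room = 293.0         # room temperature, K (~20 deg C)

P_fire = sigma * T_fire**4    # ~37 kW per square metre
P_room = sigma * T_room**4    # ~0.4 kW per square metre

print(f"fire : {P_fire:8.0f} W/m^2")
print(f"room : {P_room:8.0f} W/m^2")
print(f"ratio: {(T_fire / T_room)**4:.0f}x")   # ~89x more power per unit area
```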
{ "source": [ "https://physics.stackexchange.com/questions/648140", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/305595/" ] }
648,273
Question: Can the radiation emitted by fire be approximated by a black body spectrum? It has been discussed in this community that the black-body spectrum mostly serves as an approximation to the actual spectra of thermal light sources: firstly, because the perceived radiation is not in thermal equilibrium (although this depends on how one defines BBR - as an equilibrium state of radiation or as radiation emitted by a black body); secondly, because the real sources of radiation are usually only approximately black bodies; finally, because the real sources of radiation are usually in a stationary state rather than in thermal equilibrium. Still, a slab of metal taken out of an oven, or even the oven itself, could be treated as a black body heated to a constant temperature (which is also a strong assumption). Can we however say the same about a highly dynamic campfire?
Different bits of the fire have different characteristics, the spectrum of a flame would usually consist of discrete line radiation perhaps superposed on a weaker continuum. However, the base of the fire, especially say in the cavities between any burning material would emit radiation that more closely approximates the Planck function. In this respect, a just-lit fire would have light dominated by "flame" and the spectrum would not be Planckian, but a well-established hot fire with a lot of radiation coming from its "heart" would be more blackbody-like. The flames that you see would usually be "optically thin" - that is they are transparent to their own and other radiation. In such circumstances, the flames emit light that usually correspond to discrete transitions at particular wavelengths - about as dissimilar to the Planck function as it could be. This is how "flame tests" for the presence of different chemical elements work. If the flame was hot enough to ionise atoms you would also get some recombination continuum radiation, but as I said, most flames are optically thin, so the radiation never gets the chance to come into equilibrium with the matter at some particular temperature and you would not get the Planck spectrum. If you can see through the flame, at any wavelength, then it isn't radiating a Planck spectrum . On the other hand, the space between the coals of a hot fire can provide a reasonable simulacrum to the cavity radiation of an ideal blackbody. Here, we are looking into an enclosed space where the radiation field has had a chance to come into equilibrium with the surrounding material. This is still just an approximation - but a clue that you are looking at something close to the Planck spectrum comes when you start to lose the ability to discern the texture or shape of the material within the cavity. That is because everything is at a similar temperature and the radiation field is starting to approach isotropy. The above discussion by the way is often why one will notice that it isn't the flames of a fire that give off the most heat, it is the base of the fire. That is because blackbody radiation is the most efficient way that a thermal radiator can lose heat (radiatively). EDIT: Here are some spectra taken of flames from a simulated wildfire, compared with blackbody spectra - not very Planckian (from Boulet et al. 2011 ). The main peak at 2300 cm $^{-1}$ is due to radiation from CO $_2$ and CO molecules. The conclusion in this paper is that the temperature is about 1500K and the CO $_2$ /CO radiation is optically thick and thus reaches the Planck function, but that the flames are optically thin at all other wavelengths.
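For comparison with the line-dominated flame spectra described above, here is a sketch that evaluates the Planck function (spectral radiance per unit wavenumber) at the ~1500 K flame temperature quoted from the linked paper; the wavenumber grid chosen to match that figure is my own assumption:

```python
import numpy as np

h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
kB = 1.381e-23   # Boltzmann constant, J/K

def planck_per_wavenumber(sigma_cm, T):
    """Spectral radiance B_sigma(T) in W sr^-1 m^-2 per m^-1, with sigma given in cm^-1."""
    s = sigma_cm * 100.0                                    # convert cm^-1 to m^-1
    return 2 * h * c**2 * s**3 / np.expm1(h * c * s / (kB * T))

T = 1500.0                                                  # flame temperature, K
sigma = np.linspace(500, 6000, 2000)                        # wavenumber range, cm^-1
B = planck_per_wavenumber(sigma, T)

peak = sigma[np.argmax(B)]
print(f"Planck peak at ~{peak:.0f} cm^-1 for T = {T:.0f} K")
print(f"B(2300 cm^-1, CO2 band) / B(peak) = {planck_per_wavenumber(2300, T) / B.max():.2f}")
```

An optically thick gas column at the CO2/CO band can therefore approach the Planck value near 2300 cm^-1 even though the rest of the flame spectrum falls far below it.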
{ "source": [ "https://physics.stackexchange.com/questions/648273", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/247642/" ] }
648,399
Often semiconductors are cited as the big application of quantum mechanics (QM), but when looking back at my device physics book basically no quantum mechanics is used. The concept of a quantum well is presented and some derivations are done, but then the next chapter mostly ignores this and goes back to statistical physics along with referencing experimentally verified constants to explain things like carrier diffusion, etc. Do we really need quantum mechanics to get to semiconductor physics? Outside of providing some qualitative motivation to inspire I don't really see a clear connection between the fields. Can you actually derive transistor behaviour from QM directly?
Do we really need quantum mechanics to get to semiconductor physics? It depends what level of understanding you're interested in. For example, are you simply willing to take as gospel that somehow electrons in solids have different masses than electrons in a vacuum? And that they can have different effective masses along different direction of travel? That they follow a Fermi-Dirac distribution? That band gaps exist? Etc. If you're willing to accept all these things (and more) as true and not worry about why they're true, then quantum mechanics isn't really needed. You can get very far in life modeling devices with semi-classical techniques. However, if you want to understand why all that weird stuff happens in solids, then yes, you need to know quantum mechanics. Can you actually derive transistor behavior from QM directly? It depends on the type of transistor. If you're talking about a TFET (or other tunneling devices, like RTDs and Zener diodes), then I challenge you to derive its behavior without quantum mechanics! However, if you're talking about most common transistors (BJTs, JFETs, MOSFETs, etc.), then deriving their behavior from quantum mechanics is a lot of work because the systems are messy and electrons don't "act" very quantum because of their short coherence time in a messy environment. However, the semi-classical physics used for most semiconductor devices does absolutely have a quantum underpinning. But there's a good reason it's typically not taught from first principles. Anecdote: One time, I was sitting next to my advisor at a conference, and there was a presentation that basically boiled down to modeling a MOSFET using non-equilibrium greens functions (which is a fairly advanced method from quantum mechanics). During the presentation, my advisor whispered to me something along the lines of: "Why the heck are they using NEGF to model a fricking MOSFET?!?" In other words, just because you can use quantum mechanics to model transistors, doesn't mean you should. There are much simpler methods that are just as accurate (if not more accurate).
{ "source": [ "https://physics.stackexchange.com/questions/648399", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/147291/" ] }
648,643
I have read this question: The fundamental confusion many have about black holes is thinking that they are discrete "things" surrounded by horizons and other phenomena. But they are actually extended spacetime curvature structures (that imply the various phenomena). The singularity is not doing anything and is not responsible for the gravitational field, it is a consequence of the field. What are tidal forces inside a black hole? As far as I understand, as per general relativity, spacetime curvature is caused by stress-energy (not mass). This answer is using a vacuum solution to describe black holes, and you can read in the comments to that question that there is no need for any matter (or mass) to be present inside the black hole, it is just a vacuum, but spacetime itself is curved, and the gravitational field itself has the energy needed for the curvature itself. This includes the singularity itself, which in this answer is described as being "off the metric", that is not part of our spacetime, hence, it cannot cause the curvature. Now, if the interior of the black hole, is a vacuum (the model is a vacuum solution), meaning the collapsed star's gaseous matter is not there (as far as I understand it is in the singularity), and the singularity is not part of our spacetime, then neither can cause curvature. Again, GR describes curvature as being caused by stress energy. If there is no matter, no mass, nothing with stress-energy inside the black hole, except the singularity, but the singularity is not part of our spacetime, then what causes the curvature? There are suggestions in the comments, that the collapsing star's gaseous matter transforms into the energy of the gravitational field itself. But I do not understand how electrons and quarks can transform into gravitons. Still, how can the gravitational field itself cause the curvature, or how can it sustain itself? Gravity sustains itself, curvature means stress-energy in the gravitational field, and this energy causes curvature? Question: $1$ . If black holes are just an empty vacuum of space inside, then what causes the curvature?
GR describes curvature as being caused by stress-energy. This statement is slightly wrong and is the cause of your confusion here. Technically, in GR the stress energy tensor is the source of curvature. That is not quite the same as being the cause. An easy analogy is with Maxwell’s equations. In Maxwell’s equations charge and current density are the sources of the electromagnetic field. However, although charges are the source of the field there exist non trivial solutions to Maxwell’s equations that involve no sources. These are called vacuum solutions, and include plane waves. In other words Maxwell’s equations permit solutions where a wave simply exists and propagates forever without ever having any charges as a source. Similarly with the Einstein field equations (EFE). The stress energy tensor is the source of curvature, but just as in Maxwell’s equations there exist non trivial vacuum solutions, including the Schwarzschild metric. In that solution there is no cause of the curvature any more than there is a cause of the plane wave in Maxwell’s equations. The curvature in the Schwarzschild metric is simply a way that vacuum is allowed to curve even without any sources. Now, both in Maxwell’s equations and in the EFE the vacuum solutions are not particularly realistic. Charges exist as does stress energy. So the universe is not actually described by a vacuum solution in either case. So typically only a small portion of a vacuum solution is used to describe only a small portion of the universe starting at some matching boundary. A plane wave can match the vacuum region next to a sheet of current, and the Schwarzschild solution can match the vacuum region outside a collapsing star. So realistically, the cause of the curvature would be stress-energy that is outside of the vacuum solution, in the part of the universe not described by the Schwarzschild metric. This would be in the causal past of the vacuum region including the vacuum inside the horizon. Since it is in the causal past it can be described both as the cause and the source of the curvature, with the understanding that it is strictly outside of the Schwarzschild metric which is a pure vacuum solution in which the curvature has no source.
{ "source": [ "https://physics.stackexchange.com/questions/648643", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/132371/" ] }
648,656
I'm exploring this question out of personal curiosity. If I take a cloud of atoms of a given element and release it in space at a distance of Earth's orbit from the sun (but not so close to Earth as to be affected by Earth's atmosphere/gravity/mag-field/radiation belts, etc.), will the neutral cloud turn into an ionized plasma? Of course a few of the atoms will always be neutral regardless of the radiation field intensity, so for specificity let's say an ionization fraction of at least 95% or so. I think that I could use the Saha equation for this: $$ \frac{N_{i+1}}{N_i}=\frac{2 Z_{i+1}}{n_e Z_i}\left(\frac{2\pi m_e k T}{h^2}\right)^{3/2} \exp\left(\frac{-\chi_i}{kT}\right) $$ Naturally it depends on the electron density, $n_e$ , and the ionization energy, $\chi_i$ . But to use it I also need to know the temperature of the cloud of atoms. How would I determine this? I know the diffuse solar wind has an electron temperature ~140000 K, would the cloud equilibrate with the electrons? Or would it be driven toward another temperature? And is the Saha equation even the correct approach given that for some elements the solar radiation will be able to directly photo-eject electrons?
{ "source": [ "https://physics.stackexchange.com/questions/648656", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/252367/" ] }
649,238
I have read this question: Changes to chemical properties of elements due to relativistic effects are more pronounced for the heavier elements in the periodic table because in these elements, electrons have speeds worthy of relativistic corrections. These corrections show properties that are more consistent with reality, than with those where a non-relativistic treatment is given. Electrons are not "moving around" a nucleus, but they are instead probability clouds surrounding the nucleus. So "most likely distances of electrons" would be a better term. Why do "relativistic effects" come into play, when dealing with superheavy atoms? And this one: The fact that by measuring the spectra of atoms we know the energy levels, in the Bohr model allows to calculate a velocity for the electron. The Bohr model is superseded by the quantum mechanical solutions which give the probabilistic space-time solutions for the atom, but since it is a good approximation to the QM solution, it can be considered an "average" velocity. There is no way to measure an individual electron's four vector while bound to an atom. One can measure it if it interacts with a particle, as for example "the atom is hit by a photon of fixed energy, with an energy higher than ionization", and an electron comes out and its velocity can be measured. The balance of the energy and momentum four vectors of the interaction "atom+photon" will give the four vector of the electron, and thus its velocity in a secondary way. An accumulation of these measurements would give on average the velocity calculated by the Bohr model. How do particles that exist only as a cloud of probabilities have actual rates of speed? Now both of these agree on the fact that electrons are quantum mechanical objects, described by probability densities where they exist around the nuclei (some might say they exist everywhere at the same time with different probabilities), but then the first one says that the relativistic corrections are justified, so that is the correct way. Now a free electron can have a classical trajectory, as seen on the bubble chamber image, as the electron spirals in. But why can't the bound electron do the same thing around the nucleus? As far as I understand, QM is the right way to describe the world of electrons around the nuclei, and they are not classically moving around the nuclei, then they do not have actual classically definable trajectories, but as soon as they are free, they can move along classical trajectories. Just to clarify, as far as I understand, electrons are not classically orbiting, but they exist around the nuclei in probability clouds, as per QM. What I am asking about, is, if they are able to move along classical trajectories as free electrons, then what happens to these free electrons as they get bound around a nucleus, why are they not able to classically move anymore? I am not asking why the electron can't spiral into the nucleus. I am asking why it can't move along classical trajectories around the nucleus if it can do that when it is a free electron. If we can describe the free electron's trajectory with classical methods in the bubble chamber, then why can't we do that with the electron around the nucleus? Question: If free electrons have classical trajectories, then why don't bound electrons around the nuclei have it too?
This is the subject of an underrated classic paper from the early days of quantum mechanics: Mott, 1929: The wave mechanics of alpha-ray tracks . Mott's introduction is better than my attempt to paraphrase: In the theory of radioactive disintegration, as presented by Gamow, the $\alpha$ -particle is represented by a spherical wave which slowly leaks out of the nucleus. On the other hand, the $\alpha$ -particle, once emerged, has particle-like properties, the most striking being the ray tracks that it forms in a Wilson cloud chamber. It is a little difficult to picture how it is that an outgoing spherical wave can produce a straight track; we think intuitively that it should ionise atoms at random throughout space. We could consider that Gamow’s outgoing spherical wave should give the probability of disintegration, but that, when the particle is outside the nucleus, it should be represented by a wave packet moving in a definite direction, so as to produce a straight track. But it ought not to be necessary to do this. The wave mechanics unaided ought to be able to predict the possible results of any observation that we could make on a system, without invoking, until the moment at which the observation is made, the classical particle-like properties of the electrons or $\alpha$ -particles forming that system. Mott's solution is to consider the alpha particle and the first two atoms which it ionizes as a single quantum-mechanical system with three parts, with the result We shall then show that the atoms cannot both be ionised unless they lie in a straight line with the radioactive nucleus. That is to say, your question gets the situation backwards. The issue isn't that "free electrons have classical trajectories," and that these electrons are "not able to move classically anymore" when they are bound. Mott's paper shows that the wave mechanics, which successfully predicts the behavior of bound electrons, also predicts the emergence of straight-line ionization trajectories. With modern buzzwords, we might say that the "classical trajectory" is an "emergent phenomenon" due to the "entanglement" of the alpha particle with the quantum-mechanical constituents of the detector. But this classic paper predates all of those buzzwords and is better without them. The observation is that the probabilities of successive ionization events are correlated, and that the correlation works out to depend on the geometry of the "track" in a way which satisfies our classical intuition.
{ "source": [ "https://physics.stackexchange.com/questions/649238", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/132371/" ] }
649,241
Introduction:
I read on Wikipedia's list of common misconceptions that microwaves work not by emitting the resonant frequency of water, but as a result of dielectric heating. As I understand it, this process heats a substance by emitting a constantly changing electric field, which makes the polar molecules in the substance attempt to align themselves with the field, thus introducing more molecular motion, that is, thermal energy. This makes me think that the heat introduced to the substance should be directly proportional to the polarity of the molecule. I then conducted a brief experiment:

Method:
I heated 25g of canola oil and 25g of water in plastic cups in a 1250 W microwave oven for 15 seconds each, measuring the temperatures before and after. I couldn't find the frequency of the waves emitted from the microwave, though I'm pretty sure it's the standard 2.45 GHz. If the frequency is necessary to know for sure, I think I could go back and partially melt a chocolate bar in it, finding the wavelength, and use the speed of light to find the frequency. ( http://www.planet-science.com/categories/over-11s/physics-is-fun!/2012/01/measure-the-speed-of-light-using-chocolate.aspx )

Data:

|  | Water | Canola oil |
|---|---|---|
| mass (g) | 25 | 25 |
| time in microwave (s) | 15 | 15 |
| initial temperature (C) | 25 | 24 |
| final temperature (C) | 51 | 33 |
| heat deposited (J) | 2718 | 472 |

Results:
Using estimated specific heat for canola oil (from https://www.sciencedirect.com/topics/neuroscience/canola-oil and https://doi.org/10.1080/10942910701586273 ) of 2.1 J/gK and 4.182 J/gK for water, it can be found that the change in energy of the water is: $q = mc(\Delta T)=(25~\mathrm{g})(4.182~\mathrm{J/gK})(26~\mathrm{K}) = 2718 ~\mathrm{J}$ And the change of energy in the oil is: $q = mc(\Delta T)=(25~\mathrm{g})(2.1~\mathrm{J/gK})(9~\mathrm{K}) = 472~\mathrm{J}$ So there was about six times more energy given to the water than the oil.

Discussion:
This seems odd to me. First, if dielectric heating relies on polarity, why is the canola oil heating at all? Second, why is it heating as much as it is? I would think that the hydrogen bonds in the water are far more than six times as strong as the London dispersion forces within the oil. Is it because the oil is diluted with a polar substance? Is it because the lowered polarity makes it easier to move the molecules, and therefore impart heat? What should be expected to happen if a completely nonpolar material is microwaved?
This makes me think that the heat introduced to the substance should be directly proportional to the polarity of the molecule. In addition to the good answer from Gert, there's a problem in this step. The microwave oven is a metal box. The purpose of the metal box is to reflect whatever microwaves aren't absorbed by the food (either because they "missed the target" or because they passed through). This reflection isn't 100% efficient, but it's actually pretty close, so for the sake of discussion we can pretend that it is. What that means is that basically all of the power emitted by the magnetron goes into the food . If you put a sample in there with a lower dielectric loss it will absorb less energy, and heat up less, "on the first pass", but that just means that more of the energy will be available to bounce off of the walls and take another try, and another, and another — the field density will increase until it reaches the point where absorbed power equals input power. Obviously there are practical limits to this (an empty microwave oven would either have to deliver all of its heat to itself , or else automatically turn off or reduce power), but for reasonable samples of food-like stuff it's close enough to true to seriously mess with the concept of your experiment.
{ "source": [ "https://physics.stackexchange.com/questions/649241", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/279639/" ] }
649,413
I have read this question: When galaxies collide it is not that their stars crash into each other, because their individual cross-sections are extremely small when compared to the space between them. This is dealt with in qualitative terms on the Wikipedia page on the likely collision. What will happen to the Earth when Milky Way and Andromeda merge? And this one: All that being said, the effect would be order of magnitude smaller than the type of seismic events that happen on a daily basis, and would not pose any sort of threat to anything on Earth. Andromeda & Milky Way Merger: Gravitational Waves The first one says that the stars (solar systems) will hardly collide with each other. Now the second one says that the gravitational waves from the merger of the central black holes would be hardly detectable. So basically, when this merger happens, assuming humanity is still here, and the Earth is still intact, would we even notice the merger? As far as I understand, the black holes would merge in seconds, but I do not know how long the full merger of the galaxies would last. Would we just simply see more stars moving fast in the night sky? Would we even notice the merger with the Andromeda Galaxy ?
The merger would be indirectly noticeable due to a dramatic burst of star formation and supernovas. The gas of the two galaxies will meet at high velocity, clump, and produce new stars. Some will be very heavy and bright, resulting in supernovas and gamma ray bursts: the merged galaxy may become a bit too risky for planet-bound civilizations dependent on an ozone layer. This is amplified by the possibility of gas accretion on the central black holes producing a luminous active galactic core. In the end these processes will tend to blow away much of the gas and end star formation, leaving us with a big elliptical galaxy. Obviously the process itself stretches over hundreds of million years from start to finish of the final merge, so there would never be anything moving dramatically. The night sky would just keep on getting more complex and rich until the merger winds down. A civilization observing the stars would likely figure out what is going on, especially by comparing to other galactic mergers.
{ "source": [ "https://physics.stackexchange.com/questions/649413", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/132371/" ] }
649,416
I am working on exercises in Introduction to Quantum Field Theory by Peskin and Schroeder and throughout many of my derivations I often encountered integrals like this: $$\int d^3p d^3p' \delta^3(p - p')e^{i(p\cdot x + p'\cdot y)}.$$ Here the $p $ s in the dirac delta are three-vectors and the $p,x,y$ s in the exponent are four vectors. Here is how I tried to evaluate this: $$\int d^3p d^3p' \delta^3(p - p')e^{i(p\cdot x + p'\cdot y)} = \int d^3p d^3p' \delta^3(p - p')e^{i(E_p\cdot t_x + E_{p'}\cdot t_y)}e^{^{-i(p \cdot x + p'\cdot y)}}.$$ Now the $p,x,y$ s in the exponent are three vectors. Thus: $$\int d^3p d^3p' \delta^3(p - p')e^{i(E_p(t_x - t_y))}e^{^{-ip\cdot(x + y)}} = \int d^3p d^3p' \delta^3(p - p')e^{i(E_p\Delta t)}e^{^{-ip\cdot(x + y)}} = \int d^3pe^{i(E_p\Delta t)}e^{^{-ip\cdot(x + y)}} = \int d^3p e^{ip\cdot (x - y)}.$$ Note that in the very last integral expression the $p,x,y$ s in the exponent are four vectors. If I let $\Delta t = 0$ then the integral is the three dimensional dirac delta function $(2\pi)^3\delta^3(x - y)$ , where $x,y$ are position three vectors. Now, if I do not let $\Delta t = 0$ , then is the integral of four vectors $\int d^3p e^{ip\cdot (x - y)}$ the four dimensional dirac delta function $(2\pi)^4\delta^4(x - y)$ where x and y are four vectors? If not, what is this integral?
The merger would be indirectly noticeable due to a dramatic burst of star formation and supernovas. The gas of the two galaxies will meet at high velocity, clump, and produce new stars. Some will be very heavy and bright, resulting in supernovas and gamma ray bursts: the merged galaxy may become a bit too risky for planet-bound civilizations dependent on an ozone layer. This is amplified by the possibility of gas accretion on the central black holes producing a luminous active galactic core. In the end these processes will tend to blow away much of the gas and end star formation, leaving us with a big elliptical galaxy. Obviously the process itself stretches over hundreds of million years from start to finish of the final merge, so there would never be anything moving dramatically. The night sky would just keep on getting more complex and rich until the merger winds down. A civilization observing the stars would likely figure out what is going on, especially by comparing to other galactic mergers.
{ "source": [ "https://physics.stackexchange.com/questions/649416", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
649,427
When dealing with RF plasma sources - one of the most significant challenges is impedance matching between RF power source and plasma. Impedance of plasma changes dramatically during ignition - and this change is very fast. I wonder if it is possible to build impedance matching circuit which is matched to both cold & hot plasma at the same time without tuning, even if it requires more components (i.e. more than 2C + 1L)? So that when plasma starts to burn - circuit is immediately matched to it. If it is not possible - are there any hacks/tricks to make it work relatively well with both states of plasma?
The merger would be indirectly noticeable due to a dramatic burst of star formation and supernovas. The gas of the two galaxies will meet at high velocity, clump, and produce new stars. Some will be very heavy and bright, resulting in supernovas and gamma ray bursts: the merged galaxy may become a bit too risky for planet-bound civilizations dependent on an ozone layer. This is amplified by the possibility of gas accretion on the central black holes producing a luminous active galactic core. In the end these processes will tend to blow away much of the gas and end star formation, leaving us with a big elliptical galaxy. Obviously the process itself stretches over hundreds of million years from start to finish of the final merge, so there would never be anything moving dramatically. The night sky would just keep on getting more complex and rich until the merger winds down. A civilization observing the stars would likely figure out what is going on, especially by comparing to other galactic mergers.
{ "source": [ "https://physics.stackexchange.com/questions/649427", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/930/" ] }
649,738
This may sound like a ridiculous question, but it struck me as something that might be the case. Suppose that you have a gigantic mirror mounted at a huge stadium. In front, there's a bunch of people facing the mirror, with a long distance between them and the mirror. Behind them, there is a man making moves for them to follow by looking at him through the mirror. Will they see his movements exactly when he makes them, just as if they had been simply facing him, or will there be some amount of "optical lag"?
As the speed of light is finite, sure enough there is some lag, but let's evaluate how big that lag is. Assuming the mirror is 100 meters away, the lag will be $$2\times 100\: \mathrm{m}/(3\times 10^8\:\mathrm{m/s}) = 667\:\mathrm{ns}.$$ Comparing it to the average human reaction time of about $0.1\:\mathrm{s}$, one can conclude that it is impossible for the naked eye to notice any lag at all.
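For reference, the same arithmetic in a short Python sketch (the 100 m distance and the 0.1 s reaction time are just the illustrative values assumed above):

```python
c = 3.0e8            # speed of light, m/s
distance = 100.0     # assumed viewer-to-mirror distance, m
reaction_time = 0.1  # typical human reaction time, s

round_trip = 2 * distance / c                     # light travels to the mirror and back
print(f"optical lag: {round_trip * 1e9:.0f} ns")  # ~667 ns
print(f"lag / reaction time: {round_trip / reaction_time:.1e}")  # ~7e-6
```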
{ "source": [ "https://physics.stackexchange.com/questions/649738", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/306276/" ] }
649,873
Let's consider the element neon. Its ground-state electron configuration is: $1s^2 2s^2 2p^6$ . What would happen if enough energy was given for one electron in the $1s$ orbital to jump to the $2s$ orbital (i.e. exactly the $\Delta E$ between $1s$ and $2s$ was supplied)? Would the electron from the $1s$ orbital absorb the energy? There can't be more than 2 electrons in an orbital, so what would happen to the electrons in the $2s$ orbital if the $1s$ electron absorbed the energy?
Overview
Transitions to other unoccupied states are possible but extremely unlikely; it is more likely that the photon will not be absorbed.

Introduction
The Pauli exclusion principle prevents a third electron occupying the $2s$ state. Even if there was space in the $2s$ state, a $1s\to 2s$ transition is unlikely due to selection rules, and a $1s\to2p$ transition is significantly more likely if there is space in the $2p$ orbital. Other answers here have stated that the transition to other energy levels is forbidden. Now while the probability of a transition is extremely small, it is non-zero. A quick note on notation: I will be using a bold typeface for vectors as opposed to an over-arrow so that vector operators are clearer.

Quantisation of the Electromagnetic Field
Minimal coupling of the electron to the electromagnetic field using the Coulomb potential adds a perturbation of: $$\hat H_1=\frac{e}{m_e}\hat{\boldsymbol p}\cdot\hat{\boldsymbol A}\left(\boldsymbol r,t\right)$$ to the Hamiltonian, where $e$ and $m_e$ are the charge and mass of the electron, $\hat{\boldsymbol p}$ is the momentum operator acting on the electron, and the vector potential operator has the form: $$\hat{\boldsymbol A}\left(\boldsymbol r,t\right)=\sum_{\lambda,\boldsymbol k}\sqrt{\frac{\hbar}{2v\epsilon_0\omega\left(\boldsymbol k\right)}}\left(\hat a_\lambda\left(\boldsymbol k\right)\boldsymbol s_\lambda\left(\boldsymbol k\right)e^{i\left(\boldsymbol {k}\cdot\boldsymbol r-\omega t\right)}+\text{h.c.}\right)$$ where $\text{h.c.}$ is the Hermitian conjugate of the preceding terms, $v$ is the volume of the cavity in which the experiment is taking place; $\omega\left(\boldsymbol k\right)$ is the angular frequency of the photon mode as a function of the wavevector $\boldsymbol k$; $\lambda$ labels the two polarisations; $\boldsymbol s_\lambda\left(\boldsymbol k\right)$ is the polarisation vector of the mode; $\hat a_\lambda\left(\boldsymbol k\right)$ is the annihilation operator for the mode; and $\boldsymbol r$ is the position of the atom (assuming the wavelength is larger than the atom, the uncertainty in the electron's position can be ignored). If we have a single wavelength and polarisation then: $$\hat H_1=\frac{e}{m_e}\sqrt{\frac{\hbar}{2v\epsilon_0\omega}}\hat{\boldsymbol {p}}\cdot\boldsymbol s\hat a e^{i\left(\boldsymbol k\cdot\boldsymbol r-\omega t\right)}+\text{h.c.}$$ Thus, let: $$\begin{align}\hat V&=\frac{e}{m_e}\sqrt{\frac{\hbar}{2v\epsilon_0\omega}}\hat{\boldsymbol {p}}\cdot\boldsymbol s\hat a e^{i\boldsymbol k\cdot\boldsymbol r}\\\implies\hat H_1&=\hat Ve^{-i\omega t}+\hat V^\dagger e^{i\omega t}\end{align}$$ Then, using first-order time-dependent perturbation theory, which holds in the limit $\frac{t}{\hbar}\left|\langle f|\hat V|i\rangle\right|\ll1$, we find that the probability of a transition having occurred if the atom is measured after a time $t$ since the electromagnetic field was applied is: $$\begin{align}P\left(t\right)=\frac{t^2}{\hbar^2}\Bigg|&\overbrace{e^{i\left(\Delta\omega-\omega\right)t/2}\operatorname{sinc}\left(\frac{1}{2}t\left(\Delta\omega-\omega\right)\right)\langle f|\hat V|i\rangle}^\text{absorption}\\+&\underbrace{e^{i\left(\Delta\omega+\omega\right)t/2}\operatorname{sinc}\left(\frac{1}{2}t\left(\Delta\omega+\omega\right)\right)\langle f|\hat V^\dagger|i\rangle}_\text{emission}\Bigg|^2\end{align}\tag{1}$$ where $\Delta E=\hbar \Delta \omega$ is the difference in the energy levels of the initial $|i\rangle$ and final $|f\rangle$ states.
This is in general non-zero even when $\Delta \omega\ne\omega$. However, we can make one more approximation to aid in understanding: if the final state $|f\rangle$ has absorbed a photon then in the limit $t\Delta\omega\gg2\pi$ the $\operatorname{sinc}$ functions do not overlap and so we need only retain the absorption term: $$P\left(t\right)=\left(\frac{\left|\langle f|\hat V|i\rangle\right|}{\hbar}\right)^2t^2\operatorname{sinc}^2\left(\frac{1}{2}t\left(\Delta\omega-\omega\right)\right)\tag{2}$$ Further approximations from here will give you Fermi's golden rule; one of these approximations is taking the limit such that $t\operatorname{sinc}^2$ tends to a delta function, which removes the possibility of a transition when the energy of the photon is not exactly equal to the energy gap - and so this is an inappropriate approximation to make in this case.

Energy Conservation in Quantum Mechanics
While the expectation value of the energy is conserved in the evolution of a system as described by the Schrödinger equation, there may be a discontinuous jump in the energy of the system when a measurement is performed. Consider a system in a superposition of energy eigenstates: when you measure the energy, the state will collapse into an energy eigenstate which in general will not have the same energy as the expectation value for the energy - the energy of the system has increased or decreased! The energy may be transferred to or from the measurement device or surroundings to compensate. In previous edits this section also contained a discussion of the many-worlds type interpretation which in my naivety I included. I apologise to anyone I have misled; for more details you can see this question: "Conservation of energy, or lack thereof," in quantum mechanics. @Jagerber48's answer is the most relevant to this question, giving additional details that will likely be of interest to any reader of this question. @benrg's answer gives a good explanation of why energy is conserved. @NiharKarve's comment includes a blog post which explains why the paper may be misleading.

Putting this all Together
Equation (1) shows that, in general, when an atom is illuminated by light of a single specific wavelength and polarisation, a transition is possible even if the energy of the photons is not equal to the energy gap, which would violate energy conservation (but this is allowed); however, the probability is extremely small. Equation (2) makes a further approximation which we can now use to find an expression for the probability: $$P\left(t\right)=\frac{e^2}{2v\epsilon_0m_e^2\hbar\omega}\left|\langle f|\hat{\boldsymbol {p}}\cdot\boldsymbol s\hat a |i\rangle\right|^2t^2\operatorname{sinc}^2\left(\frac{1}{2}t\left(\Delta\omega-\omega\right)\right)$$ Here $|i\rangle\equiv|i\rangle_e|i\rangle_{EM}$ and $|f\rangle\equiv|f\rangle_e|f\rangle_{EM}$, where the subscript $e$ labels the electron states and the subscript $EM$ the states of the electromagnetic field. Without detail, $_e\langle f|\hat{\boldsymbol {p}}\cdot\boldsymbol s|i\rangle_e\equiv\boldsymbol d_{fi}\cdot\boldsymbol s$ where $\left\{\boldsymbol d_{fi}\right\}$ are the dipole matrix elements, which are zero for transitions between certain orbitals independent of the energy supplied (for more details see selection rules). Finally, $_{EM}\langle f|\hat a|i\rangle_{EM}=\sqrt{N}$ if the state $|i\rangle_{EM}$ is the state for $N$ photons of the given wavelength and polarisation - but other states such as coherent states are also possible.
$$\implies P\left(t\right)=\frac{e^2N}{2v\epsilon_0m_e^2\hbar\omega}\left|\boldsymbol d_{fi}\cdot\boldsymbol s\right|^2t^2\operatorname{sinc}^2\left(\frac{1}{2}t\left(\Delta\omega-\omega\right)\right)\tag{3}$$ which holds in the limit: $$t\ll\frac{\hbar}{\left|\,_e\langle f|\left(\hat{\boldsymbol {p}}\cdot\boldsymbol s\right)|i\rangle_e\right|}\sim 10^{-25}\text{s}$$ As the limit $t\Delta\omega\gg2\pi$ is not needed when the state $|i\rangle_{EM}$ is the state for $N$ photons of the given wavelength and polarisation because the creation operator causes the emission term to vanish anyway. However, the time is of the order of $10^{-25}\text{s}$ give or take a few orders of magnitude for Neon (obtained using the only data I could find for reduced matrix elements for dipole transitions), which is not a practical time scale to measure on. Finally, considering your given case, given selection rules, the most likely case if the $1s$ electron did absorb a photon is a transition to the $3p$ state (as $2p$ is occupied and $3s$ is forbidden to first order by selection rules). Substituting values into equation (3) gives an order of magnitude estimate for the probability of transitioning from $1s$ to $3p$ in Neon of $10^{-12}\%\text{ per }\left(\text{photon }m^{-3}\right)$ for $t=10^{-25}\text{s}$ which is the point the approximation of first order perturbation breaks down.
{ "source": [ "https://physics.stackexchange.com/questions/649873", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/306354/" ] }
650,211
When the earth revolves around the sun, the sun attracts the earth by a gravitational force $F_{se}$ (centripetal force), and the earth attracts the sun by a gravitational force $F_{es}$ (centrifugal force). The two forces are equal and opposite according to Newton's third law. We know that a centrifugal force is a fictitious force. So, $F_{es}$ is also a fictitious force, but wait, how is this possible? Gravitational force is not fictitious! But if a gravitational force is a centrifugal force, it has to be fictitious, right (since all centrifugal forces are fictitious)? So, is gravitational force fictitious or not?
The best way to avoid this kind of confusion is to start from the beginning in a purely Newtonian description of the motion, i.e., working in an inertial frame. Only after understanding the situation in the inertial system it is possible to analyze it in a non-inertial frame without terminology or conceptual confusion. For the present discussion, we can neglect the effect of the presence of other planets. In an inertial frame, both Sun and Earth move with an almost circular trajectory around the common center of mass. If centripetal means towards the center of rotation , both $F_{es}$ and $F_{se}$ are centripetal . In this inertial frame, no centrifugal force is present. In the non-rotating non-inertial frame centered on the Sun, thus accelerating with acceleration ${\bf a}_s$ with respect to any inertial system, a fictitious (or inertial) force ${\bf F}_f = -m {\bf a}_s$ appears on each body of mass $m$ . As a consequence, there is no net force on the Sun, and the force on the Earth is the sum of the usual gravitational force plus a fictitious force $$ {\bf F}_f=-m_e {\bf a}_s $$ where ${\bf a}_s=\frac{Gm_e}{r_{es}^2}{\bf \hat r}_{es}$ is the acceleration of Sun in an inertial frame, ${\bf \hat r}_{es}$ is the unit vector from Sun to Earth. Therefore, this fictitious force points toward the Sun and should be called centripetal in this reference frame. It has to be added to the gravitational force on the Earth, again a centripetal force. The reference frames where a centrifugal fictitious force appears are all the non-inertial reference frames rotating with respect to the inertial frames. For example, if we assume circular orbits for simplicity, in the non-inertial frame centered on Sun and co-rotating with Earth, a fictitious centrifugal force on Earth appears, exactly equal to the gravitational force. Indeed, in such a rotating system, the Earth is at rest at a fixed distance from the Sun.
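For a sense of scale, here is a small numerical sketch (standard textbook constants and an idealised circular orbit of 1 au - both assumptions for illustration): the fictitious acceleration appearing in the non-rotating Sun-centred frame has the magnitude of the Sun's own acceleration, which is tiny compared with the Earth's centripetal acceleration; this is one reason the Sun-centred frame is usually treated as effectively inertial.

```python
G       = 6.674e-11   # m^3 kg^-1 s^-2
M_sun   = 1.989e30    # kg
m_earth = 5.972e24    # kg
r       = 1.496e11    # m, Sun-Earth distance (1 au)

a_earth = G * M_sun  / r**2   # Earth's acceleration toward the Sun
a_sun   = G * m_earth / r**2  # Sun's acceleration toward the Earth
                              # = magnitude of the fictitious acceleration in the Sun-centred frame

print(f"a_earth = {a_earth:.2e} m/s^2")   # ~5.9e-3
print(f"a_sun   = {a_sun:.2e} m/s^2")     # ~1.8e-8
print(f"ratio   = {a_sun / a_earth:.1e}") # ~3e-6, i.e. m_earth / M_sun
```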
{ "source": [ "https://physics.stackexchange.com/questions/650211", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/167872/" ] }
650,213
If I take the derivative of a displacement-time function, it will give the function of its velocity over time. If I take the absolute value of my velocity function it will give me a function of its speed over time. If I take the integral of my speed function will it therefore give me a distance-time function?
The best way to avoid this kind of confusion is to start from the beginning in a purely Newtonian description of the motion, i.e., working in an inertial frame. Only after understanding the situation in the inertial system it is possible to analyze it in a non-inertial frame without terminology or conceptual confusion. For the present discussion, we can neglect the effect of the presence of other planets. In an inertial frame, both Sun and Earth move with an almost circular trajectory around the common center of mass. If centripetal means towards the center of rotation , both $F_{es}$ and $F_{se}$ are centripetal . In this inertial frame, no centrifugal force is present. In the non-rotating non-inertial frame centered on the Sun, thus accelerating with acceleration ${\bf a}_s$ with respect to any inertial system, a fictitious (or inertial) force ${\bf F}_f = -m {\bf a}_s$ appears on each body of mass $m$ . As a consequence, there is no net force on the Sun, and the force on the Earth is the sum of the usual gravitational force plus a fictitious force $$ {\bf F}_f=-m_e {\bf a}_s $$ where ${\bf a}_s=\frac{Gm_e}{r_{es}^2}{\bf \hat r}_{es}$ is the acceleration of Sun in an inertial frame, ${\bf \hat r}_{es}$ is the unit vector from Sun to Earth. Therefore, this fictitious force points toward the Sun and should be called centripetal in this reference frame. It has to be added to the gravitational force on the Earth, again a centripetal force. The reference frames where a centrifugal fictitious force appears are all the non-inertial reference frames rotating with respect to the inertial frames. For example, if we assume circular orbits for simplicity, in the non-inertial frame centered on Sun and co-rotating with Earth, a fictitious centrifugal force on Earth appears, exactly equal to the gravitational force. Indeed, in such a rotating system, the Earth is at rest at a fixed distance from the Sun.
{ "source": [ "https://physics.stackexchange.com/questions/650213", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/306497/" ] }
650,245
When an object undergoes rotation, from the object's reference frame, which is a non-inertial reference frame, the object feels there is a radially outward force, a centrifugal force, acting on it. However, from an inertial reference frame, this force doesn't exist at all. That's why it is called a fictitious force. My argument is, who are we to say what is fictitious or not. The object at the non-inertial frame really feels the centrifugal force! So, it is a real force for the object. Suppose, there are two inertial reference frames $S$ & $S'$ and $S'$ is moving with a velocity v that is a significant fraction of the speed of light. From $S$ it would seem that time is going slower for $S'$ . Surprisingly, it would seem from $S'$ that time is going slower for $S$ as well. Now, who is right? Answer: Both of them are right. So, is it really right to call centrifugal force fictitious just because it doesn't exist in an inertial reference frame?
I disagree that you feel centrifugal force. A person in a centrifuge actually feels their reaction to the centripetal force. If you sit in a car that is subject to harsh acceleration, you 'feel' as if you are being pushed back in your seat. There is no force pushing you back- it is simply the result of your inertia.
{ "source": [ "https://physics.stackexchange.com/questions/650245", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/167872/" ] }
650,263
To begin with, I'm a high school student and so my understanding of QFT is quite basic. Due to this, I'd prefer a simple answer (it would be great if it's yes/no) along with a very basic explanation. Essentially, I know that the three fundamental forces - electric, strong and weak force are results of spontaneous symmetry breaking. At low energies, the symmetry breaks and the forces "split". My question is based on this. Now that the forces have "split" is there any direct relationship between these forces? For example, an electron has an electric charge of -1, a strong charge of 0, and a weak charge of -1/2. So is there a connection between the -1, the 0 and the -1/2? If one of the values was to change, would any of the other two values change? If yes, would it be both or would it just be one of them? So in essence, could there exist a fundamental particle that for example has an electric charge of -2, a color charge and a weak charge of 1/2? I'm not sure if there is another restriction that doesn't allow the electric charge to go below -1, but ignoring these other restrictions, just based on the pure relationship between these charges, would changing one affect the other 2, and if it does then is there only a certain number of combinations of these 3 charges?
I disagree that you feel centrifugal force. A person in a centrifuge actually feels their reaction to the centripetal force. If you sit in a car that is subject to harsh acceleration, you 'feel' as if you are being pushed back in your seat. There is no force pushing you back- it is simply the result of your inertia.
{ "source": [ "https://physics.stackexchange.com/questions/650263", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/306523/" ] }
651,581
Let’s say I drop a penny in the deepest part of the ocean having a certain depth. Would the penny become buoyant enough to stand still in the water, since the density of water increases with depth? Since the buoyancy of objects becomes greater as the density of water increases, would a penny ever come to a stand still, and if so, what would be an estimated depth of the water? I asked my science teacher this and she couldn’t answer it.
The answer is no . Water, being a liquid, is nearly incompressible, meaning that the density changes very little with increasing pressure. In the very deep ocean, the pressure can approach $10^{8}$ Pa (about a thousand times greater than standard atmospheric pressure of $1.01\times10^{5}$ Pa). However, the bulk modulus $B$ of liquid water (the reciprocal of the compressibility) is even larger, at about $2\times10^{9}$ Pa. ( $B$ actually varies with the pressure and temperature with the water, but not enough to make a practical difference.) The ratio of the ambient pressure $P$ to the bulk modulus $B$ gives you roughly how much fractional change there will be in the density of water when it is under the pressure $P$ . This ratio is about $0.05$ , so even at the greatest depths of the ocean, the changes to the density of the water will be, at most, about five percent. Actual differences in ocean density often have more to do with differences in the salinity of the water (since the dissolved sodium and chloride ions adds extra mass); however, these changes are also small, also at the level of a few percent at most. To determine whether a penny will float or sink, we only need to compare the density of the penny to the benthic density of the water. Modern pennies are made mostly of zinc, while before 1982, they were mostly made of copper. The specific gravities of these metals are $7.0$ and $9.0$ , respectively (under atmospheric pressure; they would be slightly greater at $10^{8}$ Pa), making them both several times more dense than seawater (specific gravity of $1.0$ , plus or minus a few percent). So a penny would sink to the bottom. Only an object that is very close to the density of water at the surface can find an eventual equilibrium point where its density is equal to that of very deep water.
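A rough numerical sketch of the comparison (the round figures are the ones quoted above; the 1025 kg/m³ surface density of seawater is an assumed typical value):

```python
P_deep  = 1.0e8     # Pa, pressure near the deepest ocean trenches (~1000 atm)
B_water = 2.0e9     # Pa, bulk modulus of liquid water

compression = P_deep / B_water            # fractional density increase, ~5%
rho_surface = 1025.0                      # kg/m^3, assumed typical surface seawater density
rho_deep = rho_surface * (1 + compression)

sg_zinc, sg_copper = 7.0, 9.0             # specific gravities quoted above
print(f"deep-water density ~ {rho_deep:.0f} kg/m^3")                     # ~1076
print(f"zinc penny / deep water:   {sg_zinc * 1000 / rho_deep:.1f}x")    # ~6.5x denser
print(f"copper penny / deep water: {sg_copper * 1000 / rho_deep:.1f}x")  # ~8.4x denser
```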
{ "source": [ "https://physics.stackexchange.com/questions/651581", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/307102/" ] }
652,545
There are many disagreements of convention between mathematicians and physicists, but a recurring theme seems to be that physicists tend to insert unnecessary factors of $i = \sqrt{-1}$ into definitions. I understand this is only convention, but I'm curious about why this seems so widespread. Does anybody know about the "etymological" reason(s) for physicists' $i$ -heavy conventions? $\newcommand{\dd}{\mathrm{d}}$

| Thing | Mathematics convention | Physics convention |
|---|---|---|
| Lie algebra structure constants (Ref.) | $[L_a, L_b] = f_{ab}{}^cL_c$ | $[L_a, L_b] = if_{ab}{}^cL_c$ |
| Lie group transformations in terms of generators (Ref.) | $R_z(θ) = \exp(θJ_z)$ | $R_z(θ) = \exp(-iθJ_z)$ |
| Covariant derivative with $\mathbb{C}$ -valued connection 1-form | $\nabla V = \dd V + A V$ | $\nabla_μ V^a = ∂_μ V^a -iqA^a{}_{bμ} V^b$ |
| Curvature of connection or gauge field strength (Ref., §7.4) | $F = \dd A + A ∧ A$ | $F_{μν} = ∂_μA_ν - ∂_νA_μ \pm iq[A_μ, A_ν]$ |

I have a vague guess: physicists read and write $e^{iωt}$ a lot, and an exponential with an $i$ in it screams "rotation". Fast forward to describing $\mathrm{SO}(n)$ rotations in terms of matrix generators, and an expression like $e^{iθJ_z}$ just "feels more familiar" so much so that an extra $i$ is pulled out of the definition of $J_z$ . Can that guess be supported? Not sure about the third and fourth rows, though.
What you say is part of it. But I think a more important reason we have the $i$ 's explicit is because we like to describe things with Hermitian operators. Taking the example of $SU(2)$ , the Lie algebra in physicist's notation is $$[L_i,L_j]=i \varepsilon_{ijk}L_k$$ $L_3$ in physics is an observable, which describes the spin of a particle in the $z$ direction, which takes integer or half-integer values. Since this is an observable, we prefer it to be a real number. Hence why we want $L_3$ to be Hermitian. More generally, we take generators of a symmetry group to be Hermitian (assuming we're dealing with a unitary representation), because they describe the observable "charges", which we want to be real. The reason for the $i$ 's in the covariant derivative and curvature $2$ -form is similar. We want the connection to be Hermitian, since it describes an observable field that permeates space. Although in more advanced treatments we sometimes use the mathematicians notation in these cases. In summary, physicists use this particular notation because these quantities represent something physical, whereas mathematicians use their notation because it is symbolically efficient. We have different priorities :)
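A concrete check of the physics convention in the spin-1/2 representation, where the generators are $L_i=\sigma_i/2$ (a small NumPy sketch; nothing here goes beyond the standard Pauli matrices):

```python
import numpy as np

# Pauli matrices and the spin-1/2 generators L_i = sigma_i / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Lx, Ly, Lz = sx / 2, sy / 2, sz / 2

comm = Lx @ Ly - Ly @ Lx
print(np.allclose(comm, 1j * Lz))    # True: [Lx, Ly] = i * Lz, the explicit-i physics convention
print(np.allclose(Lz, Lz.conj().T))  # True: Lz is Hermitian, so its eigenvalues are real
print(np.linalg.eigvalsh(Lz))        # [-0.5  0.5], the observable spin projections
```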
{ "source": [ "https://physics.stackexchange.com/questions/652545", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/121203/" ] }
652,556
Since the body is moving in a circular path, I understand that the normal reaction from the wall provides the required centripetal force. I also get that the driver has to lean in order to counteract the rotating forces caused because the forces of gravity, friction and normal are in different planes. What I don't get is the action of friction. I read that when the driver goes faster, the upward friction force increases, counteracting gravity. How does that happen? In my understanding, when the driver goes faster, the required centripetal force increases. My questions: When the driver goes faster, does the normal reaction from the wall increase to provide the extra centripetal force? If he goes slower, does it decrease? If yes, a. How? b. Is that increase in normal reaction responsible for increasing friction? c. Does the normal reaction keep increasing when velocity increases? Meaning, if the person keeps accelerating, will the normal reaction keep increasing to provide the centripetal force? d. If the person goes too slow, does he fall because normal reaction is too weak to provide enough friction? If no, how exactly does it work?
What you say is part of it. But I think a more important reason we have the $i$ 's explicit is because we like to describe things with Hermitian operators. Taking the example of $SU(2)$ , the Lie algebra in physicist's notation is $$[L_i,L_j]=i \varepsilon_{ijk}L_k$$ $L_3$ in physics is an observable, which describes the spin of a particle in the $z$ direction, which takes integer or half-integer values. Since this is an observable, we prefer it to be a real number. Hence why we want $L_3$ to be Hermitian. More generally, we take generators of a symmetry group to be Hermitian (assuming we're dealing with a unitary representation), because they describe the observable "charges", which we want to be real. The reason for the $i$ 's in the covariant derivative and curvature $2$ -form is similar. We want the connection to be Hermitian, since it describes an observable field that permeates space. Although in more advanced treatments we sometimes use the mathematicians notation in these cases. In summary, physicists use this particular notation because these quantities represent something physical, whereas mathematicians use their notation because it is symbolically efficient. We have different priorities :)
{ "source": [ "https://physics.stackexchange.com/questions/652556", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/307490/" ] }
652,752
So I saw this article stating that gravity is stronger at the top of a mountain due to there being more mass under you. However, I have read some questions other people have asked, and most of the responses state that the mass is concentrated at the middle of the earth, meaning gravity doesn't get stronger the higher up you go. I would like to know which one of these it is, as the article is a pretty reliable source. Here is the link to the article: https://nasaviz.gsfc.nasa.gov/11234
You are getting different answers from NASA and from other sources, as they are talking about slightly different things. NASA is talking about the acceleration of the GRACE satellite towards the earth, as it orbited over different regions. When it went over the Himalayas, for example, the acceleration (gravity) was higher than average. Other sources are talking about the difference in acceleration due to gravity at ground level, compared to if you were to walk up the Himalayas, then the acceleration would decrease. That's because even though there would be more mass underneath, you've increased the distance from the earth. More detail: At the bottom of a cone shaped mountain of mass $m$ , radius $r$ and height $r$ , the acceleration due to gravity is $g$ , due to the earth of mass $M$ , radius $R$ . $$g=\frac{GM}{R^2}\tag1$$ the difference in gravity after climbing the mountain is $$\frac{GM}{{(R+r)}^2}+\frac{Gm}{{(\frac{3}{4}r)}^2} - g\tag2$$ The 3/4 is due to the position of the COM of a cone. Using 1) it's $$g\bigl((1+\frac{r}{R})^{-2}+\frac{16mR^2}{9Mr^2}-1\bigr)\tag3$$ From formulae for the volume of a sphere and a cone and assuming equal density $$\frac{m}{M} = \frac{r^3}{4R^3}\tag4$$ so 3) becomes, in terms of $g$ $$y= (1+\frac{r}{R})^{-2}+\frac{4r}{9R}-1\tag5$$ , putting $x = \frac{r}{R}$ $$y= (1+x)^{-2}+\frac{4}{9}x-1\tag6$$ plotting this shows that there is a decrease in the acceleration due to gravity for all realistic cone shaped mountains. For Everest, if it were a cone, $x=0.0014$ and the reduction in gravity is $y=0.002g$ , so the usual $9.81$ becomes about $9.79 \;\text{m}\,\text{s}^{-2}$ .
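Equation (6) is easy to evaluate numerically for a few mountain sizes (a sketch that simply reuses the answer's uniform-density cone model; 6371 km is taken as the Earth's radius):

```python
def y(x):
    # Fractional change in g at the summit of a cone-shaped mountain, eq. (6), with x = r / R
    return (1 + x) ** -2 + 4 * x / 9 - 1

R_earth_km = 6371.0
for r_km in (1.0, 5.0, 8.8):          # 8.8 km ~ an Everest-sized cone
    x = r_km / R_earth_km
    print(f"r = {r_km:4.1f} km  ->  y = {y(x):+.2e}")
# All values come out negative (about -2e-3 for the Everest-sized case),
# i.e. g at the summit is slightly smaller than at the base.
```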
{ "source": [ "https://physics.stackexchange.com/questions/652752", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/307575/" ] }
653,009
If I'm in outer space, initially at rest, and every single particle in my body accelerates at the same rate in the same direction, will I feel that? My brain is fried thinking about this. There are two possibilities: initially at rest and then accelerating with const acceleration w.r.t. some inertial frame, and initially at rest and then accelerating with const acceleration w.r.t. some non-inertial frame So will I feel anything: while I'm in motion when I transition from a state of rest to a state in motion My intuition is that I shouldn't feel anything in any of the scenarios (every single particle in my body accelerates at the same rate - so there's no source of tension- any kind of push or pull- between various parts of my body), but I could be very wrong.
The physics answer is, an accelerometer will detect all accelerations relative to an inertial frame. If you're in free fall being accelerated by a gravitational field, the answer is actually no, because a frame in free fall is inertial, even though for most purposes it's more useful to treat it as an accelerating frame. So to your questions in order, 1a Yes, but free fall counts as inertial. 2a Only if you are also accelerating w/r/t an inertial frame. 1b No, if by "while I'm in motion" you mean constant velocity. 2b Yes, subject to the above. The engineering/biology answer is flat "No." You specified that every part of you is being identically accelerated, and if that's the case, there's no way for an accelerometer (biological or otherwise) to be designed such that it will detect any acceleration. An accelerometer works by measuring the difference in motion between a mostly-inertial frame (like a mass on a spring, or fluid in your inner ear) and the accelerating frame (the body of the accelerometer, or your skull). If every particle is identically accelerating, there's nothing to measure.
{ "source": [ "https://physics.stackexchange.com/questions/653009", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/141925/" ] }
653,220
My book mentions that water contracts on melting, but the book doesn't give any reason why it does so. It is mentioned that: $1\,\mathrm g$ of ice of volume $1.091\,\mathrm{cm}^3$ at $0^\circ\mathrm C$ contracts on melting to become $1\,\mathrm g$ of water of volume $1\,\mathrm{cm}^3$ at $0^\circ\mathrm C$ . I searched on the internet but I failed to find any useful insight. Could someone please explain why water contracts on melting?
It's because of the crystal structure of the solids. When water freezes, the molecules form various structures of crystals which have empty gaps that cause the solid to be about 9% larger in volume than the liquid was. Metals usually form crystals when they freeze too, but they're often simpler crystals, if you will, and often don't have as much empty space in them as ice/snow does.
{ "source": [ "https://physics.stackexchange.com/questions/653220", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/301185/" ] }
653,484
When we want to remove a cork from a bottle first we turn the cork. Turning in one direction makes it easier to remove in the axial direction. Does anyone know something more about this?
When the cork is stuck and stationary, it is static friction which is culpable in keeping it fixed. As soon as the cork moves - in any direction - the static friction is replaced by kinetic friction. Kinetic friction, $f_k=\mu_k n$ , is typically lower than the maximum static friction, $f_s\leq \mu_s n$ (because the kinetic friction coefficient typically is smaller than the static friction coefficient, $\mu_k<\mu_s$ ), and so, whenever you want to move something that is stuck, try to make it twist and turn and move before pulling it out. With some downvoters and commentators bringing to my attention that the answer above is not fully sufficient, I have below added the missing half covering the question of leverage. Naturally, it is only a good trick to rotate the cork and then pull it out, if overcoming static friction to make it rotate is easier than overcoming static friction by pulling it straight out. As the comments mention, this can indeed be easier due to leverage: Pulling it straight out requires the force, $F_{pull}$ , exerted by your arm to match and overcome that of the static friction, $f_s$ , fully, one-to-one. You are then fighting Newton's 1st law directly and must exert a force: $$\sum F > 0\quad\Leftrightarrow\quad F_{pull}-f_s>0 \quad\Leftrightarrow\quad F_{pull}>f_s$$ There might be other contributing factors to the necessary force as well, such as the pressure in the bottle as another answer points at. Rotating the cork can be done by applying force at the far ends of the handle of the cork screw/wine opener tool. That force creates a torque, $\tau$ , and the farther away, $r$ , from the centre the force is applied (the greater the leverage), the greater the torque becomes: $$\tau=F_{you}r_{handle}.$$ This torque in turn causes a shear force, $F_{cork}$ , against the static friction forces at the cork periphery. As long as the tool handle allows for more leverage than the radius of the cork itself, $r_{handle}>r_{cork}$ , then you can with less force at the handle generate enough force at the cork periphery: $$\tau=F_{you}r_{handle}\quad\text{ and }\quad \tau=F_{cork}r_{cork}\quad\Leftrightarrow\\ F_{you}r_{handle}=F_{cork}r_{cork}\quad\Leftrightarrow\quad F_{you}=F_{cork}\frac{r_{cork}}{r_{handle}}.$$ Since it is now this new force, $F_{cork}$ , that must overcome static friction, $F_{cork}>f_s$ , and not your own pulling force, and since the twisting force, $F_{you}$ , you apply is smaller, $F_{you}<F_{cork}$ , then it is much easier to make the cork rotate and thereby overcome static friction, and then apply a subsequent pulling force that more easily overcomes the smaller kinetic friction.
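To see how much the leverage helps, a tiny sketch with made-up but plausible numbers (the 100 N friction force and the two radii are assumptions for illustration, not values from the question):

```python
f_static = 100.0    # N, assumed static-friction force between cork and bottle neck
r_cork   = 0.009    # m, assumed cork radius
r_handle = 0.06     # m, assumed lever arm of the corkscrew handle

F_pull  = f_static                        # pulling straight out: beat friction one-to-one
F_twist = f_static * r_cork / r_handle    # twisting: friction torque divided by your lever arm

print(f"straight pull needs more than {F_pull:.0f} N")
print(f"twisting needs only about {F_twist:.0f} N at the handle")   # ~15 N with these numbers
```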
{ "source": [ "https://physics.stackexchange.com/questions/653484", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/288237/" ] }
653,700
Why does water contract on melting whereas gold, lead, etc. expand on melting? reminded me about something I've been wondering myself for some time. We know that water expands as it freezes. The force is quite formidable - it can cause solid steel pipes to rupture. But nothing is limitless. If we created a huge ball of steel and placed a small amount of water inside it (in a small, closed cavity) and then froze it - I don't think the big ball would rupture. But what would we get? Compressed ice? Can this even be done? Can you compress ice? Or would the water simply never freeze? Or freeze only partially? What if we kept cooling it, down to absolute zero (or as close as we can get)? What happens when water should expand, but there is no room for it to do so, and the container is too strong to be deformed?
But what would we get? Compressed ice? Can this even be done? Can you compress ice? Absolutely; all passive materials can be compressed. The bulk modulus , a material property with units of pressure, couples the applied pressure to a relative reduction in volume. The bulk modulus for ice at 0°C is around 8 GPa, which means that about 8 MPa or 80 bar pressure is required for a -0.1% volumetric change. What happens when water should expand, but there is no room for it to do so, and the container is too strong to be deformed? Here, a phase diagram for water is useful. The discussion in Powell-Palm et al.'s "Freezing water at constant volume and under confinement" includes a volume–temperature phase diagram: From this, we can predict the equilibrium response when heating or cooling water at constant volume (by moving vertically) or compressing or expanding water at constant temperature (by moving horizontally). We find that at constant volume (moving vertically downward from 0°C and 1 g/cc), over 200 MPa and 20°C undercooling is required* to get even a 50% slush of water and ice. Let's zoom out a little. From Powell-Palm, "On a temperature-volume phase diagram for water and three-phase invariant reactions in pure substances," we find that 209.9 MPa is ultimately required* for complete solidification, into a two-phase region (at equilibrium) of ice-Ih (ordinary ice) and ice-III : (Note that "0.00611 MPa" should read "0.000611 MPa"—the authors missed a zero.) We can interpret this as the compact structure of ice-III providing a solution to the problem of ice-Ih being anomalously voluminous. We find from the temperature–pressure phase diagram of water that this ice-III nucleates (at equilibrium) upon cooling to 251 K, or -22°C: With further cooling, the ice-I–ice-III mixture transforms* to ice-I– ice-II , then to ice-IX –ice-II, and then to ice-XI –ice-IX. (How can this be determined, since the volume–temperature chart doesn't include any of that information? It's from the horizontal line on the temperature–pressure chart and the knowledge that ice-I and ice-XI have specific volumes of >1 g/cc and that ice-II, ice-III, and ice-IX have specific volumes of <1 g/cc; thus, a higher-density and lower-density combination is required to maintain a constant 1 cc/g, and we can't move an iota above or below that two-phase line upon cooling at constant volume.) Note that no power can be generated under the condition of constant volume, as no displacement occurs. And although there's no thermodynamics prohibition about allowing the system to expand and do useful work, you would have to heat it up again to liquefy it to repeat the process, and this would use up the energy you gained. *Note that this answer always refers to equilibrium phase predictions. Sufficiently rapid cooling involves kinetic limitations that delay or essentially even preclude phase transitions. For example, liquid water can be cooled fast enough that crystals essentially never form even though the thermodynamic driving force is large. Here, the solid water is said to be in a glassy or amorphous state . (See also the fun rotatable 3D phase diagram of water here .)
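The first paragraph's numbers are easy to reproduce with the linear-elasticity estimate $\Delta V/V \approx -P/B$ (a sketch; it ignores the pressure and temperature dependence of $B$ mentioned above):

```python
B_ice = 8.0e9                       # Pa, bulk modulus of ice near 0 °C
for P in (8.0e6, 1.0e8, 2.1e8):     # 8 MPa, ~deep-ocean pressure, ~the ice-III pressure
    dV_over_V = -P / B_ice          # small-strain estimate of the volume change
    print(f"P = {P/1e6:6.1f} MPa  ->  dV/V ~ {dV_over_V*100:5.2f} %")
# 8 MPa reproduces the -0.1 % figure above; even ~210 MPa only squeezes
# ordinary ice by a few percent, which is why the phase changes do the real work.
```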
{ "source": [ "https://physics.stackexchange.com/questions/653700", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3381/" ] }
656,052
When I fill a water bottle from a tap (aiming the flow from the tap so that it goes entirely into the bottle), if I time it correctly I can turn off the tap so that the bottle is filled right up to the brim. If I mistime it so that the water overflows, and then turn off the tap, the resulting level in the bottle is below the brim, often by a decent margin. The same effect can be observed filling up a saucepan or bowl, so it doesn't seem to be the shape of the container. My instinct is that something to do with viscosity or surface tension means that some water is carried away with the overflow, but I don't have the knowledge to tell if this makes sense. What's going on here?
There are two effects that both reduce the final water level: Kinetic energy of the water Entrapped air bubbles in the water When the water is pouring into the bottle and back out of it, it does not immediately turn around at the surface. Instead, the kinetic energy of the water causes it to flow quite deep into the bottle, then make a turn and flow back upwards. When the incoming flow stops, the remaining water in the bottle still has that kinetic energy, and will continue flowing upwards and over the rim for a short time. Depending on the faucet, the water flow usually has also entrapped air bubbles, which make it appear white rather than clear. Once the flow stops, the larger bubbles quickly rise to the surface and pop, further lowering the water level. Just for fun, I took a slow motion video of filling a bottle (slowed 8x). With my faucet, it appears the contribution of air bubbles is quite large.
{ "source": [ "https://physics.stackexchange.com/questions/656052", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/172543/" ] }
656,219
According to my Halliday-Resnick, the period of a pendulum clock can be kept constant with temperature by attaching vertical tubes of mercury to the bottom of the pendulum. How does this work? My guesses: Air resistance somehow changes with the temperature of the air, and this is offset by mercury evaporating (thus shifting the center-mass) Air resistance somehow changes with the temperature of the air, and this is offset by thermal expansion of the tube (thus shifting the center-mass) Any help is greatly appreciated.
It's actually an ingenious but relatively simple bit of physics and engineering. It works by compensating for the linear thermal expansion of the pendulum rod, utilizing the thermal expansion of the mercury but in the opposite direction, and thus preserving the position of the center of gravity. Note that the period of the (compound) pendulum is given by $$T=2\pi\sqrt{\frac{I}{mgL}}$$ where $L$ is the distance to the center of mass of the rod from its pivot point, and $I$ is its moment of inertia, also about the pivot point. This means the period varies if this distance varies. With mercury tubes added, a rise in temperature, for example, causes the pendulum rod to expand downward $^1$ . But at the same time, this temperature rise causes the mercury in the tube to expand, which moves the mercury's level (and its share of the mass) upward. The exact opposite effect happens for temperature drops. That is, the pendulum arm shortens, so that $L$ tends to decrease, while the mercury level in the tube drops, shifting mass downward in the compensating direction. The net result is that the arm keeps a constant location of the center of gravity, and therefore keeps a constant period. $^1$ The rod is fixed to a pivot at the opposite end, so even though the whole thing may expand, it's the change in the distance $L$ from the pivot that's important.
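As a rough back-of-the-envelope sketch (added for illustration, not from the original answer), one can estimate how tall the mercury column must be for the compensation to work, ignoring the expansion of the glass tube and treating the mercury as the dominant bob mass; the expansion coefficients and rod length below are assumed illustrative values:

```python
# Rod expansion lowers the bob by alpha_rod * L * dT; the mercury column's centre of
# mass rises by roughly (beta_Hg * h / 2) * dT.  Setting the two shifts equal gives
#   h ~ 2 * alpha_rod * L / beta_Hg
alpha_steel = 12e-6    # 1/K, linear expansion of a steel rod (assumed)
beta_mercury = 182e-6  # 1/K, volumetric expansion of mercury (assumed)
L = 1.0                # m, pivot-to-centre-of-mass distance (assumed)

h = 2 * alpha_steel * L / beta_mercury
print(f"required mercury column height: roughly {h*100:.0f} cm")   # about 13 cm for these numbers
```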
{ "source": [ "https://physics.stackexchange.com/questions/656219", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/310158/" ] }
656,405
I've found a rule that says, "When two quantities are multiplied, the error in the result is the sum of the relative error in the multipliers." Here, why can't we use absolute error? And why do we have to add the relative errors? Why not multiply them? Please give me an intuition to understand the multiplication of two uncertain quantities.
It basically comes from calculus (or more generally just the mathematics of change). If you have a quantity that is a product $z=x\cdot y$ , then the change in this value based on the change of $x$ and $y$ is $^*$ $\Delta z=x\Delta y+y\Delta x$ . So then it is straightforward that $$\frac{\Delta z}{z}=\frac{x\Delta y+y\Delta x}{xy}=\frac{\Delta x}{x}+\frac{\Delta y}{y}$$ The reason you don't use absolute uncertainty or multiply the relative uncertainties is the same reason why $(a+b)^2\neq a^2+b^2$ . It's just not the result you get when you do the math. $^*$ We are neglecting the term $\Delta x\cdot\Delta y$ in $\Delta z$ , since ideally $\Delta x<x$ and $\Delta y<y$ to the extent that $\Delta x\cdot\Delta y\ll xy$ such that $\Delta x\Delta y/xy$ is much less than both $\Delta x/x$ and $\Delta y/y$ .
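A small numerical sanity check of the rule (a sketch added for illustration, not from the original answer); the example values of $x$, $y$ and their uncertainties are arbitrary:

```python
x, dx = 10.0, 0.2
y, dy = 5.0, 0.1

# Analytic rule from the answer: relative errors add for a product.
rel_analytic = dx / x + dy / y

# Brute-force check: evaluate z = x*y at the corners of the error intervals.
corners = [(x + sx * dx) * (y + sy * dy) for sx in (-1, 1) for sy in (-1, 1)]
rel_bruteforce = (max(corners) - x * y) / (x * y)

print(rel_analytic, rel_bruteforce)   # 0.04 vs 0.0404: they agree up to the small dx*dy/(x*y) term
```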
{ "source": [ "https://physics.stackexchange.com/questions/656405", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/306951/" ] }
656,409
I'm doing an experiment investigating the Mpemba effect. I measured some factors that may have contributed to the time it took the water to freeze (defined as the point where it becomes a total solid, i.e. at the end of the phase change), such as the time it took to reach 0 degrees Celsius, the lowest temperature it reached before freezing (i.e. how much it supercooled), and the time it spent at the freezing temperature. I'm not sure what to call those variables since I couldn't vary them, which means they're not independent variables. I'm not sure if they are dependent on the initial temperature, because there are still so many things unknown about the Mpemba effect and factors which I couldn't control, such as the roughness of the container that it was in (which contributes to the supercooling). And they're obviously not controlled variables since I couldn't control them. But my best guess is that they barely fall under either the independent or the dependent category.
It basically comes from calculus (or more generally just the mathematics of change). If you have a quantity that is a product $z=x\cdot y$ , then the change in this value based on the change of $x$ and $y$ is $^*$ $\Delta z=x\Delta y+y\Delta x$ . So then it is straightforward that $$\frac{\Delta z}{z}=\frac{x\Delta y+y\Delta x}{xy}=\frac{\Delta x}{x}+\frac{\Delta y}{y}$$ The reason you don't use absolute uncertainty or multiply the relative uncertainties is the same reason why $(a+b)^2\neq a^2+b^2$ . It's just not the result you get when you do the math. $^*$ We are neglecting the term $\Delta x\cdot\Delta y$ in $\Delta z$ , since ideally $\Delta x<x$ and $\Delta y<y$ to the extent that $\Delta x\cdot\Delta y\ll xy$ such that $\Delta x\Delta y/xy$ is much less than both $\Delta x/x$ and $\Delta y/y$ .
{ "source": [ "https://physics.stackexchange.com/questions/656409", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/272270/" ] }
656,603
An object of mass $5\, \mathrm{kg}$ is projected with a velocity $20\, \mathrm{ms}^{-1}$ at an angle of $60^{\circ}$ to the horizontal. At the highest point of its path, the projectile explodes and breaks up into two fragments of masses $1\, \mathrm{kg}$ and $4\, \mathrm{kg}$ . The fragments separate horizontally after the explosion, which releases internal energy such that the kinetic energy ( $\text{KE}$ ) of the system at the highest point is doubled. What is the separation of the two fragments when they reach the ground? In this problem, the set up is quite simple. Let $v_1$ be the velocity of the $4\,\mathrm{kg}$ mass and $v_2$ be the velocity of the $1\,\mathrm{kg}$ mass. The initial momentum along the $x$ axis is $5\cdot10\,\mathrm{kg}\,\mathrm{m}\,\mathrm{s}^{-1}$. Applying conservation of momentum along the $x$ axis: $$50= 4v_1+v_2 \tag1$$ $\text{KE}_i = 250\,\mathrm{J}$. Twice this is equal to the final $\text{KE}$. So $$2(250)=\frac{4v_1^2}{2} + \frac{v_2^2}{2} \tag2$$ Solving equations $(1)$ and $(2)$ we get two different sets of answers: $$v_1=5\, \mathrm m/\mathrm s \qquad v_2=30\,\mathrm m/\mathrm s$$ or $$v_1=15\, \mathrm m/\mathrm s \qquad v_2=-10\, \mathrm m/\mathrm s$$ My question is: out of these two possible answers, which one should occur? Each mass could take either velocity, so one of the cases should arise. The separation turns out to be the same in both cases, as $|v_1- v_2|$ is the same for both. If somehow this can be tested physically (under ideal conditions with no air resistance or spin of the fragments), what will we observe in nature over multiple attempts? Will either of the two be able to happen each time? How do we know for sure what exactly is going to happen here?
The two results that you got are absolutely acceptable both mathematically and experimentally, and either of them can happen if the experiment is done practically (but it's not random) in situations similar to the one posed in the question. Why isn't it random? Because the result depends (with certainty) on the arrangement of the piece of mass $4\; kg$ and that of mass $1\; kg$ when they were a single body just before exploding. Consider the diagram. Suppose initially it was a sphere (and I have shown a dashed line in subsequent figures to indicate the points where they separate). Now in practical experiments, you will get your first set of values if the masses just before the explosion are arranged as shown in the first figure, and the second set of values corresponds to the situation when the masses are arranged in the second way. So in practical experiments, which set of values we get depends on the position of the pieces (just before the explosion) inside the main sphere. But it's not random.
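For completeness, a short numerical sketch (not part of the original answer) that solves the momentum and energy equations from the question and confirms that both roots give the same relative speed $|v_1 - v_2|$:

```python
import numpy as np

m1, m2 = 4.0, 1.0        # kg
p = 50.0                 # kg m/s, horizontal momentum at the top
ke_final = 500.0         # J, twice the 250 J kinetic energy at the highest point

# Substitute v2 = p - m1*v1 into (m1*v1**2)/2 + (m2*v2**2)/2 = ke_final
# and collect the quadratic coefficients in v1:
a = m1 / 2 + m2 / 2 * m1**2      # 10
b = -m2 * m1 * p                 # -200
c = m2 / 2 * p**2 - ke_final     # 750
for v1 in np.roots([a, b, c]):
    v2 = p - m1 * v1
    print(f"v1 = {v1:5.1f} m/s, v2 = {v2:6.1f} m/s, |v1 - v2| = {abs(v1 - v2):.1f} m/s")
# Both roots (v1 = 15, v2 = -10) and (v1 = 5, v2 = 30) give |v1 - v2| = 25 m/s,
# so the ground separation is the same either way.
```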
{ "source": [ "https://physics.stackexchange.com/questions/656603", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/305339/" ] }
656,625
When we draw a Newtonian free body diagram of a man standing still on land, we draw force g and the reaction force. In water when floating still, we still have force g but we also have the reaction force and buoyant force. We are not accelerating in either case. So what makes us feel weightless in water but not on land?
This is less of a physics question and more of a neurophysiology question. Physically, as you have noted, both the proper acceleration (measured by an accelerometer) and the coordinate acceleration are the same in the two situations. So this is about the neural pathways involved in producing this specific illusion. There are four main senses involved in the sensation of acceleration. The first is your inner ear. This is your body's accelerometer. Inside your ear are several tubes filled with fluid, and the location where the fluid pools tells you which direction you are (proper) accelerating. This sensation habituates very quickly, meaning that after a short time of constant sensation the brain ignores it. So it is practically a sensor of changes in proper acceleration. The inner ear sensation is heavily modified by the second sense, your visual system. In humans the visual system and the associated neural pathways are particularly strong. When your inner ear says that you are accelerating and your vision says that you are not, then you feel dizzy. Usually your vision dominates and the overall sensation is more determined by the vision than the inner ear. When floating or standing generally both the above sensations are rather similar. Not much visual input and not much inner ear activity. Any differences may be a gentle inner ear "sloshing" which is easily habituated and ignored in favor of the visual input. The third related sense is pressure, sensed by your skin. This sensation is particularly different in standing or floating. In standing the pressure sensation is strong and concentrated on your feet. In floating it is weak and distributed across a large surface area. This weak and distributed sensation is easier to habituate and ignore than the strong concentrated sensation. If you specifically direct your attention then you will notice it, but because it is weak it is easy to neglect if you do not specifically attend to it. However, the brain also habituates rather quickly to the pressure on your feet. So although this sensation is different in the two situations, after a short time it is ignored. Finally, the last sense involved in producing this illusion is proprioception. Proprioception is the combination of all of the internal signals used to determine the body's position relative to itself. This is the sense that allows you to accurately touch your nose instead of poking your eye, even with your eyes closed. In my opinion this is probably the most important sense in producing this illusion. Proprioception is one of the slowest senses to habituate. Proprioception is crucial in the ability to stand upright in gravity, with many very short reflex pathways that detect perturbations to your posture and trigger muscles to contract and maintain your posture. Muscles in your back, shoulders, neck, butt, and legs are all triggered and adjusted frequently to stand. In contrast, maintaining posture while floating takes very little effort from far fewer muscles. The perturbations to the proprioceptive system are weak, and as this sensation is less subject to habituation this seems likely to be the dominant one in producing the illusion of weightlessness. Since the body is not subject to the burden of continual muscular effort to maintain posture, it feels substantially different from our normal experience, and is interpreted as weightlessness.
{ "source": [ "https://physics.stackexchange.com/questions/656625", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/305633/" ] }
656,671
If you send a text from your phone to your friend, do electrons move from your phone to your friend's phone? How is text transferred (physics wise)? I am a programmer and I want to know how it is working.
It is less a matter of "sending electrons" and more of jiggling them. Think of a wave, done by the crowd at a sporting event. One person raises their hands high, and then sits back down. The next person raises their hands, and sits back down. So on and so forth. When we get to the end, it's not that someone's hands moved from one side of the bleachers to the other. It's a pattern that moved. Each person doing the wave led the next person to do the wave. It's the same game with RF communication, like cell phones use. They move electrons around within the antenna a tiny amount (fractions of a picometer!), which creates an electromagnetic wave: a pattern of electric and magnetic fields propagating just like the wave moves down the bleachers. On the receiving end, the cell phone tower watches how the electrons in its antenna move. It uses that to figure out what messages each of the cellphones is sending. Of course, these signals are more complicated than the wave at a sporting event. At the sporting event, it is just one wave (sometimes two waves) moving along a row of people. In RF communication, we send a stream of very particular movements which convey the information. The easiest to understand would be a human-scale signal like Morse code, with its dots and dashes that everyone has heard at some point. More complex ones like QAM are carefully chosen to have desirable behaviors and use the medium efficiently.
{ "source": [ "https://physics.stackexchange.com/questions/656671", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/310331/" ] }
656,796
In conventional coordinate systems (anything you solve a simple Newtonian mechanics problem with), up and down are + and - z. A vector pointing up and a vector pointing down are anti-parallel. But in QM, we have up and down spinors making an orthonormal basis. These basis vectors are also called positive and negative z spin angular momentum. I understand the math for how spinors like (1,0) and (0,1) are orthogonal. I also see how they can be expressed as superpositions of x and y spinors, using complex numbers such that a two-component spinor can represent quantities in 3 dimensions. (this seems a little like what I have studied about symmetry groups like SU(1), so if that is relevant in the solution, I appreciate a discussion, but if it is unrelated, please do not bother correcting any huge mistakes in this sentence because I did not fully try to study it on my own yet). My question is this: what is the intuition for saying that spin up and down are orthogonal?
You need to distinguish the orthogonality in spinor space ( $\mathbb{C}^2$ ) from orthogonality in vector space ( $\mathbb{R}^3$ ). The spaces are different, and therefore scalar product and orthogonality in these spaces have entirely different meanings. Example: The two spinors $$|\uparrow\rangle=\begin{pmatrix}1\\0\end{pmatrix}$$ and $$|\downarrow\rangle=\begin{pmatrix}0\\1\end{pmatrix}$$ are orthogonal to each other because their scalar product is zero: $$\langle\uparrow|\downarrow\rangle=0$$ The expectation values of the spin vector $\vec{S}=\frac{\hbar}{2}\vec{\sigma}$ (where $\vec{\sigma}$ is the Pauli vector $\vec{\sigma}=\sigma_x\hat{x}+\sigma_y\hat{y}+\sigma_z\hat{z}$ ) for these two spinors are: $$\vec{S}_\uparrow = \langle\uparrow|\vec{S}|\uparrow\rangle = \frac{\hbar}{2}\langle\uparrow|\vec{\sigma}|\uparrow\rangle = + \frac{\hbar}{2} \hat{z}$$ and $$\vec{S}_\downarrow = \langle\downarrow|\vec{S}|\downarrow\rangle = \frac{\hbar}{2}\langle\downarrow|\vec{\sigma}|\downarrow\rangle = - \frac{\hbar}{2} \hat{z}$$ These two vectors are antiparallel to each other. Their scalar product $\vec{S}_\uparrow \cdot \vec{S}_\downarrow$ is not zero.
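A small numerical illustration of the distinction (a sketch added for clarity, not from the original answer), using NumPy with $\hbar = 1$:

```python
import numpy as np

up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),    # sigma_x
         np.array([[0, -1j], [1j, 0]]),                 # sigma_y
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_z

def spin_expectation(psi, hbar=1.0):
    """Expectation value of S = (hbar/2) * sigma for a normalised spinor."""
    return np.array([0.5 * hbar * np.real(psi.conj() @ s @ psi) for s in sigma])

print(np.vdot(up, down))                      # 0        -> orthogonal in spinor space C^2
S_up, S_down = spin_expectation(up), spin_expectation(down)
print(S_up, S_down)                           # (0,0,+0.5) and (0,0,-0.5)
print(S_up @ S_down)                          # -0.25     -> antiparallel (not orthogonal) in R^3
```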
{ "source": [ "https://physics.stackexchange.com/questions/656796", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/310291/" ] }
656,871
A man is standing inside a train compartment. He then hits one side wall of the compartment with his hand (or you can assume he kicks the wall with his leg). Will the compartment begin to move? I don't think so. (If it happens, there will be no need for fuel.) I mentioned this incident as background for my question. Suppose there is a fish bowl (totally round) on a table and a fish swims inside it. It wants to get out of the bowl to achieve freedom {yes, it won't really be freedom :) }. But it cannot jump over the edge. Then it tries to topple the bowl by hitting the wall of the bowl (assume that the fish has enough strength). Is it possible? I thought 'no' until I read this question (it might be better to say that I still hold that opinion): Could a fish in a sealed ball, move the ball? I wonder whether this is slightly different from my question. Can someone explain? EDIT: I got some answers here that state that the swimming of the fish can move the bowl. While googling I found a similar question on another website. Instead of comforting me, it doubled my confusion :( . Though, to my consolation, it says what I previously thought. You can find it here .
Summary The fish can only move the bowl horizontally, at all, if he can use the force of internal momentum transfers, $F= m \tfrac{dv}{dt}$ , to overcome the force of static friction with the table, $F= \mu_s Mg$ , and can only move it vertically if he can overcome the force of gravity $F= Mg$ . Same is true for the man if the car is stationary and we replace coefficient of static friction with static coefficient of rolling friction. “Perfectly round” is saying more than people often realize, it is an impossible theoretical case, so I mostly address a real-world round bowl. Furthermore, as discussed at the bottom, NASA slosh scientists who analyze slosh in liquid nitrogen tanks for the mentioned applications generally disregard anything inside the tank, like a mixer, because it is very hard to create any (net) momentum in the liquid from the inside; it usually takes external motions and forces to get any meaningful sloshing, also for the reasons discussed. Man in Train Car For the man in the car, there are three cases to consider: No friction between the wheels and the track A constant “coefficient of rolling friction” between the wheels and the track A coefficient of static friction and a coefficient of rolling friction that are different. Case 1: He can never change the center of mass (com) of the man/car system. This is because he cannot apply any net force on the man/car system. There is no external friction nor any other source of external force. Anything he does to push on the wall will push on him with equal force, and the net resultant force on the man/car system will be zero. But even in this case he can cause slight temporary back-and-forth motions of the car from the outside, but he cannot change the com of the whole system so he won’t be able to move the car on a continual basis or for a significant distance. The way he can move it a little is by moving in the car. For any motion of the man relative to the car along the direction parallel to the track, no matter how he moves (slowly, quickly, pushing on walls, walking), the car will move in the opposite direction enough to keep the system com in the same place. Case 2: (constant coef of friction): He can start the car rolling and actually move the system’s com, but due to friction it will stop again. But then he could do it again. He walks slowly in one direction while it is stopped, keeping his momentum transfers low enough to not overcome friction with the track. Then moves the other direction with enough force to overcome friction and start the train rolling (perhaps by jumping and pushing on the wall, or just sprinting the car length). Case 3: (Static friction is more than kinetic): This doesn’t change things a ton, but it matters. It makes it much easier and more forgiving when starting out, and remember we must start out over and over on our journey. We have a differential friction gain. Friction differences in each direction provide the external force that moves com. —— The Fish No Friction Our case: If the bowl is perfectly round (as nothing is), this is a frictionless case when we consider rolling. Again, perfectly round is no friction. He will be able to tip the bowl using even minute waves, or while motionless using his innate differential density capacity, discussed below. One may think it’s not frictionless because the bowl and surface will give a little, but there’s still maximum material stress and force at the very bottom, decreasing radially from there, which only requires any positive leverage however small. 
If there is any flat, or even imperfections in the roundness, will the bowl move at all in the frictionless case? The remainder of this section considers the frictionless case, but with a flat spot rather than the trivial perfectly round bowl: The fish faces the same general problem with the added complication of the liquid. But case 1 (no bowl/table friction) remains generally the same. This may seem surprising, but every little motion of the fish that changes com and even small currents will be balanced from a center-of-mass standpoint by the bowl moving on the table. For normal (slow) swimming, whenever he swims to one side, the bowl and water move a little the other direction, $d_{\text{bowl}} m_{\text{bowl}} = ( d_{\text{swim}} - d_{\text{bowl}} ) m_{\text{fish}}$ , where $m_{\text{bowl}}$ includes the water. There's a twist. This equation gives no motion if his density equals that of the water. Fish have internal pockets of gas that they can compress or expand to help them go up and down, in addition to just swimming up and down. Gas, unlike water, is compressible. So one surprising result is that if the fish is off to the side, and the water is still, and he merely changes the volume of his gas pouch without swimming... the bowl will move on the table. That's because he has changed his density and the overall com as seen from the bowl. To see this, note that when he is as dense as water, the com is on the vertical centerline of the bowl, and when he is not, it is not. But the com viewed from the table cannot move, so the bowl moves. (Frictionless worlds are impossible and sometimes counterintuitive.) The man in the train does not have a density similar to air, but the fish does to water, and this makes it even harder. This all means he needs to use the water to transfer momentum, as this gas-pouch effect is small. However, it alone is enough to tip the perfectly round bowl, because that case only requires any amount of net leverage, however small. A more interesting question is whether he can get anything with a small bottom to slide or tip: If he swims fast there will be water motion to consider too. Yet, he also can't set up big currents and sloshing and do a lot; the location of the bowl on the table depends only on where everything inside it is, not how fast anything is moving, and the maximum distance moved (with the same location for the center of mass of the system) depends only on how much mass he can get to one side in a peak of water, which is very limited operating from within the tank. There are sloshing problems even without friction (such as in space), but they require external motions and forces to generate the oscillations, not a fish or even a mixer etc. from the inside. NASA has sloshing scientists who analyze how liquid nitrogen tanks can affect things and how to control for it. It can only cause a lot of back and forth, but that can be a big problem when positioning things in a space station. And the sloshing inside, as mentioned, comes from motion caused externally. You can probably find NASA sloshing analysis papers online. They use computational fluid dynamics and try to estimate what the maximum short-term momentum change through time, $F_{max}= m \tfrac{dv_{cog}}{dt}_{max}$ , could be to give that as an upper bound for other engineers to know, and how best to reduce sloshing, with dampers or springs or whatever, which change automatically with tank level because sloshing dynamics change with that.
There can be something akin to a resonant frequency, and I think that can even be estimated classically (?). Even if he makes a little net sloshing (note that things like a whirlpool have no overall effect, and sloshing scientists don't even call them sloshing), he is limited to motion that corrects com, so the sloshing fish will have to use friction also to get across town. Fish with Friction: Because perfectly round is frictionless for rolling, this case is not that. It will be much harder for him. And not just because of what was mentioned in the frictionless case (that he can't get large amounts of water to one side from the inside). To overcome friction and move consistently, he has to get some mass (in the form of himself and some water) to one side, slowly (i.e. without exceeding the force of static friction with his rate of transferring mass), and then immediately move it quickly the other way, generating large momentum transfer rates (and hence external force). Why must he go the other way immediately and not just quickly? Because it is a liquid and won't stay to the side. If he stops and takes a breath, the water will begin to move back, but not fast enough to do what he needs it to: overcome friction. This detail helps explain why the NASA sloshing engineers mentioned above don't worry about internal mixers in the nitrogen tanks. If the fish sloshes back and forth, overcoming friction in each direction, the displacements largely cancel out. He needs rapid then slow momentum changes. So you see that tipping, even with a very tiny flat on the bottom, or a nearly round bowl, seems impossible from the inside, even for a strong fish, even if he wasn't so close to the density of water. Tipping with a flat bottom is even harder because $\mu_s$ is much lower than one, usually ~ $0.2 - 0.3$ . Good luck lifting, tipping, or even moving horizontally without getting outside at all.
{ "source": [ "https://physics.stackexchange.com/questions/656871", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/305718/" ] }
656,999
When we are in an empty room with no one around, we don't hear any sound, but there are billions of atoms and molecules that are colliding at the same instant. So my question is, when two molecules collide, does it produce a sound?
A sound wave is a synchronised movement of millions and millions of atoms or molecules. The random collisions of atoms or molecules are not synchronised and do not produce a sound wave. A sound wave is like a stadium wave in a large sports stadium. You only get a wave if people move in a synchronised way, each person standing up just after their neighbour. If people just stand up and sit down at random then there is no wave.
{ "source": [ "https://physics.stackexchange.com/questions/656999", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/264013/" ] }
657,446
I'm studying special relativity and was wondering, if someone were to give me a matrix, is there a way or a procedure to check if that matrix corresponds to a Lorentz transformation? Even if the matrix contains projectors or other weird things?
Yes, it needs to leave the Minkowski metric invariant: $$ \Lambda^T\eta\Lambda=\eta. $$
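A minimal sketch of such a check in code (added for illustration; the metric signature and the example boost are assumptions, not from the original answer):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, (-,+,+,+) signature assumed

def is_lorentz(Lam, tol=1e-10):
    """True if Lam^T @ eta @ Lam equals eta to within tol."""
    return np.allclose(Lam.T @ eta @ Lam, eta, atol=tol)

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
boost_x = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
                    [-gamma*beta,  gamma,      0.0, 0.0],
                    [ 0.0,         0.0,        1.0, 0.0],
                    [ 0.0,         0.0,        0.0, 1.0]])

print(is_lorentz(boost_x))                    # True: a boost is a Lorentz transformation
print(is_lorentz(np.diag([2.0, 1, 1, 1])))    # False: rescaling time is not
```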
{ "source": [ "https://physics.stackexchange.com/questions/657446", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/298124/" ] }
657,453
Two entities truly captured within a singularity would have no positional offset from each-other, nor volume to differ either. Yet for two things to push away from each-other, they must have either volume or position for physics to determine which direction they should push each-other in. So it seems that because physics did decide which direction different entities would be expelled toward in the first instant of time after the big bang, the elements must have some positional offset from each-other. Perhaps I haven't articulated this very well, but it just seems that for a singularity to expand, there must be some positional relationship other than zero in every dimension. Even considering the HUP, I'm having trouble understanding it. Because HUP's behavior can be demonstrated in Feynman diagrams, but I don't think those diagrams support two entities trapped in zero volume and zero offset. There's always a positional relationship involved in the interaction, and it seems this is built into the very way physics works.
Yes, it needs to leave the Minkowski metric invariant: $$ \Lambda^T\eta\Lambda=\eta. $$
{ "source": [ "https://physics.stackexchange.com/questions/657453", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/75876/" ] }
658,186
I've read things online here and there that seemed to hint that there's more to quark type than mass and charge. Is this true? For clarity's sake, I'm not asking about properties individual quarks have other than mass and charge, such as spin and color charge. I'm asking about the properties of quark types .
Yep, quarks of different flavours exhibit important differences in their properties. As we know, quarks participate in all known interactions – but the details of how this happens matter. Let's start from the weak interaction. The $u$ quark, being the lightest, is stable under the weak interaction. The $d$ quark, being just a tiny bit heavier, is quasi-stable: in bound states such as the proton (uud) it is stable, but in other ones such as the neutron (udd) the d quark eventually decays through the weak interaction on a timescale of ~ $10^3$ seconds. The $s,c,b$ quarks are unstable under the weak interaction: $s$ lives about $10^{-8}$ s, $c$ about $10^{-13}$ s and $b$ about $10^{-12}$ s. (Of course, we don't observe bare quarks, so the values I quote are the typical lifetimes of hadrons containing such quarks.) You might have noticed something weird in the ordering I posted above: why does the heavier $b$ quark live longer than the somewhat lighter $c$ quark? Let's order the quarks by generations: the 1st, 2nd, and 3rd columns are the respective generations (picture from Wikipedia ). The lines in this figure show the allowed weak decays of quarks; the darker the line color, the more allowed a certain transition is. First, one notices that there are no horizontal transitions such as $b \to s$ or $c \to u$: they are forbidden. In fact, they can appear via more complex, more suppressed mechanisms with two weak interactions, such as $b\to c \to s$ (see FCNC ), but not with one simple weak interaction. Second, one notices that the vertical transitions - those within one generation - are super-enhanced. In about 99% of cases, $t$ decays to $b$, and $c$ decays to $s$. The relation which governs the rates of allowed weak decays of the quarks is the CKM matrix . What does this observation mean for the lifetimes of the quarks, given that we know the quark masses? The $c$ quark has a super-allowed transition to decay to $s$ (plus a suppressed transition to $d$). The $b$ quark, however, cannot use its super-allowed relation to $t$ as the $t$ quark is heavier, so the $b$ quark cannot decay into a $t$ quark! This means it only decays in suppressed ways to $c$ or $u$ quarks, which makes the lifetime of the $b$ somewhat longer. Now, what about the lifetime of the $t$ quark? In fact, the $t$ is so terribly heavy that it is heavier than the $W$ boson which mediates weak interactions. The weak interaction is weak simply because the bosons which govern it are much heavier than the particles of interest. This is not the case for the $t$: it simply decays via $t \to W b$ without any kind of suppression. This makes its lifetime ridiculously small, at the $10^{-25}$ s level. It is so crazy small that we never managed to observe any hadrons (composite particles) containing a $t$ quark, but only the decay products of the $t$ itself. There is another cool thing which arises from the structure of the CKM matrix: CP violation . CP violation means that certain decay modes of a hadron and its anti-hadron have some differences in rate. Nature is such that this effect is the largest in weak decays of hadrons which contain $b$ quarks, and is much smaller in other quark systems. As you see, the weak interaction is what causes most differences for the quarks of different flavours. What about the strong interaction? In principle, all quarks are on equal footing here, but... once again, the details matter.
The strong interaction is very strong at low energies (the flip side of so-called asymptotic freedom , under which the coupling weakens at high energies), and this leads to confinement: you cannot take quarks out of their bound state. When you try doing that, the "binding" breaks by creating a quark-antiquark pair. As the strong interaction likes low energies, this resulting quark-antiquark pair is more likely to be made of the lightest quarks, $u\bar{u}$ or $d\bar{d}$ , than of heavier quarks. (The strong interaction conserves quark flavour, so you need to produce a quark and antiquark of the same flavour.) For that reason (but not only that one), in an average collision at the Large Hadron Collider you can have hundreds of pions produced but only tens of kaons and ~2 hadrons made of heavier quarks. Pions, made of light quarks, are also copious in decays of heavy hadrons; e.g. $B \to D \pi \pi \pi$ would have a similar or even larger probability compared to $B \to D \pi$ . This special role of light quarks and pions in the strong interaction is represented by the conservation law of isospin , which governs the rates of strong-interaction processes. It is an approximate law, however, because the $u$ and $d$ quarks have slightly different masses. Finally, on to the electromagnetic interaction. Nothing special here: the electric charge of the quark defines its properties. Quarks can annihilate when meeting their antiquarks: $u\bar{u} \to \gamma \gamma$ , but for heavy quarks this electromagnetic process is overshadowed by the strong interaction, $c\bar{c} \to (2-3) \, gluons \to light \, hadrons$ . Some concluding remarks. In experimental particle physics, each quark has its own role. The important rule which affects the experimental work is: the heavier the quark, the more difficult it is to produce. The cross-section of light-quark production is orders of magnitude larger than that of heavy quarks. The $t$ quark is studied separately as it forms no known hadrons and is somewhat special with regard to the tools needed to study it. The $b$ quark is a favorite tool of those studying CP violation. A cool thing about the $b$ is that it is heavy and rather long-lived (if compared to $t$ or $c$ ), which means there are hundreds of decay modes available for each $b$ hadron. So much to study! $c$ quarks have been gaining popularity in recent years, with people learning how to measure CP violation in $c$ hadrons, and they have a lot of data compared to $b$ physics (see my comment about cross-sections just above). Strange or light hadrons are often long-lived enough to be treated as stable at the scale of a typical particle detector; there are even particle beams made of these hadrons. This answer is far from exhaustive and I omitted some details for the sake of simplicity, but there is some hope I managed to answer the question without risk of getting that nasty comment which the other answers got :)
{ "source": [ "https://physics.stackexchange.com/questions/658186", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/310823/" ] }
658,390
The question is: where are they getting this torque ( $\tau$ ) that's causing an angular acceleration ( $\alpha$ ) and therefore increasing their angular velocity ( $\omega$ )? Assuming it's frictionless ground (like ice).
They initially kick the ground and receive an equal and opposite force from it (Newton III); that's where the initial torque comes from. They would not be able to get this from a frictionless surface. Then, to spin even faster, they usually move their arms close to their chest. This decreases their moment of inertia ( $I$ ) and hence increases their angular velocity ( $\omega$ ), following conservation of angular momentum $L = I\omega$ .
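A one-line numerical illustration of the second step (conservation of $L = I\omega$ once the skater is already spinning); the moments of inertia and initial spin rate below are assumed illustrative numbers, not measured values:

```python
I_arms_out = 4.0   # kg m^2, arms extended (assumed)
I_arms_in = 1.5    # kg m^2, arms pulled in (assumed)
omega_out = 2.0    # rad/s, initial spin rate (assumed)

omega_in = I_arms_out * omega_out / I_arms_in   # from I_out * w_out = I_in * w_in
print(f"spin rate with arms pulled in: {omega_in:.2f} rad/s")   # about 5.33 rad/s
```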
{ "source": [ "https://physics.stackexchange.com/questions/658390", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/303810/" ] }
659,006
This image is a representation of light passing through a convex lens. It shows light entering from air into glass. When the light enters the glass we can see that it bends towards the normal. Now when the ray of light leaves the glass and enters the air again, we see no refraction. What I expect: the ray of light should bend away from the normal once it exits the glass, because it is going from an optically denser medium to a rarer medium. Is it just a bad representation, or is the bend negligible, or am I getting something wrong?
They technically should "bend" because of refraction, and a more accurate drawing would be this : But drawings like the one that you show usually just tell you the net effect of the lens, i.e. treating the lens as a black box and not a series of interfaces. In the derivation of the thin lens equation , however, both curved surfaces, refractive indices, and radii of curvature are taken into account.
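If it helps to see the "net effect" quantified, here is a sketch of the thin-lens (lensmaker's) relation the answer alludes to, $1/f = (n-1)(1/R_1 - 1/R_2)$; the index and radii below are assumed illustrative values, not taken from the question's figure:

```python
# Thin-lens / lensmaker's equation (sign convention: R > 0 when the centre of
# curvature lies beyond the surface in the direction of propagation).
n = 1.5       # refractive index of the glass (assumed)
R1 = 0.10     # m, first surface radius (assumed)
R2 = -0.10    # m, second surface radius, i.e. a biconvex lens (assumed)

f = 1.0 / ((n - 1.0) * (1.0 / R1 - 1.0 / R2))
print(f"focal length: roughly {f * 100:.0f} cm")   # about 10 cm for these numbers
```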
{ "source": [ "https://physics.stackexchange.com/questions/659006", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/311217/" ] }
659,557
There is something with CMB radiation that does not sit well with me... It seems very counterintuitive that we are able to see it. If CMB radiation formed in the early phases of the universe, would it not make sense that it expands and propagates "outwards" with the Big Bang so that we would never see it? Most space is out of our reach since it does not belong to the observable universe. But since CMB radiation formed earlier than most space, should it not also be far outside the observable universe on its way away from us? How should I visualize the CMB radiation? I have a master's in theoretical physics but I never went back to understand this. Any help in understanding this is very welcome.
The biggest misconception I see in the question is the idea of the Big Bang as something that propagates "outwards", like an explosion. There is no outwards direction; the universe didn't expand into something. Even though recent observations seem to suggest that the universe is closed, for the sake of simplicity let me assume that the universe is flat and infinite. The first thing to notice is that if the universe is infinite now, it was also infinite shortly after the Big Bang; it was just more dense . Imagine a flat plane on which a lattice of equally spaced dots is drawn. The plane is the universe at a certain time. The dots represent objects in the universe: electrons, atoms, stars, whatever. Now, imagine scaling up the plane, making it bigger. Of course the plane is infinite, so scaling it doesn't change its size, but the dots get further away from each other while keeping their size fixed (otherwise, you wouldn't be able to tell that an expansion took place). You see that the dots aren't expanding into anything; the universe (the plane) was already infinite, it didn't grow in size. Critically, there is no center of the expansion. The distance between any two dots has doubled (in this example), regardless of their position. Now, let's talk about the CMB. Imagine that at time $t_0$ , every dot emitted a pulse, an expanding circular wave. This wave symbolizes the photons of the CMB that are emitted at the same time from every point towards every direction. The last picture refers to our situation. Earth is the black dot that is touched by the wave fronts. We see the CMB coming from every direction (in the picture only from four directions) because it was emitted from everywhere.
{ "source": [ "https://physics.stackexchange.com/questions/659557", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/275522/" ] }
659,624
I'm trying to understand Manishearth's experiment in the answer here : To try this out, close one eye. Now hold your arms straight out, nearly stretched, but not completely stretched. Now extend your index fingers (there should be about one inch between them). Now try touching them together. Try this a few times, then repeat with both eyes open. You'll realise how necessary the second eye is for judging depth. I tried the experiment a few times but I am really not sure what I am supposed to see or how the experiment works. The one part I do understand is the reason they have said not to completely stretch the arms. If one does that, then the sensation of the arm being stretched will give a sense of depth, so it is necessary not to extend to the full arm length. P.S.: I completely understand the mathematics and the fact that we need two rays, but I think I am not getting the correct result for the experiment. Ideally an answer with pictures would be best.
The problem with the two finger experiment is that your body’s sense of proprioception is so accurate and so instinctive that you don’t need binocular vision to touch your finger tips together. In fact, you don’t need vision at all. Try the experiment with your eyes closed . You will find that you can still touch your finger tips together quite accurately without even seeing them. To get a better sense of the power of binocular vision, use a pen or pencil held in each hand instead of finger tips, to reduce the effect of proprioception. Wave the pens/pencils around to randomise the starting positions, and then try to make the ends of the pens/pencils meet. With both eyes open this task is very easy. With only one eye open you will find it is surprisingly difficult.
{ "source": [ "https://physics.stackexchange.com/questions/659624", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/236734/" ] }
659,661
In a book called "Einstein, Relativity and Absolute Simultaneity" there was this sentence by Smith: There is no observational evidence for a space expansion hypothesis. What is observed are superclusters of clusters of galaxies receding from each other with a velocity that is proportional to its distance. He goes on to say space is Euclidean and infinite. Wouldn't this mean the Big Bang was an explosion in spacetime rather than an expansion of spacetime, as it is often told? Is Smith just wrong, or don't we know yet?
According to its Introduction, Einstein, Relativity and Absolute Simultaneity is a volume of essays “devoted, for the most part, to arguing that simultaneity is absolute” (as the title suggests). This is not mainstream physics. Since the book’s editors ( William Lane Craig and Quentin Smith ) are/were philosophers rather than physicists, its value as a cosmology textbook is doubtful. So, yes, according to the weight of the available evidence and the consensus of mainstream physics, Smith is wrong.
{ "source": [ "https://physics.stackexchange.com/questions/659661", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
659,739
This arXiv paper says: The recent Planck Legacy 2018 release has confirmed the presence of an enhanced lensing amplitude in CMB power spectra compared to that predicted in the standard $\Lambda$ CDM model. A closed universe can provide a physical explanation for this effect, with the Planck CMB spectra now preferring a positive curvature at more than 99% confidence level. If I understand it well, this question might already be obsolete - there is a small deviation from a completely flat Universe in the positive direction. How is this possible? As far as I know, there were no recent Planck (or similar) measurements. How believable is this new development? If it is believable (99% CL in an arXiv paper looks strong to me), what is the estimated radius of the Universe, if we assume a small, constant, positive curvature and spherical topology?
If you aren't already aware, that paper is controversial . That is why it's commonly not asserted that the universe is closed. This quote from the above link is especially relevant: If this curvature were real, the best-fit cosmology from Planck would have $\Omega_m \sim 0.5$ and $H_0 \sim 50km/s/Mpc$ . Is this remotely reasonable given other cosmology data? No. Data from CMB lensing, BAO, weak lensing, direct distance ladder measurements and a host of other observations rule it out ... Given this position and the fact that even a model with $A_L=1$ and zero curvature still gives a reasonable $\chi^2$ for the fit to the Planck data, we think the natural conclusion to draw is that whatever the explanation for this moderate discrepancy is, it is not curvature. So no: we have not proven that curvature exists. Also worth emphasizing: the paper also doesn't argue that the universe is closed. It only says that there are several internal inconsistencies in Planck data that can be resolved by assuming the Universe is closed, and suggests we investigate curvature as a solution to cosmological problems.
{ "source": [ "https://physics.stackexchange.com/questions/659739", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/32426/" ] }
660,244
When books or various references interpret the meaning of Maxwell's equations, they typically state that the source (origin of the phenomena) is the right part of the formula, and the resulting effect is on the left part of the formula. For example, for the Maxwell-Faraday law, $\vec{\nabla} \times \vec{E}=-\frac{\partial \vec{B}}{\partial t}$ one states "a time varying magnetic field creates ("induces") an electric field." (see for example : https://en.wikipedia.org/wiki/Maxwell%27s_equations#Faraday's_law ) It seems to me that this is not true. One could interpret it in both directions. For the example above, we could also state that a spatial variation (curl) of the electric field will create a temporal change of the magnetic field. Is it true that Maxwell's equations should be interpreted by taking the right side of the formula as the "origin" and the left part as the "consequence"? Or could we also take the left side as the origin?
You are basically correct but I think I can elucidate existing answers by pointing out that there are two issues here: a physical one and a mathematical one. Maxwell's equations are making both mathematical and physical statements. The relationship between the left hand side and the right hand side is not a cause-effect relationship. But when we use the equations to find out how the field at any given place comes about, then we do find a cause-effect relationship: the field at any given place can be expressed in terms of the charge density and current on the surface of the past light cone of that event. Mathematically the Maxwell equations have the form of differential equations. Looking at the first one, we would normally regard it as telling us something about the electric field if the charge density is given, but you can equally well regard it as telling us the charge density if the electric field is given. The difference between these two perspectives is that the second is mathematically not challenging and does not require any great analysis: if $\bf E$ is known then to find $\rho$ you just do some differentiation and a multiplication by a constant: all fairly simple. But if you have a known charge density and want to find the electric field, you have a lot more work to do, and indeed the problem cannot be solved at all unless you know quite a lot: to get the electric field at one point you need to know the charge density and current on the entire past light cone. Since this calculation is harder it earns some mathematical respect and there is terminology associated with it. We say (mathematically speaking) that $\rho$ is a 'source term' in a differential equation for $\bf E$ . This is somewhat reminiscent of cause and effect but strictly speaking it is only indirectly related to cause and effect as I already said.
{ "source": [ "https://physics.stackexchange.com/questions/660244", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/230627/" ] }
660,302
Consider I have a solar panel setup as in the above picture. The top sketch shows sun rays in the early morning, around 6 AM, in a clear bright sky. The bottom sketch shows the sun rays hitting the solar panel at around 8 AM. I installed a solar panel on a mountain, where we usually see the sun more than 12 hours a day. I found that even if I turn the solar panel to face the early morning rays perpendicularly, it doesn't produce much electricity. But by around 8 AM, with the solar panel facing the sun perpendicularly, the electricity it produces increases to almost maximum performance, not very different from the output at noon. Quantitatively, this is typically an increase in voltage from around 5V to 16V, out of an 18V maximum. In this case, it is the same sun, the same place, the same solar panel, and the same condition (it faces perpendicular to the sun rays, as in the picture). So what is different in the early morning that makes the output so different?
When the sun is near the horizon, the sun rays have to travel through more air to get to you than when it's directly overhead. This phenomenon is known as atmospheric extinction , and this page has a nice cartoon diagram to illustrate it: This effect can be quite large. This page has a graph of the approximate effect as a function of zenith angle , which is the angle from directly overhead; a zenith angle of 90° is therefore an object on the horizon. This graph is logarithmic; every unit on the vertical axis corresponds to a factor of about 2.5 times less energy getting to the ground. We can see that anything closer than about 15° to the horizon will have its light diminished substantially, certainly enough that you would notice on a power meter. In addition, sunlight near the horizon is reddened substantially; blue light is more likely to be scattered by molecules in the air, while red light is more likely to travel straight through the atmosphere to your solar panel. (This is why sunsets are red & orange compared to the light you see during the day.) Roughly speaking, your solar panels take particles of light (photons) and turn them into electrical energy. But because of the properties of solar cells, there is a minimum energy that a photon must have (set by the band gap ) to excite any electricity at all. Redder photons have less energy than bluer photons, so the red photons that are prevalent in sunlight from near the horizon may not be able to generate any electricity.* In other words, there's a double whammy: fewer photons are getting to your solar panel, and the ones that do get to your solar panel have less energy on average and may not generate electricity. *I say "may not" because solar panel engineers are clever folks, and there are some more modern types of solar panel that are better at converting sunlight into electricity over a wide range of photon energies. Look up "multi-junction cells" for more information.
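To put rough numbers on the "more air near the horizon" effect (a sketch with assumed values, not from the original answer): in the plane-parallel approximation the path length scales as $1/\cos z$, and each magnitude of extinction is a factor of about 2.5 in flux:

```python
import numpy as np

k = 0.3   # mag per airmass, assumed broadband clear-sky extinction coefficient

for z_deg in (0, 60, 75, 85):
    airmass = 1.0 / np.cos(np.radians(z_deg))     # breaks down very close to the horizon
    transmitted = 10 ** (-0.4 * k * airmass)       # one magnitude = factor 10**0.4, about 2.5
    print(f"zenith angle {z_deg:2d} deg: airmass = {airmass:5.1f}, "
          f"transmitted fraction = {transmitted:.2f}")
# For these assumed numbers the transmitted fraction drops from ~0.76 overhead
# to only a few percent near the horizon.
```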
{ "source": [ "https://physics.stackexchange.com/questions/660302", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/221363/" ] }
660,381
I have read this question: For x-rays the (HUP limit) Δx becomes smaller than the distances between the lattice distances of atoms and molecules, and the photon will interact only if it meets them on its path, because most of the volume is empty of targets for the x-ray wavelengths of the photon. Why do X-rays go through things? As far as I understand, X-rays are among the most penetrating forms of electromagnetic radiation. They should easily penetrate Earth's atmosphere just like visible light. Then why do all X-ray telescopes have to be in space? The image is from the DK Smithsonian Encyclopedia. The only thing I found about this says something about atmospheric absorption, but does not go into detail about why X-rays get absorbed more than other wavelengths (like visible light). So basically I am asking: why are X-rays among the most penetrating in solids, but among the least penetrating in gases?
X-ray (and gamma rays) are quite penetrating. They can pass through solid matter with much less attenuation than visible light as an example. But that doesn't mean that the attenuation is zero. Put enough "stuff" in the way, and the energy is eventually scattered or absorbed. In the case of the atmosphere, it's "just" air, but there is quite a bit of it. The depth of the atmosphere is plenty to stop almost all UV/X/gamma radiation. In fact most types of EM radiation are blocked by the atmosphere. But our eyes see only the transparency in visible light. The small molecules that make up most of the atmosphere ( $N_2$ , $O_2$ , $Ar$ ) take a lot of energy to excite. It turns out that visible light is just shy of the energy to do this efficiently, so interactions are very rare. More energetic forms (including X-rays) can ionize these molecules, absorbing or scattering the radiation. Given a thick enough layer, almost all the incoming radiation is removed.
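A Beer–Lambert sketch of why the whole column of air matters (added for illustration; the column density and mass attenuation coefficients are order-of-magnitude assumed values, not precise data):

```python
# Attenuation through a slab: I/I0 = exp(-(mu/rho) * column_density)
column = 1.0e3   # g/cm^2, rough vertical column density of the whole atmosphere (assumed)

# Assumed order-of-magnitude mass attenuation coefficients of air, in cm^2/g:
for label, mu_over_rho in [("~10 keV X-rays", 5.0), ("~100 keV X-rays", 0.15)]:
    tau = mu_over_rho * column        # optical depth
    print(f"{label}: optical depth ~ {tau:.0f} -> transmitted fraction ~ e^-{tau:.0f}")
# Both optical depths are enormous, so essentially no X-rays reach the ground,
# even though a thin slab of air (or tissue) attenuates them only slightly.
```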
{ "source": [ "https://physics.stackexchange.com/questions/660381", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/132371/" ] }
660,413
In the 17th century, Sir Isaac Newton gave us the universal law of gravitation, which stated that gravity is an inverse square force. In 1915, Albert Einstein recognised gravity as a curvature of space-time caused by the presence of mass and energy in it. Does that mean Newton was wrong? Is Newton's law not true? It did predict the motion of planets. But above all this, according to modern research on quantum gravity, gravity is transmitted by a particle called a graviton. Does this too violate Einstein's General Theory of Relativity? What is gravity then? What is its true nature?
The job of physics is to construct models that are able to explain and predict empirical observations. You can never be completely sure that a given model is the "true" description, only that it, at the very least, captures facets of the truth by successfully accounting for certain observations. Newton's law of gravity successfully models a wide range of gravitational phenomena but is not valid in extreme regimes. General relativity is a better description and reduces to Newtonian gravity in the limit of small masses and low speeds. But GR seems to have its own problems too and a future theory of gravity, if we find one, must similarly recover general relativity in an appropriate limit. Perhaps we'll eventually converge on the "true" description, but whether we will or whether such a thing is even meaningful is strongly debated. All in all, it's a work in progress.
{ "source": [ "https://physics.stackexchange.com/questions/660413", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/311740/" ] }
660,783
The title says it all really, but I think that since the battery is disconnected there is now an 'open circuit'. I know that charge can only flow if the circuit is complete (closed). But the part that puzzles me here is, what is there to stop the electrons from the negatively charged capacitor plate 'flowing back' to where the negative terminal of the battery was located before? Update: Answers so far seem to indicate that I am asking 'where' the charge would go if the circuit was closed again (after the battery was disconnected) - by means of completing the circuit with a piece of metal or even yourself. This is not what I am asking; I am asking where the charge would go if the circuit was left open (still with no battery connected). I have added a schematic below to clarify what I am actually asking. But put simply, would the charge stay on the capacitor plate(s) or would there be some 'leakage' or 'backflow' of charge away from the charged plates?
The charge won't go anywhere and the capacitor will remain charged until you short the plates of the capacitor. Where there was once a battery terminal there is now an insulator, and that stops the electrons. Also, the terminal will be made of metal that has a negligible capacitance, so it can't store significant amounts of charge. And there is no net charge taken from the battery. The battery will push electrons from one of the capacitor's plates to the other. Regarding your update: A theoretical perfect capacitor will never lose any volts. A real capacitor will always lose volts because air has some conductance and so does whatever dielectric is used to separate the plates. Even though a practical capacitor will lose volts, the loss may be small enough that it won't discharge in years.
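To make the leakage point concrete, here is a small sketch of the self-discharge time constant $\tau = R_{\text{leak}} C$; the capacitance and leakage resistances below are assumed illustrative values, and real parts vary enormously with dielectric type:

```python
C = 1e-6   # F, a 1 uF capacitor (assumed)

# Assumed leakage resistances spanning very different capacitor qualities:
for label, R_leak in [("fairly leaky dielectric", 1e9), ("very good film/ceramic", 1e13)]:
    tau = R_leak * C                    # V(t) = V0 * exp(-t / tau)
    print(f"{label}: tau = {tau:.0f} s, i.e. about {tau / 86400:.2f} days")
# With a good enough dielectric the time constant runs to months or more,
# which is why a disconnected capacitor can hold its charge for a very long time.
```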
{ "source": [ "https://physics.stackexchange.com/questions/660783", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/267061/" ] }
661,162
In the hearts of stars, hydrogen atoms fuse together to make helium. After the hydrogen in the core is depleted, the star changes state and conditions at the heart of the star make it possible for helium atoms to fuse together. There are parts of a star where hydrogen and helium are in contact which makes me wonder why there isn't any fusion going on between the two. Can hydrogen and helium fuse together? If so, under what conditions? If not, why not?
Hydrogen and helium can briefly bind together to make lithium-5, but this is an extremely unstable nuclide which falls apart instantly (with a half-life of ${\sim}4\times 10^{-22}\:\rm s$ ) and which actively requires energy to make (i.e. it is an endothermic process, as opposed to how we normally think of nuclear fusion). The reason for this is that helium-4 is a particularly stable system, and it has a huge binding energy $-$ much bigger than anything immediately higher up in size. In lithium-5, you have three protons, which you can think of as two of them paired up and one lone guy in a nuclear shell of its own at much higher energy. This energy is so high that it's simpler for the extra proton to just peel off and go away to become a separate hydrogen nucleus. To make stable lithium, you need more neutrons to stabilize the nuclide, so only lithium-6 and lithium-7 are stable. This raises the question of whether it might be possible to combine suitable isotopes to make those, for which the only candidates are \begin{align} \rm ^2H+{}^4He\to{}^6Li, \\ \rm ^3H+{}^4He\to{}^7Li, \\ \rm ^3H+{}^3He\to{}^6Li. \end{align} From these: The first reaction does happen, and e.g. this paper calls it "radiative capture of deuterium on alpha particles". But it is extremely unlikely and it only produced trace amounts of lithium-6 (w.r.t. lithium-7 production) in Big-Bang nucleosynthesis. (And, in addition, deuterium is not stable in stellar cores.) The second one does happen and it does produce energy. However, it is unlikely in stellar nucleosynthesis since it requires tritium, which is unstable. The third reaction can also happen (studied e.g. in this paper) but again it is extremely unlikely, and it requires tritium, which is unstable. For what it's worth, these reactions are exothermic, releasing roughly 1.5, 2.4 and 15.8 MeV of energy, respectively, so they are allowed to happen on their own without needing to supply initial energy to the reactants for them to fuse. In other words, the higher isotopes of hydrogen do have an open channel of fusing with helium to produce lithium. However, these channels are so suppressed, due to the details of how likely the reactions are to happen, that they are negligible in stellar nucleosynthesis. And, finally, there's an even bigger problem, known as lithium burning: if you just release a nucleus of lithium (either the -6 or -7 isotopes) into a stellar core, the star will just tend to eat it raw: Lithium-7 can fuse with hydrogen to make beryllium-8, which promptly breaks in half to give two helium-4 nuclei. Again, this is a consequence of the extreme stability of alpha particles compared to any of its neighbours in the table of nuclides. Lithium-6 can fuse with hydrogen to make beryllium-7, which decays via electron capture to lithium-7. The resulting lithium-7 will then end up catching another proton, as above. The net result of this mechanism is that developed stars have less lithium than the primordial soup they started out with.
{ "source": [ "https://physics.stackexchange.com/questions/661162", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/72564/" ] }
661,166
Say an electron is accelerated through a potential difference of 10 V established between two points A and B 1 metre apart. Would I only be able to say that the electron is accelerated through a potential difference of 10 V if the electron travels through the full 1 metre to gain its 10 eV of kinetic energy, or will it gain that kinetic energy just by being in that electric field?
{ "source": [ "https://physics.stackexchange.com/questions/661166", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/307732/" ] }
661,266
I have been putting preserved plums, on a rack, to sun and dry on my balcony. When I take them in at dusk, the plums are noticeably hot to the touch. They feel warmer than the bamboo and metal racks they are on, the cardboard box I put the racks on, the netting I put over the lot, and the air outside. (Note that ambient air temperature doesn't start dropping until well after I have the plums indoors.) The balcony itself, made out of a light-colored concrete-like composite, and the metal railing also feel warm, but not as much as the plums do. I recall some relevant concepts from physics classes, but I can't tell if I'm taking into account everything at play. Here's what I have so far: Plums are mostly water, which has a high specific heat (~4 kJ/kg/K) relative to air (~1 kJ/kg/K) and probably the other objects. I'm guessing the balcony also has a higher specific heat than air. Higher specific heat means that by the end of the day, the plums have stored more thermal energy than the cardboard box. Water and metal are good thermal conductors, so they will feel warmer to my hands than the other objects even if they contain the same energy per unit. Is there something else in here about the plums converting radiant energy to thermal that the other objects don't, or something about air flow? Is it a sign (which I suppose is not for Physics.SE) of fermentation?
You were on track...and then missed the mark. "Higher specific heat means that by the end of the day, the plums have stored more thermal energy than the cardboard box." Correct. You're on track... "Water and metal are good thermal conductors, so they will feel warmer to my hands than the other objects even if they contain the same energy per unit." But incorrect. You just veered and missed the mark. You don't feel thermal energy stored in your fingertips; you don't even feel the temperature of the material. You feel the temperature of your fingertips. This in turn is influenced by the specific heat capacity, thermal conductivity, and actual thermal energy stored in the material. Of key importance in your scenario is that you are feeling the temperature after the heat source has been removed and things have been given time to cool down. Specific heat capacity does affect how quickly that happens, since more energy must be drawn from the material for the same decrease in temperature. The role thermal conductivity plays is that it determines how quickly your fingertips match the temperature of the material. What this means is that a piece of aluminum (good thermal conductivity) will feel hotter than a piece of plastic (bad thermal conductivity) at the same temperature upon initially touching it, because it brings the temperature of your fingers to match its own faster. But hold your fingers on either long enough, and they will feel the same, because your finger has reached the same temperature in either case. This is all assuming the act of touching it does not change the temperature of the object itself, since it is transferring heat to or from your fingers after all (see next paragraph). Specific heat capacity also determines how much an object's temperature changes due to you touching it, as heat is transferred to or from your fingers. If an object is small enough, it transfers so much of its own thermal energy to your fingers that it drops significantly in temperature while not raising your own hand temperature that much, so it doesn't burn you. This is the exact same mechanism that enables an object with higher specific heat capacity (for the same mass) to take longer to cool at dusk, since both metal and plum are exposed to the same cooling conditions.
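A minimal lumped-parameter sketch of the "takes longer to cool" point, assuming the plum and a comparison object of the same mass $m$ see the same convective conditions ($hA$): Newton cooling gives $$T(t) - T_\infty = (T_0 - T_\infty)\, e^{-t/\tau}, \qquad \tau = \frac{mc}{hA},$$ so the cooling time constant scales directly with the specific heat $c$. With $c \approx 4200\ \mathrm{J\,kg^{-1}K^{-1}}$ for a watery plum versus an assumed $c \approx 1700\ \mathrm{J\,kg^{-1}K^{-1}}$ for dry cardboard, the plum's $\tau$ comes out roughly 2.5 times longer, so it is still noticeably warm when the cardboard has already cooled toward ambient.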
{ "source": [ "https://physics.stackexchange.com/questions/661266", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/163091/" ] }
661,322
When we push a sphere on a rough horizontal surface, it slows down because there is a net torque on it (by normal force and friction acting in opposite directions) and this causes its angular speed to decrease. But work done by all the forces on it is zero too, so why does its kinetic energy decrease?
Even if you assume a perfectly frictionless surface, the ball would still slow down because of inelastic deformation. Whenever a force is applied to an object it deforms, and when the force is removed, the object returns to its shape. In the case of a rolling ball, both the ball (at the contact point) and the ground keep deforming and springing back. This leads to continuous loss of energy. This is also why a hard ball will roll for longer than a soft ball, which deforms much more.
{ "source": [ "https://physics.stackexchange.com/questions/661322", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/306008/" ] }
661,684
Velocity has a magnitude and a direction and thus it is considered a vector. But from a linear algebra perspective, a vector is an element of a vector space. A set of mathematical objects can be a vector space if they follow some conditions. One of the conditions is that if we add two vectors we must get another vector from the set. Which set of vectors should I take as the vector space? If car A has a velocity $\vec{v}$, can we add this velocity to the velocity of car B and get another vector? Is the velocity of car B in the same vector space? What is the physical significance of such addition of vectors?
If we want to be mathematically precise, just saying "velocity is a vector" doesn't cut it. The definition of velocity is as the time derivative of position . In mathematical terms, this means - regardless of whether we think of position as a point in $\mathbb{R}^n$ or a more general manifold where position itself is not a vector - velocities are tangent vectors to curves $x(t)$ in our position space. In general, you can add two tangent vectors at the same point because they are vectors in the same tangent space, but you cannot add "velocity of car A" to "velocity of car B" unless the two cars are currently colliding and hence these two vectors live at the same point . Adding two velocities at the same point is just a way to express that it is equivalent to say "This thing is moving at $\sqrt{2} \frac{\mathrm{m}}{\mathrm{s}}$ northwest" and "This thing is moving at $1\frac{\mathrm{m}}{\mathrm{s}}$ north and it is moving at $1\frac{\mathrm{m}}{\mathrm{s}}$ west" - the "and" there corresponds to addition.
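A concrete sketch of "velocities are tangent vectors to curves": take a particle moving along the curve $x(t) = (t,\ t^2)$ in the plane. Its velocity at time $t$ is $$\vec v(t) = \frac{dx}{dt} = (1,\ 2t),$$ a vector attached to the point $x(t)$, i.e. an element of the tangent space at that point. Saying "it moves at $(1,2)$ at $t=1$" and saying "it moves at $(1,0)$ and it moves at $(0,2)$ at $t=1$" are the same statement; that is exactly the addition of two tangent vectors at the same point.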
{ "source": [ "https://physics.stackexchange.com/questions/661684", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/133629/" ] }
662,316
Due to some of the basic principles of quantum mechanics, we have the Pauli exclusion principle, due to Wolfgang Pauli, which says that two fermionic bodies cannot occupy the same quantum state simultaneously. If that is true, then how was all the matter, energy, space and time of the universe compressed into an infinitesimally small singularity 13.8 billion years ago? Wouldn't particles/bodies be occupying the same space simultaneously in an infinitely small place? Does this mean that the Big Bang is wrong, or is Pauli's exclusion principle wrong?
The key confusion is the idea that Pauli Exclusion prevents any two particles from occupying the same space . The actual Pauli Exclusion Principle is slightly different: it prevents any two particles occupying the same quantum state . If the temperature is higher than the Fermi Energy, there is more than enough thermal energy to give each particle its own quantum state (which includes energy). This allows particles to stack atop each other in space, (but have different momentum and energy). Consider electron clouds which partially overlap in your everyday atom! The Fermi temperature scales as the Fermi Energy, which in the relativistic case, scales as $L^{-1}$ . The temperature of a radiation dominated (as it was, early on) universe also scales as $L^{-1}$ . So as we go back in time, to smaller and smaller lengths, the Fermi Temperature will never overtake the Temperature, so the Pauli Exclusion Principle will hold true, but not matter physically. The pressure due to photons will be much larger than any degeneracy pressure. It's important to note that much of the energy density is in Bosonic fields (photons, scalar fields, etc.) rather than Fermionic fields. And important to note that we do not necessarily know what happens when the Universe is younger than a Planck time, as our understanding of physics above this energy scale is incomplete.
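A quick sketch of the scaling claim: for relativistic fermions with number density $n$, the Fermi energy is $$E_F \simeq \hbar c\,(3\pi^2 n)^{1/3} \propto n^{1/3} \propto \frac{1}{a},$$ since $n \propto a^{-3}$ as the scale factor $a$ shrinks, while the temperature of the radiation-dominated universe also scales as $T \propto 1/a$. Their ratio therefore stays roughly fixed as you run the clock backwards, so the Fermi temperature never overtakes the actual temperature and degeneracy pressure never becomes the dominant effect.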
{ "source": [ "https://physics.stackexchange.com/questions/662316", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/230396/" ] }
662,423
The formula for a falling object has $r^2$ in the denominator. This would mean that an object that is higher up accelerates at less than the standard $9.807\ \mathrm{m/s^2}$ that we are taught in high school. What would happen if we took a $1$ metre pole and a $10$ metre pole up to a height of $100$ metres for the bottom of both poles, and then dropped them? Let's assume they are weighted on the bottom so both remain vertical, and that the $10$ metre pole is hollow so they both weigh the same. Would they hit the ground simultaneously or otherwise?
Paul T. provides a good answer regarding the case where the heights of the bottoms of the poles are the same (which is what was asked for in the question). The main difference in that case is due to the different heights of the centers of mass of the two rods. However, you might ask, what if the centers of mass of the two rods were at the same height, would there still be a difference? It turns out that there will be, although the difference is even smaller. Let $m$ and $l$ be the mass and length of a vertical rod of uniform density, and $r$ the height of its center from the center of the planet. The planet's mass is $M$ . Consider a tiny piece of the rod, of mass $\delta m$ and distance $x$ from the rod's center (so $x$ is between $-l/2$ and $+l/2$ ). The force of gravity on the tiny piece is: $$\delta F=\frac{GM\delta m}{(r+x)^2}$$ The total force on the rod is the integral of $\delta F$ over the whole mass: $$F=\int\frac{GM}{(r+x)^2}dm$$ The mass of the small piece is proportional to its length ( $m/l=\delta m/\delta x$ ) so we can substitute $dx$ for $dm$ with the appropriate scaling: $$F=\int_{-l/2}^{+l/2}\frac{GM}{(r+x)^2}\left(\frac{m}{l}dx\right)$$ Doing the integral yields: $$\begin{align} F&=-\frac{GMm}{l}\left.\frac{1}{r+x}\right|_{x=-l/2}^{+l/2} \\ &=-\frac{GMm}{l}\left(\frac{1}{r+l/2}-\frac{1}{r-l/2}\right) \\ &=-\frac{GMm}{l}\left(\frac{(r-l/2)-(r+l/2)}{(r+l/2)(r-l/2)}\right) \\ &=-\frac{GMm}{l}\left(\frac{-l}{r^2-(l/2)^2}\right) \\ &= \frac{GMm}{r^2-(l/2)^2} \end{align}$$ This is almost the same value as if the mass of the rod were concentrated at its center (in which case it would be just $GMm/r^2$ ). Like Paul T., let's look at the relative difference: $$ \frac{\frac{GMm}{r^2-(l/2)^2}-\frac{GMm}{r^2}}{\frac{GMm}{r^2}} = \frac{(r^2)-(r^2-(l/2)^2)}{r^2-(l/2)^2} = \frac{(l/2)^2}{r^2-(l/2)^2} \approx \left(\frac{l}{2r}\right)^2 $$ Compare this to the case where we measure $r$ from the end of the pole, where the relative difference (between a rod and a point) was just $l/r$ . If the end case had a difference of one part per million, the center case will have a difference of less than one part per trillion!
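Plugging in illustrative numbers for the scenario in the question (a 10 m pole dropped near the surface, so $r \approx 6.37\times 10^{6}\ \mathrm{m}$): $$\frac{l}{r} \approx \frac{10}{6.37\times 10^{6}} \approx 1.6\times 10^{-6}, \qquad \left(\frac{l}{2r}\right)^2 \approx 6\times 10^{-13},$$ i.e. about a part per million for the end-referenced case and well under a part per trillion for the centre-referenced case, consistent with the estimates above.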
{ "source": [ "https://physics.stackexchange.com/questions/662423", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/187139/" ] }
662,432
Consider a physical quantity $\phi$ that is globally conserved. From Feynman's argument (in his volume 2 I think), which states that local conservation follows from global conservation due to special relativity, we can say that $\phi$ is locally conserved also. Mathematically this can be written as, $$\frac{\partial\rho}{\partial t}+\vec\nabla\cdot\vec J=0$$ where $\rho=$ density of $\phi$ and $\vec J=\rho\vec v$. The above law, with $\vec\nabla=(\frac{\partial}{\partial q_i},\frac{\partial}{\partial p_i})$ in phase space, can be written as (this is how Liouville's theorem in statistical mechanics is derived from microstate conservation): $$\frac{d\rho}{dt}=0$$ Firstly, is this equation true for the density of any conserved quantity? Secondly, I want to know what exactly this equation means. If the above were $\frac{d(\text{total charge})}{dt}=0$ then that would have been obvious. But how does the total time derivative of the density being zero imply that the total quantity is conserved? P.S: I know how the above analysis can come from Hamiltonian mechanics, where the time derivative of any function can be written as the sum of a partial time derivative plus the Poisson bracket with the Hamiltonian. I am more interested in knowing how the equation can be understood in clear physical/visual/intuitive terms.
{ "source": [ "https://physics.stackexchange.com/questions/662432", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/266050/" ] }
662,583
Goldstein, 3rd ed $$ \frac{d}{d t}\left(\frac{\partial L}{\partial \dot{q}_{j}}\right)-\frac{\partial L}{\partial q_{j}}=0\tag{1.57} $$ expressions referred to as "Lagrange's equations." Note that for a particular set of equations of motion there is no unique choice of Lagrangian such that the above equations lead to the equations of motion in the given generalized coordinates. I'm not able to understand what the highlighted statement means. How can we have different Lagrangians? While deriving the above equation we went through the derivation and ended up defining $L=T-V$, so the Lagrangian is fixed and always $L=T-V$, so why talk about a different Lagrangian? Here are the preceding steps of the derivation: $$ \frac{d}{d t}\left(\frac{\partial(T-V)}{\partial \dot{q}_{j}}\right)-\frac{\partial(T-V)}{\partial q_{j}}=0 $$ Or, defining a new function, the Lagrangian $L$, as $$ L=T-V\tag{1.56} $$ the Eqs. (1.53) become $$ \frac{d}{d t}\left(\frac{\partial L}{\partial \dot{q}_{j}}\right)-\frac{\partial L}{\partial q_{j}}=0.\tag{1.57} $$ So we see that we have defined that $L=T-V$, so why talk of a different Lagrangian? Can anyone please help me.
It may not be obvious that different Lagrangians can lead to the same equations of motion. Here is a simple example. Take theses two Lagrangians $$L = \frac{m}{2}\dot{x}^2-\frac{k}{2}x ^2 \tag{1}$$ $$L' = \frac{m}{2}\dot{x}^2-\frac{k}{2}x ^2+ax\dot{x} \tag{2}$$ These two Lagrangians differ just by the extra term $ax\dot{x}$ . From the Lagrangian (1) you get the equation of motion $$\frac{d}{d t}\left(\frac{\partial L}{\partial \dot{x}}\right) = \frac{\partial L}{\partial x}$$ $$m\ddot{x}=-kx \tag{3}$$ From the Lagrangian (2) you get the equation of motion $$\frac{d}{d t}\left(\frac{\partial L'}{\partial \dot{x}}\right) = \frac{\partial L'}{\partial x}$$ $$m\ddot{x}+a\dot{x}=-kx+a\dot{x} \tag{4}$$ which is the same as (3). The above was just an example. Actually you can add any function $F$ of the form $$F(x,\dot{x},t)=\frac{\partial G(x,t)}{\partial x}\dot{x}+\frac{\partial G(x,t)}{\partial t}$$ with an arbitrary function $G(x,t)$ to the Lagrangian, and the equation of motion will remain the same. This function $F$ can equivalently be written as a total time derivative $$F(x,\dot{x},t)=\frac{dG(x,t)}{dt}$$ By the way: The example above was constructed using $G(x,t)=\frac{a}{2}x^2$ , thus giving $F(x,\dot{x},t)=ax\dot{x}$ .
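If you want to check this mechanically, here is a small sketch using sympy (the variable and function names are just illustrative):

```python
import sympy as sp

t = sp.symbols('t')
m, k, a = sp.symbols('m k a', positive=True)
x = sp.Function('x')(t)
xdot = sp.diff(x, t)

# The two Lagrangians from the example above
L1 = m*xdot**2/2 - k*x**2/2
L2 = L1 + a*x*xdot

def euler_lagrange(L):
    # d/dt (dL/dxdot) - dL/dx
    return sp.simplify(sp.diff(sp.diff(L, xdot), t) - sp.diff(L, x))

# Both print the same expression, equivalent to m*x'' + k*x,
# i.e. the a*x*xdot term drops out of the equation of motion.
print(euler_lagrange(L1))
print(euler_lagrange(L2))
```

Setting either result to zero gives the same equation of motion $m\ddot x = -kx$.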
{ "source": [ "https://physics.stackexchange.com/questions/662583", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/256066/" ] }
662,623
I am new to physics but not new to science/scientific thinking. Since I was young I have never really understood how to interpret the Newtonian forces. In some cases they seem very real. E.g. the static friction force can be nicely explained as gluing of atoms, or something like that. But if you consider two objects of different weights on the same table, then the table applies 2 different normal forces to the 2 objects. This really seems to be an abstract construct. We see that the objects are not moving so to be self-consistent with this massive framework of forces we are building we are claiming that the table is exerting a force towards the object. Yeah sure! What kind of atomic interpretation could this have? Of course, it does work at the end, so I could totally accept the "construct" interpretation, since it's a useful model anyway. But what transpires from reading articles etc. is that Newtonian forces are "more" than that. They are actually "real". So, what is the right way to conceptualize the Newtonian forces? I hope I expressed my confusion clearly.
First we must see how the normal force arises. When we place a block on a table, the table under it actually deforms. The deformation is so small that it can't be observed, so we simply neglect it. As soon as we place the block on the table, it pushes on and deforms the table, which develops restoring forces. These restoring forces are electromagnetic in nature, and when the weight of the block equals these forces, the block is at rest on the table. These restoring forces sum up to what we know as the normal force. If you take a heavier block, it will deform the table more than a lighter block, and therefore more restoring force will be developed, i.e. a larger normal reaction is exerted.
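To see why you never notice the deformation, model the table top as a very stiff spring with an assumed, illustrative stiffness $k \sim 10^{8}\ \mathrm{N/m}$. A 10 kg block then compresses it by only $$x = \frac{mg}{k} \approx \frac{10\times 9.8}{10^{8}}\ \mathrm{m} \approx 1\ \mu\mathrm{m},$$ far too small to see, yet enough for the restoring (normal) force to build up to exactly $mg$. A heavier block sinks in a little more and gets a proportionally larger normal force.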
{ "source": [ "https://physics.stackexchange.com/questions/662623", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/312794/" ] }
662,672
The overlap of a free electron wavefunction and a bound electron wavefunction is nonzero. So why don't electrons slowly bleed out of atoms? If any wavefunction enters a free particle state it will just escape!
There are two ways of thinking about this answer. The first is that tunneling allows a particle to pass quantum-mechanically through a barrier that it would not have enough energy to pass through classically. However, this always means tunneling from one classically allowed region to another; the initial and final energies have to be the same. (It is possible, thanks to quantum mechanical uncertainty, to sort of "borrow" enough energy $E$ to pass through the barrier, but only for a short time $\sim\hbar/E$ . As $t\rightarrow\infty$ , the energy has to be back to the value it started at.) In an atom, an electron is bound, in a negative energy state, while for a continuum/scattering/asymptotic state, the energy is necessarily positive. So there is no tunneling process that can take a bound electron to a free electron. (These facts about the energy have direct experimental consequences in other tunneling processes. In $\alpha$ -decay, conceptualized as the tunneling of an $\alpha$ -particle out from a positive-energy state inside nucleus, the energy of the outgoing $\alpha$ is related to the halflife, through the Geiger-Nuttall law.) The second way of understanding this result comes from the consideration of the overlap between the electron wave function in the bound and free states ( $|\psi_{0}\rangle$ and $|\vec{k}\rangle$ , respectively). If there were an overlap between these two states, then a bound electron could escape into an unbound state. However, in fact $\langle\vec{k}|\psi_{0}\rangle=0$ ; the two states are orthogonal! That this has to be the case is actually clear from the energy considerations above. The states $|\psi_{0}\rangle$ and $|\vec{k}\rangle$ are both eigenstates of the Hamiltonian $H$ , with different eigenvalues (one negative, one positive), so they are necessarily orthogonal. This may seem confusing, since the Fourier transform $$\tilde{\psi}_{0}\!\left(\vec{k}\right)=\frac{1}{(2\pi)^{3/2}}\int d^{3}r\,e^{-i\vec{k}\cdot\vec{r}}\psi_{0}(\vec{r})$$ is generically going to be nonzero. The resolution of this apparent paradox is that the continuum wave function is not simply a plane wave $\propto e^{i\vec{k}\cdot\vec{r}}$ . (In many situations, such as when calculating matrix elements of certain operators between $|\psi_{0}\rangle$ and $|\vec{k}\rangle$ , it may be a good enough approximation to use a plane wave for the $|\vec{k}\rangle$ wave function, but it does not work for this calculation.) At large distances, the continuum wave function is close to a plane wave, but in the vicinity of the nucleus [where $\psi_{0}(\vec{r})$ has most of its support], the scattering wave function is strongly distorted by the attractive potential. For a single-electron atom, the continuum wave functions are Coulomb waves ; for multi-electron atoms, they are much more complicated, but the qualitative distortion is similar.
{ "source": [ "https://physics.stackexchange.com/questions/662672", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/307328/" ] }
664,342
If the earth got shrunk into the size of a peanut, it would turn into a black hole, which would have a higher density but same mass. Since the center of mass of both bodies would be the same, the distance between a far-away object and the centers of mass would be the same. Since both the variables (mass, distance) would be the same, wouldn't the gravitational force exerted by both the earth and the black hole on a far away object be the same? If this is true, wouldn't light be unable to escape the earth as well, since light can't escape black holes?
The parameter you're not considering is the distance. The Earth is an object with the mass of the Earth $m_E$ and the radius of the Earth $r_E$ (duh). If you take a black hole with mass $m_E$ , then its radius will be the radius of a peanut, $r_p$ . When shooting a light ray on Earth, the light easily escapes its gravitational pull because it is shot at a distance $r_E$ from the center of mass of the object. When shooting a light ray near our black hole, it can't escape its gravitational pull and falls into it, because it is shot from a much shorter distance, $r_p$ . If you shoot a light ray at a distance of $r_E$ from the black hole, it will behave the same way as it does on the Earth. This is the punchline: light can escape black holes from a great enough distance . Then does this mean that if we go near the center of the Earth and shoot a light ray when we're at a distance of about $r_p$ the light will be pulled in the center of the Earth? Of course not, because the mass "inside" a radius $r_p$ of the Earth's center is much much smaller than $m_E$ .
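For concreteness, the "peanut" size here is just the Schwarzschild radius of an Earth-mass black hole; a quick back-of-the-envelope check with standard constants:

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_earth = 5.972e24   # kg
r_earth = 6.371e6    # m

r_s = 2 * G * M_earth / c**2
print(r_s)                        # ~8.9e-3 m, i.e. about 9 mm
print(G * M_earth / r_earth**2)   # ~9.8 m/s^2 at the old surface radius
```

At a distance of $r_E$ the gravitational acceleration is the same ~9.8 m/s² whether the mass inside is spread out as a planet or collapsed into a black hole; only much closer in, near those 9 mm, does light fail to escape.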
{ "source": [ "https://physics.stackexchange.com/questions/664342", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/313076/" ] }
664,451
Currently, even the nearest stars are lightyears away, and impossible to reach in our lifetimes. If space is always expanding, and was once infinitely smaller, then at what point in the past was space so much smaller that the average distance between stars was less than light days? Was there ever such a time?
As the universe expands each individual galaxy stays roughly the same size, with stars on orbits of roughly constant diameter, so the stars within any given galaxy were no closer together a long time ago than they are now (at least as far as cosmic expansion effects are concerned). The distances between galaxy clusters were smaller in the past, and a good way to get a sense of this is to note that the ratio of distance between them now to distance between them a long time ago is equal to the ratio of wavelengths in the light received and emitted. If we receive light from a galaxy and the light arriving has a wavelength twice as large as when it set out, then the universe was half as small when the light set out (that is, distances between galaxy clusters were then on average half what they now are). To find a time when galaxies were not many lightyears apart you have to go so far back that you arrive at times before the formation of galaxies, so there never was such a time. [ Added remark in answer to a question in the comments concerning galaxy clusters . One galaxy cluster drifts away from another because the initial conditions gave them velocities of this form. This general condition is called the "Hubble flow" and it leads to the cosmic expansion. It is what things would do if they only experienced the average cosmic gravitation, without any local bumps owing to a non-homogeneous matter distribution such as a galaxy. Meanwhile everything attracts stuff near to it and this can lead to bound groups such as solar systems, galaxies and galaxy clusters. This binding is sufficient to turn the relative velocities around so that each bound group does not drift apart, nor does it expand (unless some other process intervenes).]
{ "source": [ "https://physics.stackexchange.com/questions/664451", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/312912/" ] }
664,705
This is not a homework question. I attempted to draw a free body diagram for a person pulling or pushing a cart. Based on Newton's third law, the following forces act on the body of the person: (1) a forward reaction force exerted by the ground because of friction between the person and the ground, (2) a downward force (the person's weight) exerted by the earth, and (3) a backward reaction force exerted by the cart. I am wondering why the body of the person must be tilted forward. I have not seen any relationship between this posture and the magnitude of the forces acting on the person. Could you tell me why the person's body must be tilted forward? How does this posture provide mechanical advantages?
Probably the easiest way to analyze it is in terms of torque balance. So, you already know that for a rigid body which happens to have constant momentum, this constancy requires all of the external forces to sum to zero: this is force balance. (It is sometimes confused with Newton's third law; Newton's third law just says here that “You don't need to consider all the forces, only the external ones. The internal forces cancel each other out.”) Well, the same thing can be said of any conserved quantity; it doesn't just have to be momentum. If my sink is a bit clogged and there is a standing water level in my sink, conservation of mass of the water is going to guarantee that if the water level is not changing, then water coming in from the faucet is equally balanced by water leaving, either by evaporation or by slipping around the clog. There is a state of water flow balance. All of these balances are called “dynamic equilibrium” conditions. The conserved quantity that matters in the torque case is called angular momentum, torque is the property of a force that can transfer angular momentum, and if we notice that a human is keeping a constant orientation, then we can conclude that they are in a state of torque balance. Once you know to look here, the rest of the analysis is very straightforward. The forces are roughly comparable: the horizontal component of the force on the feet always points forwards when walking forwards, and it also has to provide the horizontal component of the force on the cart. The cart’s reaction force on the person is thus backwards, the force on the feet is forwards, and they are about the same magnitude. For torque we should multiply each force by its lever arm, which is to say roughly the distance from the person's belly button (their center of mass) to the line along which the force acts. The problem is that the force on the feet is about as far from the center of mass as it can be, whereas if I'm grasping something with my arms at about waist level, that's about as close to my center of mass as it can be. So the torque from my legs might easily be 10 times the torque of the weight that I'm pulling. Leaning forward allows the person to use gravity to counter that torque.
{ "source": [ "https://physics.stackexchange.com/questions/664705", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/652/" ] }
665,487
I read that the time taken to cross a swimming pool (still waters) is the same minimum time required to cross a flowing river, provided the swimmer crosses the river in the minimum time, and the pool and the river are of the same width. My thinking is: In the pool, the swimmer's entire velocity is directed across the pool, so the time taken would be $\frac{W}{V}$ where $W$ is the pool width and $V$ the velocity. When crossing the river, the time taken would be $\frac{W}{V_y}$ where $V_y$ is the component of the swimmer's velocity across the river (perpendicular to the flow), assuming the river flows along the x-direction. Hence the time to cross the river would be more. Why is this wrong?
Your reasoning would be correct if the swimmer in the river was trying to reach the point on the bank opposite where they started. To do this they have to swim in a direction angled upstream, so relative to the water they have to swim a longer distance than the width of the river. But to cross the river in the minimum time the swimmer should swim in a direction perpendicular to the banks. The river will carry them some distance downstream, but they will only have to swim the width of the river relative to the water - which is the reference frame in which their swimming speed is measured. So although they travel further relative to the banks it only takes them the same time as swimming across a swimming pool of the same width.
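To put numbers on the two strategies (illustrative values: river width $W = 50\ \mathrm{m}$, swimming speed $V = 1\ \mathrm{m/s}$, current $u = 0.6\ \mathrm{m/s}$): $$t_{\text{straight across}} = \frac{W}{V} = 50\ \mathrm{s}, \qquad t_{\text{land directly opposite}} = \frac{W}{\sqrt{V^2 - u^2}} = \frac{50}{0.8} = 62.5\ \mathrm{s}.$$ The first swimmer drifts $u\,t = 30\ \mathrm{m}$ downstream but crosses in exactly the "swimming pool" time; the second reaches the point directly opposite but takes longer.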
{ "source": [ "https://physics.stackexchange.com/questions/665487", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/268529/" ] }
665,946
I've often heard that infrared rays are called "heat rays". However, I feel like this term is a misnomer. Don't all the wavelengths of electromagnetic radiation carry energy? Judging by how gamma rays are highly penetrating and are dangerous when absorbed by tissues, radiation of shorter wavelengths should carry more energy, and should be able to increase the internal energy of the object that absorbed it much more than infrared rays can. This seems consistent with the conservation of energy for an isolated system: $$T_{ER} = \Delta E_{int}$$ where $T_{ER}$ stands for transfer of energy by electromagnetic radiation. Then why are UV rays, X-rays and gamma rays not classified as "heat rays"?
Don't all the wavelengths of electromagnetic radiation carry energy? Yes. And that photon energy $E$ is given by $$E=h\nu$$ Where $h$ = Planck's constant and $\nu$ = frequency. But not all frequencies interact with matter in the same way. Judging by how gamma rays are highly penetrating and are dangerous when absorbed by tissues Very little of the energy of gamma rays is absorbed by tissue, i.e., tissue is basically transparent to gamma rays. They can even pass through several inches of lead. But as they pass through human tissue, the energy that is absorbed can cause ionizations that damage tissue and DNA. For this reason, it is called ionizing radiation. ...radiation of shorter wavelengths should carry more energy, and should be able to increase the internal energy of the object that absorbed it much more than infrared rays can. Yes it does, but the amount of energy that is actually absorbed depends on the frequency. Per the Hyperphysics website ( http://hyperphysics.phy-astr.gsu.edu/hbase/mod3.html ) regarding the interaction of radiation with matter: "as you move upward in frequency from infrared to visible light, you absorb (the energy) more and more strongly. In the lower ultraviolet range, all the uv from the sun is absorbed in a thin outer layer of your skin. As you move further up into the x-ray region of the spectrum, you become transparent again, because most of the mechanisms for absorption are gone. You then absorb only a small fraction of the radiation, but that absorption involves the more violent ionization events" Then why are UV rays, X-rays and gamma rays not classified as "heat rays" In the case of X-rays and gamma rays, it's because they don't interact with the skin in the same way as infrared, namely, they do not create a feeling of warmth on the skin. The case of UV is a bit more complex. You don't directly feel UV radiation. But per the FDA.gov site, "When UV rays reach your skin, they damage cells in the epidermis. In response, your immune system increases blood flow to the affected areas. The increased blood flow is what gives sunburn its characteristic redness and makes the skin feel warm to the touch." Hope this helps.
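For a sense of scale of the photon energies involved, $E = h\nu = hc/\lambda$, and with $hc \approx 1240\ \mathrm{eV\cdot nm}$: $$E_{\mathrm{IR}}(\lambda = 10\ \mu\mathrm{m}) \approx 0.12\ \mathrm{eV}, \qquad E_{\mathrm{X\text{-}ray}}(\lambda = 0.1\ \mathrm{nm}) \approx 12\,400\ \mathrm{eV}.$$ The X-ray photon carries roughly 100,000 times more energy, but as described above it mostly passes straight through skin or deposits its energy in ionization events, rather than in the gentle molecular excitations we sense as warmth.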
{ "source": [ "https://physics.stackexchange.com/questions/665946", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/250116/" ] }
666,059
When you see models of water you see something like this: The hydrogens in the water molecule become (partially) positively charged because the oxygen pulls the electrons toward itself more. So why don't they repel and move to the opposite sides of the oxygen? Or just form on opposite sides in the first place?
There are six electrons in the outer orbital of an oxygen atom. In a water molecule two of these electrons bond with the lone electron of each hydrogen atom to form two “bond pairs”. The remaining four oxygen electrons pair up to form two “lone pairs” (because of quantum mechanics, it is energetically favourable for electrons with opposite spins to form pairs). If the repulsive forces between the bond pairs and the lone pairs were completely symmetrical then the four pairs would form the vertices of a regular tetrahedron, and the angle between the hydrogen atoms (the “bond angle”) would be approximately 109 degrees (the exact angle is $\cos^{-1}\left(-\tfrac 1 3 \right)$ ). This is what happens when four hydrogen atoms and one carbon atom form a molecule of methane, which has four bond pairs. However, in water the repulsive forces are not quite symmetrical and the hydrogen atoms are pushed a bit closer together - the actual bond angle is about 104 degrees. See this Wikipedia article for more details.
{ "source": [ "https://physics.stackexchange.com/questions/666059", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/313899/" ] }
666,238
Time isn't a physical object, but according to Einstein's theory of gravity, mass bends spacetime towards things with mass and makes them fall. How does a physical object affect something intangible?
To be precise, time is not curved; it is the spacetime manifold that has curvature. Such a manifold can be given a coordinate chart - an arbitrary one - that assigns to different points (spacetime events) some labels. It turns out that we need one time coordinate to label the time slices and three spatial coordinates to distinguish the events within that slice. What you might satisfactorily say is that the rate of flow of the proper time - the time that your own clock shows - depends on how much spacetime curvature is in your vicinity. This is a quantity independent of any coordinates, a true physical property if you will. This proper time is a measure of your aging, the biological processes in your organism, the oscillation frequencies of the atomic transitions in the atomic clock you might have, etc. Imagine you and your friend orbiting a static black hole. You get the same clocks and synchronize them. They show the same time and tick at the same rate. Then your friend stays at his orbit (at a distance $r_{friend}$ ) and you go much closer to the black hole (at a distance $r_{you}$ ). You spend some time there and upon returning, you discover that your clocks show different readings! In fact, using the Schwarzschild solution you could predict that your clocks will exhibit a ratio of proper times that have passed equal to $$ \frac{\tau_{you}}{\tau_{friend}} = \sqrt{\frac{1-2M/r_{you}}{1-2M/r_{friend}}}$$ (in geometric units where $G=c=1$, so the horizon sits at $r=2M$). In the above, I omit the problem of determining when you start and stop the readings and how you account for the travel back and forth. It does not change the fact that your clock will exhibit fewer ticks-and-tocks when you compare it with the one of your friend.
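Plugging in illustrative radii (measured in units of the horizon radius $2M$): with the friend at $r_{friend} = 10\,(2M)$ and you at $r_{you} = 2\,(2M)$, $$\frac{\tau_{you}}{\tau_{friend}} = \sqrt{\frac{1 - 1/2}{1 - 1/10}} \approx 0.75,$$ i.e. for every hour that passes on your friend's clock, only about 45 minutes pass on yours.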
{ "source": [ "https://physics.stackexchange.com/questions/666238", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/313968/" ] }
666,268
Is it possible to shape or focus magnetic fields like an optical lens does with light?
{ "source": [ "https://physics.stackexchange.com/questions/666268", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/106477/" ] }
666,524
I've noticed that unlike other liquids, when pouring olive oil for example, I don't hear any sound at all from it. Usually you can hear an audible sound as a cup gets filled with water, as the sound increases in pitch. What makes the oil behave this way?
The noise is generated by turbulent flow. Turbulence in the flow generates turbulence in the air at the interface between the air and the liquid surface, and that turbulence in the air is what we hear as the splashing sound. So how much noise you hear depends on how turbulent the liquid flow is, and this is inversely related to the viscosity of the liquid. Turbulent flow is exceedingly hard to describe mathematically (indeed you can win a million dollars if you can do this ) but as a general rule for a given flow rate the amount of turbulence decreases as the fluid viscosity increases. And oil has a higher viscosity than water so when pouring oil into a cup we get less turbulence than when pouring water at the same rate, and hence we hear less noise.
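One way to make "inversely related to viscosity" concrete is the Reynolds number $Re = \rho v D/\mu$, which governs whether a flow is laminar or turbulent. Here is a small sketch with typical, approximate property values (the pour speed and stream diameter are illustrative guesses):

```python
rho_water, mu_water = 1000.0, 1.0e-3   # kg/m^3, Pa*s (approximate)
rho_oil,   mu_oil   = 910.0,  8.0e-2   # olive oil, approximate values

v, D = 1.0, 0.01   # an illustrative pour: ~1 m/s through a ~1 cm stream

Re_water = rho_water * v * D / mu_water   # ~10,000 -> easily turbulent
Re_oil   = rho_oil   * v * D / mu_oil     # ~100    -> smooth, laminar
print(Re_water, Re_oil)
```

The oil stream sits around two orders of magnitude lower in Reynolds number for the same pour, so it stays smooth and quiet where the water stream breaks up and splashes.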
{ "source": [ "https://physics.stackexchange.com/questions/666524", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/314114/" ] }
666,525
Suppose a system (e.g. a rocket) consists of $N$ atoms. It starts moving away from the origin of an inertial frame at speed 0.9c. Will $N$ change, and if it changes, where does this change come from (if it increases) or go to (if it decreases)? Update: Let's suppose there is no fuel in the rocket and it attains this speed through a sequence (possibly a large number) of gravitational slingshots, and the rest mass $m_0$ we are talking about is calculated after receiving the first slingshot. Follow-up question: Since $N$ will not change and Total Mass = Sum(mass of all atoms in the system), and according to the equation $$m = \frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}}$$ there is a $\Delta m$ increase in the total mass. I want to understand how this $\Delta m$ comes into the system: does each particle's mass increase, or is it something else?
{ "source": [ "https://physics.stackexchange.com/questions/666525", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/314115/" ] }
666,876
Kestrels are birds of prey commonly found in Europe, Asia, Africa, and the North America. They belong to the falcon family but have a unique ability to hover in the air. You can find a whole bunch of videos (See 1 , 2 , 3 , 4 , for example) about these fascinating creatures if you search "Kestrel hunting." (You can click the images below to see the videos) Rearview : Video from wildaboutimages ( link here ) Video from viralhog ( link here ) Side view in slow motion: Video from wildaboutimages ( link here ) While I admire how they stabilize their head, I am fascinated by their ability to remain still in the air . Note that the bird doesn't have any external support and doesn't flap its wings during this process. There is no horizontal displacement even though there is a reasonably strong wind flow (enough to support its weight). Why doesn't the bird get thrown backward like, say, a paper plane would in the wind? While it could be possible that the movement is so small for us to see, watching and rewatching the video makes me think otherwise. Did the birds finally manage to get rid of drag, or is this some very delicate balancing of forces? It should also be noted that this behavior is not limited to Kestrels or even birds. See this video of a barn owl hunting, for instance (not as impressive, but worth mentioning.), or this video where a hang glider gracefully hovers in the wind.
A free-body diagram for a fixed-wing airfoil takes into account four interactions: weight, thrust, lift, and drag. For an unpowered airfoil, the thrust is zero. [ source ] These are approximately mutually perpendicular, but not quite: The weight force always points down. “Drag” is the part of the aerodynamic interaction that’s antiparallel to the motion through the air. “Lift” is the aerodynamic interaction that’s perpendicular to the motion through the air. The direction of thrust (in powered flight) depends on the orientation of your engine. If the motion of the wing through the air is perfectly level, then the lift and drag forces are vertical and horizontal, and constant-velocity motion (including zero-velocity motion, like hovering) is impossible: there’s nothing to oppose the horizontal drag force, so the wing will accelerate in the direction of the drag. Likewise, if the motion of the wing through the air has an upward component, then the horizontal parts of the drag and the lift point in the same direction. But in the illustration, the motion through the air has a slight downward tilt, which means the lift vector has a forward-pointing horizontal component that can in principle cancel out the horizontal part of the drag. The kestrel is “hovering” by gliding on a very slight updraft, so that its airspeed exactly cancels the wind’s velocity. Seagulls also hover , and they do so in flocks . When you see a flock of seagulls hovering, they all do so facing the same direction, and tend to hover relatively close to each other. That’s the place where the updraft is the strongest.
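A minimal force-balance sketch for the unpowered, constant-velocity glide (no flapping, so thrust is zero): resolving along and perpendicular to the flight path through the air, with glide angle $\gamma$ below the horizontal, $$L = mg\cos\gamma, \qquad D = mg\sin\gamma \;\;\Rightarrow\;\; \tan\gamma = \frac{D}{L}.$$ The bird picks an airspeed and a small downward glide angle such that the slight updraft plus the oncoming wind exactly cancel its motion relative to the ground; it is still "flying forward and slightly down" through the air the whole time.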
{ "source": [ "https://physics.stackexchange.com/questions/666876", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/155230/" ] }
667,000
I have been wondering why only electrons revolve around protons instead of the other way around (protons revolving around electrons). The force between them is electrostatic, and I think the mass factor has nothing to do with it here. Then why?
NB: I interpreted the question to essentially mean, why do protons rather than electrons reside in nuclei? Electrons repel each other with a Coulomb force that grows very large when they are close together. Protons also repel each other in the same way, but the difference is that protons are also attracted to each other and to neutrons by the even stronger strong nuclear force (since protons are made up of quarks that feel the strong force), which acts over short range ( $\sim 10^{-15}$ m) and thus can be bound into dense nuclei. Electrons are point-like particles and not made up of quarks. They do not interact via the strong nuclear force and cannot be bound into dense nuclei.
{ "source": [ "https://physics.stackexchange.com/questions/667000", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/307732/" ] }
667,472
How much force do I apply when I lift my leg above the ground? The same amount as gravity does on my leg (mg)? Or MORE than it (greater than mg)? If the displacement from the ground to my lifted leg is h meters, then what's the work done? mgh or more than mgh? Basically I want to know if I have to apply more force or do more work than gravity to lift my leg up.
Because your leg began at rest, moved for a time $\Delta t$ and ended at rest, the average force it felt was $$ \langle F \rangle = \frac{\Delta p}{\Delta t} = 0 $$ meaning on average your force was equal and opposite to gravity. However, when you accelerated upward you acted with more force than gravity, and when you decelerated it, you acted with less force.
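For the work part of the question: if the leg (mass $m$) starts and ends at rest, the work-energy theorem gives $$W_{\text{you}} + W_{\text{gravity}} = \Delta KE = 0 \;\;\Rightarrow\;\; W_{\text{you}} = -W_{\text{gravity}} = mgh,$$ so the net work you do on the leg equals $mgh$, even though your force was momentarily larger than $mg$ while accelerating it and smaller while decelerating it. (Your muscles actually expend more metabolic energy than this, since they are far from ideal machines.)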
{ "source": [ "https://physics.stackexchange.com/questions/667472", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/313241/" ] }
667,698
I am a high school student and have learnt about how curved surfaces reflect and refract (in "ray optics"). We were always told that these surfaces were spherical in shape, meaning they were an arc of a circle. However, in mathematics I have recently learnt the property that parabolic surfaces converge light rays coming from an infinite distance exactly at the focus. But this property confused me because in physics we were taught that the mirrors that converge such light rays at the focus are spherical in shape, not parabolic. So what exactly is the shape of such mirrors? Are we using some approximation in physics when we say that "spherical" mirrors have this property? What is that approximation, and what is its range of error? To satisfy my further curiosity, what about thin "spherical" lenses which converge rays coming from infinity at their focus? Are they really spherical or are they parabolic? What is the range of error (if any) in that case?
Well, the mirrors you are learning about in physics are spherical. There are both spherical and parabolic mirrors. The only difference between them is that parabolic mirrors are more precise; they have only one focal point. Spherical mirrors have a single focal point only when the incoming rays are paraxial (rays very close to the principal axis). When rays hit the mirror far from the principal axis they are focused at different points, creating a spread of focal points collectively known as a focal volume. See the images below: You can see multiple focal points in the spherical (concave) one, whereas the parabolic one has a single focal point. This is called spherical aberration. Now, the question arises: if parabolic mirrors are more efficient than spherical mirrors, why even make spherical ones? For optical applications, like Newtonian telescopes, the curvature shown in illustrations like these is greatly exaggerated. Telescope mirrors are much less curved, almost flat. And parabolic telescope mirrors look spherical and very nearly are spherical, deviating from the sphere by perhaps only millionths of an inch. In reality, all optics suffer from diffraction. If the spherical aberration causes less image degradation than diffraction, then little or nothing is gained by using a parabola, which is harder to make. If a spherical mirror is a small enough section of a sphere of large enough radius, then it can still be diffraction limited. Small Newtonian telescopes, commonly around 114 mm diameter and 900 mm focal length, usually have spherical mirrors and are diffraction limited or nearly so. Other kinds of telescopes use spherical mirrors, but correct the spherical aberration with lenses or other optical elements.
{ "source": [ "https://physics.stackexchange.com/questions/667698", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/263638/" ] }
667,714
Why doesn't a ray of light emitted by a distant galaxy, say about a thousand light years away, die down somewhere along the way? I mean, how do the light rays from galaxies so far away reach us in this day and age, when they were emitted so long ago? Why aren't they dampened and lost altogether? What drives the light rays to travel such humongous distances WITHOUT being dampened or lost altogether? And how can this be explained if constant momentum and energy are what drive a photon through space (according to https://physics.stackexchange.com/a/667784/311056 )?
Intergalactic space is estimated to have a mean density of about $1$ molecule per cubic meter. Air has a density of about $ 3 \times 10^{25}$ molecules per cubic meter. 1 light year is about $9 \times 10^{15}$ meters. A crude bit of multiplication would thus suggest that a photon passing through 13.5 billion light years of intergalactic space has about as many encounters with molecules as a photon passing through 4 meters of air. Nothing drives light. An object with momentum does not lose its momentum unless it has an interaction in which it transfers momentum to something else. As to why that is the case, no-one knows - it's just the way the universe is. A photon is an object with momentum, so it keeps going forever unless it interacts with something else. Joseph H's linked answer covers the interaction with an expanding universe, known as cosmological redshift, which dims $^1$ and cools $^2$ distant light, but does not blot it out or change its direction. To change its direction, light needs to be scattered , which only happens in interactions with matter or gravity, not empty space. To cease to exist, light needs to be absorbed , which only happens in interactions with matter. 1: Dimmer light means fewer photons per second per unit area. 2: Cooler light means less energy per photon. In the visible spectrum, red is the lowest-energy color, which is why we call this phenomenon redshift . A reality-check edit: a ratio of smaller numbers, say 1 billion light years of intergalactic space to about 30 centimeters of air, is more in line with the distance actually travelled through space at that density, since going too far back in time means unwinding both gravity concentrating matter into galaxies and cosmological expansion spreading out whatever is left.
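As a sanity check on the "crude bit of multiplication" above, here is the arithmetic spelled out; the densities are the same round numbers quoted in the answer, so the result is only an order-of-magnitude estimate.

```python
# Column density comparison: how much air gives the same number of
# molecule encounters as a long path through intergalactic space?
n_igm = 1.0      # molecules per m^3 (quoted mean density of intergalactic space)
n_air = 3e25     # molecules per m^3 (quoted density of air)
ly    = 9e15     # metres per light year (rounded)

path_igm = 13.5e9 * ly                      # 13.5 billion light years, in metres
column   = n_igm * path_igm                 # molecules per m^2 along that path
print(f"equivalent air path: {column / n_air:.1f} m")                    # ~4 m

# Same ratio for the more realistic 1 billion light years:
print(f"equivalent air path: {1e9 * ly * n_igm / n_air * 100:.0f} cm")   # ~30 cm
```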
{ "source": [ "https://physics.stackexchange.com/questions/667714", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/311056/" ] }
667,819
Suppose I have a bottle of pills and I throw the bottle vertically into the air. I wonder whether the pills inside the bottle also fly up, or whether they stay at the bottom of the bottle. I tried this experiment several times and I think the pills stay put. I took the top of the bottle off, moved the bottle quickly upward, and observed that the pills don't go up. But what if I throw it into the air very quickly? Do they still stay at the bottom? I don't know if my guess is correct, or what physics rule is behind it.
In a vacuum, the moment the bottle leaves your hand it will be in free fall, and both the pills and the bottle will be subject to exactly the same acceleration - namely, $9.8\ \text{m/s}^2$ toward the floor. As a result, they will move together. On the other hand, in real life there will be a small amount of air resistance which acts on the bottle (because the air in the room is stationary) but not on the pills (because the air trapped in the bottle is moving at the same speed as the pills and bottle when they leave your hand). As a result, at the moment the bottle leaves your hand the downward acceleration of the bottle will be slightly more than it would be in vacuum, and therefore slightly more than that of the pills, so the pills will begin to rise very slightly relative to the bottle. Once the pill bottle reaches its apex and begins to fall back down toward the floor, the situation is reversed - the bottle will accelerate toward the floor at slightly less than $9.8\ \text{m/s}^2$ - and the pills will gently settle back down to the bottom. Finally, depending on the properties of the pills and bottle, this effect might be swamped by friction or adhesion which would act to keep the pills stationary relative to the bottle. Experiments would be required to work out what actually happens on a case-by-case basis.
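To get a feel for how small the effect is, here is a rough estimate in which the drag coefficient, bottle mass, cross-section, and throw speed are all assumed numbers, not measurements:

```python
# Rough quadratic-drag estimate:  a_drag = 0.5 * rho * Cd * A * v^2 / m
rho = 1.2      # kg/m^3, density of air
Cd  = 1.0      # assumed drag coefficient for a blunt bottle
A   = 0.002    # m^2, assumed cross-sectional area (~5 cm diameter)
m   = 0.05     # kg, assumed mass of bottle plus pills
v   = 3.0      # m/s, assumed launch speed

a_drag = 0.5 * rho * Cd * A * v**2 / m
print(f"drag deceleration ~ {a_drag:.2f} m/s^2  ({a_drag / 9.8 * 100:.1f}% of g)")
# Roughly 0.2 m/s^2, a couple of percent of g, and it shrinks as the bottle slows:
# the pills would creep upward inside the bottle by only a few millimetres
# before the bottle reaches its apex.
```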
{ "source": [ "https://physics.stackexchange.com/questions/667819", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/276511/" ] }
667,826
Imagine a circuit with a battery and a resistor. Now I connect a resistance-less wire in parallel with the resistor. There will be a potential difference between the two ends of the resistor, so current will flow through it. But current prefers the path of least resistance, so all of it will prefer the wire connected in parallel with the resistor. So the current through the resistor is… zero? But the resistor is the thing because of which a finite current is set up in the first place. What exactly is going on here? Also, if we now apply KVL, we can only take the potential difference across the battery and equate it to zero, but we know it's not zero.
{ "source": [ "https://physics.stackexchange.com/questions/667826", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/283657/" ] }
667,840
I am a software developer trying to teach myself some basic concepts of how electromagnetism works. At high school we were taught about electricity in all kinds of metaphors that, I realize now, really do not fit the model at all. What I understand so far (correct me if I am wrong): Electric energy, in simple terms, is the kinetic energy held by electrons, usually transferred to them by placing them in an electric field. Because of this, electrons are manipulated to travel (drift) through conductors by introducing a charge (let's say a negative one) at one end of the conductor, and a relatively less negative charge at the other end, recursively creating a continuous series of electric fields throughout the conductor. Now something that I can't find anywhere on the internet (I'm sure I haven't looked hard enough) is how these electrons convert their kinetic energy into other forms of energy, or how resistance actually stops electrons from speeding up "forever" until they reach the end of the field. My intuition tells me these electrons must slam into atoms or other particles that hang around within the conductor, transferring their kinetic energy (thus slowing them down) to whatever they collide with. I've assumed through my reading that electrons colliding with atoms (causing them to move) is what causes a conductor to warm up. Are these assumptions in some way correct? This is currently my way to explain how an (Edison) light bulb actually radiates light: drifting electrons heat up a specific part of the circuit (I guess it would be a material with very high resistance) by colliding "a lot" with the particles inside that part of the circuit. Why a material starts emitting photons at a certain temperature isn't part of my explanation, but I think it's not very relevant to what I am currently trying to understand. Then: how does the concept of resistance work in a circuit containing something like an electric engine? I think I know electric engines work by current running through different coils creating magnetic fields, which cause other parts to move (because magnets; I will dive into magnetism later). Since energy cannot just "appear", the kinetic energy in the engine must be transferred from the moving electrons that create the magnetic field. How does this transformation work? And would this transfer of energy also count towards the resistance of the engine within the circuit? I hope I've been able to give context about what I am trying to grasp... Apologies for misusing (or not using at all) the correct terminology.
{ "source": [ "https://physics.stackexchange.com/questions/667840", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/314765/" ] }
667,948
The Standard Model of particle physics is immensely successful. However, it has many experimentally fitted input parameters (e.g. the fermion masses, mixing angles, etc.). How seriously can we take the success of the Standard Model when it has so many input parameters? At face value, if a model has many input parameters, it can fit a large chunk of data. Are there qualitative and, more importantly, quantitative predictions of the Standard Model that are independent of these experimentally fitted parameters? Again, I do not doubt the success of the SM, but this is a concern I would like to see addressed and demystified.
It is inaccurate to think that all of the Standard Model of particle physics was determined through experiment. This is far from true. Most of the time, the theoretical predictions of particle physics were later confirmed experimentally, and quite often to very high accuracy. For example, theoretical physicists predicted the existence of the $W^\pm$ and $Z$ bosons and their masses, the Higgs boson, the existence of gluons, and many of their properties before these particles were even detected. Pauli postulated the existence of the neutrino to explain energy conservation in beta decay, before the neutrino was observed. The anomalous magnetic moment of the electron, whose leading quantum correction was first calculated by Julian Schwinger, agrees with experiment to about 3 parts in $10^{13}$. Parity violation in the weak interaction, predicted by Lee and Yang, was later confirmed experimentally. The positron, predicted by Dirac, was detected four years later by Anderson. The list goes on and the list is huge $^1$ . Particle physics is arguably the most successful physics theory because time and time again its predictions were later confirmed by experiment to surprisingly high accuracy (though sometimes our theories needed to be improved to explain some details of experimental data). I may be biased coming from a theoretical particle physics background, but I've always agreed that the Standard Model is the most mathematically beautiful, deep and profound model in all of physics. This is reflected in its almost miraculously accurate predictive power. $^1$ Some more of the highlights: 1935 Hideki Yukawa proposes the strong force in order to explain interactions between nucleons. 1947 Hans Bethe uses renormalization for the first time. 1960 Yoichiro Nambu proposes SSB (chiral symmetry breaking) in the strong interaction. 1964 Peter Higgs and Francois Englert propose the Higgs mechanism. 1964 Murray Gell-Mann and George Zweig put forth the basis of the quark model. 1967 Steven Weinberg and Abdus Salam propose the electroweak interaction/unification.
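As one concrete, parameter-light example of the kind of prediction being described: Schwinger's 1948 leading-order result for the electron's anomalous magnetic moment is $a = \alpha/2\pi$, with the fine-structure constant as the only input. A quick back-of-envelope check (the constants below are rounded, and the quoted experimental number is approximate):

```python
import math

alpha = 1 / 137.035999     # fine-structure constant (approximate)
a_schwinger  = alpha / (2 * math.pi)   # leading QED prediction, 1948
a_experiment = 0.00115965218           # measured value (approximate)

print(f"Schwinger term : {a_schwinger:.8f}")     # ~0.00116141
print(f"Experiment     : {a_experiment:.8f}")
print(f"Relative gap   : {abs(a_schwinger - a_experiment) / a_experiment:.2%}")
# The remaining ~0.15% gap is closed by higher-order Standard Model corrections.
```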
{ "source": [ "https://physics.stackexchange.com/questions/667948", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/164488/" ] }
668,363
Why does something round roll down faster than something square? That's a question given to me by my five year old son. So let's not get into a detailed discussion about what is 'square', what is 'round', or what 'rolling down' means. I thought this might be interesting enough to ask here. The question relates to the simple observation that 'round' objects roll down a 'hill' or 'slope' very fast. Can we say something insightful or intuitive about this? For instance, I can imagine that some objects might roll down faster on a straight slope, while others do better on a slope with bumps. What makes an object roll/fall down fast?
If I was answering a 5 year old, I would probably say something like this: Because the corners get in the way, but round things don't have corners. Also, this example comes to mind: When you lie down on a hill and roll down, if you stick out your elbows away from your body, they will get in the way (hit the ground) and slow you down.
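For the adults in the room, a small back-of-envelope way to quantify "the corners get in the way" (my own illustration, not part of the original answer): to roll, a square block has to tip over each corner, which means lifting its centre of mass, whereas a cylinder's centre stays at a constant height above the slope.

```python
import math

# A unit square resting on a face has its centre of mass 0.5 above the surface.
# To tip over a corner, the centre must pass directly over that corner,
# at a height equal to half the diagonal.
h_flat   = 0.5
h_corner = math.sqrt(2) / 2          # ~0.707

rise = h_corner - h_flat
print(f"centre of mass must rise by {rise:.3f} of the side length at every quarter turn")
# A cylinder's centre never rises, so none of its energy is wasted lifting itself
# over corners (and none is lost slamming down onto the next face).
```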
{ "source": [ "https://physics.stackexchange.com/questions/668363", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/214570/" ] }
668,369
I am trying to vary the laplace-Beltrami operator with respect to the metric. Using the following two rules \begin{align} \frac{\delta g^{\alpha \beta}}{\delta g^{\mu \nu}} &=\frac{1}{2} \left[\delta^\alpha_\mu \delta^\beta_\nu + \delta^\alpha_\nu \delta^\beta_\mu \right]\\ \frac{\delta g_{\alpha \beta}}{\delta g^{\mu \nu}} &=- \frac{1}{2} \left[g_{\mu \alpha} g_{\nu \beta} + g_{\mu \beta} g_{\nu \alpha} \right], \end{align} I have come across the following conundrum when taking the metric variation of the Laplace-Beltrami operator $\square_x = g^{\mu \nu} \partial^x_\mu \partial^x_\nu$ \begin{align} \frac{\delta \square}{\delta g^{\mu \nu}} &= \frac{\delta}{\delta g^{\mu \nu}} \left[g_{\alpha \beta} \partial^\alpha \partial^\beta \right]=-\frac{1}{2} \left[g_{\mu \alpha} g_{\nu \beta} + g_{\mu \beta} g_{\nu \alpha} \right] \partial^\alpha \partial^\beta \\ &=-\partial_\mu \partial_\nu\\ &=\frac{\delta}{\delta g^{\mu \nu}} \left[g^{\alpha \beta} \partial_\alpha \partial_\beta \right] = + \frac{1}{2} \left[\delta^\alpha_\mu \delta^\beta_\nu + \delta^\alpha_\nu \delta^\beta_\mu \right] \partial_\alpha \partial_\beta\\ &=+ \partial_\mu \partial_\nu. \end{align} Where did I make a mistake ? I assume it has something to do with the gradient being a covariant object, but I am yet to spot the missing minus sign - where does it come into play ? Thanks in advance for your help. Edit: How I derived the formulas for the metric variation: Starting from $g_{\mu \alpha} g^{\alpha \nu} = \delta^\nu_\mu$ , I find that \begin{align} \delta g_{\mu \alpha} g^{\alpha \nu} + g_{\mu \alpha} \delta g^{\alpha \nu} &=0 \\ \Rightarrow \quad g_{\lambda \nu} g^{\alpha \nu} \delta g_{\mu \alpha} = \delta g_{\mu \lambda} &= -g_{\lambda \nu} g_{\mu \alpha} \delta g^{\alpha \nu} \\ \Leftrightarrow \quad \frac{\delta g_{\mu \lambda}}{\delta g^{\alpha \nu}} &= - g_{\lambda\nu} g_{\mu \alpha}, \end{align} which I subsequently symmetrized in the Lorentz indices. Then I used that $g^{\mu \nu} g_{\mu \nu}=d$ to derive that \begin{align} \delta(g^{\mu \nu} g_{\mu \nu}) &=0 = g^{\mu \nu } \delta g_{\mu \nu} + \delta g^{\mu \nu} g_{\mu \nu} \\ \Rightarrow \quad g^{\mu \nu} \delta g_{\mu \nu} &=- \delta g^{\mu \nu} g_{\mu \nu}, \end{align} such that the other rule follows analogously with a minus sign and raised indices. Is this correct ?
{ "source": [ "https://physics.stackexchange.com/questions/668369", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/315023/" ] }