327,085
When a shell explodes, we get several pieces with different kinetic energies, but momentum is conserved since no external force is acting. However, the sum total of the kinetic energies of the pieces is more than the original kinetic energy of the shell. Now, by conservation of energy, work must be done to change the kinetic energy of a system. This work is done by the internal forces. But for conservation of momentum, the internal forces should add to zero. Hence there should be no net force and hence no work. Then why is the momentum conserved if the internal forces do work?
Yes, we have. As other answers have explained, this is easy to do in the radio regime, but over the past fifteen years or so we've been able to do it for light too. The landmark publication here is E. Goulielmakis et al., Direct measurement of light waves, Science 305, 1267 (2004); author eprint. It broke new ground on a method called attosecond streaking that lets us see things like this: [figure: streaking spectrogram (left) and reconstructed waveform (right)] On the left you've got the (mildly processed) raw data, and on the right you've got the reconstruction of the electric field of an infrared pulse that lasts about four cycles. To measure this, you start with a gas of neon atoms, and you ionize them with a single ultrashort burst of UV radiation that lasts about a tenth of the period of the infrared. (For comparison, the pulse length, $250\:\mathrm{as}$, is to one second as one second is to $125$ million years.) This frees the electron from the atom, and it does so at some precisely controlled point within the infrared pulse. The electric field of the infrared can then have a strong influence on the motion of the electron: it will be forced up and down as the field oscillates, but depending on when the electron is released this will accumulate to a different impulse, and therefore a different final energy. The final measurement of the electron's energy, as a function of the relative delay between the two pulses (top left), clearly shows the traces of the electric field of the infrared pulse.
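A toy numerical model can make the streaking idea concrete. This is my own illustrative sketch with arbitrary units and made-up pulse parameters, not the experimental values: the drift momentum an electron picks up equals (up to sign and charge factors) the vector potential of the infrared field at the moment of release, so scanning the release time maps out the field.

```python
import numpy as np

# Toy attosecond-streaking model (arbitrary units, charge = 1).
# An electron released at time t0 into an oscillating field E(t)
# ends up with a drift momentum equal, up to sign, to the
# vector potential A(t0) at the moment of release.

def field(t, omega=1.0, tau=6.0):
    # few-cycle pulse: carrier wave times a Gaussian envelope
    return np.cos(omega * t) * np.exp(-(t / tau) ** 2)

t = np.linspace(-30.0, 30.0, 20001)
dt = t[1] - t[0]
E = field(t)

# drift momentum for release at t0: p(t0) = -integral_{t0}^{inf} E dt
tail = np.cumsum(E[::-1])[::-1] * dt
p_final = -tail

# the measured final energy, as a function of release time,
# oscillates at the field period and traces out the pulse
energy = 0.5 * p_final ** 2
```

An electron released after the pulse has passed receives no kick, which is why the trace dies out at the edges of the delay scan.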
{ "source": [ "https://physics.stackexchange.com/questions/327085", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/150492/" ] }
327,186
It is said that the constant interaction between the molecules of a gas (in the form of collisions) acts as a randomising influence and prevents the gas molecules from settling. But given the force of gravity, won't the gas molecules settle at some point of time?
You are used to all collisions being somewhat lossy – that is, when you think of most collisions, a little bit of the kinetic energy is lost at each collision so the particles will slow down. If they are subject to gravity, they will eventually settle. By contrast, the collisions between gas molecules are perfectly elastic – for a non-reactive gas (mixture), there is no mechanism by which the sum of kinetic energies after the collision is less than before. * Even if an individual gas molecule briefly found itself at rest against the bottom of the container, the thermal motion of the molecules of the container would almost immediately give it a "kick" and put it back into circulation. There is a theorem called the equipartition theorem that tells us that for each degree of freedom, the gas molecules will on average carry $\frac12 kT$ of energy. This is an average – individual molecules may at times have more or less. But the average must be maintained – and this means the gas molecules keep moving. One way you could get the molecules of the gas to settle at the bottom of the container would be to make the walls of the container very cold – by taking thermal energy away, the molecules will eventually move so slowly that the effect of gravity (and intermolecular forces) will dominate. That won't happen by itself – you need to remove the energy somehow. To estimate the temperature you would need: for a container that is 10 cm tall, the gravitational potential energy difference of a nitrogen molecule is $mgh = 28 \cdot 1.67\cdot 10^{-27}\ \mathrm{kg}\cdot 9.8\ \mathrm{m\ s^{-2}}\cdot 0.1\ \mathrm m= 4.6\cdot 10^{-26}~\mathrm{J}$ . Setting that equal to $\frac12 kT$ gives us a temperature of $$T = \frac{2 m g h}{k} = 6.6\ \mathrm{mK}$$ That's millikelvin. So yes – when things get very, very cold, gravity becomes a significant factor and air molecules may settle near the bottom of your container. * Strictly speaking, this is a simplification. 
With sufficient energy, some collisions can lead to electronic excitation and even ionization of the molecules. The de-excitation of these states can result in radiative "loss" of energy, but if the system is truly closed (perfectly isolated) the radiation will stay inside until it's re-absorbed. Still, this means that, at least for a little while, kinetic energy may appear to be "lost". Similarly, there are some vibrational modes for molecules that get excited at sufficiently high energy/temperature; in these modes, energy moves from "kinetic" to "potential" and back again – so that it is not "kinetic" for a little bit of the time. An important consideration in all this is "what is the temperature of anything that the gas can exchange energy with". That is not just the walls of the container (although their temperature is very important), but also the temperature of anything the gas "sees" – since every substance at non-zero temperature will be a black body emitter (some more efficiently than others), if the gas can exchange radiation with a cooler region, this will provide a mechanism for the gas to cool. And if the gas gets cold enough, gravity wins.
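The arithmetic of that settling-temperature estimate is easy to check in a few lines (same assumptions as the answer: one N₂ molecule of 28 u in a 10 cm container; constants are standard values):

```python
# Settling-temperature estimate for N2 in a 10 cm container.
k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 28 * 1.67e-27    # kg, nitrogen molecule (28 u)
g = 9.8              # m/s^2
h = 0.1              # m, container height

U = m * g * h        # gravitational potential energy difference
T = 2 * U / k_B      # from m*g*h = (1/2) k_B T

print(f"U = {U:.2e} J")         # ~4.6e-26 J
print(f"T = {T * 1e3:.1f} mK")  # a few millikelvin
```

The result confirms that gravity only wins over thermal motion at millikelvin temperatures.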
{ "source": [ "https://physics.stackexchange.com/questions/327186", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/142799/" ] }
327,551
From Newton's law of gravitation we know that: $$F=G\frac{m_1m_2}{d^2}$$ For simplicity, let's say that both masses are $1\;\mathrm{kg}$ and that they are $1\;\mathrm{m}$ apart, so the attraction force in newtons is numerically equal to $G$. Hence $F= 6.67\times 10^{-11}\;\mathrm{N}$. Now what if you had two $0.5\;\mathrm{kg}$ masses glued to each other, and another two $0.5\;\mathrm{kg}$ masses glued together $1\;\mathrm{m}$ away? If we were to find the gravitational attraction force between two $0.5\;\mathrm{kg}$ masses that are $1\;\mathrm{m}$ apart and multiply it by two, since we have two of them, that would yield a different answer than treating the two glued $0.5\;\mathrm{kg}$ masses as one entity. Why does it yield a different answer? I observed this same phenomenon with Coulomb's law. (Serway & Faughn) Suppose that $1.00\;\mathrm{g}$ of hydrogen is separated into electrons and protons. Suppose also that the protons are placed at the Earth's north pole and the electrons are placed at the south pole. What is the resulting compression force on the Earth? If I were to find the attraction force of one proton and electron and then multiply it by Avogadro's constant, that would yield a different answer than saying that the charge of each particle is the elementary charge times Avogadro's constant. So which is the proper way to do it?
You need to multiply by four not by two. To see why let's draw the situation: You are assuming the situation is as shown in the top diagram. So the two $M_1$s attract each other and the two $M_2$s attract each other. Those are the forces shown by the red lines. But you also need to include the force between $M_1$ on one side and $M_2$ on the other. Those are the green lines in the second diagram. So the total force will be: $$ F = F_{1-1} + F_{2-2} + F_{1-2} + F_{2-1} $$ Suppose we make the mass of each of the balls $M$, so if we combine them the total mass on each side is $2M$. If we combine the masses first then calculate the force we get: $$ F = \frac{G(2M)(2M)}{d^2} = 4\frac{GM^2}{d^2} $$ If we calculate the forces treating the masses separately we get${}^1$: $$\begin{align} F &= F_{1-1} + F_{2-2} + F_{1-2} + F_{2-1} \\ &= \frac{GMM}{d^2} + \frac{GMM}{d^2} + \frac{GMM}{d^2} + \frac{GMM}{d^2} \\ &= 4\frac{GM^2}{d^2} \end{align}$$ So the forces are the same. This also applies to the example of the electrostatic force that you mention. ${}^1$ Strictly speaking the distances $M_1 \rightarrow M_2'$ and $M_2 \rightarrow M_1'$ are slightly greater than the distance $M_1 \rightarrow M_1'$ and $M_2 \rightarrow M_2'$ but we'll assume this difference is negligibly small.
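The bookkeeping can be checked in a few lines. This is a minimal sketch, assuming the glued halves sit at the same point so that all four cross-pair distances equal 1 m (the footnote's caveat):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def grav(m1, m2, d):
    """Newtonian attraction between two point masses."""
    return G * m1 * m2 / d**2

left = [0.5, 0.5]    # two glued 0.5 kg halves
right = [0.5, 0.5]   # the other glued pair, 1 m away

# all four cross-pair forces: 1-1', 1-2', 2-1', 2-2'
pairwise = sum(grav(a, b, 1.0) for a in left for b in right)

# treating each glued pair as a single 1 kg mass
combined = grav(sum(left), sum(right), 1.0)

print(pairwise, combined)  # equal: four terms, not two
```

Summing only two of the four pair forces is exactly the factor-of-two discrepancy the question describes.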
{ "source": [ "https://physics.stackexchange.com/questions/327551", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/145536/" ] }
327,554
When the angle of incidence is less than the critical angle, why is some light still reflected? When the angle of incidence is greater than the critical angle, all light is reflected – so why isn't all light transmitted when it's less?
{ "source": [ "https://physics.stackexchange.com/questions/327554", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/141935/" ] }
329,044
Sorry if this is a trivial question. Why does gravity act at the center of mass? If we have a solid $E$, shouldn't gravity act on all the points $(x,y,z)$ in $E$? Why then, when we do problems, do we only consider the weight force from the center of mass?
Suppose I have a collection of $n$ vectors $x_i\quad \forall i\in(1,n), i\in \mathbb{Z}$ such that the corresponding mass at each $x_i$ is $m_i$. This is your body $E$, and if the total mass of your body is $M$, then $$M=\sum_{i=1}^{n}m_i$$ In that case, if $E$ is subjected to a uniform acceleration field $\vec{g}$, as specified in the answer above, then the net force acting on the body is $$F=\sum_{i=1}^{n}m_i \ddot{x}_i$$ But the force on the entire body would be $F=Mg$. Let there be a point $X$ on the body such that $\ddot{X}=g$; then I can write $F= M\ddot{X}=\sum_{i=1}^{n}m_i \ddot{x}_i$. From this you can interpret that $$\ddot{X}=\frac{\sum_{i=1}^{n}m_i \ddot{x}_i}{\sum_{i=1}^{n}m_i}$$ And the centre of mass is defined as $$\begin{equation}\label{com} x_{com}=\frac{\sum_{i=1}^{n}m_ix_i}{\sum_{i=1}^{n}m_i} \end{equation}$$ Since the body $E$ has constant mass, you can recover the definition of the centre of mass above by simple integration.
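A small numerical sketch of the same statement, with toy masses and positions of my own choosing: summing $m_i g$ over every point of the body gives exactly the same net force as applying $Mg$ at the centre of mass.

```python
import numpy as np

# Toy discrete body: masses m_i at 1D positions x_i,
# in a uniform acceleration field g.
m = np.array([1.0, 2.0, 3.0])   # kg
x = np.array([0.0, 1.0, 4.0])   # m
g = 9.8                         # m/s^2

# centre of mass: x_com = sum(m_i x_i) / sum(m_i)
x_com = (m * x).sum() / m.sum()

# force summed point by point vs. M*g applied at the COM
F_pointwise = (m * g).sum()
F_com = m.sum() * g

print(x_com, F_pointwise, F_com)
```

This is why, for a uniform field, the distributed weight can be replaced by a single force at the centre of mass in statics problems.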
{ "source": [ "https://physics.stackexchange.com/questions/329044", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/144977/" ] }
330,349
I have been out of physics for some time now since my childhood, so please bear with me if the question below feels too novice. I grew up with the understanding that the nuclear fusion reaction is still a dream of many people, as it's a source of clean energy without the side effects of nuclear waste that we observe in nuclear fission. Now recently I was checking the principle on which the hydrogen bomb works, and I was shocked to learn that it uses nuclear fusion to generate all that energy. This contradicted my understanding: nuclear fusion is not just a dream, it is already a reality. So if we have already achieved nuclear fusion, why can't we build a nuclear fusion reactor out of it to generate all the power we need? Also, why can't we have a small-scale fusion reaction on Jupiter (as mentioned in my other question) that can help us take over the outer planets of the solar system? Also, I just wanted to know if we can continue this fusion reaction to generate precious heavy metals – is it possible?
The contrast between a Molotov cocktail, a favorite of anarchists, and a car engine is a good analogy. The technology needed to contain the energies in a fusion reaction is much harder than the one needed for a car engine because of the MeV-scale energies involved in initiating fusion. Once initiated it is explosive, so it must be engineered into small explosions from which energy can be extracted continuously. Various ways of controlling fusion in a hot plasma of fusible materials, mainly isotopes of hydrogen, have been proposed and are being worked on. The tokamak is the basis of the international collaboration aiming to build an industrial prototype, ITER. It is mainly an engineering problem, coupled with the sociological problem of so many engineers and scientists working together in a project controlled by many research institutes ("too many cooks spoil the broth"). Also just wanted to know if we can continue this fusion reaction to generate precious heavy metals, is it possible? Heavy metals are on the wrong side of the binding-energy curve: fusion releases energy only for elements up to iron or so. Each specific reaction would have to be considered, and it would be a completely different problem.
{ "source": [ "https://physics.stackexchange.com/questions/330349", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/146/" ] }
330,536
I've started studying QFT this year, and in trying to find a more rigorous approach to the subject I ended up finding lots of people saying that "there is no way known yet to make QFT rigorous when there are interactions". As for the textbook approach, even without interactions it already seems not very rigorous; still, approached in the right way, it seems possible to make it precise. Now, the rigour issue with interactions in QFT isn't explained in the books I'm using, and I confess I still don't get it. I mean: some people say the problem is that Dyson's series in QFT doesn't converge; some people say the issue is that the Fock space representation cannot be built with interactions, and hence particles don't even exist in this case. Some people even say that it is not even possible to describe the theory with Hilbert spaces. And there are quite a few more points people make on this matter. My question here isn't "how to solve these issues", because it seems that up to this day no one knows. My question is: what really is the problem, in more concrete terms? What are the problems that make QFT with interactions non-rigorous? How do interactions cause these problems, in contrast to free QFT?
There are many different problems with interactions; or, rather, many manifestations of the same problem. For example, interactions are always non-linear in the equations of motion, e.g., $$ (\partial^2+m^2)\phi=\lambda\phi^3 $$ or similar equations for QED. As the operators are distributions, their products are ill-defined, and there is no rigorous method to make sense out of $\phi^3$. It is just a meaningless expression (by physicists' standards, one could at this point mumble something about normal ordering, but this doesn't really achieve much). Only in the case $\lambda=0$ does the equation above have a meaningful interpretation: it becomes a well-defined differential equation for a distribution, which can be made very rigorous within the context of distribution theory. For general $\lambda$, the equation is just meaningless. As free fields are well-defined and understood, one may attempt to fix the problem above by switching into the interaction picture, $$ \phi=U\Phi U^\dagger $$ where $\Phi$ is a free field, which is a well-defined object. Here Haag's theorem enters the picture and tells us that $U$ doesn't exist. Yet we physicists pretend that it does exist, and write $$ U=\mathrm {Te}^{iS_\mathrm{int}} $$ only to realise, later on, that $S_\mathrm{int}$ is plagued by divergences (for example, in the form of divergent counter-terms in the interaction Lagrangian). This is the price we must pay to have a finite $S$ matrix: as $U$ cannot possibly exist, we must encounter divergences in its very definition, or otherwise the theory would be utterly inconsistent. This is the point of view held by some people: QFT evades Haag's theorem through renormalisation, and only because the latter is an intrinsically ill-defined operation. One may even give up on trying to formulate the theory from first principles, and content oneself with defining the theory by its Feynman rules. 
Setting aside the fact that the perturbative series is asymptotic, the Feynman rules are meaningless too from a rigorous point of view. For one thing, they include propagators and products thereof; and these objects are distributions as well, so their product is ill-defined. This fact of course manifests itself once again through divergences: Feynman diagrams include all sorts of divergences, which cannot be accommodated within a mathematically rigorous theory. This approach is typically hopeless too. The only way to really fix these problems is to work on a lattice. This is because when you go to a discrete space-time, distributions lose their singular nature, and you can use standard functions (i.e., the fields become operator-valued functions). For example, the Dirac delta on the r.h.s. of the canonical commutation relations becomes a Kronecker delta, so the l.h.s. loses its status as a distribution. In more practical terms, when you work on a lattice everything is convergent, and so the theory makes sense, at least from a perturbative point of view. More fundamentally, when you work on a lattice, the degrees of freedom become finite in number, and so Haag's theorem doesn't apply anymore.
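One way to see why the lattice helps: once space is discrete, the field is just an array of numbers, and $\phi^3$ is an ordinary pointwise product. Here is a minimal classical (not quantum) 1+1D sketch with toy parameters of my own choosing; it only illustrates that the nonlinear term is perfectly well-defined on a lattice, not anything about the quantum theory.

```python
import numpy as np

# Classical 1+1D lattice version of (d_t^2 - d_x^2 + m^2) phi = lam*phi^3.
# On the lattice, phi is an array and phi**3 is a pointwise product,
# so the continuum's ill-defined "product of distributions" never appears.
N, a, dt = 200, 0.1, 0.01   # sites, lattice spacing, time step
m2, lam = 1.0, 0.5          # mass^2 and coupling (toy values)

x = a * np.arange(N)
phi = np.exp(-(x - x.mean()) ** 2)   # smooth initial bump
pi = np.zeros(N)                     # conjugate momentum

def laplacian(f):
    # periodic finite-difference second derivative
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / a ** 2

for _ in range(1000):                # symplectic Euler steps
    pi += dt * (laplacian(phi) - m2 * phi + lam * phi ** 3)
    phi += dt * pi
```

The quantum analogue of this discretisation is what lattice field theory computations actually evaluate.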
{ "source": [ "https://physics.stackexchange.com/questions/330536", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/21146/" ] }
330,994
Light travels at speed $x$ through a vacuum, then encounters a physical medium and slows down, only to leave the physical medium and re-enter vacuum. The light immediately re-accelerates to speed $x$, the speed it had before going through the physical medium. How does this happen? What is the cause of this?
When light travels through a physical medium the photons don't actually slow down. They still travel at the speed of light. What makes it look like it slows down is the interactions between the photons and the physical medium. For example, the electrons in atoms can absorb photons and go to a higher energy state, and then re-emit the photons when they move back to their normal energy state. How long it takes between the absorption and emission of the photons determines how fast the light moves through a medium. But, if photons are absorbed and re-emitted, why do they have to get re-emitted in the same direction? Why not any direction? If the photons really were fully absorbed by the atoms, then that is what one would expect. One would also expect that some photons would bump into more atoms and some into fewer, and thus sometimes they would take a long time to go through the medium and sometimes a pretty short time. However, that is not what you actually measure: the photons always take the same amount of time to travel through the medium. The photons are actually not fully absorbed; one can think of them as being "virtually absorbed". They follow every possible path and interact with all of the atoms. The paths that don't cancel out correspond to the most likely paths that the photon will travel on. If you mathematically add all of these waves together, each traveling at the speed of light, you get a wave that is traveling slower.
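The last point, that a sum of waves all traveling at $c$ can mimic a slower wave, can be illustrated with the classic thin-slab argument (the phase lag `phi` here is an assumed toy value, not a material property): the transmitted field is the incident wave plus a small scattered wave lagging it by 90°, and to first order their sum equals the incident wave with a phase delay.

```python
import cmath

# Thin-slab toy model: transmitted = incident + small scattered
# wave that lags by 90 degrees (factor -1j). phi is a toy value.
phi = 0.05
E_in = 1.0 + 0.0j

E_out = E_in * (1 - 1j * phi)           # incident + scattered
E_delayed = E_in * cmath.exp(-1j * phi)  # a true phase delay

# to first order in phi the two agree: the superposition of
# waves moving at c behaves like a delayed (slower) wave
print(abs(E_out - E_delayed))            # O(phi^2), tiny

# stacking N thin layers compounds the delay:
N = 100
print(abs((1 - 1j * phi / N) ** N - cmath.exp(-1j * phi)))
```

The accumulated phase delay across a macroscopic thickness is exactly what we summarise as a refractive index $n > 1$.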
{ "source": [ "https://physics.stackexchange.com/questions/330994", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/116454/" ] }
331,632
A nonlinear spring whose restoring force is given by $F=-kx^3$, where $x$ is the displacement from equilibrium, is stretched a distance $A$. Attached to its end is a mass $m$. Calculate....(I can do that) ..suppose the amplitude of oscillation is increased, what happens to the period? Here's what I think: if the amplitude is increased the spring possesses more total energy, so at equilibrium it is traveling faster than before because it possesses more kinetic energy. I think the spring travels faster at a given displacement from equilibrium, but it also has to travel more distance, so I can't conclude anything. I was thinking about solving $$mx''=-kx^3$$ but realized this is a very hard job. Any ideas?
The potential energy is $U\left(x\right) = kx^4/4$ since $-d/dx\left(kx^4/4\right) = -kx^3 = F$, and the energy $$ E = \frac{1}{2}m\left(\frac{dx}{dt}\right)^2 + \frac{1}{4}kx^4 $$ is conserved. From the above you can show that $$ \begin{eqnarray} dt &=& \pm \ dx \sqrt{\frac{m}{2E}}\left(1-\frac{k}{4E}x^4\right)^{-1/2} \\ &=& \pm \ dx \sqrt{\frac{2m}{k}} \ A^{-2} \left[1-\left(\frac{x}{A}\right)^4\right]^{-1/2} \end{eqnarray} $$ where the amplitude $A = \left(4E / k\right)^{1/4}$ can be found from setting $dx/dt = 0$ in the expression for the energy and solving for $x$. The period is then $$ \begin{eqnarray} T &=& 4 \sqrt{\frac{2m}{k}} \ A^{-2} \int_0^A dx \left[1-\left(\frac{x}{A}\right)^4\right]^{-1/2} \\ &=& 4 \sqrt{\frac{2m}{k}} \ A^{-1} \int_0^1 du \left(1-u^4\right)^{-1/2} \\ &=& \left(4 \sqrt{\frac{2m}{k}} I\right) A^{-1} \\ &\propto& A^{-1} \end{eqnarray} $$ where $u = x/A$ and $I = \int_0^1 du \left(1-u^4\right)^{-1/2} \approx 1.31$ (see this ). You can repeat the above for a more general potential energy $U\left(x\right) = \alpha \left|x\right|^n$, where you should find that $$ dt = \pm \ dx \sqrt{\frac{m}{2\alpha}} \ A^{-n/2} \left[1-\left(\frac{\left|x\right|}{A}\right)^n\right]^{-1/2} $$ and $$ \begin{eqnarray} T_n &=& \left(4 \sqrt{\frac{m}{2\alpha}} I_n\right) A^{1-n/2} \\ &\propto& A^{1-n/2} \end{eqnarray} $$ where $$ I_n = \int_0^1 du \left(1-u^n\right)^{-1/2} $$ can be evaluated in terms of gamma functions (see this ). This is in agreement with the above for $\alpha = k/4$ and $n=4$, and with Landau and Lifshitz's Mechanics problem 2a of section 12 (page 27), where they find that $T_n \propto E^{1/n-1/2} \propto A^{1-n/2}$.
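The integral $I$ and the scaling $T \propto A^{-1}$ are easy to verify numerically without special functions. The substitution below, $u = 1 - s^2$, is my own choice to tame the integrable endpoint singularity at $u = 1$:

```python
import numpy as np

# Check I = int_0^1 (1 - u^4)^(-1/2) du  ~ 1.31.
# With u = 1 - s^2 the integrand becomes smooth:
#   I = int_0^1 2 / sqrt(1 + u + u^2 + u^3) ds,  u = 1 - s^2.
s = np.linspace(0.0, 1.0, 100001)
u = 1 - s ** 2
f = 2 / np.sqrt(1 + u + u ** 2 + u ** 3)
I = np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2   # trapezoid rule

m, k = 1.0, 1.0   # toy values

def period(A):
    # T = 4 * sqrt(2m/k) * I / A, from the derivation above
    return 4 * np.sqrt(2 * m / k) * I / A

print(I)                           # ~1.311
print(period(1.0) / period(2.0))   # doubling A halves T
```

So for the quartic potential, larger amplitude means a shorter period, the opposite of what happens for $n < 2$ potentials.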
{ "source": [ "https://physics.stackexchange.com/questions/331632", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/144977/" ] }
331,699
Our teacher told us that protons are nearly 1800 times heavier than electrons. Is there any known reason why this is so? Or is this just an empirical value, one we do not know the reason for?
There are multiple reasons why protons are heavier than electrons. As you suggested, there is empirical and theoretical evidence behind this. I'll begin with the empirical, since it has important historical context associated with it. As a preface, this will be a fairly long post, as I'll be explaining the context behind the experiments and the theories. Empirical Electron Mass Measuring the mass of an electron historically is a multi-step process. First, the charge is measured with the Millikan oil drop experiment; then the charge-to-mass ratio is measured with a variation of J.J. Thomson's experiment. Millikan Oil Drop In 1909, Robert A. Millikan and Harvey Fletcher measured the charge of the electron by suspending charged droplets of oil in an electric field. By suspending the oil droplets such that the electric force cancelled out the gravitational force, the charge of the oil droplet may be determined. Repeat the experiment many times for smaller and smaller oil droplets, and it may be determined that the charges measured are integer multiples of a single value: the charge of an electron. $$e = 1.60217662 \times 10^{-19} \, \mathrm{C}$$ J.J. Thomson's Experiments In 1897 J. J. Thomson proved that cathode rays (a beam of electrons) were composed of negatively charged particles with a huge charge-to-mass ratio (as compared to ionized elements). The experiment began with determining whether cathode rays could be deflected by an electric field. The cathode ray was shot into an evacuated Crookes tube, within which it would pass between two plates before impacting a fluorescent screen. When the plates were charged, the beam would deflect before hitting the screen, thereby proving that cathode rays carried a charge. Later he would perform a similar experiment, but exchange the electric field for a magnetic field. This time, the magnetic field would induce centripetal acceleration upon the cathode ray and produce circles. 
By measuring the radius of the circle and the strength of the magnetic field produced, the charge-to-mass ratio ($e/m_e$) of the cathode ray would be obtained. $$e/m_e = 1.7588196 \times 10^{11} \, \mathrm{C} \cdot \mathrm{kg}^{-1}$$ Divide the elementary charge obtained in the Millikan oil experiment by this ratio, account for uncertainty, and the mass of the electrons in the cathode ray is obtained. $$m_e = \frac{e}{\frac{e}{m_e}} = \frac{1.60217662 \times 10^{-19} \, \mathrm{C}}{1.7588196 \times 10^{11} \, \frac{\mathrm{C}}{\mathrm{kg}}} = 9.10938575 \times 10^{-31} \, \mathrm{kg}$$ Empirical Proton Mass Ernest Rutherford is credited with the discovery of the proton in 1917 (reported 1919). In that experiment he detected the presence of the hydrogen nucleus in other nuclei. Later he named that hydrogen nucleus the proton, believing it to be the fundamental building block of other elements. Since ionized hydrogen consists only of a proton, he correctly deduced that protons are fundamental building blocks of the nuclei of elements; however, until the discovery of the neutron, ionized hydrogen and the proton would remain interchangeable. How, then, was the proton mass measured? By measuring the mass of ionized hydrogen. $$m_p = 1.6726219 \times 10^{-27} \, \mathrm{kg}$$ This is done in one of several ways, only one of which I'll cite here. J.J. Thomson Variation Repeat J.J. Thomson's experiment with magnetic deflection, but swap out the cathode ray for ionized hydrogen. Then you may measure the charge-to-mass ratio ($e/m$) of the ions. Since the charge of a proton is equal in magnitude to the charge of an electron: $$m_p = \frac{e}{\frac{e}{m}} = \frac{1.60217662 \times 10^{-19} \, \mathrm{C}}{9.5788332 \times 10^{7} \, \frac{\mathrm{C}}{\mathrm{kg}}} = 1.67262 \times 10^{-27} \, \mathrm{kg}$$ Other variations Other variations may include the various methods used in nuclear chemistry to measure hydrogen or the nucleus. 
Since I'm not familiar with these experiments, I'm omitting them. Empirical Proton to Electron Mass Ratio So now we've determined: $$m_p = 1.6726219 \times 10^{-27} \, \mathrm{kg}$$ and $$m_e = 9.10938575 \times 10^{-31} \, \mathrm{kg}$$ Using the two values and arithmetic: $\frac{m_p}{m_e} = \frac{1.6726219 \times 10^{-27} \, \mathrm{kg}}{9.10938575 \times 10^{-31} \, \mathrm{kg}} = 1836$, or roughly $1800$. Theoretical Proton to Electron Mass Ratio Theoretically, you first need to understand a basic principle of particle physics: mass and energy take on very similar meanings. In order to simplify calculations and use a common set of units, particle physics uses variations of the $\mathrm{eV}$. Historically this developed from the usage of particle accelerators, in which the energy of a charged particle is $qV$. For electrons, or groups of electrons, the $\mathrm{eV}$ was convenient to use. As this extends into particle physics as a field, the convenience remains, because anything developed theoretically needs to produce experimental values, and using variations of the $\mathrm{eV}$ removes the need for complex conversions. These natural units are: $$\begin{array}{|c|c|c|} \hline \text{Measurement} & \text{Unit} & \text{SI value of unit}\\ \hline \text{Energy} & \mathrm{eV} & 1.602176565(35) \times 10^{−19} \, \mathrm{J}\\ \hline \text{Mass} & \mathrm{eV}/c^2 & 1.782662 \times 10^{−36} \, \mathrm{kg}\\ \hline \text{Momentum} & \mathrm{eV}/c & 5.344286 \times 10^{−28} \, \mathrm{kg \cdot m/s}\\ \hline \text{Temperature} & \mathrm{eV}/k_B & 1.1604505(20) \times 10^4 \, \mathrm{K}\\ \hline \text{Time} & ħ/\mathrm{eV} & 6.582119 \times 10^{−16} \, \mathrm{s}\\ \hline \text{Distance} & ħc/\mathrm{eV} & 1.97327 \times 10^{−7} \, \mathrm{m}\\ \hline \end{array}$$ Now then, what are the rest energies of a proton and an electron? 
$$\text{electron} = 0.511 \, \frac{\mathrm{MeV}}{c^2}$$ $$\text{proton} = 938.272 \, \frac{\mathrm{MeV}}{c^2}$$ As we did with the experimentally determined masses, $$\frac{m_p}{m_e} = \frac{938.272 \, \frac{\mathrm{MeV}}{c^2}}{0.511 \, \frac{\mathrm{MeV}}{c^2}} = 1836$$ which matches the previously determined value. Why? I'll preface this section by pointing out that "why" is a contentious question to ask in any science without being much more specific. In this case, you may be wondering what causes the proton mass to be 1800× larger than the electron's. I'll attempt an answer here: Electrons are elementary particles. They cannot (or at least have never been observed to) break down into "constituent" particles. Protons, on the other hand, are composite particles composed of 2 up quarks, 1 down quark, and virtual gluons. Quarks and gluons in turn are also elementary particles. Here are their respective rest energies: $$\text{up quark} = 2.4 \, \frac{\mathrm{MeV}}{c^2}$$ $$\text{down quark} = 4.8 \, \frac{\mathrm{MeV}}{c^2}$$ $$\text{gluon} = 0 \, \frac{\mathrm{MeV}}{c^2}$$ If you feel that something is off, you're correct. If you assume $$m_p = 2m_{\uparrow q} + m_{\downarrow q}$$ you'll find: $$m_p = 2m_{\uparrow q} + m_{\downarrow q} = 2 \times 2.4 \, \frac{\mathrm{MeV}}{c^2} + 4.8 \, \frac{\mathrm{MeV}}{c^2} = 9.6 \, \frac{\mathrm{MeV}}{c^2}$$ but $$9.6 \, \frac{\mathrm{MeV}}{c^2} \ne 938.272 \, \frac{\mathrm{MeV}}{c^2}$$ This raises the question: what happened? Why is the proton mass roughly 100 times larger than the mass of its constituent elementary particles? Well, the answer lies in quantum chromodynamics, the currently governing theory of the strong nuclear force. Specifically, the calculation performed above omitted a very important detail: the gluon field surrounding the quarks that binds the proton together. If you're familiar with the theory of the atom, a similar analogy may be used here. Like atoms, protons are composite particles. 
Like atoms, those particles need to be held together by a "force". For atoms, the electromagnetic force binds electrons to the atomic nucleus with photons (which mediate the EM force). For protons, the strong nuclear force binds quarks together with gluons (which in turn mediate the strong force). The difference between the two, though, is that photons can exist independently of the electron and nucleus. Thus we can detect them and perform a host of measurements with them. Gluons, however, not only mediate the strong force between quarks, but may also interact with each other via the strong force. As a result, strong nuclear interactions are much more complex than electromagnetic interactions. Color Confinement This goes further. Quarks and gluons carry a property called color. When two quarks exchange gluons, the interaction is subject to color confinement. This means that as the quarks are pulled apart, the 'color field' between them increases in strength linearly. As a result, they require an ever-increasing amount of energy to be pulled apart from each other. Compare this to the EM force. When you try to pull an electron from its atom, it requires enough energy to be plucked from its shell into the vacuum. If you don't provide that much, it'll jump up one or more energy levels, then fall back to its original shell and release a photon that carries the difference. Similarly, if you want to pluck an object from a planet, you need to provide it with enough energy to escape the planet's gravity indefinitely (energy needed to reach escape velocity). Unlike the gravitational force and the electromagnetic force, the force between the quarks does not weaken as they move apart. As a result, there comes an inevitable point where it becomes energetically favorable for a quark-antiquark pair to be produced rather than for the quarks to be pulled further apart. 
When this occurs, the quark and antiquark bind to the 2 quarks that were being pulled apart, and the gluons that were binding them are now binding the new pair of quarks. This animation is from Wikipedia, courtesy of user Manishearth under the Creative Commons Attribution-Share Alike 3.0 Unported license. But wait! Where did those two quarks come from? Recall how pulling the quarks apart requires energy? Well, that energy is on the scale of $\mathrm{GeV}$. At these scales, the energy may convert to particles with kinetic energy. In fact, in particle accelerators, we typically see jets of color-neutral particles (mesons and baryons) clustered together instead of individual quarks. This process is called hadronization, but it is also referred to as fragmentation or string breaking depending on the context. Finally, I must point out that this is one of the least understood processes in particle physics, because we cannot study or observe gluons alone. Proton Mass So, now going back to the original question. Earlier we noticed that the empirical proton mass was $938.272 \, \frac{\mathrm{MeV}}{c^2}$; but, naively, the sum of its constituent quark masses is only $9.6 \, \frac{\mathrm{MeV}}{c^2}$. The $928.672 \, \frac{\mathrm{MeV}}{c^2}$ difference arises from the energy of the color field that binds the three quarks together. In simpler terms: the nuclear binding energy of the proton.
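The arithmetic in this answer is easy to check in a few lines (a quick sketch using the rounded mass values quoted in the text):

```python
# Masses in MeV/c^2, rounded values as quoted in the text
m_e, m_p = 0.511, 938.272
m_up, m_down = 2.4, 4.8

ratio = m_p / m_e              # proton-to-electron mass ratio
quark_sum = 2 * m_up + m_down  # naive sum of valence-quark masses
deficit = m_p - quark_sum      # mass accounted for by the color field

print(round(ratio))       # 1836
print(quark_sum)          # 9.6
print(round(deficit, 3))  # 928.672
```

The ~99% "deficit" printed last is exactly the gap the answer attributes to the color-field energy.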
{ "source": [ "https://physics.stackexchange.com/questions/331699", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/123688/" ] }
331,774
I am reading this really interesting book by Zwiebach called "A First Course in String Theory". Therein, he generalizes the laws of electrodynamics to the cases where dimensions are not 3+1. It's an intriguing idea but the way he generalizes seems like an absolute guess with no sound basis. In particular, he generalizes the behavior of electric fields to the case of 2 spatial and 1 temporal dimensions by maintaining $\vec{\nabla}\cdot\vec{E} = \rho$. But I struggle to understand why. I could have maintained that $|\vec{E}|$ falls off as the square of the inverse of the distance from the source. Essentially, there is no way to differentiate between Coulomb's law and Gauss's law in the standard 3+1 dimensions--so how can I prefer one over the other in the other cases? To me, it seems like it becomes purely a matter of one's taste as to which mathematical form seems more generic or deep--based on that, one guesses which form would extend its validity in the cases with a number of dimensions different from that in which the experiments have been performed. But, on the other hand, I think there should be a rather sensible reason behind treating the laws in the worlds with a different number of dimensions this way--considering how seriously physicists talk about these things. So, I suppose I must be missing something. What is it?
Great question. First of all, you're absolutely right that until we find a universe with a different number of dimensions in the lab, there's no single "right" way to generalize the laws of physics to different numbers of dimensions - we need to be guided by physical intuition or philosophical preference. But there are solid theoretical reasons for choosing to generalize E&M to different numbers of dimensions by choosing to hold Maxwell's equations "fixed" across dimensions, rather than, say, Coulomb's law, the Biot-Savart law, and the Lorentz force law. For one thing, it's hard to fit magnetism into other numbers of dimensions while keeping it as a vector field - the defining equations of 3D magnetism, the Lorentz force law and the Biot-Savart law, both involve cross products of vectors, and cross products can only be formulated in three dimensions (and also seven, but that's a weird technicality and the 7D cross product isn't as mathematically nice as the 3D one). For another thing, a key theoretical feature of 3D E&M is that it is Lorentz-invariant and therefore compatible with special relativity, so we'd like to keep that true in other numbers of dimensions. And the relativistically covariant form of E&M much more directly reduces to Maxwell's equations in a given Lorentz frame than to Coulomb's law. For a third thing, 3D E&M possesses a gauge symmetry and can be formulated in terms of the magnetic vector potential (these turn out to be very closely related statements). If we want to keep this true in other numbers of dimensions, then we need to use Maxwell's equations rather than Coulomb's law. These reasons are all variations on the basic idea that if we transplanted Coulomb's law into other numbers of dimensions, then a whole bunch of really nice mathematical structure that the 3D version possesses would immediately fall apart.
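To make the first point concrete: if Gauss's law $\vec{\nabla}\cdot\vec{E} = \rho$ is held fixed in $d$ spatial dimensions, the flux of a point charge through a surrounding sphere is constant, so the field must fall off as $r^{-(d-1)}$. A quick numerical sketch (unit charge, with an illustrative normalization in which the total flux equals the charge):

```python
import math

def sphere_area(d, r):
    """Surface area of the (d-1)-sphere of radius r in d spatial dimensions."""
    return 2.0 * math.pi ** (d / 2) / math.gamma(d / 2) * r ** (d - 1)

def falloff_exponent(d, q=1.0):
    # Gauss's law: E(r) * sphere_area(d, r) = q (up to a constant factor),
    # so comparing E at r = 1 and r = 2 reads off the power of r.
    E1 = q / sphere_area(d, 1.0)
    E2 = q / sphere_area(d, 2.0)
    return math.log2(E1 / E2)

for d in (2, 3, 4):
    print(d, round(falloff_exponent(d)))  # exponent is d - 1: prints 1, 2, 3
```

In 2+1 dimensions this gives a field falling off as $1/r$ (with a logarithmic potential), which is exactly the behavior Zwiebach's choice implies.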
{ "source": [ "https://physics.stackexchange.com/questions/331774", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/87745/" ] }
331,899
Why don't metals glow from red to yellow to green to blue etc.? Why only red, then yellow and then white? Shouldn't all wavelengths be emitted one by one as the temperature of the metal increases? If some metals do glow with different colours, could you give me examples of such metals and the reason why this happens in specific cases?
The physics of why the heated metal glows like a black body has already been thoroughly covered in the previous answers. However, in order to completely bridge the gap with the physiology of color perception (which has been alluded to in some answers), it is worth showing a picture of the Planckian locus: Plot by PAR, from Wikimedia Commons This is the set of all the colors a black body can have, plotted in a chromaticity diagram. It is computed by combining the black body emission formula with the color matching functions, which are a mathematical model of our color vision. This graph clearly shows the path of a black body going hotter: red → orange → yellow → white → blue. Now, one may wonder by what coincidence it hits right into the white, rather than going slightly above (through the greens) or below (through the purples). That question, however, is backwards. The good question would be “Why have we chosen to name ‘white’ a color from the Planckian locus?” This is the question of the definition of white, and it is not straightforward. In the physicists' jargon, the name white is often used to mean a “flat spectrum”, i.e. one in which the power per unit frequency does not depend on the frequency. When talking about actual visible colors, however, it has a completely different meaning: A surface is said to be white if it bounces back almost all the visible light that is shone on it. Light is said to be white if it looks like the light typically coming from a white surface. This leaves the notion of white light ill-defined: the light coming from a white surface has the same spectrum as whatever illuminant (meaning: light source) was shone on it. Then the light from any typical illuminant could be considered, in some sense, to be “white light”. In practice, in the realm of color science, there are some so-called “standard illuminants” which are deemed white. Most notably D65 and D55. These are meant to model natural daylight.
The choice of daylight as a reference light source is obvious given that our species has evolved in a world where daylight has always been the standard light source, and thus our natural white reference. The spectrum of daylight varies with the weather and with the height of the sun above the horizon, but it is never too far from a black body spectrum, which is probably not very surprising given that the Sun itself is a pretty good black body.
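The red → white → blue progression can also be seen directly from the Planck radiation law, without the full chromaticity machinery, by comparing the radiance at representative blue, green, and red wavelengths as the temperature rises (the wavelength choices here are illustrative):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wavelength, T):
    """Black-body spectral radiance at the given wavelength (m) and temperature (K)."""
    a = 2.0 * H * C ** 2 / wavelength ** 5
    return a / math.expm1(H * C / (wavelength * KB * T))

for T in (1000, 3000, 6500, 20000):
    blue, green, red = (planck(w, T) for w in (450e-9, 550e-9, 650e-9))
    # ratios relative to red: tiny at low T (dull red glow), > 1 at high T (bluish)
    print(T, round(blue / red, 4), round(green / red, 4))
```

At 1000 K the blue/red ratio is a few parts in 10⁴; near 6500 K the three bands are comparable (hence "white hot"); at 20000 K blue dominates.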
{ "source": [ "https://physics.stackexchange.com/questions/331899", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/102351/" ] }
331,976
I was recently reading Griffiths' Introduction to Quantum Mechanics, and I got stuck on the following sentence: but $\Psi$ must go to zero as $x$ goes to $\pm\infty$ - otherwise the wave function would not be normalizable. The author also added a footnote: "A good mathematician can supply you with pathological counterexamples, but they do not arise in physics (...)". Can anybody give such a counterexample?
Take a gaussian (or any function that decays sufficiently quickly), chop it up every unit, and turn all the pieces sideways.
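One way to make this concrete (a sketch; the unit-height "pieces" are given widths $e^{-n^2}$, the Gaussian values they came from): the function equals 1 on an interval of width $e^{-n^2}$ around every integer $n$, so it never tends to zero at infinity, yet its norm is finite:

```python
import math

def spike_width(n):
    """Width of the unit-height spike centred at integer n."""
    return math.exp(-n ** 2)

# |psi|^2 integrates to the sum of the spike areas (height 1, so area = width).
# The sum converges even though psi(n) = 1 at every integer n.
norm_sq = sum(spike_width(n) for n in range(-50, 51))
print(norm_sq)  # ≈ 1.7726 — finite, so psi is normalizable
```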
{ "source": [ "https://physics.stackexchange.com/questions/331976", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/155567/" ] }
331,977
How do we find the energy momentum tensor as a Noether charge for translations in curved spaces? This should still exist since the action is still an integral over space such that it is invariant under translations, right? Attempt at a solution Our Lagrangian will be of the form: $\mathcal{L} = \mathcal{L}[\phi, \partial_\mu \phi]$ and the corresponding Euler-Lagrange equations are: $$\frac {\partial \mathcal{L}}{\partial \phi} - \nabla_\mu \frac{\partial \mathcal{L}}{\partial \partial_\mu \phi} = 0$$ 1) To obtain the Noether charge we must demand that the on-shell variation of $\mathcal{L}$ is a surface term; indeed, we find that: $$\delta \mathcal{L} = \frac{\partial \mathcal{L}}{\partial \phi}\delta \phi + \partial_\mu\left(\frac{\partial \mathcal{L}}{\partial \partial_\mu \phi}\delta \phi\right) - \partial_\mu\left(\frac{\partial \mathcal{L}}{\partial \partial_\mu \phi}\right)\delta \phi$$ The two partial derivatives in the second and third terms can be changed into covariant derivatives since the additional Christoffel symbols will cancel out. After doing so we find that the first and third terms cancel due to the equations of motion, such that we end up with: $$\delta \mathcal{L} = \nabla_\mu\left(\frac{\partial \mathcal{L}}{\partial \partial_\mu \phi}\delta \phi\right)$$ 2) We must also study the variation of the Lagrangian due to the variation of the fields; the changes are: $x^\mu \rightarrow x^\mu + \epsilon^\mu$ such that $\phi \rightarrow \phi + \epsilon^\mu \partial_\mu \phi$, therefore we find that (for a free scalar): $$\delta \mathcal{L} = -\partial_\mu \phi \partial^\mu \delta \phi$$ $$=-\partial_\mu \phi \partial^\mu(\epsilon^\kappa \partial_\kappa \phi)$$ $$=\epsilon^\kappa \partial_\kappa(-1/2 \partial_\mu \phi \partial^\mu \phi) = \epsilon^\kappa \partial_\kappa(\mathcal{L})$$ 3) The next step is typically to state that $J_\mu$ = (1) - (2) is conserved $(\nabla_\mu J^\mu = 0)$, but I cannot find any way to rewrite (2) such that it contains only covariant derivatives.
I think that I am mistaken somewhere in my derivation of point 2, but I cannot figure it out... Any help would be greatly appreciated! :)
{ "source": [ "https://physics.stackexchange.com/questions/331977", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/147790/" ] }
331,993
A particle starts its motion from the origin with a velocity of $4$ m/s in the positive $x$ direction. Its acceleration is related to position as $a = (2x + 2)$. Find the magnitude of the velocity of the particle at $x=4$. I've tried hard to rewrite it with respect to time, but I could not proceed any further. Is this problem valid, or is there a typo in my book?
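A quick sanity check suggests the problem is fine as stated: using $v\,dv = a\,dx$, one gets $v^2 = 16 + 2\int_0^4 (2x+2)\,dx = 64$, i.e. $v = 8$ m/s at $x=4$, without ever needing time explicitly. A numerical sketch of the same integral (SI units assumed):

```python
def v_at(x_target, v0=4.0, steps=10_000):
    """Integrate v dv = a dx with a(x) = 2x + 2, starting from v(0) = v0."""
    dx = x_target / steps
    v_sq = v0 ** 2
    for i in range(steps):
        x = (i + 0.5) * dx                 # midpoint rule
        v_sq += 2.0 * (2.0 * x + 2.0) * dx
    return v_sq ** 0.5

print(v_at(4.0))  # ≈ 8.0  (analytically: v^2 = 16 + 2(x^2 + 2x) at x = 4)
```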
{ "source": [ "https://physics.stackexchange.com/questions/331993", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/123924/" ] }
332,461
In many texts, the non-relativistic (Newtonian) kinetic energy formula $$\text{KE}_\text{Newton} =\frac{1}{2}mv^2$$ is referred to as a first order approximation of the relativistic kinetic energy $$\text{KE}_\text{relativistic} = \gamma mc^2 - mc^2$$ The same is also said of the classical momentum formula in relation to its relativistic counterpart. However, comparing the Newtonian approximations to their respective relativistic formulas, the Newtonian KE formula appears to be a second order approximation while the momentum formula appears to be of first order. Let's begin with momentum. The relativistic formula for momentum is $$ p=\gamma mv=\frac{mv}{\sqrt{1-\left(\frac{v}{c}\right)^2}} \, . $$ For non-relativistic velocities ($v \ll c$), we use the Taylor series $$ \frac{x}{\sqrt{1-x^2}} \approx x\left(1 + \frac{x^2}{2}\right) \, , $$ giving $$p/c \approx mv/c \left[ 1 + \frac{1}{2}\left( \frac{v}{c} \right)^2 \right] \approx m (v/c)$$ which is first order in $v/c$. In other words, $p\approx mv$ which is the usual Newtonian expression. On the other hand, the relativistic kinetic energy is \begin{align} \text{KE}_\text{relativisitic} = \gamma mc^2 - mc^2 = \frac{mc^2}{\sqrt{1-\left( \frac{v}{c}\right)^2}} - mc^2 \end{align} which for $v \ll c$ is $$ \text{KE}_\text{relativistic} \approx mc^2 \left[ 1 + \frac{1}{2}\left( \frac{v}{c} \right)^2\right] - mc^2 = mc^2 \frac{1}{2} \left( \frac{v}{c} \right)^2 = \frac{1}{2} m v^2$$ which is obviously second order in $v$. If we compare plots of the Newtonian forms for kinetic energy and linear momentum against their respective relativistic formulas, there appears to be a closer agreement for the approximation of kinetic energy than can be seen for linear momentum. And hence my question: why is the Newtonian formula for kinetic energy referred to as a first order approximation when it appears to be of a second order?
The way I see it, there are four possible answers. You can pick the one you like the most, because in the end it doesn't matter very much. 1) You're right: it's a second order approximation, and those who say it's first order are making a terminology mistake. 2) When we say first order, we really mean first non-null order, since the linear term vanishes. 3) It's actually first order in $v^2$. 4) It doesn't really matter. We all know what the non-relativistic approximation is; its properties are not going to change if we call it by a different name. Personally I support answer 4, and I suggest you get used to it because physics is not known for its rigor and formality.
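The claimed orders are easy to verify numerically (a sketch in units where $c = 1$ and $m = 1$): the relative error of $\tfrac12 mv^2$ shrinks like $(v/c)^2$, with the $\tfrac34$ coefficient coming from the next Taylor term:

```python
import math

def ke_rel(m, v):
    # relativistic kinetic energy, units with c = 1
    gamma = 1.0 / math.sqrt(1.0 - v ** 2)
    return (gamma - 1.0) * m

def ke_newton(m, v):
    return 0.5 * m * v ** 2

# KE_rel = (1/2) m v^2 * (1 + (3/4) v^2 + ...), so the scaled error -> 3/4
for v in (0.1, 0.01, 0.001):
    rel_err = ke_rel(1.0, v) / ke_newton(1.0, v) - 1.0
    print(v, rel_err / v ** 2)  # approaches 0.75
```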
{ "source": [ "https://physics.stackexchange.com/questions/332461", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/40591/" ] }
332,972
The atomic mass unit is defined as 1/12th the mass of a carbon-12 atom. Was there any physical reason for such a definition? Were they trying to include electrons in the atomic mass unit? Why not define the amu as the mass of one proton or neutron so that in nuclear calculations at least one of the nuclear particles (out of protons and neutrons) would be a nice whole number?
Why was Carbon-12 chosen for the atomic mass unit? As is the case elsewhere in metrology, the answer is tied up in history, measurability, practicality, repeatability, past misconceptions, and consistency (despite those past misconceptions). The history of atomic mass and the mole (the two are quite interconnected) goes back to the early 19th century to John Dalton, the father of atomic theory[1]. The unified atomic mass unit is named after him. Scientists of that era were just learning about elements; the periodic table was 60 years in Dalton's future. Dalton initially proposed using hydrogen as the basis. Issues of measurability and repeatability quickly cropped up. So did mistakes. Dalton, for example, thought water was HO rather than H₂O[2]. These issues resulted in chemists switching to an oxygen-based standard based on the oxygen found on Earth. (That elements can come in multiple isotopes was not known at this time.) Physicists' investigations at the atomic level caused them to develop their own standard in the 20th century, based on ¹⁶O rather than the natural mix of ¹⁶O, ¹⁷O, and ¹⁸O (atomic masses: 15.994915, 16.999131, and 17.999161, respectively, with a nominal mix of 379.9 ppm for ¹⁷O, 2005.20 ppm for ¹⁸O, and the remainder ¹⁶O) used by chemists. The natural mix of the various isotopes of oxygen is not constant. It varies with time, place, and climate. Improved measurements and more widespread usage made repeatability become a significant issue by the middle of the 20th century. The primary cause is natural variations in the two most common isotopes of oxygen, ¹⁶O (the dominant isotope) and ¹⁸O (about 2000 parts per million, on average). The IUPAC Technical Report[4] on atomic weights of the elements lists the atomic weight of naturally occurring oxygen as varying from 15.99903 to 15.99977. The primary cause of these natural variations is the preferential evaporation and precipitation of water molecules based on various isotopes of oxygen.
Water based on ¹⁶O evaporates slightly more readily than does water based on ¹⁸O, making tropical oceans a bit enriched in ¹⁸O compared to average. On the flip side, water based on ¹⁸O precipitates slightly more readily than does water based on ¹⁶O. This makes precipitation in the tropics have slightly higher ¹⁸O concentrations compared to nominal, and it makes precipitation in high latitudes have slightly lower ¹⁸O concentrations compared to nominal. Physicists had a solution: switch to their isotopically pure ¹⁶O standard. This would have represented an unacceptably large change (275 ppm[3]) in chemistry's oxygen-based standard. It would have required textbooks, reference books, and perhaps most importantly, the recipes used at refineries and other chemical factories to have been rewritten. The commercial costs would have been immense. It's important to keep in mind that metrology exists first and foremost to support commerce. Chemists therefore balked at that suggestion made by physicists. The carbon-based standard represented a nice compromise. By chance, defining the atomic mass as 1/16th of the mass of a mole of oxygen comprising a natural mix of ¹⁶O, ¹⁷O, and ¹⁸O is very close to a standard defining the atomic mass as 1/12 the mass of a mole of ¹²C [3]. This represented a 42 ppm change from the chemists' natural oxygen standard as compared to the 275 ppm change that would have resulted from changing to 1/16 of the mass of a mole of ¹⁶O [3]. This new standard was based on a pure isotope, thereby keeping physicists happy, and it represented an acceptably small departure from the past, thereby keeping chemists and commerce happy. References: Britannica.com John-Dalton/Atomic-theory entry. I'm leery of referencing Wikipedia; Britannica is still fair game for basic facts.
Holden, Norman E. "Atomic weights and the international committee–a historical review." Chemistry International 26.1 (2004): 4-7. I found this after the fact, after Emilio Pisanty asked me to find some references. This says everything I wrote, only better, in more detail, and with lots of references. Meija, Juris, et al. "Atomic weights of the elements 2013 (IUPAC Technical Report)." Pure and Applied Chemistry 88.3 (2016): 265-291. See table 1, and also figure 6.
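The 275 ppm and ~42 ppm figures quoted above can be reproduced from the isotope masses and nominal abundances given in this answer (a sketch; all masses taken on the ¹²C scale):

```python
# Isotope masses (on the 12C scale) and nominal abundances quoted in the text
m16, m17, m18 = 15.994915, 16.999131, 17.999161
f17, f18 = 379.9e-6, 2005.20e-6
f16 = 1.0 - f17 - f18

m_nat = f16 * m16 + f17 * m17 + f18 * m18  # mean mass of natural oxygen

# Chemists' amu: m_nat / 16.  Physicists' amu: m16 / 16.  12C amu: 1 on this scale.
ppm_phys_vs_chem = (m_nat / m16 - 1.0) * 1e6
ppm_c12_vs_chem = (1.0 - m_nat / 16.0) * 1e6

print(round(ppm_phys_vs_chem))  # ≈ 275
print(round(ppm_c12_vs_chem))   # ≈ 43 (the "42 ppm" of the text, to rounding)
```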
{ "source": [ "https://physics.stackexchange.com/questions/332972", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/102351/" ] }
333,025
Recently I was eating a yellow rice for lunch in a restaurant with only yellow lights. But the rice looked white! I was intrigued by this because I always thought it should look yellow since the yellow pigment reflects only yellow light, but the rice looked really, really white. Why is that? I thought it could be something about the rice, but any yellow object was looking white. The room was full of yellow light bulbs (not normal yellowish bulbs, but very tinted yellow lights) and there was no other color of light to interfere. Maybe there's something to do with human perception?
Your brain adjusts your perception of color to compensate for lighting that is strongly tinted. This was the reason for the violent conflict some time back about a certain dress. Depending on whether people perceived the dress as being lit by yellow-tinted or blue-tinted light, they saw either a black and blue dress or a white and gold dress. Here's an animated version to show that color happens in the brain, not physics. Here's another picture to show that your brain interprets colors contextually. The squares marked A and B are exactly the same shade of grey. But, because your brain interprets square B as being in a shadow, it "knows" that the "real" color of the square is lighter. So, you perceive a lighter shade than what is actually there. Below is an edited version of the checkerboard showing a single color linking the squares A and B (the squares A and B have not been recolored). What I find funny is that half the time I see the squares and line as a single color, and half the time I see a gradient from the "dark" square to the "light" square. In the animated picture above, I still see the moving swatch fading from one color to another as it moves. So, because of the strong yellow lighting in the restaurant, your brain thought the yellow of the rice was due to the lighting and "corrected" your perception. I keep saying the brain does this because all of this visual post-processing happens subconsciously. Pictures taken from the wikipedia article: https://en.wikipedia.org/wiki/Checker_shadow_illusion To summarize, the physics of light ends at your retina. Particles of light (photons), each with a certain energy, hit the cells of your retina, setting off electrical signals that travel to your brain. Your brain then processes these electrical signals to create a coherent image.
These processes include factors from memory (what things "should" look like), local contrasts (in color and brightness), cues from the environment (including available light sources), and many others. The result of all this mental post-processing is that identical photons can create different colors in the mind, as evidenced by the picture above. Now, who do you trust? Me or your lying eyes?
{ "source": [ "https://physics.stackexchange.com/questions/333025", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/148522/" ] }
333,542
The YouTube video How Hot Can it Get? contains, at the 2:33 mark, the following claim: A pin head heated to 15 million degrees will kill everyone in a 1000 miles radius. On what basis can this claim be true? Some of the things I can think of: Radiation of the metal as it cools down Energy released in fusion (not sure if this works for an iron pin) Would the damage be only to organic matter or will it destroy other structures within that radius?
On this occasion Vsauce rather dropped the ball, I should think. As the other answers show, the claim as stated doesn't make much sense when you put in the numbers, and if you chase the source to its origin there's some crucial context that got dropped along the chain. The video description attributes the quote to the book The Universe and the Teacup: The Mathematics of Truth and Beauty, by KC Cole, which contains the quote, attributed to James Jeans (but without an actual reference), on the second page of chapter 2, A pinhead heated to the temperature of the center of the Sun, writes Jeans, "would emit enough heat to kill anyone who ventured within a thousand miles of it." The quote itself comes from The universe around us (Cambridge University Press, 1930), p. 289, and it reads The calculated central temperature of 30 to 60 million degrees so far transcends our experience that it is difficult to realise what it means. Let us, in imagination, keep a cubic millimetre of ordinary matter ─ a piece the size of an ordinary pin-head ─ at a temperature of 50,000,000 degrees, the approximate temperature at the centre of the sun. Incredible though it may seem, merely to maintain this pin-head of matter at such a temperature ─ i.e. to replenish the energy it loses by radiation from its six faces ─ will need all the energy generated by an engine of three thousand million million horse-power; the pin-head of matter would emit enough heat to kill anyone who ventured within a thousand miles of it. OK, so having filled in the references, let's pick apart the calculation and see what the claim actually is. What Cole and Vsauce missed in the quoting is a crucial qualifier: merely to maintain this pin-head of matter at such a temperature ... The claim is therefore that an object at that high a temperature, were it to radiate away as a blackbody, while also having a magical energy pump to keep it at that temperature, would be as deadly as claimed.
To see whether this is true, let's put in some numbers. A blackbody at temperature $T$ radiates away a power determined by the Stefan-Boltzmann law, which reads $P=\sigma AT^4$, where $A=6\:\mathrm{mm}^2$ is the area of the cubic pinhead of the paragraph and $\sigma = 5.67\times 10^{-8}\:\mathrm{W\:m^{-2}\:K^{-4}}$ is the Stefan-Boltzmann constant, and this power then gets distributed equally over a sphere of radius $R=1000\:\mathrm{mi}$, so this gives a power density at that 1000-mile radius of $$ j = \frac{\sigma \, A \, T^4}{ 4\pi R^2} \approx 0.0104 \left(T/\mathrm{MK}\right)^4 \:\mathrm{W/m^2}. $$ Now, here we get to one of the sticky points: the claim in Jeans' book has over-estimated the temperature of the core of the sun by about $50/15\approx 3.33$, but Vsauce has missed that discrepancy and he's repeated the claim for the more modern value of $15\:\mathrm{MK}$. Normally, this would not be a problem, because factors of $3$ are pretty ignorable in Fermi analyses, but the Stefan-Boltzmann law depends quartically on $T$, and this can mount up quickly, giving a discrepancy of $(50/15)^4\approx 120$ between the Vsauce claim and its Jeans source. In this case, the difference does matter, probably because Jeans has chosen his numbers so they're roughly at the edges of what they can give. If we put in the numbers for the modern value of the core temperature, we get $$ j_\mathrm{Vsauce} \approx 0.0104 \times 15^4 \:\mathrm{W/m^2} \approx 530\:\mathrm{W/m^2}, $$ which curiously enough is just under half of the solar constant , i.e. the energy flux density from the actual Sun at the surface of the Earth. Thus, from a pure heat-flow perspective, if you were exposed to this for an extended period of time, you might get slightly sunburned, but it's very far from deadly.
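Plugging the same numbers into code (a sketch in SI units, with the pinhead area and 1000-mile radius from the text):

```python
import math

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
A = 6e-6                 # six 1 mm^2 faces of the cubic pinhead, in m^2
R = 1000 * 1609.344      # 1000 miles in metres

def flux(T):
    """Power density at distance R from a blackbody of area A at temperature T (K)."""
    return SIGMA * A * T ** 4 / (4.0 * math.pi * R ** 2)

print(flux(15e6))  # ≈ 5.3e2 W/m^2 — the modern core temperature (Vsauce's claim)
print(flux(50e6))  # ≈ 6.5e4 W/m^2 — Jeans' 50 MK figure
```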
The Jeans claim, on the other hand, is somewhat different because of that factor of a hundred, giving $$ j_\mathrm{Jeans} \approx 0.0104 \times 50^4 \:\mathrm{W/m^2} \approx 65\:\mathrm{kW/m^2} = 6.5 \:\mathrm{W/cm^2}, $$ and that's a lot closer to the damage thresholds. Going by Safety with Lasers and Other Optical Sources: A Comprehensive Handbook (Sliney and Mellerio, Springer, 1980, p. 162), the threshold for flash burns is at around $12\:\mathrm{W/cm^2}$, while second-degree burns start at $24\:\mathrm{W/cm^2}$ - for a flash exposure under half a second in duration. Stick around for more than a minute and it sounds about right that you'll very quickly develop some very severe burns, and succumb to them not long after that. However, as pointed out in the comments, the bulk of the radiation that carries this energy will be in the form of high-energy photons, peaking at around $1.3\:\mathrm{keV}$ (for $T=15\:\mathrm{MK}$; it's $4.3\:\mathrm{keV}$ at $T=50\:\mathrm{MK}$), and that's at the beginning of the ionizing-radiation regime (more specifically grenz rays), which means that the effects are somewhat harder to model, and the detailed radiometry of what would happen is maybe an interesting exercise for an xkcd What if? episode. As a rough estimate, if you assume that all of the radiation is absorbed (reasonable given this plot of attenuation lengths in water), and taking a surface area of $1\:\mathrm{m}^2$ and a body mass of $75\:\mathrm{kg}$, the Jeans energy flux, when seen as ionizing radiation, is equivalent to an absorbed dose of about $870\:\mathrm{Gy/s}$, which immediately gets out of hand. The lower-temperature source at $15\:\mathrm{MK}$ delivers an equivalent dose of about $7\:\mathrm{Gy/s}$, which I guess is a bit over the ballpark of a Chernobyl liquidator after a couple of seconds.
It seems, then, that at both temperatures you're likely to die through radiation sickness, though the details will be messy to work out - but then again it's not really what either of the original sources imply. To emphasize, though ─ this calculation assumes that you have a magical source of energy that can supply the ${\sim}2\times 10^{18}\:\mathrm{W}$ (!) required to keep that pinhead at $50\:\mathrm{MK}$. It's a reasonable thing to assume if you're already in hypothetics land, but it's a completely different question to the energy that's actually stored in that tiny bit of highly ionized iron plasma, and it's important to state that up front. I'll keep this around for the next time I need to scare someone into checking their sources ─ it's a monument to academic carelessness, if you will ─ because it's such a good example of how things fall apart if you don't look carefully enough. The claim, in its original context, is roughly reasonable - but the Vsauce claim falls flat under even mild scrutiny.
{ "source": [ "https://physics.stackexchange.com/questions/333542", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3641/" ] }
333,619
Fairly straightforward. I just thought it wouldn't be too hard to produce a ripple in spacetime several times smaller than a proton radius in a particle accelerator or something. It seems like it should be going on all the time.
In 1973, Grishchuk and Sazhin proposed in their paper, "Emission of gravitational waves by an electromagnetic cavity", a method by which to generate gravitational waves for experiments, using the argument that while the generation would be very weak, it would also not suffer from the decay in $r^{-2}$. The idea was to generate a rapid quadrupole moment change in the electrons of a metal cavity by applying very high-frequency EM radiations to it. The average of the energy flux was found to be of the order $G c^{-3} R^{-2} \lambda^2 r_0^2 \varepsilon^2$, with $\lambda$ the frequency of the gravitational radiation, $r_0$ the characteristic dimension of the cavity, and $\varepsilon$ the energy density of the EM wave. From the appearance of $G c^{-3}$, you can tell that it's going to be fairly small. I don't know if this idea was ever discussed again (this paper was almost never referenced), or if it would be more plausible today, but it is certainly a rather complex setup for very small results.
{ "source": [ "https://physics.stackexchange.com/questions/333619", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/116137/" ] }
334,004
In questions like this one , temperatures of millions of degrees (Celsius, Kelvin, it doesn't really matter at that point) are mentioned. But, what does it mean exactly? What is measured, and how? As I more or less expected, the Wikipedia article mentions that the official definition (the one I was told in elementary school, either between freezing and boiling points of water for Celsius, or between absolute zero and triple point of water for Kelvin) doesn't really work above 1300K. So, questions: For "earthly temperatures", how is temperature measured consistently? Why would linearity of the measuring device be assumed, in particular to extrapolate beyond the initial (0-100, or 0-273.16) range? What guarantees that two thermometers, working under different principles (say, mercury and electricity), that agree at 0C and 100C, will agree at 50C? What do statements like "the temperature in the Sun's core is $1.5\times10^7K$ mean? Or, even, what do the "7000K" in the Earth's core mean?
Definitions First and foremost, temperature is a parameter defining a statistical distribution, much as the statistical parameters of mean and standard deviation define the normal probability distribution. Temperature defines the equilibrium (maximum likelihood) distribution of energies of particles in a collection of statistically independent particles through the Boltzmann distribution. If the possible energies of the particles are $E_i$, then the maximum likelihood particle energy distribution is proportional to $\exp\left(-\frac{E_i}{k\,T}\right)$, where $T$ is simply a parameter of the distribution. Most often, the higher the system's total energy, the higher its temperature (but this is not always so; see my answer here), and indeed for ideal gases the temperature is proportional to the mean energy of the constituent molecules (sometimes one hears people incorrectly saying that temperature measures the mean particle energy - this is so for ideal gases but not in general). This latter, incorrect definition will nonetheless give much correct intuition for common systems - an eight year old girl at my daughter's school in our parents' science sessions once told me that she thought temperature measured the amount of heat energy in a body, and I was pretty impressed by that answer from an eight year old. An equivalent definition that allows us to calculate the temperature statistical parameter is that an equilibrium thermodynamic system's reciprocal temperature, $\beta = \frac{1}{k\,T}$, is defined by: $$\frac{1}{k\,T} = \partial_U S\tag{1}$$ where $U$ is the total internal energy of a system and $S$ the system's entropy, i.e. $\beta$ (sometimes quaintly called the "perk") is how much a given system "thermalizes" (increases its entropy) in response to the adding of heat to its internal energy $U$ (how much the system rouses or "perks up").
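Definition (1) can be checked on a toy system (a sketch: four arbitrary energy levels, natural units with $k = 1$): compute $U$ and $S$ from the Boltzmann distribution at two nearby values of $\beta$ and confirm that $\partial_U S = \beta$:

```python
import math

def canonical(levels, beta):
    """Internal energy U and entropy S (k = 1) of discrete levels at inverse temperature beta."""
    weights = [math.exp(-beta * E) for E in levels]
    Z = sum(weights)                      # partition function
    p = [w / Z for w in weights]          # Boltzmann probabilities
    U = sum(pi * Ei for pi, Ei in zip(p, levels))
    S = -sum(pi * math.log(pi) for pi in p)
    return U, S

levels = [0.0, 1.0, 2.0, 3.0]   # arbitrary toy spectrum
beta = 0.7
U1, S1 = canonical(levels, beta - 1e-6)
U2, S2 = canonical(levels, beta + 1e-6)
print((S2 - S1) / (U2 - U1))    # ≈ 0.7 = beta, as equation (1) demands
```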
The Boltzmann constant depends on how one defines one's unit temperature - in natural (Planck) units unity temperature is defined so that $k = 1$. This definition hearkens back to Carnot's ingenious definition of temperature, whereby one chooses a "standard" heat reservoir and then measures the efficiency of an ideal heat engine working between a reservoir whose temperature is to be measured and the standard one. If the efficiency is $\eta$, then the temperature of the hot reservoir is $\frac{1}{1-\eta}$. The choice of the standard reservoir is equivalent to fixing the Boltzmann constant. Of course, ideal heat engines do not exist, but this is a "thought experiment" definition. Nonetheless, this definition leads to the realization that there must be a function of state - the entropy - and that we can define the temperature through (1). See my answer here for more details. Measurements Extreme temperatures, such as the cores of stars, are calculated theoretically. Given a stellar thermodynamic model and calculations of pressure from gravitational theory, one can calculate the statistical distribution of energies that prevails. Stellar models predict surface temperatures, and these latter, not so extreme temperatures can be measured by spectroscopy , i.e. by measuring the spectrum of emitted light and then fitting it to the Planck Radiation Law . Given reasonable agreement between predicted and observed quantities, one can have reasonable confidence in the temperatures calculated for the star's core. Pyrometry , grounded on the Stefan-Boltzmann law, is another, simpler (but less accurate) way to measure highish temperatures. Earth core temperatures are deduced partly through theoretical models in the same way, but also inferred from what we know about the behavior of matter at these temperatures. Such temperatures and pressures can be created in the laboratory and monitored through pyrometry. 
We are reasonably confident of the phase diagram for iron, for example, and we know under what temperatures and pressures it will be liquid and when it will be solid. Then, seismic wave measurements give us a picture of the core of the Earth; thus we know the radius of the inner, solid core. Given that we know the phase diagram for the assumed core iron-nickel alloy, the solid core boundary gives us an indirect measurement of the temperature at the boundary.
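The spectroscopic route mentioned above amounts to fitting the Planck law; a quick one-line proxy for such a fit is Wien's displacement law, $\lambda_{\text{max}} = b/T$. A sketch (the solar peak wavelength of about 502 nm is an approximate textbook figure, not a fitted value):

```python
WIEN_B = 2.898e-3  # Wien displacement constant, m*K

def temperature_from_peak(lambda_max_m):
    """Estimate a blackbody temperature from its peak emission wavelength."""
    return WIEN_B / lambda_max_m

# The Sun's spectrum peaks near 502 nm, giving roughly its surface temperature
T_sun_surface = temperature_from_peak(502e-9)  # ~5770 K
```

This only recovers surface temperatures; core temperatures, as the answer explains, come from the stellar model that is validated against such surface measurements.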
{ "source": [ "https://physics.stackexchange.com/questions/334004", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/42892/" ] }
334,271
How can water be a medium to conduct current while its ionisation is so negligible that, in principle, no current should flow?
"Pure" water is a very poor conductor (resistivity is actually used as a measure of purity ). "Real" water is not pure - it contains electrolytes and is quite conductive. Also - when your skin is wet, its resistivity is significantly lower. For example - "pure" water has a resistivity of (about) 18.2 M $\Omega\cdot\rm{cm}$ . With 10 ppm of dissolved NaCl ("very good quality tap water" would have less than 50 ppm), the resistivity drops to about $43~\rm{k\Omega\cdot cm}$ A lot of detail can be found in "Conduction of Electrical Current to and Through the Human Body: A Review" (Fish and Geddes, ePlasty 2009, 9, e44). Table 3 sums it up: Table 3 Why immersion in water can be fatal with very low voltages Immersion wets the skin very effectively and great lowers skin resistance per unit area Contact area is a large percentage of the entire body surface area Electric current may also enter the body through mucous membranes, such as the mouth and throat The human body is very sensitive to electricity. Very small amounts of current can cause loss of ability to swim, respiratory arrest and cardiac arrest Table image
{ "source": [ "https://physics.stackexchange.com/questions/334271", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/127111/" ] }
334,974
A friend and I recently discussed the idea that radioactive decay rates are constant over geological times, something upon which dating methods are based. A large number of experiments seem to have shown that decay rate is largely uninfluenced by the environment (temperature, solar activity, etc.). But how do we know that decay rates are constant over billions of years? What if some property of the universe has remained the same over the one hundred years since radioactivity was discovered and measured, but was different one billion years ago? An unsourced statement on the Wikipedia page on radioactive decay reads: [A]strophysical observations of the luminosity decays of distant supernovae (which occurred far away so the light has taken a great deal of time to reach us) strongly indicate that unperturbed decay rates have been constant. Is this true? I'm interested in verifying constancy of decay rates over very long periods of time (millions and billions of years). Specifically, I'm not interested in radiocarbon dating or other methods for dating things in the thousands-of-years range. Radiocarbon dates, used for dating organic material younger than 50,000 years, are calibrated and cross-checked with non-radioactive data such as tree rings of millennial trees and similarly countable yearly deposits in marine varves , a method of verification that I find convincing and that I am here not challenging.
The comment Samuel Weir makes on the fine structure constant is pretty close to an answer. For electromagnetic transitions of the nucleus, these would change if the fine structure constant changed over time. Yet spectral data on distant sources indicates no such change. The atomic transitions would change their energies and we would observe photons from distant galaxies with different spectral lines. For the weak and strong nuclear interactions, the answer is more difficult or nuanced. For the strong interactions, we have more of an anchor. If strong interactions changed their coupling constant this would impact stellar astrophysics. Stars in the distant universe would be considerably different than they are today. Again, observations of distant stars indicate no such drastic change. For weak interactions, things are more difficult. A lot of nuclear decay is by weak interactions and the production of $\beta$ radiation as electrons and positrons. Creationists might argue the rate of weak interactions was considerably larger in the recent past to give the appearance of more daughter products than what occurs today. This then gives the appearance of great age that is not there. The problem with carbon dating with the decay process $$ {}^{14}_{6}\mathrm{C}~\rightarrow~{}^{14}_{7}\mathrm{N}~+~e^-~+~\bar{\nu}_e $$ is that if this had changed over the last $6000$ years, a favorite time for creationists, this would mean there would be deviations between carbon dating methods and the historical record. None of this is proof really, but it does fall in line with Bertrand Russell's idea of a teapot orbiting the Sun between Earth and Mars.
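The sensitivity of dating to the decay constant is easy to quantify. For a simple parent-to-daughter decay with no initial daughter, the inferred age is $t = \ln(1 + D/P)/\lambda$; the sketch below (using the carbon-14 half-life as an example) shows that a hypothetically doubled decay constant would exactly halve every inferred age, which is the kind of systematic discrepancy the historical cross-checks mentioned above would expose.

```python
import math

def inferred_age(daughter_to_parent, decay_const):
    """Age inferred from a measured D/P ratio, assuming no initial daughter."""
    return math.log(1.0 + daughter_to_parent) / decay_const

LAMBDA_C14 = math.log(2) / 5730.0  # per year, from the 5730 yr half-life

ratio = 1.0  # equal daughter and parent measured today
t_now = inferred_age(ratio, LAMBDA_C14)       # one half-life: 5730 yr
t_fast = inferred_age(ratio, 2 * LAMBDA_C14)  # doubled rate: half the age
```

The hypothetical `2 * LAMBDA_C14` rate is purely illustrative; the point of the answer is that no such deviation is observed.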
{ "source": [ "https://physics.stackexchange.com/questions/334974", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/136952/" ] }
335,091
I was reading recently how the compatibility of quantum mechanics with special relativity was initially a problem for physicists, and then Dirac succeeded in formulating a relativistic, quantum-mechanical theory through the Dirac equation. From my understanding, the whole idea of relativity is to keep the speed of light or photons the same in all frames. But in quantum mechanics, particles don't have a definite position except when measured, let alone velocity (which is its derivative), so how is the "speed of a photon" even defined quantum mechanically? How do we generalize relativistic definitions like speed to be compatible with quantum mechanical concepts? How exactly did Dirac incorporate SR into his wave equation? I read related Wikipedia articles but I couldn't understand much. I am somewhat familiar with elementary concepts like the Lorentz transform, proper time, the Schrödinger equation, etc., but as you might tell I have no formal education in it, so please be easy with the technicality while answering.
This answer is meant to add to Luke's excellent concise answer , so please read his answer first. In quantum mechanics, only measurements have the statistical distributions, the "uncertainties" and all the things that are (validly) bothering you. As you point out, this makes notions of measured spacetime co-ordinates problematic. But the underlying theory that lets one calculate these statistical distributions can be Lorentz-invariant. It is not emphasized enough, particularly in many lay expositions, that much, if not most of quantum mechanics is utterly deterministic . This deterministic part is concerned with the description and calculation of the evolution of a system's quantum state. Aside from some more modern mathematical techniques and notations, this part of quantum mechanics probably wouldn't look very alien or physically unreasonable to even Laplace himself (whom we can take as a canonical thinker from the philosophy of determinism). This quantum state evolution takes place on an abstract spacetime manifold just like classical physics. When a quantum theory is said to be relativistic or Lorentz invariant, it is usually the deterministic quantum state space evolution that is being talked about. Note, in particular, that no measurement takes place in this part of the description, so there's no problem with a spacetime manifold parameterized by zero uncertainty spacetime co-ordinates. We model measurements with special Hermitian operators and recipes for handling them called observables . Given a system's quantum state, these operators let us work out the statistical distributions of outcomes of the measurements we can make on a system with that quantum state. When people talk of quantum uncertainty, Heisenberg's principle and all the rest of it, they are speaking about the statistical distributions that come from these measurements. 
So, whilst Schrödinger's equation (the deterministic, unitarily evolving description) for the electron in a hydrogen atom is written in terms of zero uncertainty space and time co-ordinates (known as parameters to emphasize that they are not measurements), the outcome of a position measurement is uncertain and the position observable lets us calculate the statistical distribution of that outcome.
{ "source": [ "https://physics.stackexchange.com/questions/335091", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/45708/" ] }
335,097
I'm a first timer here. In a normal liquid-vapor P-V biphasic curve at a temperature below the critical temperature, we can compress a vapor at a constant temperature till it becomes saturated, or is just about to turn into a liquid. Any further compression would lead to the existence of the two phases together, as we know. Here we can see from the phase rule that we have just one degree of freedom. So just specifying the temperature will set the pressure and volume, just like what the Antoine equation gives. A pre-question here is that once we come into the zone inside the bell curve of mutual phase existence, what does the volume on the X-axis specify? (Like the volume of the gas or the volume of the liquid.) My main question is: once it enters this zone where the temperature and pressure are set at a saturated value, what is the driving force for the conversion of the vapor to liquid? By this I mean, how can we ensure that two distinct desired compositions can be achieved if we perform the experiment two different times? Surely this isn't to be a transient process where the end result is always fully saturated water. I would really appreciate it if someone could clear this up for me. I have tried looking everywhere for an answer to this, and I did not find it here either. Pardon me if I didn't look enough here.
{ "source": [ "https://physics.stackexchange.com/questions/335097", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/157112/" ] }
335,184
I appreciate the statement of Heisenberg's Uncertainty Principle. However, I am a bit confused as to how exactly it applies to the quantum mechanical situation of an infinite square well. I understand how to apply Schrödinger's equation and appreciate that energy eigenvalues can be deduced to be $$E_n=\frac{n^2\hbar^2\pi^2}{2mL^2}.$$ However, I have read somewhere that the reason that the quantum particle cannot have $n = 0$ —in other words, $E = 0$ —is because by having zero energy we also have a definite momentum with no uncertainty, and by the Heisenberg uncertainty principle this should lead to an infinite uncertainty in the position of the particle. However, this cannot be the case in an infinite well, as we know the particle should be somewhere in the box by definition. Therefore $n$ can only be greater than or equal to one. Surely when $n = 1$ we have the energy as $$E_1 = \frac{\hbar^2 \pi^2}{2mL^2},$$ which is also a known energy, and so why does this (as well as the other integer values of $n$ ) not violate the uncertainty principle?
This started off as a comment, but right now I only have the reputation to answer. But this is by no means a rigorous answer. There are a couple of assumptions your question makes that aren't strictly true. For a start, you seem to say that any definite value of energy would entail a definite value of momentum. This is true for a completely free particle, but it is no longer true for a particle that is undergoing some interaction (where is the interaction, you ask? Well, the fact that it is placed in a box, of course!) There's a simple and (in my opinion) instructive way to see this. If what you said were true, then the states of definite energy would also be the states of definite momentum. In other words, they would satisfy the eigenvalue equation $$\hat{p}\psi_n = p_n\psi_n$$ where $\hat{p}=\frac{\hbar}{i}\frac{d}{dx}$ is the momentum operator and $p_n$ is a constant (which would represent the measured momentum). Let us check if this is the case. The states of definite energy are given by $$\psi_n = \sqrt{\frac{2}{L}}\sin\left(\frac{n\pi x}{L}\right)$$ The action of the momentum operator is thus $$\hat{p}\psi_n = \frac{\hbar}{i} \sqrt{\frac{2}{L}} \frac{n\pi}{L} \cos\left( \frac{n\pi x}{L}\right) \neq p_n \psi_n$$ In other words, the states of definite energy are not states of definite momentum, since the momentum operator doesn't yield the original wave function times a constant! Of course, you're right that the magnitude of the momentum would be fixed. If you're still having trouble with this, here's a sloppy "intuitive" semi-classical example: say I gave you a (1-dimensional!) box with a particle (of unit mass) bumping around constantly inside it. I tell you that I measured the energy of this particle many times and it always came out to be exactly 8. Now I ask you to give me its momentum. "Aha!" 
you say, "when it is inside the box, there are no forces acting on it, and so the energy is simply given by:" $$\frac{p^2}{2m} =\frac{p^2}{2} = 8$$ Thus, you find that the 'momentum' is 4! But wait a minute, you don't know if it's bouncing to the left or to the right. In other words, if it's $+4$ or $-4$! The fact that the particle interacts with the wall is responsible for its momentum 'flipping' sign. In the same way, for the particle in the box, the momentum can be $$p = \pm \sqrt{2mE}$$ So what's the uncertainty in $p$? Well, it's simply $\Delta p = +|p| - (-|p|) = 2|p| = \frac{2 n\pi \hbar}{L}$. What about the uncertainty in $x$? Well, it could be anywhere in the box, and so $\Delta x = L$, the box's length. We thus find $$\Delta x \Delta p = 2n\pi \hbar > \frac{\hbar}{2}$$ for all values of $n\geq 1$. Clearly, when $n=0$ this no longer works. We can understand this in many ways. A simple way would be to realise that when $n=0$, the magnitude of the momentum is $0$, and thus there are no 'positive' and 'negative' values it could take: it most definitely has a momentum of exactly zero, with no uncertainty. This would be allowed if you were not in a box. However, placing yourself in a box, meaning that $\Delta x < \infty$, means that you necessarily have a minimal non-zero momentum, using the argument you mentioned earlier. Moreover, the mathematics tells you that a state with $n = 0$ is the trivial state $\psi_0(x) = 0$. The mod-square integral of this function is 0, which can be interpreted as such a particle simply not existing.
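The estimate above uses the crude choices $\Delta x = L$ and $\Delta p = 2|p|$; computing the actual standard deviations from $\psi_n$ gives a smaller but still compliant product, $\Delta x\,\Delta p = n\pi\hbar\sqrt{1/12 - 1/(2\pi^2 n^2)}$. A sketch checking this by midpoint-rule integration (box length set to 1 and $\hbar = 1$ for simplicity):

```python
import math

def uncertainty_product(n, steps=20000):
    """Delta-x * Delta-p (in units of hbar) for the box state psi_n, L = 1."""
    L = 1.0
    dx = L / steps
    ex = ex2 = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx  # midpoint rule
        psi2 = (2.0 / L) * math.sin(n * math.pi * x / L) ** 2
        ex += x * psi2 * dx
        ex2 += x * x * psi2 * dx
    dx_std = math.sqrt(ex2 - ex * ex)
    dp_std = n * math.pi / L  # sqrt(<p^2>) in units of hbar; <p> = 0 by symmetry
    return dx_std * dp_std

# Every stationary state clears Heisenberg's bound of 1/2 (in hbar units)
products = [uncertainty_product(n) for n in (1, 2, 3)]
```

For $n=1$ the product is about $0.568\,\hbar$, already above $\hbar/2$, and it grows with $n$.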
{ "source": [ "https://physics.stackexchange.com/questions/335184", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/140512/" ] }
335,788
If one sings the letter "A" and "M" at the same volume and pitch, the two letters are still differentiable. If both pitch and volume are the same however, shouldn't the sound be the exact same?
You don't sing a single pitch - you sing a frequency and its harmonics. Using a simple spectrum analyzer, this is me "singing" the letter A and M, alternately (AMAMA, actually): The letter "A" is the one with more harmonics (brighter lines at higher frequencies), the letter "M" seems to have a bigger second harmonic. The frequency scale is not calibrated correctly (cheap iPhone app...) Here are two other shots, side by side (M, then A). You can see that the 2nd harmonic of the M is bigger than the first; by contrast, the higher harmonics from the A are dropping off more slowly: Simple vowels have this in common: the shape of your mouth changes the relative intensity of harmonics, and your ear is good at picking that up. Incidentally, this is the reason that it is sometimes hard to understand what a soprano is singing - at the top of her range, the frequencies that help you differentiate the different vowels might be "out of range" for your ears. For short ("plosive") consonants (P, T, B, K etc), the story is a bit more complicated, as the frequency content changes during the sounding of the letter. But then it's hard to "sing" the letter P... you could sing "peeeee", but then it's the "E" that carries the pitch. The app I used for this is SignalSpy - I am not affiliated with it in any way.
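The point about identical pitch with different harmonic weights can be sketched numerically: two synthetic tones share the same 110 Hz fundamental but use different harmonic recipes (the weights below are made up to mimic the qualitative A/M difference, not measured vowel data), and a DFT shows peaks at the same frequencies with different amplitudes.

```python
import cmath
import math

def harmonic_amplitudes(weights, f0=110, fs=8000, n=8000):
    """Synthesize a sum of harmonics of f0 and return the DFT magnitude
    at each harmonic bin (normalized by the sample count)."""
    sig = [sum(w * math.sin(2 * math.pi * (k + 1) * f0 * t / fs)
               for k, w in enumerate(weights))
           for t in range(n)]
    out = []
    for k in range(1, len(weights) + 1):
        f_bin = k * f0 * n // fs  # exact bin index of the k-th harmonic
        c = sum(s * cmath.exp(-2j * math.pi * f_bin * t / n)
                for t, s in enumerate(sig))
        out.append(abs(c) / n)
    return out

# Same pitch (110 Hz), different harmonic recipes:
spec_a = harmonic_amplitudes([1.0, 0.8, 0.7, 0.6])  # "A"-like: slow roll-off
spec_m = harmonic_amplitudes([1.0, 1.2, 0.2, 0.1])  # "M"-like: big 2nd harmonic
```

Both spectra have identical fundamentals, so the perceived pitch is the same, yet the amplitude pattern of the overtones (the timbre) differs, which is what the spectrogram above is displaying.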
{ "source": [ "https://physics.stackexchange.com/questions/335788", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/140126/" ] }
336,386
When we put our hands in front of an A/C, it blows out cold wind. This wind has high kinetic energy but low temperature. How? (Don't get confused by the A/C being a heat pump; it's just an example. Take Antarctic blizzards instead.) I can't understand the paradox of low-temperature winds, since temperature is something defined by kinetic energy.
The average speed of an air molecule can be approximated by the following equation, which is exact only in the case of an ideal gas: $$\langle v \rangle = \sqrt{\frac{8RT}{\pi M}}$$ This means at $25$°C ($298$ K) air molecules will be moving randomly at an average speed of $\simeq 467$ m/s. Let's say that the AC cools the air to $15$°C ($288$ K) before blowing it out into the room. From the above formula, the average molecular speed will then be $\simeq 459$ m/s. When the AC blows out the air, it does so at $0.1-0.3$ m/s (1). This means that, in the worst case, a motion of $0.3$ m/s is superimposed on an average motion which is at around $460$ m/s, more than a thousand times faster. You can then see how the movement of the air mass as a whole is negligible: what matters is the average molecular speed in the rest frame of the air mass. Also, even if you use a simple fan instead of the AC you will perceive the air flux hitting your skin as colder. This is known as convective cooling. See for example this post for a simple explanation. (1) Source
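Plugging numbers into the Maxwell-Boltzmann mean-speed formula is a one-liner; a sketch reproducing the speeds quoted above (taking the mean molar mass of air as roughly 0.0289 kg/mol):

```python
import math

R = 8.314      # gas constant, J/(mol*K)
M_AIR = 0.0289  # approximate mean molar mass of air, kg/mol

def mean_speed(temp_k):
    """Maxwell-Boltzmann mean molecular speed for an ideal gas, in m/s."""
    return math.sqrt(8 * R * temp_k / (math.pi * M_AIR))

v_room = mean_speed(298)  # ~467 m/s at 25 C
v_cool = mean_speed(288)  # ~459 m/s at 15 C
DRIFT = 0.3               # m/s, worst-case bulk flow speed from the AC outlet
```

The bulk drift is over a thousand times slower than the random thermal motion, which is the quantitative content of the answer.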
{ "source": [ "https://physics.stackexchange.com/questions/336386", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/154547/" ] }
336,709
There is probably some reason for this, but I can't figure out what it is. I agree that it probably doesn't happen 100% of the time, but almost all of the time, the cream is clinging to just one of the cookie sides.
The "stuff" sticks to itself better than it sticks to the cookie. Now if you pull the cookies apart, you create a region of local stress, and one of the two interfaces will begin to unstick. At that point, you get something called "stress concentration" at the tip of the crack (red arrow) - where the tensile force concentrates: To get the stuff to start separating at a different part of the cookie, you need to tear the stuffing (which is quite good at sticking to itself) and initiate a delamination at a new point (where there is no stress concentration). Those two things together explain your observation. Cookie picture credit (also explanation about manufacturing process introducing a bias) Update A plausible explanation was given in this article describing work by Cannarella et al: Nabisco won’t divulge its Oreo secrets, but in 2010, Newman’s Own—which makes a very similar “Newman-O”—let the Discovery Channel into its factory to see how their version of cookies are made. The key aspect for twist-off purposes: A pump applies the cream onto one wafer, which is then sent along the line until a robotic arm places a second wafer on top of the cream shortly after. The cream always adheres better to one of these wafers—and all of the cookies in a single box end up oriented in the same direction. Which side is the stronger wafer-to-cream interface? “We think we know,” says Spechler. The key is that fluids flow better at high temperatures. So the hot cream flows easily over the first wafer, filling in the tiny cracks of the cookie and sticking to it like hot glue, whereas the cooler cream just kind of sits on the edges of those crevices.
{ "source": [ "https://physics.stackexchange.com/questions/336709", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/44881/" ] }
337,075
So if I have a big object with a mass of, let's say, 100 kg on the floor, why can't I push that object upwards with a force of 1 N? If we add all the forces on the object (see picture below), what's wrong with my thinking? Is the gravitational force somehow getting bigger to match the difference in forces? All help is appreciated (and I'm sorry if I posted this in the wrong place).
The gravitational force would not be getting bigger, but the upward normal force would be getting smaller. You can try this out by putting an object on a scale, and then pull up on it a bit. The object won't be lifted, but the reading on the scale will go down.
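The force balance here is just $N = mg - F_{\text{pull}}$, clamped at zero once the pull exceeds the weight; a minimal sketch of the scale-reading experiment:

```python
G = 9.81  # gravitational acceleration, m/s^2

def scale_reading_newtons(mass_kg, upward_pull_n):
    """Normal force on a resting object being pulled upward.
    Drops to zero once the pull exceeds the weight (object lifts off)."""
    return max(mass_kg * G - upward_pull_n, 0.0)

n_rest = scale_reading_newtons(100, 0)     # 981 N: full weight on the scale
n_pull = scale_reading_newtons(100, 1)     # 980 N: reading drops by exactly 1 N
n_lift = scale_reading_newtons(100, 2000)  # 0 N: the object has left the floor
```

The 1 N push is not "lost"; it simply reduces the normal force by 1 N, and the object only rises once the pull exceeds the full 981 N weight.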
{ "source": [ "https://physics.stackexchange.com/questions/337075", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/157496/" ] }
337,423
I'm confused as to how quantum fields are defined mathematically, and I've seen from questions on this site and Wikipedia articles that classical fields are just functions that output a field value for a given point in space input. Is this the same for quantum fields? Are quantum fields also just functions? If so, how do they account for the laws of quantum mechanics? I've also seen answers on here saying things about operator valued distributions, etc. Are these operators the creation and annihilation operators of second quantization? Also, if the field is a field of operators, then how do we determine the value of the field at a point? I have all these snippets of knowledge, and I'm not sure how they fit together to mathematically describe quantum fields. Finally, I'm confused as to how it works with the rest of QFT, and I guess this is my main question; if a quantum field is just a field of creation and annihilation operators, or even some other operators, how do we define particles and their interactions? You always hear that "particles are just excitations in their quantum fields." But mathematically how does this work? And how does it fit in with the other bits I've mentioned?
There is no mathematically sound formulation of realistic QFT yet, so at this point we have no real answer to your question. The QFT that physicists use to make predictions is in the so-called Lagrangian formulation, which is a heuristic framework for obtaining perturbative expansions using Feynman diagrams. There is also algebraic or axiomatic QFT , mathematically well-defined but so far confined to free theories and toy models. The idea is that QFT must satisfy a list of axioms, the Wightman axioms being most commonly used, and the challenge is to construct realistic theories that satisfy them. Mathematically constructing a Yang-Mills theory with a mass gap is one of the Millennium problems. In algebraic QFT fields are identified with operator-valued distributions, and the Fock space picture is a dual representation of them. This duality is similar to the Schrödinger vs Heisenberg pictures in quantum mechanics. The idea is that the Hilbert space of quantum fields, as distributions associated to localized regions of spacetime, is unitarily equivalent to the Fock space, where creation and annihilation operators are defined, and which is much more commonly used in practice. That is the Fock space of second quantization, so those operators are not the same as the field operators, which are quantized versions of classical fields (intuitively, the Fock space operators are "global" whereas the field operators are "localized"): " Fortunately, the operators on a QFT Hilbert space include a set of field operators. If a particular wave equation is satisfied by a classical field $\phi(x)$, it will also be satisfied in operator equation form by a set of operators $\widehat{\phi}(x)$ on the state space of the quantized version of the field theory. Speaking somewhat imprecisely, $\widehat{\phi}(x)$ acts like a field of operators, assigning to each point x an operator with expectation value $(\psi,\widehat{\phi}(x)\psi)$. 
As the state evolves dynamically, these expectation values will evolve like the values of a classical field. The set of field operators is sometimes called the operator-valued quantum field . One caveat that will be important later: Strictly speaking, we cannot construct a nontrivial field of operators $\widehat{\phi}(x)$ defined at points. But it is possible to define a “smeared” quantum field by convolution with test functions. [...] We need an interpretation of field-theoretic states to determine which physically contingent facts they represent. In single-particle QM, a state is a superposition of states with determinate values for the theory’s observables (e.g. position and momentum)... in field theories we’re interested in systems that take on values for some field $\phi(x)$ and its conjugate momentum $\pi(x)$. So, when quantizing a field theory, we should just do to the field what we did to the mechanical system to generate QM. Impose commutation relations on $\phi(x)$ and $\pi(x)$, and move our states to the Hilbert space of wavefunctionals ($\Psi(\phi)$) that describe superpositions of different classical field configurations. The equivalence to Fock space picture can be proved for free QFT, but axiomatic QFT has difficulties incorporating interactions or defining position operators. Because of this some argue that neither quantum field nor Fock space/particle interpretations can survive in a mathematically mature QFT, see e.g. Baker's Against Field Interpretations of Quantum Field Theory , from which the above quote is taken. Wallace has a nice review In defence of naivete: The conceptual status of Lagrangian QFT that analyzes the mathematical structure of QFT as it is practiced, and argues to the contrary that it can be seen as a valid approximation of what algebraic QFT may one day yield. 
If that is the case, then operator-valued distributions and Fock space states, interpreted as particle states, will be effective realizations of what quantum fields "are" at low energy levels: " We have argued that such QFTs can be made into perfectly well-defined quantum theories provided we take the high-energy cutoff absolutely seriously; that the multiple ways of doing this are not in conflict provided that we understand them as approximations to the structure of some deeper, as yet unknown theory; that the existence of inequivalent representations is not a problem; that a concept of localisation can be defined for such theories which is adequate to analyse at least some of the practical problems with which we are confronted; and that the inexactness inherent in that concept is neither unique to relativistic quantum mechanics, nor in any way problematic."
{ "source": [ "https://physics.stackexchange.com/questions/337423", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/138757/" ] }
337,898
There is a difference between the classical field $\phi(x)$ (which appears in the classical action $S[\phi]$ ) and the quantity $\phi_c$ defined as $$\phi_c(x)\equiv\langle 0|\hat{\phi}(x)|0\rangle_J$$ which appears in the effective action. Even though $\phi_c(x)$ is referred to as the "classical field", I don't see why $\phi(x)$ and $\phi_c$ should be the same. In what sense, therefore, is the effective action $\Gamma[\phi_c]$ a quantum-corrected classical action $S[\phi]$ ? How can we compare the functionals of two different objects (namely, $\phi(x)$ and $\phi_c(x)$ ) and claim that $\Gamma[\phi_c]$ is a correction over $S[\phi]$ ? I apologize for any lack of clarity in the question and the confusion I'm hoping to clear up.
We want to calculate the path integral $$ Z = \int \mathcal{D}{\phi}\, e^{i \hbar^{-1} S[\phi]} $$ which encodes a transition amplitude between initial and final quantum states. If we had the effective action $\Gamma[\phi]$ at our disposal, we would have calculated the same result by solving for $$ \phi_c(x):\quad \left. \frac{\delta \Gamma}{\delta \phi} \right|_{\phi=\phi_c} = 0 $$ and plugging it back in the effective action: $$ Z = e^{i \hbar^{-1} \Gamma[\phi_c]}. $$ This is the definition of $\Gamma$. Note that no path integrals are required at this point. Boundary conditions are implicitly present throughout this answer, encoding the exact states between which the quantum transition occurs. Their existence ensures that there is only one solution $\phi_c$. Now to why $\phi_c$ is called classical : it solves the e.o.m. given by the action $\Gamma$. Think of $\Gamma$ as of an object in which all the short-scale properties of the integration measure $\mathcal{D}\phi$ (including renormalization-related issues) are already accounted for. You simply solve the e.o.m. and plug the solution in the exponential and you are done: here is your transition amplitude. That being said, $\Gamma$ is not classical in the sense that it still describes dynamics of a quantum theory. Only in a different fashion. Simple algebraic manipulations instead of path integrals. Finally, note how if the path integral is Gaussian, $$\Gamma[\phi] = S[\phi] + \text{const},$$ where $\text{const}$ accounts for the path integral normalization constant. There are no quantum corrections. In classical theory, however, we solve the e.o.m. w.r.t. $\phi = \phi_c$ for $S[\phi]$, not $\Gamma[\phi]$. Plugging it back into $S[\phi_c]$ gives us the Hamilton function. When the path integral is Gaussian, it doesn't matter if we use $S$ or $\Gamma$, and exponentiating the Hamilton function gives you the transition amplitude. 
However, if we are dealing with an interacting theory, the correct way to do this would be to use $\Gamma$ instead of $S$. In this sense, $\Gamma$ is the quantum-corrected version of $S$. And yes, it is always true (can be shown using the saddle point approximation formula) that $$ \Gamma[\phi] = S[\phi] + \mathcal{O}(\hbar). $$

Why wouldn't we just use $\Gamma[\phi]$ to define the quantum theory and forget about $S[\phi]$ altogether? Because $\Gamma$ is non-local and contains infinitely many adjustable parameters. These can be determined from the form of $S[\phi]$ by, well, quantization. That's why it is $S[\phi]$ which defines the theory, not $\Gamma$. $\Gamma$ is to be calculated via path integrals.

UPDATE: It is also important to understand that naive QFT suggests $\Gamma$ contains divergences, while $S$ doesn't. However, the actual situation is the opposite. It is $S$ which contains divergences (divergent bare couplings), which cancel out against the divergences coming from the path integral, rendering a finite (i.e. renormalized) $\Gamma$. That $\Gamma$ should be finite is evident from how we use it to calculate physical properties: we only solve the e.o.m. and plug the result back in $\Gamma$. Actually, the whole point of renormalization is to make $\Gamma$ finite and well-defined while adjusting only a finite number of diverging couplings in the bare action $S$.
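The Gaussian statement above can be checked numerically in a zero-dimensional toy model (a minimal sketch, Euclidean signature and arbitrary illustrative values of $a$ and $\hbar$ assumed): the saddle-point evaluation reproduces the full integral exactly, up to a source-independent normalisation constant, i.e. $\Gamma = S + \text{const}$.

```python
import numpy as np

# 0-dimensional "path integral" toy model (Euclidean), a sketch:
# Z(j) = ∫ dφ exp(-(a φ²/2 - j φ)/ħ).
# For a Gaussian action the saddle point is exact up to a j-independent
# normalisation, illustrating Γ[φ] = S[φ] + const.

a, hbar = 2.0, 0.7                         # assumed illustrative values
phi = np.linspace(-40.0, 40.0, 400001)
dphi = phi[1] - phi[0]

def S(p, j):
    return 0.5 * a * p**2 - j * p          # Gaussian action with source j

def Z_exact(j):
    return np.sum(np.exp(-S(phi, j) / hbar)) * dphi

def Z_saddle(j):
    phi_c = j / a                          # solves dS/dφ = 0
    norm = np.sqrt(2 * np.pi * hbar / a)   # the j-independent "const"
    return norm * np.exp(-S(phi_c, j) / hbar)

for j in (0.0, 0.5, 1.3):
    print(j, Z_exact(j), Z_saddle(j))      # agree to integration accuracy
```

For a non-Gaussian (interacting) action the two would differ at order $\hbar$, which is exactly the quantum correction the answer describes.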
{ "source": [ "https://physics.stackexchange.com/questions/337898", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/36793/" ] }
337,945
Light is bent near a mass (for example when passing close to the sun as demonstrated in the famous sun eclipse of 1919). I interpret this as an effect of gravity on the light. However, it seems (to me, at least) that light is not accelerated when it travels directly toward the (bary-)center of the sun. The same gravitational force applies yet the speed of light remains constant (viz. $c$). What am I missing?
One thing that the previous answers are missing -- the light is accelerated; it just is accelerated according to the rules of special relativity, which says that it cannot pick up speed when already travelling at the speed of light. Instead, it gains kinetic energy the way a photon gains kinetic energy -- by being blueshifted to a higher frequency, which does translate to more energy, according to the Planck relation $E = h\nu$.
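As a rough sketch of the size of this effect (the weak-field formula $\Delta\nu/\nu \approx GM/(Rc^2)$ is assumed here; it is not part of the original answer), the blueshift picked up by light falling from far away down to the solar surface is:

```python
# Weak-field estimate of the gravitational blueshift of a photon falling
# from infinity to the Sun's surface: Δν/ν ≈ GM/(R c²). Assumed formula
# and standard constant values, for illustration only.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30        # solar mass, kg
R = 6.957e8         # solar radius, m
c = 2.998e8         # speed of light, m/s

shift = G * M / (R * c**2)
print(f"fractional blueshift at the solar surface: {shift:.2e}")  # ~2e-6
```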
{ "source": [ "https://physics.stackexchange.com/questions/337945", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/70582/" ] }
338,044
We know that for a position variable $x$ and momentum $p$, the uncertainties of the two quantities are bounded by $$\Delta x \Delta p \gtrsim \hbar$$ Now, this is usually first explained with $x$ being a simple linearly measured position and $p$ being linear momentum. But it should apply to any good coordinate and its conjugate momentum. It should, for instance, apply to angle $\phi$ about the $z$ axis, and angular momentum $L_z$: $$\Delta \phi \Delta L_z \gtrsim \hbar$$ The thing is, $\Delta \phi$ can never be greater than $2\pi$. I mean, you have to have some value of $\phi$ and $\phi$ only runs from 0 to $2\pi$. Therefore $$\Delta L_z \gtrsim \hbar/\Delta \phi \geq \hbar/2\pi$$ But, uh-oh! This means it is impossible for $\Delta L_z$ to be zero, and we should never be able to have angular momentum states with definite $L_z$ values. Of course, it doesn't mean that. But I have never figured out how this is not in contradiction with the Schroedinger eqn. calculations that give us states with definite values of $L_z$. Can anyone help me out? One answer I anticipate is that $\phi$ is sort of "abstract" in that if you chose your origin at some other point you will get completely different values of $\phi$ and $L_z$, and ipso facto , usual considerations don't apply. I don't think this will work, though. Consider a "quantum bead" sliding around on a rigid circular ring and you get the exact same problem with no ambiguity in $\phi$ or $L_z$. (Well, there will be some limited ambiguity in $\phi$, but still, there won't be in $L_z$.)
The problem here is there is at this time still no "legitimate" self-adjoint phase operator. As you phrase the problem, you assume that $\hat \phi$ and $\hat L_z$ would have the same commutation relations as $\hat x$ and $\hat p$, and in particular given that $\hat L_z\mapsto -i\hbar d/d\phi$ the $\hat \phi$ operator would be multiplication of an arbitrary function $f(\phi)$ by $\phi$, i.e. $$ \hat L_zf(\phi)=-i\hbar \frac{df}{d\phi}\, ,\qquad \hat \phi f(\phi)= \phi f(\phi) $$ Thus far everything is fine except that, when it comes to boundary conditions, we must have $f(\phi+2\pi)=f(\phi)$. However, the function $\phi f(\phi)$ does not satisfy this. As a result, the action of a putative $\hat \phi$ as defined above takes a "legal" function $f(\phi)$ that satisfies the boundary conditions to an "illegal" one $\phi f(\phi)$, and makes $\hat \phi$ NOT self-adjoint (which means trouble). The uncertainty relation assumes that the operators involved are self-adjoint. Since there is (thus far) no known definition of $\hat \phi$ that makes it self-adjoint, the quantity $\Delta \phi$ cannot be computed in the usual way and indeed is not necessarily well defined for arbitrary states. In other words, there is no mathematical reason to believe that $\Delta \phi\Delta L_z\ge \hbar /2$.

Indeed an obvious "problem" with your expression is obtained by taking $f(\phi)$ to be an eigenstate of $\hat L_z$. Then clearly $\Delta L_z=0$ so the putative variance $\Delta \phi$ would have to be arbitrarily large, which is impossible given that $\phi$ physically ranges from $0$ to $2\pi$.

The problem of constructing a self-adjoint phase operator is an old one. It has been the subject of several questions on this site, including this one . Finding a good definition of a phase operator remains an open research problem. Edit: added some clarifications after a query.
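The "obvious problem" above can be illustrated numerically (a sketch, not part of the original answer): for an $\hat L_z$ eigenstate $\psi_m(\phi)=e^{im\phi}/\sqrt{2\pi}$, the naive spread of $\phi$, treating $\phi$ as an ordinary multiplication operator on $[0,2\pi)$, is finite, while $\Delta L_z = 0$, so the putative product $\Delta\phi\,\Delta L_z$ vanishes and the $x$–$p$ style bound cannot hold.

```python
import numpy as np

# |ψ_m(φ)|² = 1/(2π) is uniform on [0, 2π) for every m, so the naive
# variance of φ is that of a uniform distribution: Δφ = 2π/√12.
# ΔL_z is exactly zero for an eigenstate, hence Δφ·ΔL_z = 0 < ħ/2.

phi = np.linspace(0.0, 2*np.pi, 200001)
dphi = phi[1] - phi[0]
prob = np.full_like(phi, 1.0/(2*np.pi))        # |ψ_m|², any m

mean = np.sum(phi * prob) * dphi
var = np.sum((phi - mean)**2 * prob) * dphi
delta_phi = np.sqrt(var)

print(delta_phi, 2*np.pi/np.sqrt(12))          # both ≈ 1.8138
```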
{ "source": [ "https://physics.stackexchange.com/questions/338044", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/93062/" ] }
338,089
I know a baryon is only stable when it contains a quark of each color. And as far as I know, the gluon essentially changes the color of a quark and moves onto the next, and this is what holds the particles together. But in the process of the gluon moving from one quark to the next, wouldn't the baryon have two quarks of the same color, making it unstable? Or does the gluon move instantaneously, or is the baryon not unstable enough to decay before the gluon reaches the next quark? Or... essentially, how does this process actually work?
The idea that baryons contain three quarks is a significant oversimplification. It works for some purposes, but in this case it causes way more confusion than it's worth. So you should stop thinking of baryons as groups of three quarks and start thinking of them as excitations in quantum fields - and in particular, excitations in all the quantum fields at once. Quark fields, gluon fields, photon fields, and everything. These excitations propagate through spacetime and convert among each other as they go, and in a baryon, the propagation and mutual conversion happen to sustain each other so that the baryon can exist as a coherent particle for a while.

One of the conditions required of all these excitations in fields is that they be a color singlet, which is the strong interaction's version of being uncharged. There's a simple intuitive justification for this: just as an electrically charged particle will tend to attract oppositely charged particles to form neutral composites (like protons and electrons attracting each other to form atoms), something which has the charge associated with the strong interaction (color charge) will attract other color-charged particles to form neutral composites (color singlets).

Now, if you literally only had three quarks, the only way to make them a color singlet is to have one be red, one be green, and one be blue. 1 (Or the anticolor equivalents.) But with all the complicated excitations that make up a baryon, there are all sorts of ways to make a color singlet. You could have three red quarks, a green-antired gluon, and a blue-antired gluon. Or two red quarks, two green quarks, an antiblue antiquark, a blue-antired gluon, and a blue-antigreen gluon. Or so on; the possibilities are literally infinite. The point is that you don't actually have to have a quark of each color in the baryon at all times. Only the total color charge in the baryon matters.
Given that, it should seem reasonable that gluons change the color of quarks whenever they are emitted or absorbed, in a way that keeps the total color charge the same. For example, a blue quark could absorb a green-antiblue gluon and become a green quark.

1. I'm glossing over some quantum-mechanical details here; specifically, a color singlet wavefunction needs to be an antisymmetrized linear combination, like $\frac{1}{\sqrt{6}}(rgb - rbg + gbr - grb + brg - bgr)$, not just $rgb$. But as long as you don't worry about which quark is which color, for purposes of this answer it's safe to ignore this.
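The antisymmetrized combination in the footnote is just the Levi-Civita tensor $\epsilon_{ijk}/\sqrt{6}$ in colour space. A small numerical check (a sketch, with the colours labelled 0, 1, 2) that it is normalised and flips sign under exchanging any two quarks' colour labels:

```python
import numpy as np

# Build ε_ijk as a 3×3×3 tensor and verify the singlet ε_ijk/√6 is
# normalised and totally antisymmetric, matching the footnote's
# (rgb - rbg + gbr - grb + brg - bgr)/√6 combination.

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:   # even permutations
    eps[i, j, k] = 1.0
for i, j, k in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:   # odd permutations
    eps[i, j, k] = -1.0

singlet = eps / np.sqrt(6)

print(np.sum(singlet**2))                              # 1.0 (normalised)
print(np.allclose(np.swapaxes(singlet, 0, 1), -singlet))  # antisymmetric
```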
{ "source": [ "https://physics.stackexchange.com/questions/338089", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/158497/" ] }
338,113
I do understand that open water and open ground cools by the means of convection — lower air takes the heat and goes up, where it cools. But why does the Earth lose energy and where does it go? Does it emit heat radiation to space? Does the heat go underground? What does happen to it?
It emits infrared radiation to space. This happens constantly, not only during the night, but during the day the net energy flux is positive because the amount of energy coming from the Sun is much higher. Things are complicated a little bit by the atmosphere, meaning that most of the radiation emitted from the ground does not reach space immediately, but instead gets first absorbed by the atmosphere and then re-emitted to space. Also, some of the energy is first transported to the atmosphere by evaporation and condensation before being radiated away by it. If you want to know more, this introductory explanation by NASA looks good. In the picture (taken from the NASA website I linked above): satellite map showing the distribution of thermal infrared radiation emitted by Earth in September 2008.
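For a sense of scale (a back-of-the-envelope sketch, treating the surface as an ideal black body at an assumed mean temperature of 288 K), the Stefan–Boltzmann law gives the emitted infrared flux:

```python
# Stefan–Boltzmann estimate of Earth's surface infrared emission, σT⁴.
# The 288 K mean surface temperature is an assumed textbook value.
sigma = 5.670e-8    # Stefan–Boltzmann constant, W m^-2 K^-4
T = 288.0           # assumed mean surface temperature, K

flux = sigma * T**4
print(f"{flux:.0f} W/m^2")   # ≈ 390 W/m^2
```

This is comparable to the figure usually quoted in introductory treatments of Earth's energy budget, and (as the answer notes) most of it is intercepted by the atmosphere before reaching space.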
{ "source": [ "https://physics.stackexchange.com/questions/338113", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/44537/" ] }
338,711
How does 'Tear Here to Open' work? Why is it easier to tear there as opposed to anywhere else? What is its physics?
All the force will be concentrated at the tear (the tip of the tear) if there is one. With no tear, all the material helps to hold itself together — but with a tear, all the force is localized at this tiny point and only a tiny amount of material is responsible for holding on. The stress at this tear tip (the force per area) becomes huge because the area is very small and it overcomes the strength (the maximum or ultimate stress) of the material. On the illustration below you can clearly see that only a few particles are helping to hold the material together — they carry the whole burden. The colours show the stress distribution.

Were the material more flexible and less rigid, the stress would have been spread out more, since the bonds between atoms/molecules would allow some stretching for them to "lean back" on their neighbor atoms/molecules. This is why a crack propagates very fast in rigid or brittle materials like concrete or glass — a material might be strong and can withstand large forces for a long time, but as soon as a single tiny crack appears, the whole thing breaks in a split second because all the force is suddenly on just one point, which breaks and lets the next point take over, which breaks, etc.

The more rigid and brittle the material is, the less flexible and deformable it is, meaning that the material will not absorb as much of the energy but instead have it all concentrated at the crack. This is why glass looks like it almost explodes when cracking — the cracks are propagating at huge speed. Plastic sheets on the other hand will only tear in two if, along the way, you help and localize your force correctly, and you can control the speed. It will also feel warmer at the tearing edges, because energy was absorbed.
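The stress concentration described above can be made quantitative with the classic Inglis result for an elliptical notch (not part of the original answer; the crack dimensions below are assumed for illustration): the stress at the tip is amplified by a factor $1 + 2\sqrt{a/\rho}$, where $a$ is the crack half-length and $\rho$ the tip radius.

```python
import math

# Inglis stress-concentration factor for an elliptical crack:
# σ_tip / σ_applied = 1 + 2√(a/ρ). Sharper tips → larger amplification,
# which is why a small pre-cut notch makes tearing so much easier.

def concentration_factor(a, rho):
    return 1 + 2 * math.sqrt(a / rho)

a = 1e-3      # assumed 1 mm crack half-length
rho = 1e-6    # assumed 1 µm tip radius
print(concentration_factor(a, rho))   # ≈ 64× the applied stress
```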
{ "source": [ "https://physics.stackexchange.com/questions/338711", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/148764/" ] }
338,731
I'm sure this is a silly question, but if the Laplace transform of a constant is not a constant, e.g. $$\mathfrak{L}[1] = \frac{1}{s}$$ then how come the impedance of a resistor is the same in the time domain as in the frequency domain? That is, why, for a resistor of resistance R, $$\mathfrak{L}[R] = R~$$ Couldn't you just take out the factor of $R$ and have $R \cdot \mathfrak{L}[1] = R \cdot \frac{1}{s}$?
Impedance is usually done in Fourier analysis, not Laplace transform, but that's a quibble. The reality is that you transform the signal, not the impedance. The impedance is an operator that modifies the signal, and it's the effect that operator has on the signal in the transformed domain that we deal with. To review, the basic linear circuit elements are: resistor: $V=I\,R$ ($R$ real), capacitor: $V = \frac{1}{C}\int I \operatorname{d}t$, and inductor: $V = L \frac{\operatorname{d} I}{\operatorname{d} t}$. Putting them all on the same footing, we can write them all as operators that have the form $V(t) = \int A(t,t')\, I(t') \operatorname{d}t'$. On doing so the kernels of the operators ($A$) are given by: $R\, \delta(t-t')$, $\frac{1}{C}\, \Theta(t-t')$, and $L\, \frac{\partial}{\partial t} \delta(t-t')$, respectively. Fourier transforming the time domain kernel with respect to $t$ and inverse Fourier transforming with respect to $t'$ gives the frequency space representation of a kernel, $V(\omega) = \int A(\omega,\omega')\, I(\omega') \operatorname{d}\omega'$. For the three linear elements their angular frequency domain kernels are: $R\, \delta(\omega-\omega')$, $\frac{1}{i\omega C}\, \delta(\omega-\omega')$, and $i\omega L\, \delta(\omega-\omega')$. The short version usually given is: Fourier/Laplace transforms are linear, so multiplying one version of the signal is the same as multiplying the other by the same constant; derivatives become multiplication by frequency; and integrals become division by frequency.
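The "derivatives become multiplication by frequency" rule can be verified numerically (a sketch assuming an inductor obeying $V = L\,\mathrm{d}I/\mathrm{d}t$ and an arbitrary test current): applying the impedance $i\omega L$ in the frequency domain reproduces the time-domain derivative.

```python
import numpy as np

# Check that multiplying I(ω) by the inductor impedance iωL and
# transforming back gives L dI/dt in the time domain.

L = 2.0                                  # assumed inductance
N = 1024
t = np.linspace(0.0, 1.0, N, endpoint=False)
f0 = 5.0                                 # Hz; fits the periodic window
I = np.cos(2*np.pi*f0*t)

omega = 2*np.pi*np.fft.fftfreq(N, d=t[1]-t[0])
V_freq = 1j*omega*L*np.fft.fft(I)        # impedance iωL acting on I(ω)
V_time = np.fft.ifft(V_freq).real

V_exact = -L*2*np.pi*f0*np.sin(2*np.pi*f0*t)   # L dI/dt by hand
print(np.max(np.abs(V_time - V_exact)))        # ~machine precision
```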
{ "source": [ "https://physics.stackexchange.com/questions/338731", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/58082/" ] }
338,741
I have a theory: I think that we cannot travel at speeds faster than light because, as we know, when you travel at speeds near light time passes slowly, and if we go as fast as light time may perhaps pause, and if time pauses it's impossible to have motion, because in 0 seconds (I mean if no time has passed) you cannot travel any distance. I have also another theory: I think that if we go a little higher than light speed, instead of pausing, time may reverse, and if time reverses an object will never exist in space but will continuously go back in time and reach the Big Bang. The object will never exist in space but actually in a TIME dimension (I know the word dimension is wrong but I couldn't think of a better one). Can any one of these 'theories' be true, even a very tiny bit? Please point out my mistakes. I am just a kid though, in 9th grade. But I really wonder whether these could be true or not.
{ "source": [ "https://physics.stackexchange.com/questions/338741", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/157555/" ] }
338,912
In a hypothetical situation I'm still sitting in a coffee shop but a gravitational wave similar to the three reported by LIGO passes through me from behind. The source is much closer though, so this one is perceptible but (let's hope) not yet destructive. (If a perceptible event would destroy the Earth, please put my coffee shop in a large spacecraft if necessary) Let's say the orbital plane of the black holes is coplanar to the floor of my coffee shop so that the wave is aligned vertically/horizontally. If I stand up and extend my arms, will I sense alternating compression and pulling as if I were in an oscillating quadrupole field (pulling on my arms while compressing my height, and vice versa)? The term "strain" is used do describe the measurement, but would I feel the effect of this strain as a distributed force gradient, so that my fingers would feel more pulling than my elbows? If I had a ball on the end of a stretchy rubber band, would it respond to this strain (especially if $k/m$ were tuned to the frequency of the wave)? Would I see it start oscillating? There is an interesting and somewhat related question and answer; How close would you have to be to the merger of two black holes, for the effects of gravitational waves to be detected without instruments? but I'm really trying to get at understanding what the experience would be like hypothetically. This answer seems to touch on this question but the conclusion "...if the ripples in space-time were of very long wavelength and small amplitude, they might pass through us without distorting our individual shapes very much at all." isn't enough. If it were strong enough to notice, how would a passing gravitational wave look or feel?
Let me try to answer in a few separate steps. (I'll try to make it simple and people should correct me where I oversimplify things.) What is the effect of a gravitational wave on a physical object? Let's start with just two atoms, bound to each other by interatomic forces at a certain effective equilibrium distance. A passing gravitational wave will start to change the proper distance between the two atoms. If for example the proper distance gets longer the atoms will start to experience an attractive force, pulling them back to equilibrium. Now, if the change of GW strain happens slow enough (for GW frequencies far below the system's resonance) everything will essentially stay in equilibrium and nothing really happens. Stiff objects will keep their length. However, for higher GW frequencies, and especially at the mechanical resonance, the system will experience an effective force and will be excited to perform real physical oscillations. It could even keep ringing after the gravitational wave has passed. If they are strong enough, these oscillations are observable as any other mechanical oscillations. All this stays true for larger systems like your example of a ball on a rubber band or for a human body. It is also how bar detectors work. How would a human experience this? So, a gravitational wave exerts forces on your body by periodically stretching and compressing all the intermolecular distances inside it. That means you will basically be shaken from the inside. With reference to the stiffer parts of your body the really soft parts will move by the relative amount that is given by the GW strain $h$ . The effect can be enhanced where a mechanical resonance is hit. I guess you would experience this in many ways just like sound waves, either like a deep rumbling bass that shakes your guts, or picked up directly by your ears. I assume that within the right frequency range the ear is indeed the most sensitive sense for these vibrations. 
Is it physically plausible that you could be exposed to high enough GW amplitudes? Let's take the GW150914 event where two black holes, of several solar masses each, coalesced. Here on Earth, an estimated 1.3 billion lightyears away from the event, the maximum GW strain was of the order of $h\approx 10^{-21}$ at a frequency of about $250\,\mathrm{Hz}$. The amplitude of a gravitational wave decreases with $1/r$, so we can calculate what the strain was closer by: Let's go as close as 1 million kilometres, which is about 1000 wavelengths out and so clearly in the far field (often everything from 2 wavelengths is called far field). Tidal forces from the black holes would be only about 5 times higher than on Earth, so perfectly bearable. At this distance the strain is roughly $h\approx 10^{-5}$. That means that the structures of the inner ear that are maybe a few millimetres large would move by something of the order of a few tens of nanometres. Not much, but given that apparently our ears can pick up displacements of the ear drum of mere picometres that's probably perfectly audible!
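The $1/r$ scaling used above can be sketched in a few lines (GW150914 numbers as quoted in the answer; the light-year conversion factor and the millimetre ear-structure scale are assumed values):

```python
# Scale the GW150914 strain from Earth's distance down to 1 million km,
# then estimate the displacement of a mm-scale inner-ear structure.
h_earth = 1e-21                 # strain observed at Earth (quoted above)
d_earth = 1.3e9 * 9.461e15      # 1.3 billion light years, in metres
d_close = 1e9                   # 1 million km, in metres

h_close = h_earth * d_earth / d_close
print(f"strain at 1e6 km: {h_close:.1e}")          # ~1e-5

ear = 3e-3                       # assumed mm-scale ear structure, m
print(f"displacement: {h_close*ear*1e9:.0f} nm")   # a few tens of nm
```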
{ "source": [ "https://physics.stackexchange.com/questions/338912", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/83380/" ] }
339,489
I was reading some popular science book and there was this sentence. It says that energy has to be emitted in discrete portions called quanta. Otherwise the whole energy in the universe would be converted into high frequency waves. I'm not a physicist, so this conclusion seems to me like a huge leap. First, we assume that energy is emitted in a continuous way (not in quanta). And how do we get to the statement "the whole energy in the universe would be converted into high frequency waves"?
The book is almost surely referring to the ultraviolet catastrophe . Classical physics predicts that the spectral energy density $u (\nu,T)$ of a black body at thermal equilibrium follows the Rayleigh-Jeans law : $$u(\nu,T) \propto \nu^2 T$$ where $\nu$ is frequency and $T$ is temperature. This is clearly a problem, since $u$ diverges as $\nu \to \infty$ (1). The problem was solved when Max Planck made the hypothesis that light can be emitted or absorbed only in discrete "packets", called quanta . The correct frequency dependence is given by Planck's law : $$u(\nu,T) \propto \frac{\nu^3}{\exp\left(\frac{h \nu}{k T}\right)-1}$$ You can verify that the low-frequency ($\nu \to 0$) approximation of Planck's law is the Rayleigh-Jeans law. (1) To be more specific: if you consider electromagnetic radiation in a cubical cavity of edge $L$, you will see that all the frequencies in the form $$\nu =\frac{c}{2 L} \sqrt{(n_x^2+n_y^2+n_z^2)}$$ with $n_x,n_y,n_z$ integers, are allowed. This basically means that we can consider frequencies as high as we want to, which is a problem, since we have seen that when the frequency goes to infinity the energy density diverges. So, if we used the Rayleigh-Jeans law, we would end up by concluding that a cubic box containing electromagnetic radiation has "infinite" energy. It is maybe this that your book is referring to when it says that " the whole energy in the universe would be converted into high frequency waves " (even if, if this is a literal quote, the wording is quite poor).
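A quick numerical check (a sketch; the overall proportionality constants quoted above are dropped, which cancels in the ratio) that Planck's law reduces to the Rayleigh–Jeans law at low frequency:

```python
import math

# Ratio of the two spectral densities at T = 300 K. With the shared
# proportionality constant dropped, the ratio is x/(e^x - 1), x = hν/kT,
# which → 1 as ν → 0 and → 0 at high frequency (curing the UV catastrophe).

h = 6.626e-34   # Planck constant, J s
k = 1.381e-23   # Boltzmann constant, J/K
T = 300.0       # assumed temperature, K

def planck(nu):
    return nu**3 / math.expm1(h*nu/(k*T))   # expm1 is accurate for small x

def rayleigh_jeans(nu):
    return nu**2 * k * T / h

for nu in (1e6, 1e8, 1e10):                 # far below kT/h ≈ 6e12 Hz
    print(nu, planck(nu) / rayleigh_jeans(nu))   # → 1 as ν → 0
```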
{ "source": [ "https://physics.stackexchange.com/questions/339489", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/159132/" ] }
339,645
Suppose the pH of water is $6$. I think this means there is one $\text{H}^{+}$ ion for every $10^6$ water molecules. When we plug in the battery, I believe we see a current as the $\text{H}^{+}$ ions drift to the $-ve$ side of the battery and suck the electrons injected by the negative plate of the battery. Similarly, $\text{OH}^{-}$ ions drift to the $+ve$ plate of the battery and give an electron away to the positive plate of the battery. This way $\text{H}^{+}$ and $\text{OH}^{-}$ ions neutralize themselves as they contribute to the current. Since the ions were neutralizing themselves, would the current cease to exist after some time when all the $\text{H}^+$ ions in the water were used up?
It is energetically unfavourable to split a water molecule into the two ions $\text{H}^+$ and $\text{OH}^-$ i.e. you need to put in energy to do it. However at room temperature water molecules have a range of energies and there are always a few molecules with enough energy to ionise. So any sample of pure water at everyday temperatures always contains a few $\text{H}^+$ and $\text{OH}^-$ ions. When you apply a voltage to your electrodes in water, you convert the $\text{H}^+$ ions to hydrogen atoms and they bubble off as $\text{H}_2$. Likewise the $\text{OH}^-$ ions are converted to water and oxygen molecules and the oxygen bubbles off. The net result is to remove water from your container. But as fast as the ion concentration is lowered by electrolysis, the remaining water ionises again to keep it constant. So electrolysis of pure water does not affect the ion concentration. You are correct that the current will continue to flow until all the water has gone (i.e. converted to hydrogen and oxygen).
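As a rough sketch of the timescale involved (the 1 A current and the one-litre volume are assumed for illustration, not part of the original answer): each water molecule removed by electrolysis requires two electrons through the circuit, so a litre of water lasts a surprisingly long time.

```python
# How long would a steady 1 A current take to electrolyse 1 L of water?
# n(H2O) ≈ 55.5 mol; 2 electrons per molecule; Q = 2 n F coulombs.
F = 96485.0          # Faraday constant, C/mol
m, M = 1000.0, 18.0  # grams of water, molar mass in g/mol
I = 1.0              # assumed current, A

n_water = m / M                 # ≈ 55.5 mol
charge = 2 * n_water * F        # total charge needed, C
t_days = charge / I / 86400
print(f"{t_days:.0f} days")     # ≈ 124 days at 1 A
```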
{ "source": [ "https://physics.stackexchange.com/questions/339645", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/146502/" ] }
340,170
I'm currently reading about orbits of near-Earth satellites and some terminology is getting thrown around that I'm not sure I understand what they actually mean: The Earth's monopole moment and the Earth's quadrupole moment ? What are some easily understood explanations of the above terms?
A monopole (gravitational) of a system is basically the amount of mass-energy the system has. A dipole is a measure of how the mass is distributed away from some center. The quadrupole moment describes how stretched out the mass distribution is along an axis. Quadrupole would be zero for a sphere, but non-zero for a rod, for instance. It is also non-zero for the Earth, because the Earth is an oblate spheroid. The gravitational contribution from a quadrupole falls off faster than that of a monopole (which is why the Earth's quadrupole moment is important for studying satellites and not really for studying the moon, owing to the $r^{-3}$ dependency of the contribution to the potential). Quadrupoles and other higher order moments are important in GR because the change in their distribution can produce gravitational waves. Example: Let's consider two cases. In both cases, the large bodies are of mass $M$ and the small one of mass $m$, and the small one is on the line of symmetry at a distance $r$. Case 1: No quadrupole moment. The force here is simply $$\frac{GMm}{r^2}.$$ Case 2: Non-zero quadrupole moment. (The larger spheres are separated by some distance $2R$.) The force in this case is: $$\frac{2GMmr}{(r^2+R^2)^{3/2}}$$ This, for large $r$, can be approximated to (two-term series expansion): $$F \sim \frac{2GMm}{r^2}-\frac{3GMmR^2}{r^4}$$ The weird term here is because of the quadrupole moment of the system. As you go further away ($r\gg R$), the force $F$ is more or less: $$F \sim \frac{2GMm}{r^2}$$ This is why the "quadrupole moment effect" falls off with distance. Apologies for the obnoxious MS Paint diagrams.
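The two-term expansion quoted above can be checked numerically (units and masses chosen for illustration): the approximation converges rapidly onto the exact force once $r \gg R$.

```python
# Exact on-axis force for Case 2 versus its two-term far-field expansion:
# F = 2GMmr/(r²+R²)^(3/2)  ≈  2GMm/r² − 3GMmR²/r⁴   for r >> R.
G, M, m, R = 1.0, 1.0, 1.0, 1.0    # illustrative units

def exact(r):
    return 2*G*M*m*r / (r**2 + R**2)**1.5

def two_term(r):
    return 2*G*M*m/r**2 - 3*G*M*m*R**2/r**4

for r in (5.0, 20.0, 100.0):
    print(r, exact(r), two_term(r))   # agreement improves rapidly with r
```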
{ "source": [ "https://physics.stackexchange.com/questions/340170", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/138682/" ] }
340,659
Caesium-133 is stable. Caesium-134 and caesium-137 are radioactive isotopes of caesium with half-lives of 2.065 years and 30.17 years respectively. Why does caesium-137 have a longer half-life if it contains three more neutrons than caesium-134 and four more neutrons than stable caesium? It would seem to me that caesium-134 would have a longer half-life given that it only contains one more neutron than stable caesium.
As noted in the comments, all of the various Cs isotopes I'll mention decay by emitting a beta, converting the Cs isotope to a Ba isotope. Now, while details of nuclear decays are not necessarily touched on unless you are in a nuclear physics course, they are at least somewhat analogous to electron or photon decays. What I mean by that is that you are, at a hand-waving level, looking at an initial state (the Cs), final states (Ba in various possible energy levels), and any applicable quantum numbers you would like to try and conserve (like nuclear spin). So, let's take a tour of the isotopes, relying mainly on data from nuclear datasheets. Start with Cs-134 (you probably did not know there was a journal called Nuclear Data Sheets). Going down to page 69, one finds that the Cs-134 nucleus has a spin of 4. It can decay to any one of 6 possible Ba-134 nuclear energy levels (the ground state and 5 excited states). The majority of the decays go through an excited state, which also has a nuclear spin of 4. The half-life is 2 years. Cs-135 is listed with a nuclear spin of 7/2. There is only one available Ba-135 level to decay to, and it has a nuclear spin of 3/2. The half-life of this decay is 2.3 million years. Only one state to go to, and a spin mismatch to slow it down. Cs-137 has a nuclear spin of 7/2. It can decay into 3 different Ba-137 levels, the ground state, and two excited states. The majority go through an excited state with spin 11/2. The other two states have spin 1/2 or 3/2 (the ground state). So, a few more states to decay to, but some pretty large spin mismatches on several of them. The half-life is 30 years. Cs-139 has a nuclear spin of 7/2. It can decay into one of 60 (!) different Ba levels, with most decays being to the ground state which has a spin of 7/2. The half-life is 9 minutes. Taken all together, what do we see? More available levels to decay to increases the chance of decaying. 
Closer spin values between the parent and daughter nuclei increase the chance of decaying. To go much deeper requires diving deeper into nuclear physics.
{ "source": [ "https://physics.stackexchange.com/questions/340659", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2077/" ] }
341,038
I understand that light entering a parallel block of glass at a non-90 degree angle will cause dispersion of colours within the block, but that these will be refracted by the same degree upon exit, so there will be no overall dispersion and the light will appear white. But in that case, why don't concave (or convex) lenses disperse? Additionally, with a block of parallel glass, if it were sufficiently thick and wide, although different frequencies would eventually "catch up" to each other and merge upon exit, this would occur only after a distance equal to the distance they were dispersed inside the glass, right? So the human eye, if positioned close enough to the exit side of the glass, would be able to see a rainbow, correct?
They do. It's called chromatic aberration - each different frequency has a slightly different focus point, blurring the image by different amounts for the different colors. Modern lenses of high quality have multiple elements added specifically to address the issue of chromatic aberration. What happens with flat glass can be explained by thinking of it in terms of the wave fronts instead of the ray paths, because that is closer to the physical optics. For a person looking directly through the glass, it messes up the phase relationship between waves of a different color, but our eyes aren't sensitive to that, anyway. A spherical wave front on one side will be spherical on the other, for all colors (same for flat). All of the spherical wave fronts that share a center on one side of the glass will have a center that is on the same line on the other side. Because of this, if you look through the glass at an angle you'll observe a small chromatic aberration.
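The focal shift behind chromatic aberration can be sketched with the thin-lens lensmaker's equation $1/f = (n-1)(1/R_1 - 1/R_2)$, using typical (assumed) BK7 glass indices at blue and red wavelengths:

```python
# Focal length of a symmetric biconvex thin lens (R1 = R, R2 = -R) for two
# colours. The refractive indices are assumed typical BK7 values at
# ~486 nm (blue) and ~656 nm (red); blue focuses measurably closer.
n_blue, n_red = 1.5224, 1.5143   # assumed BK7 indices
R = 100.0                        # assumed radius of curvature, mm

def focal_length(n):
    return 1.0 / ((n - 1) * (1/R + 1/R))   # lensmaker, R1=R, R2=-R

f_b, f_r = focal_length(n_blue), focal_length(n_red)
print(f"f(blue) = {f_b:.2f} mm, f(red) = {f_r:.2f} mm, "
      f"shift = {f_r - f_b:.2f} mm")
```

A shift of about a millimetre and a half on a ~96 mm lens is exactly the blur that multi-element achromatic designs are built to cancel.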
{ "source": [ "https://physics.stackexchange.com/questions/341038", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/159900/" ] }
341,400
If this question has already been asked or is super basic, apologies. I'm a physics novice and this is my first question on this side of the site. In classical physics, waves and particles are mutually exclusive, but in quantum theory subatomic particles are waves. What if there were simultaneously waves and particles that we were testing, but they were separate and distinct (with our precision at the subatomic level not able to make that distinction)? When an ocean wave travels through water molecules, most molecules move up and down while staying in place, but a few are carried by the wave for a relatively short distance compared with the distance the wave travels. What if subatomic particles were so light and the waves so strong that some particles could be carried a tremendous distance? Then, energy would radiate from a star in huge waves carrying some photons for lightyears in distance (but eventually dropping them, which would put a limit on the distance a photon could travel). In the single-particle, double slit experiment , when firing a solitary photon, if the waves were still present in force it would make sense that individual photons would still demonstrate an interference pattern. As for the effects of observation on these photons ( delayed-choice quantum eraser experiment ), might their behavior not be explained by quantum entanglement sharing the interference of the sensors with the unobserved photon? As I stated, I'm a total amateur here so I expect I'm just confused about something or ignorant of a law or rule that would nullify this idea...but as of now it makes more sense to me than thinking particles = waves and observations in the future can affect behavior in the past. Please let me know what you think, why I'm probably wrong and any links that would help me continue to educate myself on this would be much appreciated.
Let me first say that I'm a fan of this theory, so whilst I'm giving what I believe to be a neutral response, bear in mind that I'm pro-Bohmian which is in many fields an atypical viewpoint. To respond to the core question, 'why would a pilot-wave theory be wrong?': There are plenty of reasons why 'a' pilot-wave theory could be wrong, but in terms of serious interpretations of QM that ask 'why not particle AND wave?', Bohmian mechanics (BM), otherwise known as de Broglie-Bohm theory, quantum hydrodynamics (sometimes), or pilot wave theory, is really the only game in town, so I'll respond to this specifically; please say if you'd like a more general response, but I assume this is what you're interested in. First a little historical context - I won't go into too much detail because this is really more a topic for https://hsm.stackexchange.com/ but it helps put things into perspective so I don't think it's off topic. Louis de Broglie was the first to come up with the idea of pilot-waves (some might argue it was Erwin Madelung - he did devise almost the same mathematics earlier, however never to my knowledge considered them conceptually as de Broglie did) to solve the issue of wave-particle duality, which had been the subject of his 1924 thesis (and would win him a Nobel prize in 1929). He presented his idea at the 1927 Solvay conference, alongside presentations of the Copenhagen interpretation (what we now think of as standard/textbook quantum mechanics), and Einstein's view that QM wasn't a complete theory. It wasn't well received; especially by Niels Bohr (who had already won a Nobel prize while de Broglie was still in grad school), who is perhaps one of the biggest reasons Copenhagen is the standard interpretation. He was a very vocal proponent of Copenhagen and had a history of shooting down people who disagreed with his views.
Hugh Everett (the creator of the many-worlds interpretation) had a similar interaction with him much later which ended in Bohr's description of Everett as being 'undescribably stupid and could not understand the simplest things in quantum mechanics'. This perhaps gives an idea of his general attitude. His main objection to pilot-wave theory however seems to be (aside from being an affront to his own idea) that he believed QM was new physics, and anyone attempting to explain it in real terms (because particles always have a definite, real, position in pilot-wave theory) was kidding themselves and unable to let go of various preconceptions. So, after 1927 de Broglie essentially abandoned his theory and Copenhagen became the standard view of quantum mechanics. John von Neumann didn't help the situation when in 1932 he came out with a paper that would rule out de Broglie's pilot-wave theory as false. Grete Hermann quickly proved von Neumann incorrect, however her work remained in relative obscurity until the 70s, so until 1966 (when John Bell disproved von Neumann in the same way) many physicists (who cared about the problem) falsely believed a pilot-wave theory was not possible. In 1952 David Bohm; unaware of de Broglie's earlier work, reformulated pilot-wave theory. Unfortunately by this time he was largely being shunned by the scientific community due to his communist affiliations, so again his work didn't prove popular; it is only very recently (looking at published paper metrics, around the year 2000) that this theory has started to gain traction, and especially since 2006 when physical pilot-waves, which have already been discussed here, were discovered that were able to demonstrate some behaviours previously thought to solely be the domain of QM. So, that's some reason why the theory has often been ignored until now, and why many people and textbooks dismiss it; often it is falsely considered to have been proven wrong, or it just isn't on people's radars (Viz. 
if you have a working model of QM, why do you need another one?) These days the criticisms BM receives are generally philosophical objections such as its surrealist trajectories or s-state electrons being time-invariant. These are mathematically sound though and have not been disproven; my personal view is complaining that quantum mechanics doesn't work like you expect it to, or doesn't act like a classical system, has already been flogged to death - we know quantum mechanics isn't 'normal', so why should we suddenly expect various trajectories to work in the same way as say, pitching a baseball. This is a common objection nonetheless, and interestingly the exact opposite to Bohr's original objection; that pilot-waves were too classical. Next are Occam's razor arguments; that BM adds complexity without giving anything in return. There is possibly some truth to this, however the guidance equation is derived from the Schrodinger equation and conceptually isn't too far removed. Certainly Copenhagen has its own problems when it comes to this, and one could argue that a theory such as superdeterminism is simpler than them both. On the other hand, the extra maths of BM does have some use in a few fields like quantum chemistry, where it provides more efficient ways of solving certain problems than through the maths of standard quantum mechanics (SQM). Finally there are arguments about BM not being compatible with relativity and QFT. It's important to be aware that BM is a non-relativistic theory. There are extensions to it though that do in fact incorporate relativity/QFT, and you can find this discussed in various literature e.g. https://arxiv.org/abs/1205.1992 . It certainly isn't as mature as SQM is, so this can be an argument again for SQM, however it certainly shows that this is not a failing of BM, and the maturity of SQM is really its advantage here (BM has had only a handful of people working on it seriously for about a decade). 
At the end of the day (and this is perhaps the important takeaway), all experimentally verifiable results are identical in BM to all other not-disproved interpretations of QM ; they are all as right as each other, so all objections are philosophical in nature or have historical context. That doesn't mean the situation will remain this way; certainly it's possible that experiments will be devised that prove or disprove various current interpretations, and of the various interpretations my money would be on BM being one of the most likely to, if it is wrong, be ruled out by experiment in the future.
{ "source": [ "https://physics.stackexchange.com/questions/341400", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/160071/" ] }
341,422
What is the quark content of the $D^{0 \ast}$ meson? How can one distinguish between the $D^{0 \ast}$ meson and the $D^{0}$ meson, as I guess both have the same quark content?
{ "source": [ "https://physics.stackexchange.com/questions/341422", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/147093/" ] }
341,809
I do not have a strong background in physics, so please refrain from using complex mathematics in any answers :) I was on this site , and I read: When an electrical charge is moving or an electric current passes through a wire, a circular magnetic field is created. I am trying to grasp the notion of magnetic fields under different reference frames. Suppose I have a point charge next to a compass needle. It is experimentally verifiable that the compass needle is not affected by the point charge - i.e., the needle does not move. It can be concluded that the point charge does not create any magnetic field. But since the point charge is on the Earth, and the Earth is moving, shouldn't the point charge produce a magnetic field? Why doesn't the compass needle move in response to this magnetic field? Is there a magnetic field in the room?
You are indeed correct about the frame-dependence of magnetic fields. The reason the point charge doesn't affect the compass is because the compass and the charge are both moving at the same speed, both being on the Earth, and therefore, the compass sees the charge as stationary. This means no magnetic field is produced. As a side note, you hit upon an important realization: in order for electrodynamics to be consistent, you must adopt the same set of assumptions as in special relativity! In other words, special relativity is a necessary consequence of electrodynamics. Some books even derive the phenomenon of time dilation by considering the magnetic field experienced by a point charge moving parallel to a line charge.
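The "same rest frame, so no magnetic field" argument can be sketched with the standard low-velocity limit of the electromagnetic field transformations. For a frame moving with velocity $\vec{v}$ relative to the frame in which the charge is at rest (so $\vec{E}$ is static and $\vec{B}=0$ there), to first order in $v/c$:

```latex
\vec{E}\,' \approx \vec{E},
\qquad
\vec{B}\,' \approx -\,\frac{\vec{v}\times\vec{E}}{c^{2}} .
```

Only the *relative* velocity between observer and charge enters. The compass needle and the point charge comove, so $\vec{v}=0$ and $\vec{B}\,'=0$ for the compass; the Earth's motion through space is irrelevant because it is shared by both.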
{ "source": [ "https://physics.stackexchange.com/questions/341809", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/102008/" ] }
341,869
When I am taking a shower, the shower curtain slowly moves towards my legs. Also, it seems that the hotter the water, the faster it gets to my skin. Why is that?
It is due to the convection in the shower, as mentioned in Steeven's comment. The water is significantly above room temperature. Obviously it is going to heat the air (and you can feel the warm air/vapour yourself). The air in the shower is both warming up, and becoming more humid. Both of those factors lower the density of the air, making it rise. When the air flows up, it creates a low pressure zone in the shower. Once the pressure gets low enough compared to the room outside, the pressure difference pushes the curtain into the shower until enough air can get in to account for the air lost to convection. Warmer water will cause greater convective flows, so the effect should be more noticeable. Disclaimer : Note that the heat and humidity are not the only cause of convective flows; and convective airflow is likely only part of the situation. As some comments (and the wikipedia page on this subject) mentioned, the flow of the water downwards may actually generate an air vortex in the shower. This vortex creates a low pressure zone, which drives the convective process regardless of water temperature. There are also other potential reasons, and without experiments and potentially (multi-physics) simulations, it's very unclear what the primary cause of this phenomenon is. I still feel this answer properly addresses the second question about why warmer water makes the process more noticeable.
{ "source": [ "https://physics.stackexchange.com/questions/341869", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/160310/" ] }
341,888
"Light is deflected by powerful gravity, not because of its mass (light has no mass) but because gravity has curved the space that light travels through." If the mass of the Sun is so great, how is it that the light from the Sun reaches Earth and other planets? Wouldn't its gravity keep the light from 'reaching out'? Is there any equation relating this?
{ "source": [ "https://physics.stackexchange.com/questions/341888", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/160324/" ] }
342,105
I was recently reading up about Coulomb's law and Gauss law and several sources seemed to state that the Gauss law was more "fundamental" than Coulomb's law even though one is deducible from the other, which got me thinking: what does it even mean for a law/theorem to be more fundamental?
Roughly speaking, one law is more fundamental than another if it explains it. (There's no guarantee any law is "fundamental", in the sense of there being nothing even more fundamental; maybe all laws have a deeper explanation, but at any given time our knowledge is finite.) The most obvious guess at what it means for $A$ to explain $B$ is that $B$ is deducible from $A$ , but if the deduction works both ways this doesn't reveal which is more fundamental, which is the point you've hit on. (If you want to get technical, in the philosophy of science the simple definition of explanation that I just critiqued is the deductive-nomological model .) Indeed, we can obtain Coulomb's law as a special case of Gauss's law, or Gauss's law from Coulomb's law by linearity. More fundamental claims provide deeper insight. Gauss's law is more fundamental in the sense that from Maxwell's equations we obtain a vector-calculus description of the electromagnetic fields that works for arbitrary charge distributions. It is at this point that the fields $\vec{E},\,\vec{B}$ become related in a theory that unifies them. Unification is typically a sign of deeper insight in physics, whereas Coulomb's law speaks only of $\vec{E}$ . From Maxwell's equations emerge Lorentz-invariant wave equations that ultimately inspired special relativity. If we rewrite $\vec{E},\,\vec{B}$ in terms of $\vec{A},\,\phi$ (which unite into $A^\mu$ relativistically), we reduce Gauss's law to $\nabla^2\phi=-\frac{\rho}{\epsilon_0}$ . But a manifestly relativistic formalism gives an even deeper understanding of electromagnetism, far beyond anything Coulomb imagined. At this point we wonder only where $A^\mu$ comes from. Scalar electrodynamics explains this in terms of local symmetries of a scalar field; this provides an even more fundamental exposition. (We could go further, but you see my point.)
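For concreteness, the deduction of Coulomb's law as a special case of Gauss's law mentioned above is just an exercise in spherical symmetry. For a point charge $Q$, take the Gaussian surface $S$ to be a sphere of radius $r$ centred on the charge, on which $|\vec{E}|$ is constant and radial:

```latex
\oint_{S}\vec{E}\cdot d\vec{A}=\frac{Q}{\epsilon_{0}}
\quad\Longrightarrow\quad
E(r)\,4\pi r^{2}=\frac{Q}{\epsilon_{0}}
\quad\Longrightarrow\quad
E(r)=\frac{1}{4\pi\epsilon_{0}}\,\frac{Q}{r^{2}} .
```

The reverse direction (Coulomb's law plus superposition implies Gauss's law for static charges) is also standard, which is precisely why deducibility alone cannot settle which law is more fundamental.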
{ "source": [ "https://physics.stackexchange.com/questions/342105", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/156723/" ] }
342,123
In my book I read: $S_{z-tot}\chi_+(1)\chi_+(2)=[S_{1z}+S_{2z}]\chi_+(1)\chi_+(2)=[S_{1z}\chi_+(1)]\chi_+(2)+[S_{2z}\chi_+(2)]\chi_+(1)=...$ Now, I have two questions: What's $\chi_+(1)\chi_+(2)$? I know that $\chi_+=(1,0)$ but I really don't understand that notation (what is a product between vectors?!). Is it maybe just a way to indicate a vector in $C^4$? Or is it instead a matrix, or something else? What's $S_{z-tot}$? Is it a 2x2 matrix, a 4x4 matrix, or something else? I also don't understand why the book distinguishes $S_{1z}$ and $S_{2z}$; aren't they the same 2x2 matrix defined for the single electron? Thanks for your attention, and please answer in a simple way (I'm a beginner in these subjects).
You must study about product states, product space of two (linear) spaces, product of linear transformations etc (product symbol $\;'\otimes\;'$) \begin{equation} \chi_+(1)\chi_+(2) \equiv \chi_+(1) \otimes\chi_+(2) \tag{01} \end{equation} \begin{equation} S_{z-tot}= S_{1z}+S_{2z}\equiv \left(S_{1z} \otimes I_2\right)+ \left(I_1 \otimes S_{2z}\right) \tag{02} \end{equation} \begin{align} &S_{z-tot}\chi_+(1)\chi_+(2)=[S_{1z}+S_{2z}]\chi_+(1)\chi_+(2) \nonumber\\ &\equiv \left[\left(S_{1z} \otimes I_2\right)+ \left(I_1 \otimes S_{2z}\right)\right]\left[\chi_+(1) \otimes\chi_+(2)\right] \nonumber\\ &=\left(S_{1z} \otimes I_2\right)\left[\chi_+(1) \otimes\chi_+(2)\right]+\left(I_1 \otimes S_{2z}\right)\left[\chi_+(1) \otimes\chi_+(2)\right] \nonumber\\ &=\left[S_{1z}\chi_+(1)\right] \otimes\chi_+(2)+\chi_+(1) \otimes\left[S_{2z}\chi_+(2)\right] \tag{03} \end{align} A representation : \begin{equation} \chi_+(1)= \begin{bmatrix} \xi_1\\ \xi_2 \end{bmatrix}\;,\; \chi_+(2)= \begin{bmatrix} \eta_1\\ \eta_2 \end{bmatrix} \quad \Longrightarrow \quad \chi_+(1) \otimes\chi_+(2) = \begin{bmatrix} \xi_1 \eta_1\\ \xi_1 \eta_2\\ \xi_2 \eta_1\\ \xi_2 \eta_2 \end{bmatrix} \tag{04} \end{equation} Now \begin{align} & S_{1z}= \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix}\;,\; I_2= \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \nonumber\\ &\quad \Rightarrow \quad S_{1z} \otimes I_2= \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix} \otimes \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} = \begin{bmatrix} a_{11}\cdot\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} & a_{12}\cdot\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}\\ &\\ a_{21}\cdot\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} & a_{22}\cdot\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \end{bmatrix} \nonumber\\ &\quad \Rightarrow \quad S_{1z} \otimes I_2= \begin{bmatrix} a_{11} & 0 & a_{12} & 0\\ 0 & a_{11} & 0 & a_{12} \\ a_{21} & 0 & a_{22} & 0\\ 0 & a_{21} & 0 & a_{22} \end{bmatrix} \tag{05} \end{align} and 
\begin{align} & I_1= \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}\;,\; S_{2z}= \begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{bmatrix} \nonumber\\ &\quad \Rightarrow \quad I_1 \otimes S_{2z}= \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} \otimes \begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} 1\cdot \begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{bmatrix}&0\cdot\begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{bmatrix}\\ &\\ 0\cdot\begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{bmatrix}& 1\cdot\begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22} \end{bmatrix} \end{bmatrix} \nonumber\\ &\quad \Rightarrow \quad I_1 \otimes S_{2z}= \begin{bmatrix} b_{11} & b_{12} & 0 & 0\\ b_{21} & b_{22} & 0 & 0 \\ 0 & 0 & b_{11} & b_{12}\\ 0 & 0 & b_{21} & b_{22} \end{bmatrix} \tag{06} \end{align} From equations (05) and (06) \begin{equation} S_{z-tot}=\left(S_{1z} \otimes I_2\right)+ \left(I_1 \otimes S_{2z}\right)= \begin{bmatrix} \left(a_{11}+b_{11}\right) & b_{12} & a_{12} & 0\\ b_{21} & \left(a_{11}+b_{22}\right) & 0 & a_{12} \\ a_{21} & 0 & \left(a_{22}+b_{11}\right) & b_{12}\\ 0 & a_{21} & b_{21} & \left(a_{22}+b_{22}\right) \end{bmatrix} \tag{07} \end{equation} If for example \begin{equation} S_{1z}=\tfrac{1}{2} \begin{bmatrix} 1 & 0\\ 0 &\!\!\! -\!1 \end{bmatrix}\;,\; S_{2z}=\tfrac{1}{2} \begin{bmatrix} 1 & 0\\ 0 &\!\!\! -\!1 \end{bmatrix} \tag{08} \end{equation} then \begin{equation} S_{z-tot}=\left(S_{1z} \otimes I_2\right)+ \left(I_1 \otimes S_{2z}\right)= \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 &\!\!\! -\!1 \end{bmatrix} \tag{09} \end{equation} The matrix in (09) is already diagonal with eigenvalues 1,0,0,-1. 
Rearranging rows and columns we have \begin{equation} S'_{z-tot}= \begin{bmatrix} \begin{array}{c|cccc} 0 & 0 & 0 & 0\\ \hline 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & \!\!\!\!-\!1 \end{array} \end{bmatrix} = \begin{bmatrix} \begin{array}{c|c} S_{z}^{(j=0)} & 0_{1\times 3}\\ \hline 0_{3\times1} & S_{z}^{(j=1)} \end{array} \end{bmatrix} \tag{10} \end{equation} because, as could be proved (1) , the product 4-dimensional Hilbert space is the direct sum of two orthogonal spaces : the 1-dimensional space of the angular momentum $\;j=0\;$ and the 3-dimensional space of the angular momentum $\;j=1\;$ : \begin{equation} \boldsymbol{2}\boldsymbol{\otimes}\boldsymbol{2}=\boldsymbol{1}\boldsymbol{\oplus}\boldsymbol{3} \tag{11} \end{equation} In general for two independent angular momenta $\;j_{\alpha}\;$ and $\;j_{\beta}\;$, living in the $\;\left(2j_{\alpha}+1\right)-$ dimensional and $\;\left(2j_{\beta}+1\right)-$ dimensional spaces $\;\mathsf{H}_{\boldsymbol{\alpha}}\;$ and $\;\mathsf{H}_{\boldsymbol{\beta}}\;$ respectively, their coupling is achieved by constructing the $\;\left(2j_{\alpha}+1\right)\cdot\left(2j_{\beta}+1\right)-$ dimensional product space $\;\mathsf{H}_{\boldsymbol{f}}\;$ \begin{equation} \mathsf{H}_{\boldsymbol{f}}\equiv \mathsf{H}_{\boldsymbol{\alpha}}\boldsymbol{\otimes}\mathsf{H}_{\boldsymbol{\beta}} \tag{12} \end{equation} Then the product space $\:\mathsf{H}_{\boldsymbol{f}}\:$ is expressed as the direct sum of $\:n\:$ mutually orthogonal subspaces $\:\mathsf{H}_{\boldsymbol{\rho}}\: (\rho=1,2,\cdots,n-1,n) $ \begin{equation} \mathsf{H}_{\boldsymbol{f}}\equiv \mathsf{H}_{\boldsymbol{\alpha}}\boldsymbol{\otimes}\mathsf{H}_{\boldsymbol{\beta}} = \mathsf{H}_{\boldsymbol{1}}\boldsymbol{\oplus}\mathsf{H}_{\boldsymbol{2}} \boldsymbol{\oplus} \cdots \boldsymbol{\oplus} \mathsf{H}_{\boldsymbol{n}}=\bigoplus_{{\boldsymbol{\rho}}={\boldsymbol{1}}}^{{\boldsymbol{\rho}}={\boldsymbol{n}}} \mathsf{H}_{\boldsymbol{\rho}} \tag{13} \end{equation} where the 
subspace $\:\mathsf{H}_{\boldsymbol{\rho}}\:$ corresponds to angular momentum $\;j_{\rho}\;$ and has dimension \begin{equation} \dim \left(\mathsf{H}_{\boldsymbol{\rho}}\right) =2\cdot j_{\rho}+1 \tag{14} \end{equation} with \begin{align} j_{\rho} & = \vert j_{\beta}-j_{\alpha} \vert +\rho - 1\: , \quad \rho=1,2,\cdots,n-1,n \tag{15a}\\ n & =2\cdot\min (j_{\alpha}, j_{\beta})+1 \tag{15b} \end{align} Equation (13) is expressed also in terms of the dimensions of spaces and subspaces as : \begin{equation} (2j_{\alpha}+1)\boldsymbol{\otimes} (2j_{\beta}+1)=\bigoplus_{\rho=1}^{\rho=n}(2j_{\rho}+1) \tag{16} \end{equation} Equation (11) is a special case of equation (16) : \begin{equation} j_{\alpha}=\tfrac{1}{2} \:,\:j_{\beta}=\tfrac{1}{2} \: \quad \Longrightarrow \quad \: j_{1}=0 \:,\: j_{2}=1 \tag{17} \end{equation} (1) the square of total angular momentum $\mathbf{S}^2$ expressed in the basis of its common with $\:S_{z-tot}\:$ eigenvectors has the following diagonal form : \begin{equation} \mathbf{S'}^2= \begin{bmatrix} \begin{array}{c|cccc} 0 & 0 & 0 & 0\\ \hline 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 2 \end{array} \end{bmatrix} = \begin{bmatrix} \begin{array}{c|c} \left(\mathbf{S'}^2\right)^{(j=0)} & 0_{1\times 3}\\ \hline 0_{3\times1} & \left(\mathbf{S'}^2\right)^{(j=1)} \end{array} \end{bmatrix} \tag{10'} \end{equation} since for \begin{equation} S_{1x}=\tfrac{1}{2} \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix}\;,\; S_{2x}=\tfrac{1}{2} \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} \tag{18} \end{equation} \begin{equation} S_{1y}=\tfrac{1}{2} \begin{bmatrix} 0 &\!\!\! -\!i\\ i & 0 \end{bmatrix}\;,\; S_{2y}=\tfrac{1}{2} \begin{bmatrix} 0 &\!\!\! 
-\!i\\ i & 0 \end{bmatrix} \tag{19} \end{equation} we have \begin{equation} S_{x-tot}=\left(S_{1x} \otimes I_2\right)+ \left(I_1 \otimes S_{2x}\right) =\tfrac{1}{2} \begin{bmatrix} 0 & 1 & 1 & 0\\ 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1\\ 0 & 1 & 1 & 0 \end{bmatrix} \tag{20} \end{equation} \begin{equation} S_{y-tot}=\left(S_{1y} \otimes I_2\right)+ \left(I_1 \otimes S_{2y}\right)=\tfrac{1}{2} \begin{bmatrix} 0 & \!\!\! -\!i & \!\!\! -\!i & 0\\ i & 0 & 0 & \!\!\! -\!i \\ i & 0 & 0 & \!\!\! -\!i \\ 0 & i & i & 0 \end{bmatrix} \tag{21} \end{equation} and consequently \begin{align} S^{2}_{x-tot} & =\tfrac{1}{4} \begin{bmatrix} 0 & 1 & 1 & 0\\ 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1\\ 0 & 1 & 1 & 0 \end{bmatrix}^{2} =\tfrac{1}{2} \begin{bmatrix} 1 & 0 & 0 & 1\\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0\\ 1 & 0 & 0 & 1 \end{bmatrix} \tag{22x}\\ S^{2}_{y-tot} & =\tfrac{1}{4} \begin{bmatrix} 0 & \!\!\! -\!i & \!\!\! -\!i & 0\\ i & 0 & 0 & \!\!\! -\!i \\ i & 0 & 0 & \!\!\! -\!i \\ 0 & i & i & 0 \end{bmatrix}^2 =\tfrac{1}{2} \begin{bmatrix} 1 & 0 & 0 & \!\!\! -\!1 \\ 0 & 1 & 1 & 0\\ 0 & 1 & 1 & 0\\ \!\!\! -\!1 & 0 & 0 & 1 \end{bmatrix} \tag{22y}\\ S^{2}_{z-tot} & =\quad \!\! \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 &\!\!\!-\!1 \end{bmatrix}^{2} =\quad \!\! 
\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{22z} \end{align} From \begin{equation} \mathbf{S}^{2}_{tot}=S^{2}_{x-tot}+S^{2}_{y-tot}+S^{2}_{z-tot} \tag{23} \end{equation} we have finally \begin{equation} \mathbf{S}^{2}_{tot}= \begin{bmatrix} 2 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 2 \end{bmatrix} \tag{24} \end{equation} For its eigenvalues $\lambda$ \begin{equation} \det\left(\mathbf{S}^{2}_{tot}-\lambda I_{4}\right)= \begin{vmatrix} 2-\lambda & 0 & 0 & 0\\ 0 & 1-\lambda & 1 & 0 \\ 0 & 1 & 1-\lambda & 0\\ 0 & 0 & 0 & 2-\lambda \end{vmatrix} =-\lambda \left(2-\lambda \right)^{3} \tag{25} \end{equation} So the eigenvalues of $\;\mathbf{S}^{2}_{tot}\;$ are: the eigenvalue $\lambda_{1}=0=j_{1}\left(j_{1}+1\right)$ with multiplicity 1 and the eigenvalue $\lambda_{2}=2=j_{2}\left(j_{2}+1\right)$ with multiplicity 3.
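Equations (18)-(25) are easy to verify numerically (a sketch using NumPy; the Kronecker-product construction mirrors equations (20)-(21), with $\hbar=1$):

```python
import numpy as np

# One-particle spin-1/2 operators, as in eqs. (18)-(19)
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def total(op):
    """Two-particle total operator (S1 x I) + (I x S2), as in eqs. (20)-(21)."""
    return np.kron(op, I2) + np.kron(I2, op)

# S^2_tot = S^2_x + S^2_y + S^2_z, eq. (23)
S2 = sum(total(op) @ total(op) for op in (sx, sy, sz))

print(np.round(S2.real, 10))          # reproduces the matrix of eq. (24)
print(np.linalg.eigvalsh(S2))         # eigenvalue 0 once, eigenvalue 2 three times
```

The printed eigenvalues confirm the multiplicities found from the characteristic polynomial in equation (25).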
{ "source": [ "https://physics.stackexchange.com/questions/342123", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/160431/" ] }
343,474
The name of the particle (resonance?) in the recent announcement Observation of the doubly charmed baryon $\Xi^{++}_{cc}$ is complicated. I'm sure it's standard notation but I don't know how to "decode" it. Is it possible to explain the Xi and the two super-scripts and subscripts? A short description would be fine. I'm wondering if the two plus signs indicate a double charge or isospin or something else, but I'm guessing that the double c's are related to "doubly charmed" in the title.
The PDG naming scheme for hadrons is the authoritative source on this. The easy bit is the superscript: that's just the charge. $++$ means a charge of $+2$. The symbols for baryons are based on those chosen for the baryons formed of light quarks ($u$, $d$, $s$). They encode the isospin and quark content. The rules are relatively straightforward, but I'll refer you to the naming scheme for the full list. A light-quark $\Xi$ has a quark content of $uss$ or $dss$. A heavy-quark $\Xi$ retains the single $u$ or $d$ and replaces one or both of the $s$ quarks with a $c$ and/or a $b$. This is denoted in the subscript: e.g. a $\Xi_c$ has $dsc$ or $usc$.
{ "source": [ "https://physics.stackexchange.com/questions/343474", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/83380/" ] }
344,701
I heard that as you approach the temperature of a kugelblitz, the laws of physics break down. I saw this in the video The Kugelblitz: A Black Hole Made From Light, by SciShow Space.
Hank Green is describing the concept of the Planck temperature , $$ T_\mathrm{P} = \sqrt{\frac{\hbar c^5}{Gk_B^2}}\approx1.4\times 10^{32}\:\mathrm K, $$ which is defined as $\frac{1}{k_B}$ times the Planck energy $E_\mathrm{P}=\sqrt{\hbar c^5/G}\approx 1.9\times 10^{9}\:\mathrm J$. As with all the Planck units , we don't really know what happens at those scales, but we're pretty sure that the laws of physics as we know them are likely to require modifications to continue describing nature at some point before you reach that regime. What doesn't happen at the Planck scale is that "the laws of physics break down", which is a meaningless catchphrase that shouldn't be used. Unless, in fact, the world changes so much that there is no regularity to physical phenomena and no way to predict how an experiment will pan out, even in principle, then what you have is not a breakdown of the laws of physics, it's just that you've left the region of validity of the laws you know, and you need to figure out what the laws are on the broader regime.
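The quoted numbers follow directly from the standard SI (CODATA) values of the constants, a quick check:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
kB   = 1.380649e-23      # Boltzmann constant, J/K

E_P = math.sqrt(hbar * c**5 / G)   # Planck energy
T_P = E_P / kB                     # Planck temperature

print(f"E_P = {E_P:.3e} J")   # ~1.96e9 J
print(f"T_P = {T_P:.3e} K")   # ~1.42e32 K
```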
{ "source": [ "https://physics.stackexchange.com/questions/344701", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/151259/" ] }
344,705
I understand that a space elevator should be at least as long as a geosynchronous orbit (22,236 mi), because only beyond that is the centrifugal force from the Earth's rotation greater than the effects of its gravity. At the same time, astronauts in a space station in low Earth orbit experience weightlessness even though gravity there is about 90% of its strength on the surface. I understand that this is because they are in free fall, but remain in orbit due to the high speed at which the space station was initially placed. Let's say you have a 40,000 mi space elevator at the equator, and it has a climber wherein occupants are strapped into chairs. If I understand correctly, if you were to stop the climber and hold it there or apply the brakes, then at... 4,000 mi (roughly the Earth's radius), the occupants would feel about 1/4 of the effects of Earth's gravity. So if they were holding an object and let go, it would fall, but more slowly than on Earth. 22,236 mi (GEO), the occupants would feel weightless. If they were holding an object and let go, it would float (in theory). Greater than 22,236 mi, the occupants would feel drawn toward the ceiling. If they let go of an object, it would rise and hit the ceiling, at some speed proportional to how much greater than GEO they are stopped at. Do I have this correct?
{ "source": [ "https://physics.stackexchange.com/questions/344705", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/162199/" ] }
344,813
The Higgs boson is lighter than the top quark. But the top quark was discovered in the mid-1990s, whereas the Higgs boson escaped detection for two more decades. So if the energy to produce the Higgs boson had already been achieved, why did it escape detection for so long? I understand that the couplings of the Higgs boson to fermions are small and it doesn't interact with the detector appreciably. Does it mean that at the LHC, with the increase in energy, the Higgs coupling increased and we finally detected the Higgs?
Neither particle can be detected within its own lifetime; you can only look at what it decays into. The top decays to a b jet and a W (which can then become a fermion anti-fermion pair or leptons) and is fairly distinctive. The dominant Higgs decay, however, is to two b jets. B jets are very common within the LHC and we cannot infer from two b jets that a Higgs detection has been made. It's all about the statistics. With the top, its decay mode was distinctive enough that fewer events were needed before a statistically significant signal was seen above the background, whereas the Higgs needed many more events.
{ "source": [ "https://physics.stackexchange.com/questions/344813", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/36793/" ] }
345,033
I know this seems like a simple question, but I'm trying to debate with a flat earth theorist. I asked him to explain why can the ISS visibly be seen orbiting the Earth with the naked eye, and he put this question to me instead. He asked: "How can a full moon be observed south of an observer's location, despite the fact that if the moon is illuminated by the sun, an observer has to be almost directly between the sun and the moon to observe a full moon?"
Here is a picture that may help in describing this. First row: Sun (red), Earth (blue) and Moon (black) to scale (axes - in km). You can see just how far away the sun is... and on this scale the Earth and Moon are essentially invisible (they are inside the "zoom box"). Second row: zooming in (50x), you can barely make out the Earth and Moon. Zooming in all the way to 300x, you can finally see just the Earth and the Moon - still to scale. Now in order for the Moon to be in full sunlight, it needs to shift out of the direct line from the Sun to the Earth. The distance it has to shift to be in "full sunlight" is approximately the radius of the Earth plus the radius of the Moon: the actual shadow (umbra, where you get a total lunar eclipse) has a diameter of just 9000 km at the distance of the moon. This means that the Moon needs to be just 1.3° below the ecliptic to be out of the shadow (the ecliptic is the name of the plane containing the Sun and the Earth ... so called because when the Moon is on the ecliptic, you get an eclipse). The lunar orbit is in fact inclined by 5.145° relative to the ecliptic - so it spends most of its time away from the shadow of the earth. And that's why most of the time, we can see the full moon. Bottom row: the difference between what a full moon looks like at 0°, 1°, and 5° "off perfect" illumination. As you can see, "full" looks pretty full for all of these - so a casual observation of the shape of the Moon alone won't allow you to tell how far off the ecliptic it is. (The "flattening" is happening at the bottom of these plots - if you look closely, you can just see a few missing pixels on the right-most image). And that is why you can see the full moon. And the earth is not flat. You can find a nice animation (not to scale) on the earthsky.org site.
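The two geometric numbers in this answer are easy to reproduce (a rough check using mean orbital radii and ignoring eccentricity):

```python
import math

R_earth = 6371.0     # km
R_moon  = 1737.0     # km
R_sun   = 696000.0   # km
d_moon  = 384400.0   # km, mean Earth-Moon distance
d_sun   = 1.496e8    # km, mean Earth-Sun distance

# Radius of Earth's umbra at the Moon's distance (similar triangles)
umbra_r = R_earth - d_moon * (R_sun - R_earth) / d_sun
print(f"umbra diameter ~ {2 * umbra_r:.0f} km")   # ~9000 km, as stated

# Angle off the ecliptic needed for the Moon to clear the shadow entirely
clear = math.degrees(math.asin((R_earth + R_moon) / d_moon))
print(f"clearance angle ~ {clear:.1f} deg")       # ~1.2-1.3 deg
```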
{ "source": [ "https://physics.stackexchange.com/questions/345033", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/162360/" ] }
345,381
It was claimed that if an iceberg melts in the ocean, the sea level won't change as the ice displaces as much water as there will be melted water. The other claim was that the sea level should rise because oceans contain salt, so the water in oceans is denser than the water in the ice. Which one is the correct reasoning?
The Archimedes principle says that a floating body will displace an amount of fluid that is equal to its weight. Since the iceberg floats, it weighs the same as the water it displaces. If it had the same salt concentration as the ocean, then once thawed, it would occupy exactly the same volume as it displaced and the sea level wouldn't change. But most icebergs are made of nonsalty water, with a density a bit lower than sea water. So once melted, that same mass will occupy more volume (same mass, less density equals more volume), and the sea level will increase… very very slightly.
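The size of the effect can be put in numbers (a sketch; the two density values are typical figures for sea water and melt water, not measurements):

```python
# Freshwater iceberg floating in salt water: how much extra volume after melting?
rho_sea   = 1025.0   # kg/m^3, typical sea water (assumed)
rho_fresh = 1000.0   # kg/m^3, melt water (assumed)

m = 1.0e6  # kg of ice (any value works: the effect is proportional)

V_displaced = m / rho_sea    # Archimedes: displaced water weighs as much as the ice
V_melted    = m / rho_fresh  # volume of the same mass once melted

excess = V_melted / V_displaced - 1
print(f"melt water exceeds the displaced volume by {excess:.1%}")
```

So the melt water occupies about 2.5% more volume than the iceberg displaced, which is why the rise is real but very slight.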
{ "source": [ "https://physics.stackexchange.com/questions/345381", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/32222/" ] }
345,861
I've found a number of questions that concern the Doppler effect, but none that seem to address my question. I have a background in music. People with a musical ear can generally tell the ratio between two frequencies (as a musical interval). For anyone who's not already aware, we perceive a ratio of 2:1 as an octave, 3:2 as a perfect fifth, 4:3 as a perfect fourth, 5:4 as a major third and 6:5 as a minor third. Therefore, if a vehicle passes at speed, and I perceive that the frequency of the engine noise drops by a fourth as it does so, then I know that the ratio of the frequencies (approaching:departing) is 4:3. Is this information (just the ratio of the frequencies) enough, coupled with an assumed speed of sound of around 330m/s, to calculate the speed at which the vehicle passed? We'll assume the car passed quite close by, so can be considered to be coming almost directly towards me when approaching, and almost directly away when departing. At this point, we don't know the actual frequency of the sound - just the relative frequencies. Some people (alack not myself) are fortunate enough to have perfect pitch, in which case they could even estimate the exact frequencies. let's assume 220Hz and 165Hz. Is this extra information helpful/needed to ascertain the speed of the passing vehicle? I'm not interested in telling the difference between 35 and 38mph. More like "By the sound of it, that must have been going at least 80mph!"
Let us consider that you are at rest and the car, which emits at frequency $f_0$, approaches you with speed $v$. The frequency you receive increases to $$f_1=f_0\frac{c}{c-v},$$ where $c$ is the speed of sound. Once the car has passed you, the perceived frequency is reduced to $$f_2=f_0\frac{c}{c+v}.$$ The ratio is $$\frac{f_1}{f_2}=\frac{c+v}{c-v}.$$ Now solve this equation for $v$, $$v=\frac{r-1}{r+1}c,$$ where $r=f_1/f_2$. Edit Let us consider some examples. If the ratio corresponds to an octave (2:1), $r=2$, the car speed is $c/3\approx400\, km/h$, and that should be a Bugatti Veyron. If you notice a fifth (3:2), $r=3/2$ and $v\approx 240\, km/h$, which may be a nice sports car. A minor third (6:5), $r=6/5$, corresponds to $v\approx 110\, km/h$ which can even be a bus. For a difference in frequency corresponding to a semitone, $r\approx 1.06$, the speed is about $36\, km/h$ and for a tone, $r\approx 1.12$, the result is $v\approx 70\, km/h$. In all examples the speed of sound was taken as $c\approx 1240\, km/h$.
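The closed-form result above lends itself to a few lines of code (a sketch; the ratio for an equal-tempered interval of $n$ semitones is $2^{n/12}$, which is close to but not exactly the just ratios quoted above):

```python
def speed_from_ratio(r, c=1240.0):
    """Source speed (km/h) from the perceived frequency ratio r = f1/f2."""
    return (r - 1) / (r + 1) * c

# Equal-tempered approximations of the intervals in the answer
for name, semitones in [("octave", 12), ("fifth", 7), ("minor third", 3),
                        ("tone", 2), ("semitone", 1)]:
    r = 2 ** (semitones / 12)
    print(f"{name:12s} r = {r:.3f}  v ~ {speed_from_ratio(r):.0f} km/h")
```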
{ "source": [ "https://physics.stackexchange.com/questions/345861", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/162799/" ] }
346,150
I'm a helicopter pilot with limited physics knowledge (units in BSc and HNCs). I have recently challenged an assertion that rotating blades are stiffened by centrifugal force . In my mind, stiffness refers to the resistance of a member to bending deformation, K. From the comments, perhaps this is my problem? My counter argument is quite simple. A force can only affect the stiffness of a blade if it changes the physical characteristics of the blade, and such force(s) can only be exerted as a result of centripetal acceleration and aerodynamic effects as the blade flies. A more accurate statement might be that the "blade resists the bending moments since counter-moments are exerted on them arising from the centripetal and aerodynamic forces". I am very happy to be wrong (since I then learn) but I am catching a lot of heat for this challenge and no-one on Aviation.SE has been able to explain why I am wrong. I do understand that there is a certain amount of pedantry in my claim but precision, particularly in answers on the stacks, is part of my motivation. What am I missing?
For simplicity, let's model the helicopter blade as a simple massless beam with a point mass at the end. When there is no gravity, the beam will be straight. We now introduce a force to the beam tip, which will cause the beam to deflect. The bending stiffness $k$ is equal to the ratio of the force to the deflection: $k=\frac{F}{d}$ When we now put this beam in a rotating reference frame, like the blade of a spinning helicopter rotor, we have to introduce a centrifugal force on the mass to account for the constant acceleration of the beam tip. When the beam is deflected upward, the centrifugal force will cause a downward bending moment and hence the beam will deflect less than in the scenario without the rotation. Since the bending stiffness is the ratio of vertical force to the vertical deflection $k=\frac{F}{d}$, the (apparent) bending stiffness is higher in a rotating blade.
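The effect can be put in numbers with a deliberately crude model (my own toy numbers, not a real rotor): a rigid massless link of length $R$ hinged at the root through a torsional spring, with a point mass $m$ at the tip. For a small tip deflection $d$ the centrifugal force $m\omega^2 R$ produces a restoring torque $m\omega^2 R\, d$ about the root, which (the length cancels) adds $m\omega^2$ to the static tip stiffness $k=F/d$:

```python
# Toy model: rigid massless link, root torsion spring, tip mass (values invented)
m = 5.0            # kg, tip mass (assumed)
k_static = 2.0e3   # N/m, static tip stiffness F/d with the rotor stopped (assumed)

def apparent_stiffness(omega):
    """k_eff = k + m*omega^2 for this hinged-link toy model (small deflections)."""
    return k_static + m * omega**2

for rpm in (0, 100, 300):
    omega = rpm * 2 * 3.141592653589793 / 60   # rad/s
    print(f"{rpm:4d} rpm: k_eff = {apparent_stiffness(omega):8.0f} N/m")
```

Even with these invented numbers, at 300 rpm the apparent stiffness is several times the static one, which is the sense in which rotation "stiffens" the blade.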
{ "source": [ "https://physics.stackexchange.com/questions/346150", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/72071/" ] }
346,407
I have just experienced a snowfall and I am not so clear on how it works. Three days after a short day of snowfall, with temperatures between 2 °C (min) and 17 °C (max) and fully sunny, scarcely cloudy days, there is still some snow persisting in shadowed and dark places. This is contrary to my intuition: I would've expected all the snow to have melted and disappeared after the first sunny day, or after the second. Yet we are on the third day and still some snowman heads are alive. Is it because the snow contains salt? Or does the snow create low-temperature air around itself? Or does the daily morning humidity turn the snow into ice blocks that are harder to melt and better at scattering sun rays?
Just as a complement to Ziggurat's answer: you can try to estimate the time required for the sun to melt a certain quantity of snow by yourself. The energy required to melt a mass $m$ of snow is $$Q=L m$$ where $L$ is the latent heat of fusion. For ice, $L=334$ kJ/kg . The density of snow $\rho$ ranges from $100$ to $800$ kg/m$^3$ Solar irradiance $I$ ranges from $150$ to $300$ W/m$^2$. The albedo of snow (percentage of reflected sunlight) $A$ ranges from $0.2$ for dirty snow to $0.9$ for freshly fallen snow. If the surface exposed to sunlight is $S$, the absorbed energy in the time interval $\Delta t$ will be $$E_{in}=(1-A) IS \Delta t$$ If $V$ is the snow volume, the energy required to melt it will be $$E_{melt} =L \rho V$$ Equating these two expressions we get $$\Delta t = \frac{L \rho V}{(1-A)IS}$$ Assuming $A=0.9$, $\rho=300$ kg/m$^3$ and $I=200$ W/m$^2$, we get, for a sheet of snow of surface $1$ m$^2$ and thickness $1$ cm, $\Delta t \simeq 5 \cdot 10^4$ s, i.e. $\simeq 14$ hours. This is a very rough estimate that doesn't consider conduction processes. But anyway, you can see that even if we assume a pretty high irradiance we need a considerably long time to melt a modest quantity of snow. If the snow is in the shade, the value of $I$ will be less. Also, for snowmen, since we would be talking about compressed snow, the value of $\rho$ could be $2-2.5$ times larger.
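The estimate above is a one-liner to reproduce (same parameter values as in the text):

```python
# Time for sunlight alone to melt a 1 cm sheet of snow
L   = 334e3     # J/kg, latent heat of fusion of ice
rho = 300.0     # kg/m^3, settled snow
A   = 0.9       # albedo of fresh snow
I   = 200.0     # W/m^2, solar irradiance
S   = 1.0       # m^2, exposed surface
V   = S * 0.01  # m^3, a 1 cm thick sheet

dt = L * rho * V / ((1 - A) * I * S)   # seconds
print(f"{dt:.0f} s ~ {dt / 3600:.0f} hours")
```

Varying $A$, $\rho$ and $I$ within the quoted ranges shows how shade (lower $I$) and compaction (higher $\rho$) stretch this time out to days.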
{ "source": [ "https://physics.stackexchange.com/questions/346407", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/135187/" ] }
346,683
I'm looking for an intuition on the relationship between the time period and amplitude (for a small perturbation) of pendulums. Why does the period not depend on the amplitude? I know the math of the problem. I am looking for physical intuition.
The higher you are, the greater the maximum velocity and maximum potential energy. Consider one pendulum lifted higher than a second both released at the same time. When the higher pendulum reaches the starting point of the second, it already has a velocity greater than 0. This higher velocity allows the higher pendulum to complete its swing in the same amount of time as the lower, even though it has a longer path. Since I'm at a computer now I will address a majority of what is said in the comments. For starters, this does not exactly apply to pendulum; only approximately , and the approximation gets worse as $\theta$ increases. A good way to visualize this is through the Tautochrone curve which is a frictionless curve where for all heights, the time to fall is the same (this is the equivalent of a pendulum period if you ignore the backswing, or have 2 of these curves mirrored; which will be a perfect mirror of the front swing if energy is conserved in the system). In this scenario, the accelerations work out perfectly (under the same gravity) so that they all arrive at the same time. This is unlike the circular motion, which is only approximately correct for small angles. The interesting thing to note is that looking at a small displacement of a tautochrone curve; it looks approximately circular if you only look at a small section near the bottom. This is an intuitive way to explain why a circular pendulum approximately has this behaviour with small angles. (Henning mentioned a tautochrone curve in his answer as well. It seemed to be an appropriate way to add more intuition to this)
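How small the amplitude dependence actually is can be quantified: the exact period of a circular pendulum is $T=\frac{4}{\omega_0}K\!\left(\sin\frac{\theta_0}{2}\right)$, with $K$ the complete elliptic integral of the first kind, computable via the arithmetic-geometric mean, $K(k)=\pi/\bigl(2\,\mathrm{AGM}(1,\sqrt{1-k^2})\bigr)$ (a sketch):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def period_ratio(theta0):
    """T(theta0) / T(small-angle), for release angle theta0 in radians."""
    k = math.sin(theta0 / 2)
    K = math.pi / (2 * agm(1.0, math.sqrt(1 - k * k)))
    return 2 * K / math.pi

for deg in (1, 10, 30, 90):
    r = period_ratio(math.radians(deg))
    print(f"{deg:3d} deg: period longer by {100 * (r - 1):.3f} %")
```

At 10° the period is only about 0.2% longer than the small-angle value, which is why the circular pendulum looks isochronous in practice.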
{ "source": [ "https://physics.stackexchange.com/questions/346683", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/120952/" ] }
348,026
There are a lot of questions here dealing with infrared cameras and thermographic cameras. I think I understand the reason why a thermographic camera is able to retrieve the temperature values from any object and convert them to a falsecolor representation, but why is a "regular" infrared camera not able to retrieve this information? What are the differences between these cameras? Is it just the sensor within the camera?
This is a common confusion, because both thermographic cameras and "normal" cameras with some IR capability are called IR cameras often. The typical video camera with IR capability has a solid state semiconducting camera sensor normally used for capturing visible light, which relies on the photons interacting with electrons and electron-"holes" inside the semiconductor to convert the incoming light into electric charge which is subsequently measured. These photons are in the wavelength range of 300-800 nm or so, but the sensor technology is typically responsive up to 1000 nm or more. As the eye is not sensitive to the energy in the 800-1000 nm band, an IR cut filter is normally inserted in cameras to make the resulting photo seem similar to what the eye sees. But if you remove the IR filter, you can get some "nightvision" capability by bathing the scene with light in the 850-950 nm range which is invisible to the eye. On the other hand, thermal radiation is peaked at a much longer wavelength, typically at 8000 nm or longer, and is much more difficult to work with in a direct photon -> charge process, so the typical thermal camera uses a completely different and more mundane physical process - it actually uses an array of thermometers! These are nothing else than a grid of small metal squares that are heated by the incoming thermal radiation, and their temperature can be read out because their resistance changes by their temperature (they are called micro-bolometers). So, very different physical processes are used and the radiation is of an order of magnitude different wavelengths. The thermal cameras need optics that can bend these longer wavelengths, they are often made of germanium for example and are opaque to visible light.
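Wien's displacement law makes the wavelength gap concrete (a quick check; $b$ is the Wien displacement constant):

```python
b = 2.897771955e-3  # m K, Wien displacement constant

def peak_wavelength_nm(T):
    """Wavelength (nm) at which a blackbody at temperature T (K) emits most."""
    return b / T * 1e9

print(f"room temperature (300 K): {peak_wavelength_nm(300):.0f} nm")   # ~9700 nm
print(f"sunlight (5800 K):        {peak_wavelength_nm(5800):.0f} nm")  # ~500 nm
```

Thermal emission from room-temperature objects peaks an order of magnitude beyond where silicon sensors respond, which is why the micro-bolometer approach is needed.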
{ "source": [ "https://physics.stackexchange.com/questions/348026", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/164363/" ] }
348,359
I am interested in experimental physics and looking for information about the above question.
Research has created antihydrogen, and that is about it for the present as far as antimatter in bulk, which one would need for antiwater. Scientists in the US produced a clutch of antihelium particles, the antimatter equivalents of the helium nucleus, after smashing gold ions together nearly 1bn times at close to the speed of light. They were gone as soon as they appeared, but for a fleeting moment they were the heaviest particles of antimatter a laboratory has seen. If you look at the nuclear binding energy plot, oxygen needs a lot of antinucleons to materialize. Present research has just seen antihelium.
{ "source": [ "https://physics.stackexchange.com/questions/348359", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/164511/" ] }
348,471
I don't understand how the centripetal force, which always points to the center of our circular motion, can cause this scenario: We have a big stone which spins very fast, so fast that a part breaks off, because of the centrifugal force (this is at least how my textbooks describe it). My problem : the centrifugal force does not really exist (we only use it in accelerated frames of reference, so Newton's laws still work there), so if we're in a laboratory frame of reference, which force would "pull" that piece of the stone to the outside of the circle, when we only have the centripetal force (as mentioned, pointing to the center of the circular motion...)? (Please don't try to explain it in an accelerated frame of reference, because there I understand it, but I don't understand it in a laboratory frame of reference)
In the lab frame of reference, you need to reverse the question - don't ask yourself what pulls the particles apart but what keeps them together . By Newton's laws, everything on which no force acts keeps travelling in a straight line . So what requires explanation is not that a collection of moving particles - such as a rotating flywheel - flies apart but what keeps them together. The force that keeps them together is a centripetal force, in this case exerted by the bonds that keep the material together. When you reach a velocity where this force is not enough anymore to keep the particles on a circular trajectory/bound orbit, they fly apart.
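As a rough number for when the bonds give way: for a thin rotating ring, the hoop stress is $\sigma=\rho v^2$, with $v$ the rim speed, so failure occurs near $v=\sqrt{\sigma/\rho}$. With order-of-magnitude values for rock (my assumptions, not measured data):

```python
import math

rho   = 2700.0   # kg/m^3, typical rock density (assumed)
sigma = 1.0e7    # Pa, ~10 MPa tensile strength, order of magnitude for rock (assumed)

v_burst = math.sqrt(sigma / rho)   # rim speed at which hoop stress reaches sigma
print(f"rim speed at failure ~ {v_burst:.0f} m/s")

# For a stone of 10 cm radius, that corresponds to roughly
R = 0.10  # m (assumed)
print(f"~ {v_burst / (2 * math.pi * R):.0f} revolutions per second")
```

In other words, once the rim moves faster than the material's internal forces can supply the required centripetal acceleration, the pieces simply continue in straight lines.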
{ "source": [ "https://physics.stackexchange.com/questions/348471", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/117307/" ] }
348,514
My father explained to me how rockets work and he told me that Newton's Third Law of motion worked here. I asked him why it works and he didn't answer. I have wasted over a week thinking about this problem and now I am giving up. Can anyone explain why Newton's Third Law works? For reference, Newton's third law: To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal, and directed to contrary parts.
Why do you want to know? I'm not kidding. That's actually an important question. The answer really depends on what you intend to do with the information you are given. Newton's laws are an empirical model. Newton ran a bunch of studies on how things moved, and found a small set of rules which could be used to predict what would happen to, say, a baseball flying through the air. The laws "work" because they are effective at predicting the universe. When science justifies a statement such as "the rocket will go up," it does so using things that we assume are true. Newton's laws have a tremendous track record working for other objects, so it is highly likely they will work for this rocket as well. As it turns out, Newton's laws aren't actually fundamental laws of the universe. When you learn about Relativity and Quantum Mechanics (QM), you will find that when you push nature to the extremes, Newton's laws aren't quite right. However, they are an extraordinarily good approximation of what really happens. So good that we often don't even take the time to justify using them unless we enter really strange environments (like the sub-atomic world where QM dominates). Science is always built on top of the assumptions that we make, and it is always busily challenging those assumptions. If you had the mathematical background, I could demonstrate how Newton's Third Law can be explained as an approximation of QM as the size of the object gets large. However, in the end, you'd end up with a pile of mathematics and a burning question: "why does QM work?" All you do there is replace one question with another. So where does that leave you? It depends on what you really want to know in the first place. One approach would simply be to accept that scientists say that Newton's Third Law works, because it's been tested. Another approach would be to learn a whole lot of extra math to learn why it works from a QM perspective.
That just kicks the can down the road a bit until you can really tackle questions about QM. The third option would be to go test it yourself. Science is built on scientists who didn't take the establishment's word at face value, went out, and proved it to themselves, right or wrong. Design your own experiment which shows Newton's Third Law works. Then go out there and try to come up with reasons it might not work. Test them. Most of the time, you'll find that the law holds up perfectly. When it doesn't hold up, come back here with your experiment, and we can help you learn how to explain the results you saw. That's science. Science isn't about a classroom full of equations and homework assignments. It's about scientists questioning everything about their world, and then systematically testing it using the scientific method!
{ "source": [ "https://physics.stackexchange.com/questions/348514", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
348,537
I'm looking for all (or most) theoretical semi-classical derivations of the maximal magnetic field intensity that there may be in the Universe. As an example, this paper evaluates a maximal field of about $3 \times 10^{12} \, \mathrm{teslas}$ : https://arxiv.org/abs/1511.06679 but its calculations are buggy, especially on the first page, since there's a relativistic $\gamma$ factor missing in the first few formulae. When we take into account the $\gamma$ factor, it destroys the derivation of a maximal value for $B$! So this paper isn't satisfying at all. Please, I need semi-classical calculations with some rigor, using special relativity fully and some quantum bits only, without a full-blown quantum electrodynamics calculation with Feynman diagrams! You may use the Heisenberg uncertainty principle , the quantization of angular momentum , Einstein/de Broglie relations , and even things about "vacuum polarization" or some other quantum tricks, but they all need to be properly justified to make the calculations convincing! From the previous paper as a (buggy) example, is it possible to derive a theoretical maximal $B_{\text{max}}$ using classical electrodynamics and special relativity only (without quantum mechanics)? I don't think it's possible, without at least general relativity, which suggests that the field energy density cannot be larger than a certain limit without creating a black hole. EDIT : Curiously, even in general relativity, it appears that there's no theoretical limit to the magnetic field strength. The Melvin magnetic universe is an analytical solution to the Einstein field equations that is as close as possible to a uniform magnetic field.
See for example this interesting paper from Kip Thorne : http://authors.library.caltech.edu/5184/1/THOpr65a.pdf The spacetime metric of Melvin's magnetic universe doesn't have any singularity, whatever the value of the field parameter, and there can be no gravitational collapse of the field under perturbations! So apparently there's no maximal value of $B$ in classical general relativity, without matter!
{ "source": [ "https://physics.stackexchange.com/questions/348537", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/98263/" ] }
348,588
I imagine that there is a pretty simple answer to my question, but I have just never gotten it straight. If a proton is comprised of two up quarks and a down, and neutrons are comprised of two down and an up, how can a neutron be a proton and electron?
A neutron is not "a proton and an electron". A neutron is not composed of a proton and an electron inside of the neutron. In quantum mechanics, particles can appear and disappear or change into other particles. With the neutron, one of the down quarks can change into an up quark by emitting a W boson, turning the neutron into a proton. The W boson quickly decays into an electron and an electron antineutrino. The new up quark didn't exist until the down quark turned into it. The W boson is what is called a virtual particle. It doesn't exist in the classical sense, it's just kind of there in the ambiguous region of spacetime where the decay occurs. The electron and antineutrino didn't exist until the decay. Here is a Feynman diagram of the process, from here:
{ "source": [ "https://physics.stackexchange.com/questions/348588", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/164626/" ] }
348,756
Suppose, for the sake of this thought experiment, I am structurally identical to an average human, with the only difference being that my body is scaled in all directions by a factor of 3. This would result in me having 27x more mass than a normal human. In cinema, giants tend to move very sluggishly, and that seems to match our expectation of creatures of that size--after all, most people think of whales and elephants as being very large, slow-moving creatures. However, I now also have 27x more muscle mass... Wouldn't my much larger muscles provide the necessary force to counteract the increase in inertia? Barring a slight increase in air resistance, wouldn't my body movements (walking, running, moving my arms) be indistinguishable from that of a normal-sized human, and not sluggish as depicted in movies?
Assuming for a moment that your bones are proportionately stronger... (because you are asking about motion, not strength: but see for example this question about scaling in nature ) That still leaves us with some physics that "doesn't scale well". First, there is the issue of muscle mass: assuming your muscles are made of the same fibers, their strength (ability to exert a force) goes as the cross sectional area, while their power (force times velocity) scales with the total volume (if each element of muscle contracts by some amount, the total contraction of the muscle, and thus the velocity, depends on the length; and since the force depends on the cross section, the power scales with the volume; this also makes sense from an energy balance perspective, assuming that each cell expends a certain amount of energy per unit time, the total power will scale with the number of cells). So you are not actually "27 times stronger" when you are scaled up - your ability to accelerate yourself is less than if you were your normal size. Second, there is the issue of inertia. Have you ever tried balancing a match stick on your finger? Hard, right? A fork is still quite hard, while a broomstick is easy. The reason for this is the moment of inertia. A simple rod has a moment of inertia $I=m\ell ^2$; if we scale all dimensions by 3x, the mass increases by 27x and the length by 3x, so the moment of inertia increases as the FIFTH power of scale. When you look at the effect of gravity on balance, the only thing that matters is the $\ell^2$ term - so the 3x bigger you will "tip over" much more slowly 1 . This means that when you lift up a foot to take a step, it will take much longer for your body to start "falling forward" so you can actually take a step forward. Of course once you are running, your superior strength will carry you further, faster - but when it comes to the "usual" maneuvering around, this extra size will be bothersome.
In nature, there is the additional complication that as things get bigger, they have to be built with stronger bones, etc. This is why a mouse seems to move so rapidly, and an elephant (giraffe) so slowly. So yeah - the movies have it right. Giants are "lumbering". It's physics. Incidentally, if I saw "giant you" running in the distance, it would look to me like gravity had been reduced: you would bounce "more slowly than I expected" because the time it would take you to land after jumping to "knee height" would be significantly longer (because your knee height is much higher than mine). That factor alone should mean that I would expect to have to speed up a movie of "giant you" by a factor $\sqrt{3}$ just so it would look normal. UPDATED: And the "slowing down" due to the moment of inertia thing has the same factor! This means that if we film "big" you, then speed up the movie by 1.7x, we should see something that "looks normal". And that's assuming you are strong enough to overcome the problem of muscle strength... 1 : "...tip over much more slowly": the time constant of the motion goes as $\sqrt{\frac{2\ell}{3g}}$ as I derived in this answer about balancing a pencil on its tip. So when you are 3x taller, the time it takes to tip over will be about 1.7x slower.
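The two appearances of the $\sqrt{3}$ factor in the footnote can be checked numerically. A minimal sketch, where the 1.8 m body height and the rod model are simplifying assumptions for illustration:

```python
import math

def tip_time_constant(length, g=9.81):
    # Time constant for a rod of the given length tipping over,
    # tau = sqrt(2*l / (3*g)), from the footnote above.
    return math.sqrt(2 * length / (3 * g))

# A person approximated as a 1.8 m rod vs. a 3x-scaled giant.
tau_normal = tip_time_constant(1.8)
tau_giant = tip_time_constant(3 * 1.8)

# The ratio depends only on the scale factor: sqrt(3) ~ 1.73,
# the same factor by which a film of the giant must be sped up.
print(round(tau_giant / tau_normal, 2))  # -> 1.73
```

The absolute heights drop out of the ratio, which is why only the scale factor of 3 matters.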
{ "source": [ "https://physics.stackexchange.com/questions/348756", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/128182/" ] }
349,032
In some submarines, divers can exit and enter through a moon pool, an opening in the bottom of the submarine. We can clearly see ocean's water flowing in the moon pool, but it won't come inside. Why so? What is stopping the water here from coming inside?
The air is stopping the water from coming inside. For the water to enter a cavity already filled with another fluid, it has to either displace or compress this fluid. The shape of the container prevents the air from escaping and the water can't rush in if the air inside is at the same pressure as the water outside, because then the net force on the interface is zero.
{ "source": [ "https://physics.stackexchange.com/questions/349032", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/164892/" ] }
349,042
If a tennis ball and a bowling ball are dropped off a rooftop, they hit the ground at the same time. But if they are rolled down a slope, the bowling ball rolls faster. Why?
The easy explanation is that the tennis ball is hollow. When you merely drop the objects, they are subjected to the same acceleration - the acceleration due to gravity - and nothing else. Conservation of energy then says that their gravitational potential energy should be completely transformed into kinetic energy at the ground: $$mg\Delta h=\frac{1}{2}mv^2\to v=\sqrt{2g\Delta h}$$ Since the initial heights $\Delta h$ are equal, they both have the same velocity as each other (though not constant in time) no matter how far they fall and, thus, hit at the same time. However, when you roll them down the roof, the initial gravitational potential energy, $mg\Delta h$, is transformed not only into kinetic energy, but also into rotational energy. The rotational energy of something is $\frac{1}{2}I\omega^2$, where $I$ is the moment of inertia (the rotational equivalent of mass) and $\omega$ is the angular velocity ($\omega=v/r$; the velocity of the object divided by its radius). This is all well and good, so the difference between the bowling ball and the tennis ball now arises because the bowling ball is solid and the tennis ball is hollow. When just dropped, there is no difference. However, when rolling, the different distributions of mass affect the moments of inertia differently. A solid sphere has $I=\frac{2}{5}mr^2$, while a hollow sphere (I know the tennis ball is not perfectly hollow, but let's make this approximation, okay?) has $I=\frac{2}{3}mr^2$. What does this mean? Well, let's do the math (math is fun!).
For the bowling ball, we have: $$mgh=\frac{1}{2}\left(I\omega^2+mv^2\right)=\frac{1}{2}\left(\frac{2}{5}mr^2\cdot\frac{v^2}{r^2}+mv^2\right)\to v=\sqrt{\frac{10}{7}gh}$$ Whereas, for the tennis ball, we have: $$mgh=\frac{1}{2}\left(I\omega^2+mv^2\right)=\frac{1}{2}\left(\frac{2}{3}mr^2\cdot\frac{v^2}{r^2}+mv^2\right)\to v=\sqrt{\frac{6}{5}gh}$$ Notice that the mass of either ball is mostly irrelevant and that, since $\sqrt{\frac{10}{7}}>\sqrt{\frac{6}{5}}$, the forward velocity, $v$, of the bowling ball is greater than that of the tennis ball; just because one is hollow and one is solid. It's also worth noting that the radius, as you may have concluded, does not ideally affect the forward velocity. This is something easily shown through the equations above as well as experimentally. Grab some solid spheres of different radii and roll them down an incline (I work in a physics teaching lab, so believe me when I say I've done this many times), you should see they hit the bottom at the same time. Yay! Physics is cool!
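The energy balance above is easy to verify numerically. A small sketch (the 1 m drop height and the free-fall comparison are arbitrary additions for illustration):

```python
import math

def rolling_speed(c, g=9.81, h=1.0):
    # Final speed of a ball rolling without slipping down a drop h,
    # with moment of inertia I = c*m*r^2:
    #   m*g*h = (1/2)*(1 + c)*m*v^2  ->  v = sqrt(2*g*h / (1 + c))
    return math.sqrt(2 * g * h / (1 + c))

v_solid = rolling_speed(2 / 5)   # solid sphere, I = (2/5) m r^2
v_hollow = rolling_speed(2 / 3)  # hollow sphere, I = (2/3) m r^2
v_dropped = math.sqrt(2 * 9.81 * 1.0)  # free fall, no rotation

# Mass and radius cancel; only the mass distribution matters.
print(v_dropped > v_solid > v_hollow)  # -> True
```

Note that any rolling ball is slower than a dropped one, since part of the potential energy goes into rotation either way.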
{ "source": [ "https://physics.stackexchange.com/questions/349042", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/164900/" ] }
349,054
If I want to transport a litre of a hot liquid, what's the best option? Use a single one litre flask, or two 500 ml flasks? (The latter is preferable due to the distribution of weight across two people.) Which option will keep the liquid warmer for longer (accounting for the fact that the liquid will be drunk at regular intervals)? Assume that all flasks have the same levels of insulation.
{ "source": [ "https://physics.stackexchange.com/questions/349054", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/45808/" ] }
349,115
I've read several popular articles telling that frequent opening of the fridge highly increases the power consumption. Is it really so significant? Isn't the heat in the room-temperature food which is brought to the fridge so much more relevant that some air which goes into the fridge upon opening the door is nothing compared to that? To make it more concrete: How many times do I have to open and close the fridge so the effect is comparable with putting there a 1 litre box of milk at the room temperature? Let's say the room has 22°C, the fridge 7°C.
That depends on whether the fridge monitors the temperature or not. Where I work, the large walk-in fridge has a temperature monitor. It starts to cool only when the temperature rises above 4.7°C and stops when it sinks to 3.5°C. The fridge is very well insulated, meaning the fridge very rarely has to turn on when the door is closed. I frequently retrieve items from the fridge. This normally means the door is open for less than 15 seconds, but in that time the fridge frequently rises to 5+°C, and you hear the cooler start up again. For that fridge, energy consumption is close to 0 when not opened and reaches its maximum every time it is opened. However, you want to know the difference between opening the fridge and cooling 1l of milk. The Carnot coefficient of refrigeration $$\gamma = {T_c \over T_h-T_c}$$ is the ratio of the heat extracted to the work required to extract this heat. $T_c$ is the temperature in the fridge (I'll say 2°C = 275 K), and $T_h$ is room temperature (at 22°C = 295 K), so $\gamma = 13.75$. This means to move one joule of heat energy from the milk to outside it takes 0.073 J from the mains. The energy we want to remove from 1 litre of milk when cooling from 22°C to 2°C is $$Q=mc\Delta\theta = 1\text{ kg} \times 4181 {\text{J} \over \text{kg} \cdot °C} \times 20°C = 83620 \text{ J}$$ removed from the milk (assuming milk $\approx$ water - it's close, but not perfect). This will take $83620 \text{ J} \times 0.073 = 6104 \text{ J}$. My fridge contains about 224l (10 mol) of air. Opening the door raises the temperature from 4°C to around 10°C (I just checked). The $\gamma$ ratio for that is 46.17, so every joule removed requires 0.02 J. Cooling 224l of air from 10°C to 4°C means moving $Q = 0.288 \text{ kg} \times 1000 {\text{J} \over \text{kg} \cdot °C} \times 6°C = 1728 \text{ J}$. This will take $1728 \text{ J} \times 0.02 = 34.56 \text{ J}$. However, when I open my fridge, a 15W bulb is turned on.
If I open the fridge for 10 seconds, the bulb has already used 4.3x more electricity than will be used cooling the air. This means you can open the fridge over 175 times before you've reached the energy consumption of cooling your milk (although when including the light bulb, it's closer to just 33 times). However, at current electricity costs, it's around \$0.00026 to cool that milk - so I doubt the power consumption will ever really matter to you. If you drink the average amount of milk for a French citizen (260 litres - Wikipedia has bizarre lists), you're spending just €0.067 per year on your milk. Instead of worrying about the milk, here are a few quick suggestions: turn lightbulbs off, and change to energy saving ones - up to €180/yr; buy a TV which uses very little electricity in standby mode - up to €38/yr; don't boil too much water - up to €58/yr.
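The arithmetic above can be reproduced in a few lines. A sketch that keeps the answer's assumptions (milk treated as water, ideal Carnot coefficients, 0.288 kg of air warmed by 6°C per opening):

```python
def carnot_cop(t_cold_c, t_hot_c):
    # Ideal coefficient of refrigeration, gamma = T_c / (T_h - T_c):
    # joules of heat moved per joule of electrical work.
    t_c = t_cold_c + 273.15
    return t_c / (t_hot_c - t_cold_c)

# Work to cool 1 l of milk (~1 kg, treated as water) from 22 C to 2 C.
q_milk = 1.0 * 4181 * 20
w_milk = q_milk / carnot_cop(2, 22)

# Work to re-cool ~0.288 kg of air from 10 C back to 4 C.
q_air = 0.288 * 1000 * 6
w_air = q_air / carnot_cop(4, 10)

print(round(w_milk / w_air))  # door openings per litre of milk
```

With unrounded coefficients this comes out near 160 openings rather than 175; the small difference is just the rounding of $1/46.17$ to $0.02$ in the text. Either way, the conclusion stands: the milk dwarfs the door openings, and the light bulb dwarfs the air.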
{ "source": [ "https://physics.stackexchange.com/questions/349115", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/72027/" ] }
349,403
I have a rather basic, maybe even dumb, question. I was wondering why speed is defined as it is: $s = d/t$ Of course, what the equation means is not too difficult to understand. However, there are many ways that d and t could be related, for instance: $s = d + t$ I am not sure who the first person to define speed was, but I was wondering how they made the decision to define speed as distance divided by time .
The definition of speed (please, let me call it velocity hereinafter) is not random at all. It seems you understand that it must depend on the distance $d$ and the time $t$, so I'll skip to the next stage. Evidently (for a constant $t$) velocity increases if $d$ does; and (for a constant distance) $v$ decreases if $t$ rises. That constrains the ways we can define it. For example, your example of $d+t$ is automatically discarded. You could say $d-t$, which satisfies the growing conditions. Then we apply the reasoning in the limit cases. For a 0 distance, velocity must be 0 independently of time (unless time is 0 too); that discards any sums. If the time to cover the distance is infinite, the velocity must be 0. That forces $t$ to be in the denominator. So we deduce it's a fraction, but how can we be sure there are no powers of those quantities? We impose the linearity of space. It doesn't make sense that the velocity is different if you pass from 50 to 60, or from 70 to 80 in the same time. If all points in space are equivalent, there cannot be distinctions like these, so using the numerator $\Delta d$ guarantees that all points in space are equivalent. If it were $\Delta d^2$ the result would be different from 70 to 80 and from 50 to 60, for example. That's against the obvious principle that we can set the origin where we want (we must be able to measure from the point we choose, as we do every day with a simple ruler, placing it where we want). The same reasoning applies to time. So it must be a fraction, and there cannot be powers other than 1. The only possible difference is a constant factor $s=k \frac{\Delta d}{\Delta t}$ And this is what speed (or velocity) is, after all. The constant is actually the unit factor. It depends on what units you are using. I hope this is useful to you.
{ "source": [ "https://physics.stackexchange.com/questions/349403", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/128120/" ] }
349,587
For any set of data points, you can construct a perfectly correlated, exactly fitted curve using a sum of sloped lines, each multiplied by its respective Heaviside step function, to form a zig-zag shaped curve. And yet, we do not use those models. This raises the question: how does the academic science community of physicists know when to disregard a model that has a higher correlation than another possible model?
For any set of data points, you can construct a 100% interpolated and fitted curve using a sum of sloped lines all multiplied by their respective Heaviside step functions to form a zig-zag shaped curve. And yet, we do not use those models. Physics from the time of Newton to now is the discipline where mathematical differential equations are used, whose solutions fit the data points and are predictive of new data. In order to do this, a subset of the possible mathematical solutions is picked by use of postulates/laws/principles, as strong as axioms as far as fitting the data goes. What you describe is a random fit to a given data curve, with no possibility of predicting behavior for new boundary conditions and systems. It is not a model. how does the academic science community of physicists know when to disregard a model that has a higher correlation than another possible model? If one has a complete set of functions, like Fourier or Bessel functions, one can always fit data curves. This is not a physics model; it is just an algorithm for recording data. A physics model has to predict what coefficients the functions used for a fit will have.
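The difference between an interpolating "recording" and a predictive model can be illustrated with a toy fit; the underlying law $y = 2x$ and the deterministic "noise" term are invented for the example:

```python
import numpy as np

# Samples of an (assumed) underlying law y = 2x, with a small
# deterministic "noise" term standing in for measurement error.
x = np.linspace(0, 1, 10)
y = 2 * x + 0.05 * np.sin(37 * x)

# Zig-zag "model": piecewise-linear interpolation through every point.
# Its error on the data it was built from is exactly zero ...
zigzag = np.interp(x, x, y)
print(np.allclose(zigzag, y))  # -> True

# ... but it only records the data, noise and all.  The one-parameter
# physical model y = k*x recovers the law underneath the noise.
k = np.polyfit(x, y, 1)[0]
print(abs(k - 2.0) < 0.2)  # -> True
```

The zig-zag has perfect correlation but says nothing about a new data point; the one-parameter model does.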
{ "source": [ "https://physics.stackexchange.com/questions/349587", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
349,672
The law of conservation of electric charge states that the net electric charge of an isolated system remains constant throughout any process. In simple words, charge can neither be created nor destroyed. The question that popped into my head is that if I take out some electrons from a neutral body, it would become positively charged. So didn't I just create some charge contradicting the law of conservation? This feels kinda vacuous but what am I missing here?
if I take out some electrons from a neutral body, it would become positively charged. So didn't I just create some charge You didn't create anything. The electrons were already there (and so were the protons that make up the positive charge). All you did was move them. When you talk about conservation laws you have to include the whole system. If you're removing something from something else, you and what you remove are all part of the system to be considered. EDIT : A comment by @luk32 suggests mentioning a situation which can arise in particle physics where e.g. a neutral particle can decay into charged particles. Note that we again consider the complete system, and when we include the new charged particles we find that they have a net charge of zero. Many more complex conversions are possible in the quantum world, but again electric charge must be conserved, and we have to pay careful attention to what is included in the complete system to balance our sums. An example of such an event might be the decay of the neutron .
{ "source": [ "https://physics.stackexchange.com/questions/349672", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/148026/" ] }
349,847
Basically I'm wondering what is the nature of an out of focus image. Is it randomized information? Could the blur be undone by some algorithm?
The blurring is not randomised, it is predictable. See Can someone please explain what happens on microscopic scale when an image becomes unfocused on a screen from a projector lens? for a basic explanation. Each point of the in-focus image is spread out into a diffraction pattern of rings called a point spread function (PSF), and these ring patterns overlap to form the out-of-focus image. The blurred image is the convolution of the object and the PSF. Convolution is a mathematical transformation which can in some circumstances be reversed (deconvolution) - for example when the image has been made using coherent light (from a laser) and the PSF is known. When photos are taken using ordinary incoherent light, and the PSF is unknown, the blurring cannot be reversed completely, but a significant improvement can be made, eg using the blind deconvolution algorithm . Examples of objects and resulting images can be used to approximately re-construct the PSF, or a Gaussian function can be used. Blurring due to motion (of the camera or object) can also be corrected. For both cases the techniques and problems are discussed in Restoration of De-Focussed and Blurred Images , and examples given of what can be achieved. Software is available online to fix blurred images .
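The "convolution can sometimes be reversed" point can be seen in a toy 1-D example with a known PSF and no noise (both strong assumptions; real blind deconvolution has neither):

```python
import numpy as np

# A sharp 1-D "image": two point sources on a dark background.
signal = np.zeros(64)
signal[20], signal[40] = 1.0, 0.5

# A known point spread function, here a small Gaussian kernel.
x = np.arange(64)
psf = np.exp(-0.5 * ((x - 32) / 1.5) ** 2)
psf /= psf.sum()
psf_f = np.fft.fft(np.fft.ifftshift(psf))

# Blurring is convolution, i.e. multiplication in Fourier space ...
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * psf_f))

# ... so with the PSF known and no noise, division undoes it exactly.
restored = np.real(np.fft.ifft(np.fft.fft(blurred) / psf_f))

print(np.allclose(restored, signal))  # -> True
```

With measurement noise the division blows up wherever the PSF's spectrum is small, which is why practical restoration needs regularized methods such as the blind deconvolution mentioned above.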
{ "source": [ "https://physics.stackexchange.com/questions/349847", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/99567/" ] }
349,864
How do we know that the direction of increasing time inside the event horizon a Schwarzchild black hole is that of decreasing $r$, instead of increasing $r$? Both directions would be timelike, but how do we know which is 'future-pointing'? In other words, is there an easy way to see that gravitational collapse produces a black hole instead of a white hole?
{ "source": [ "https://physics.stackexchange.com/questions/349864", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/130345/" ] }
350,121
I googled for the above question, and I got the answer to be $$[\Psi]~=~L^{-\frac{3}{2}}.$$ Can anyone give an easy explanation for this?
The physical interpretation of the wavefunction is that $|\psi(\vec r)|^2dV$ gives the probability of finding the electron in a region of volume $dV$ around the position $\vec r$ . Probability is a dimensionless quantity. Hence $|\psi(\vec r)|^2$ must have dimension of inverse volume and $\psi$ has dimension $L^{-3/2}$ .
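A quick numerical sanity check of the "probability is dimensionless" argument, using the hydrogen ground state as an example wavefunction (my choice, not part of the answer above), in units where the Bohr radius is 1:

```python
import math

a = 1.0  # Bohr radius, in arbitrary length units

def psi_1s(r):
    # Hydrogen ground state; the prefactor carries dimension L^(-3/2)
    # so that |psi|^2 times a volume is a pure, dimensionless number.
    return math.exp(-r / a) / math.sqrt(math.pi * a**3)

# Integrate |psi|^2 over all space in spherical shells: total is 1.
dr = 1e-3
total = sum(psi_1s(r)**2 * 4 * math.pi * r**2 * dr
            for r in (i * dr for i in range(1, 40000)))
print(round(total, 3))  # -> 1.0
```

If $\psi$ carried any other dimension, the integral $\int|\psi|^2\,dV$ could not come out as a pure number.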
{ "source": [ "https://physics.stackexchange.com/questions/350121", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/131602/" ] }
350,404
According to quantum mechanics, any quantum angular momentum is quantized in units of $\hbar$. Does it mean that the angular momentum of the ceiling fan (due to its rotation) is quantized? If yes, what does it physically mean? Does it mean that it cannot rotate with arbitrary speed?
Yes, the angular momentum of a ceiling fan is quantized. This means that when the ceiling fan speeds up, it is actually jumping from one speed to another. However, the size of these jumps is so small--because Planck's constant $h$ is so small--that the difference between two allowed speeds is immeasurably small. This is similar to how a thrown baseball's position is uncertain because of Heisenberg's Uncertainty Principle. As with the ceiling fan, the smallness of $h$ makes the size of the uncertainty immeasurably small. There are macroscopic systems where quantized angular momentum can be observed. When liquid helium is cooled enough to become a superfluid (~2 Kelvin), it will not rotate if the container is rotated slowly. If the container's rotation is slowly sped up, there will be a certain speed where a little whirlpool suddenly appears. The liquid helium has gained one unit of angular momentum. As the container continues to speed up, more of these quantum vortices appear, each one containing a quantized unit of angular momentum.
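A back-of-envelope number makes "immeasurably small" concrete; the fan's moment of inertia and rotation rate below are rough guesses, not measured values:

```python
import math

hbar = 1.054571817e-34   # J*s, reduced Planck constant

I = 0.2                  # kg*m^2, rough guess for a ceiling fan
omega = 2 * math.pi * 3  # rad/s, about 3 revolutions per second

L = I * omega            # classical angular momentum, ~3.8 J*s
n = L / hbar             # how many hbar-sized steps that is

# Going one quantum faster changes the speed by one part in ~10^34.
print(f"n = {n:.1e}")  # -> n = 3.6e+34
```

So the fan's allowed speeds form a ladder with around $10^{34}$ rungs between rest and its operating speed, which is why the steps can never be resolved.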
{ "source": [ "https://physics.stackexchange.com/questions/350404", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/164488/" ] }
350,568
I'm quite surprised by these regularly spaced rings of alternating brightness in a fluorescent tube. These are also moving along the tube and only appear when the voltage is low. What are these and does the pattern have anything to do with AC frequency?
This reminds me of the Franck-Hertz experiment where similar patterns occur. At low voltage, the free electrons in the tube are accelerated by the voltage until they have enough energy to excite a gas atom by hitting it. The atom eventually falls back to the ground state and emits the light. Because the excitation energy is quantized, an electron must be re-accelerated over a roughly fixed distance after each collision before it can excite another atom, which produces the regularly spaced bright bands. When the voltage is high enough, most of the gas is excited such that the electrons can move freely, and the pattern is gone.
{ "source": [ "https://physics.stackexchange.com/questions/350568", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/123393/" ] }
350,597
In this photo, why is the sun not positioned exactly above the "axis" from which its reflection seems to spread? I think it could be how sunlight refracts across the northern hemisphere but I'm not sure. I took this photo and I would be fascinated if this was a physics problem rather than a problem with my camera. I used an iphone 5s.
This is a steal / riff on Asher's correct answer. If you are using a tablet, tilt it so that the left hand side is closer to your face; now does it look ok? ........on my cheap tablet it does. No building authority (except Pisa Municipal Council in Italy) would allow this degree of vertical tilt, so obviously the camera does sometimes lie. This is an example of aberration, caused by distortions of light ray paths inside of the camera lens, particularly with zoom lenses. This curvilinear aberration can take one of two forms. Either like this: Pincushion distortion: which is what causes the effect in your picture. Or like this: Barrel distortion. These pictures are taken from Curvilinear Distortion . PhotoSE deals with this, but on first reading anyway, not in as much detail as I would have expected. I agree, if it was due to a previously unknown aspect of physics, that would be great. Sadly, it's more mundane. EDIT In case his comment is deleted, please take account of Samuel Weir's remark in assessing my wording above: I don't think that it's a matter of lens distortion. As I recall, it's an apparent distortion primarily due to the fact that the plane of the camera's sensor is tilted upward with respect to the horizon. There are so-called "perspective control" lenses which can correct for the distortion. See, for example: kenrockwell.com/nikon/19mm.htm . Also: en.wikipedia.org/wiki/Tilt–shift_photography (Sorry, it doesn't appear that the entire web address was converted into a hyperlink because of the hyphen. Just copy the whole address and paste it into the address line of your browser.) And tfb: I agree with @SamuelWeir I think, although I originally thought the barrel-distortion reasoning was really good: it's more likely to be perspective distortion due to the sensor plane being angled upwards.
Although the image is missing any useful EXIF data and the person didn't tell us what camera it was taken with, many modern cameras, if they know enough about their lens, will correct for the distortions they know about when creating JPEGs. So if this was taken with such a camera, it seems unlikely you'd get so much lens-related distortion. Both of these users have a much better grasp of physics than I do. END EDIT
{ "source": [ "https://physics.stackexchange.com/questions/350597", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/165685/" ] }
350,805
Is there a logical/mathematical way to derive what the very maximum percentage of surface area you can see from one angle of any physical object? For instance, if I look at the broad side of a piece of paper, I know I have only seen 50% of its surface area (minus the surface area of the very thin sides). Is 50% always the maximum amount of surface area you can see of any object from one angle? Assumptions: This is assuming we aren't considering transparent/semi-transparent objects or the extra parts we can see with the help of mirrors. Just looking at the very surface of an object from one particular angle.
There is no such upper bound. As a simple counter-example, consider a thin right-angled solid cone of base radius $r$ and height $h$, observed on-axis from some large(ish) distance $z$ away from the cone tip. You then observe the tilted sides, of area $\pi r\sqrt{r^2+h^2}$, and you don't observe the area of the base, $\pi r^2$, so you observe a fraction \begin{align} q &=\frac{\pi r\sqrt{r^2+h^2}}{\pi r^2+ \pi r\sqrt{r^2+h^2}} \\ &= \frac{\sqrt{1+r^2/h^2}}{r/h+\sqrt{1+r^2/h^2}} \\ &\approx 1- \frac rh \end{align} of the surface, in the limit where $r/h\ll 1$, and this can be arbitrarily close to $1$ so long as the cone is thin enough and long enough.
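A quick numerical sketch of the limit, using the formula from the answer above (Python, with arbitrary example values of $r/h$):

```python
import math

def visible_fraction(r, h):
    """Fraction of the cone's surface visible on-axis from the tip side:
    lateral area / (base area + lateral area)."""
    lateral = math.pi * r * math.sqrt(r**2 + h**2)
    base = math.pi * r**2
    return lateral / (base + lateral)

# As r/h -> 0 the visible fraction approaches 1 - r/h, i.e. arbitrarily close to 1.
for ratio in [0.5, 0.1, 0.01, 0.001]:
    print(f"r/h = {ratio}: q = {visible_fraction(ratio, 1.0):.6f}")
```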
{ "source": [ "https://physics.stackexchange.com/questions/350805", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/148654/" ] }
351,108
If you add red light (~440 THz) and green light (~560 THz), you get what we perceive as yellow light (~520 THz). But I assume what you really get is a mixed waveform that we perceive as yellow? Suppose the red is a perfect sine wave, and so is the green, the mix of both will not be a perfect sine wave but a wobbly composite thing - right? Which is different than a perfect sine wave of ~520 THz. But we call both things "pure" yellow. Is that correct? If so, are there animals that can discern composite pure yellow from singular pure yellow, like we can discern the mixture of multiple audio sine waves as a chord? Or is there machinery that can do that? See also: Why both yellow and purple light could be made by a mix of red, green and blue?
Our ability to separate different colors from each other depends crucially on how many different receptors we have for colored light. Humans have three different receptors for light, which means that we can characterize colors by three numbers, just like the RGB codes of colors on your screen. At the end of the day, what determines which colors we perceive is how the wave-form is projected onto these three numbers. Since there is an infinite set of wave forms, there is an infinite mixture of colors that we will perceive as identical (for every perceived color). Some animals have more than three types of color receptors, and can therefore distinguish more wave-forms of light. You can say that their color perception is higher-dimensional (4D, 5D, etc.) than our 3-dimensional color perception.
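The projection onto three numbers is linear, so the "infinite mixture of colors perceived as identical" can be shown directly: adding any component lying in the null space of the 3×N receptor matrix leaves all three projections unchanged. A toy sketch — the Gaussian response curves are invented for illustration, not real cone sensitivities, and a physical spectrum would additionally have to stay non-negative:

```python
import numpy as np

wavelengths = np.linspace(400, 700, 200)                 # nm

def gaussian(center, width=40):
    return np.exp(-((wavelengths - center) / width) ** 2)

# Three toy receptor sensitivity curves (invented, not real cone responses).
responses = np.stack([gaussian(440), gaussian(540), gaussian(570)])   # 3 x N

spectrum = gaussian(550, width=120)                      # some incoming spectrum

# A vector orthogonal to all three response rows changes the waveform
# without changing any of the three projected numbers (a "metamer").
_, _, vt = np.linalg.svd(responses)
metamer = spectrum + 0.5 * vt[-1]

triple_a = responses @ spectrum
triple_b = responses @ metamer
print(np.allclose(triple_a, triple_b), np.allclose(spectrum, metamer))
# -> True False: different waveforms, identical perceived color
```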
{ "source": [ "https://physics.stackexchange.com/questions/351108", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/165967/" ] }
352,048
Assume that a man is travelling in a space ship at a certain relativistic speed with respect to a man at rest at some point in space, such that 3 minutes in the ship is equal to 5 minutes for the person at rest . Also assume that the man in the ship has a lighter which contains gas of a certain amount such that the lighter can be lit for 5 minutes . Now if the man in the space ship lights the lighter for 3 minutes, then he would have 2 minutes' worth of gas left, but the stationary observer would have seen light emitted for about 5 minutes (since 3 in that space ship = 5 minutes for the stationary observer) How is it possible for the stationary observer to see light for 5 minutes? And in this case, how is energy conserved?
From the perspective of the stationary observer, the light is dimmer. Why? Because the chemical reaction is happening more slowly, so the fire emits fewer photons per second. The flame burns for longer, but it emits less energy per second. Both observers will agree on the total energy emitted by the flame (once they have accounted for possible red-shift) and thus total energy is conserved.
{ "source": [ "https://physics.stackexchange.com/questions/352048", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/166466/" ] }
352,465
If you light a candle it starts emitting photons, right? And photons travel at the speed of light. But how come they can't light the whole of a dark room? It means that the photons are not reaching the dark corners of the room, but that is impossible if they travel at 300 thousand kilometres per second?
It might well be that the photons reach the corners of the room but that doesn't mean that you can see the corners. Why? For you to see something, photons have to be emitted or reflected by the object and afterwards reach your eye. Also, it is not enough that one photon reaches your eye, but several photons are necessary for you to see the object (e.g. here ). So why does a single candle not suffice to light the whole room? While the candle emits photons in (nearly) all directions, the number of photons it emits is too small. There are simply not enough photons being reflected back into your eye to see the whole room. The reason that you can see objects close to the candle but not far away is the inverse square law. (source: http://hyperphysics.phy-astr.gsu.edu/hbase/Forces/imgfor/isq.gif ) The candle emits (approximately) an equal number of photons in each direction. If an object is very close to the candle, nearly all photons emitted in the direction of this object will hit it and be reflected. However, if the object is far from the candle, the number of photons per unit area gets smaller and smaller because the number of photons emitted in this direction is constant but the area becomes larger. Thus, an object far away is hit by far fewer photons. Then, the photons are either absorbed or reflected in all directions, and while some travel in the direction of your eye, the inverse square law becomes important again. Hence, in the end, only very few photons are reflected by the corner and reach your eye, and you can't see it.
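The inverse-square falloff the answer describes is easy to put numbers on. A candle emits very roughly $10^{17}$ visible photons per second (an order-of-magnitude assumption, not a measured value); spread over a sphere of radius $d$ this gives:

```python
import math

PHOTONS_PER_SECOND = 1e17      # rough order of magnitude for a candle, assumed

def flux(distance_m):
    """Photons per square metre per second at distance d from the candle."""
    return PHOTONS_PER_SECOND / (4 * math.pi * distance_m ** 2)

for d in [0.1, 1.0, 5.0]:
    print(f"{d} m: {flux(d):.2e} photons / m^2 / s")

# Doubling the distance quarters the flux, and the light reflected from an
# object suffers the same 1/d^2 loss again on the way back to your eye.
```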
{ "source": [ "https://physics.stackexchange.com/questions/352465", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/162986/" ] }
352,754
In mathematics, the sine and cosine functions are defined based on right-angled triangles. But how can the representation of a wave or signal be based on these trigonometric functions, when we can't draw any right-angled triangles in the medium (i.e., the air)? How can we say that?
While Sine and Cosine functions were originally defined based on right angle triangles, looking at that point of view in the current scenario isn't really the best thing. You might have been taught to recognize the Sine function as "opposite by hypotenuse", but now it's time to have a slightly different point of view. Consider the unit circle $x^2+y^2=1$ on a Cartesian plane. Suppose a line passing through the origin makes an angle $\theta$ with the $x$ -axis in a counterclockwise direction, the point of intersection of the line and the circle is $(\cos{\theta},\sin{\theta})$ . Think about it. Does this point of view correlate with the earlier one? Both of the definitions are the same. So you'd wonder, why do we need this point of view ? Well, I'd say it's easier to understand how Sine waves are actually important in many common phenomena. Suppose we start to spin the line, by making $\theta$ increase linearly. You'd get something like this: The Sine and Cosine functions are arguably the most important periodic functions in several cases: The periodic functions of how displacement, velocity, and acceleration change with time in SHM oscillators are sinusoidal functions. Every particle has a wave nature and vice versa. This is de-Broglie's Wave Particle Duality. Waves are always sinusoidal functions of some physical quantity (such as Electric Field for EM Waves, and Pressure for Sound Waves). Sound itself is a pressure disturbance that propagates through material media capable of compressing and expanding. It's the pressure at a point along the sound wave that varies sinusoidally with time. Speech signals are not perfectly sine waves. A pure sound, from a tuning fork would be the perfect sine wave. Regular talking is not a pure sine wave as people don't maintain the same loudness or frequency. As a result, this is what noise looks like, compared to pure frequencies. Notice the irregularities in the amplitude and the frequency of the noise wave. 
Alternating voltages used in your everyday plug sockets are in fact sinusoidally varying voltages as a function of time. tl;dr: Considering Sine waves as "opposite by hypotenuse" is far from the best comparison when dealing with everyday applications of physics. Further reading: History of Trigonometry Sound Wave model of Electromagnetic radiation Alternating current Simple Harmonic Motion
{ "source": [ "https://physics.stackexchange.com/questions/352754", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/166425/" ] }
352,914
I am not planning on staring into the sun during an eclipse or any other time. I have been reading about how no variety of regular sunglasses are safe enough to view the eclipse with. I'm not talking about being able to see things clearly, but just actual eye safety. From what I understand it is the ultraviolet light that causes damage to the retina, but maybe it is more complicated. How do my eyes get hurt if I am looking at the sun through so called "100% UV protection" and what makes the eclipse glasses sold in stores different? edit: To clarify this is not about how the rays from the sun are dangerous, but about why "100% UV blocking" sunglasses fail. Do other dangerous rays get through? Is the "100%" marketing? Essentially, in what way are the best consumer sunglasses inadequate for looking at an eclipse. Answers about pupil dilation and what makes an eclipse more dangerous for naked-eye viewers are not what I'm after.
The damage to your eyes comes from the total energy in the visible and near-infrared region, even when you wear sunglasses with 100% UV blocking. When you look at the sun on a normal day, the visible light from the sun itself is enough for your eyes to trigger pupillary constriction and the blink reflex, giving you at least partial protection. But when you look at an eclipsed sun, the light and energy from the infrared region will be more than the light from the visible region. So there is no pupil constriction or blink reflex to save you, and the energy from the IR rays will burn your eyes. So it is unsafe to watch an eclipsed sun even with sunglasses, whether they have UV protection or not.
{ "source": [ "https://physics.stackexchange.com/questions/352914", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/166951/" ] }
352,929
I want to know whether every kind of energy, like kinetic and potential energy, is converted to mass. Is it only possible for particles like electrons and protons, or also for big objects like a car or bus? Please consider the minutest / smallest change in mass, even 0.000000000000000000000001 kg of gain in mass. I mean "relativistic mass" in the case when the object is moving and "rest mass" in the case when the object is at rest. Can rest mass too increase when the gravitational potential energy of an object increases?
{ "source": [ "https://physics.stackexchange.com/questions/352929", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/166967/" ] }
353,080
If a thousand people whisper inaudibly, will the resulting sound be audible? (...assuming they are whispering together.) I believe the answer is "yes" because the amplitudes would simply add and thus reach an audible threshold. Is this right? if possible, please provide an explanation simple enough for non-physics people
Yes, always. I would like to disagree with stafusa's answer here, expanding on Rod's comment. Interference will not occur, since for whispering the sources of sound will be statistically independent . For demonstration, let us look at two people. Person 1 produces a whisper that can be characterized by a propagating sound field $E_1(\vec{r},t)$ , where $\vec{r}$ is the position in space and $t$ is time. Similarly person 2 produces a whisper $E_2(\vec{r},t)$ . The overall field at a point in space is then simply $$E_\mathrm{tot}(\vec{r},t) = E_1(\vec{r},t) + E_2(\vec{r},t)$$ since sound waves are approximately linear (at least for wave amplitudes achievable by voices). What you perceive as 'volume' (I will call it $I$ for intensity) is the time average of the magnitude of the total signal $$I = \langle E^*_\mathrm{tot}(\vec{r},t)E_\mathrm{tot}(\vec{r},t)\rangle.$$ That is, your ear is averaging over very short fluctuations in the signal. We can then expand this in terms of the two people's signals to get $$I = \langle E^*_{1}(\vec{r},t)E_{1}(\vec{r},t)\rangle + \langle E^*_{2}(\vec{r},t)E_{2}(\vec{r},t)\rangle + 2\langle E^*_{1}(\vec{r},t)E_{2}(\vec{r},t)\rangle.$$ So far, this is completely general. Now we assume statistical independence of the sources, which makes the last term zero: $$I = \langle E^*_{1}(\vec{r},t)E_{1}(\vec{r},t)\rangle + \langle E^*_{2}(\vec{r},t)E_{2}(\vec{r},t)\rangle.$$ So the overall intensity is simply the addition of the two whisper intensities.
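A quick numerical sketch of why the cross term vanishes for independent sources: model each whisper as a tone whose phase jumps around randomly, and compare the time-averaged intensity of the sum with the sum of the individual intensities (all parameters here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1000.0, 200_000)

def whisper(n_segments=500, freq=5.0):
    """A tone whose phase is re-randomized in each segment: a crude model
    of a statistically independent sound source."""
    seg = len(t) // n_segments
    phi = np.repeat(rng.uniform(0, 2 * np.pi, n_segments), seg)
    return np.cos(2 * np.pi * freq * t + phi)

e1, e2 = whisper(), whisper()
i1, i2 = np.mean(e1 ** 2), np.mean(e2 ** 2)
i_tot = np.mean((e1 + e2) ** 2)
print(i_tot, i1 + i2)   # nearly equal: the cross term averages to ~0
```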
{ "source": [ "https://physics.stackexchange.com/questions/353080", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/126502/" ] }
353,142
I am thinking about a detector that would beep if light passes through it. Is it possible?
It is indeed possible, as demonstrated by the group of Serge Haroche in 1999 using so-called quantum non-demolition Ramsey interferometry. The idea was to observe the presence or absence of a photon in a cavity by observing its interaction with atoms. This beautiful experiment relies heavily on the behaviour of quantum superposition of atomic states. A simplified explanation is that the presence of a photon in the cavity results in an additional relative phase shift in one term of the superposition of atomic states, and this additional phase shift can be detected. Since all the measurements are done on atomic rather than photonic states, one can infer (and thus detect) the presence of the photon without actually absorbing it.
{ "source": [ "https://physics.stackexchange.com/questions/353142", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/167078/" ] }
353,143
What does absolute humidity do in a large space with a temperature gradient? Situation: Enclosed space, approximately 8 meters in height. The space is nearly leak-proof, essentially no air being transferred between the inside and the outside. There is a significant temperature difference between the floor and the ceiling. The temperature of the air is around 35 degrees C at the floor, and around 85 degrees C at the ceiling. There is no liquid water in the space, and no condensation occurs. There is some air movement in the space, but it is minimal and only caused through natural methods, no forced air movements with fans. I am aware that if there was no temperature gradient the absolute humidity (gr/m^3) would stay the same and relative humidity would change depending on the temperature (for this instance also assuming that the outer envelope is not entirely leak-proof, so the pressure stays the same with the increase of temperature). But what happens when there is a temperature gradient in the space? My initial thought would be that absolute humidity would be the same at every height, making it easy to calculate the relative humidity at each height. But my knowledge on this subject is limited, and I have not been able to find any information to confirm or contradict my thought. The following question seems to come closest to my own question, albeit with two connected rooms and not a single space with a temperature gradient: Which is the same between two connected rooms, relative or absolute humidity? The answer provided by dmckee states “ Thus the absolute fraction of water will be the same at both ends and the relative humidity will vary.” But the response from the user asking the questions ends with “…temperatures mean different mass densities as well. If this approach isn't wrong, then both absolute and relative humidity’s in the two bottles will be different.” So I am uncertain what this actually means for my situation. 
Even if the absolute humidity is not the same at different heights, can somebody provide with some information about the absolute humidity in the space? Would the absolute humidity at the ceiling be higher, the same or lower than the absolute humidity near the floor and would there be some way of calculating this?
{ "source": [ "https://physics.stackexchange.com/questions/353143", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/167072/" ] }
353,253
While I was reading a book on mechanics, when introducing the vector multiplication the author stated that multiplying two vectors can produce a vector, a scalar, or some other quantity . 1.4 Multiplying Vectors Multiplying one vector by another could produce a vector, a scalar, or some other quantity. The choice is up to us. It turns out that two types of vector multiplication are useful in physics. An Introduction to Mechanics , Daniel Kleppner and Robert Kolenkow The authors then examine the scalar or "dot product" and the vector or "cross product" (the latter not shown in the above link; can be seen on Amazon's preview) but seem to make no mention of any other method. My concern is not about vector multiplication here, but what can be that quantity which is neither a scalar nor a vector. The author has explicitly remarked the quantity as neither a scalar nor a vector. What I think is that, when we define a vectors and scalars, we propose the definition in terms of direction. In one direction is considered and in another it is not considered. Then, how can this definition leave space for any other quantity being as none of the two? I would be obliged if someone could explain me if the statement is correct and how is it so. Also it would be great if you can substantiate your argument using examples.
If you have two vectors $\mathbf{a}$ and $\mathbf{b}$, the inner product $\mathbf{a} \cdot \mathbf{b}$ is a scalar, the cross product $\mathbf{a} \times \mathbf{b}$ is a vector and the dyadic product $\mathbf{a} \otimes \mathbf{b}$ is a matrix. It is defined as $$\mathbf{a}\otimes\mathbf{b} = \mathbf{a b}^\mathrm{T} = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix}\begin{pmatrix} b_1 & b_2 & b_3 \end{pmatrix} = \begin{pmatrix} a_1b_1 & a_1b_2 & a_1b_3 \\ a_2b_1 & a_2b_2 & a_2b_3 \\ a_3b_1 & a_3b_2 & a_3b_3 \end{pmatrix} $$ It occurs a lot in the formalism of quantum mechanics where it is written as $|a \rangle \langle b|$ (using the so-called bra-ket notation by Dirac). With regard to direction: if you apply a matrix to a vector, the vector may get stretched / compressed along multiple axes. So in contrast to a vector, a matrix involves multiple directions.
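The three products can be compared directly in NumPy; note how applying the dyadic $\mathbf{a}\otimes\mathbf{b}$ to a vector maps it onto the direction of $\mathbf{a}$, scaled by $\mathbf{b}\cdot\mathbf{v}$:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

print(np.dot(a, b))       # scalar: 32.0
print(np.cross(a, b))     # vector: [-3.  6. -3.]
print(np.outer(a, b))     # 3x3 matrix a b^T

# (a ⊗ b) v = a (b · v): the dyadic maps any v onto the direction of a.
v = np.array([1.0, 0.0, 0.0])
print(np.outer(a, b) @ v)   # (b . v) * a = [ 4.  8. 12.]
```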
{ "source": [ "https://physics.stackexchange.com/questions/353253", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/120609/" ] }
353,624
Let's say we have a emitter, emitting light that has frequency f, less than the threshold frequency of a metal. If you leave light shining onto that metal, for long enough, does the energy of the individual photons accumulate, on the electrons, so eventually they will ionize, or does this not happen? What am I missing?
For simplicity let's consider the photoelectric effect in a thin metal foil: The first step in the photoelectric effect is when a photon strikes an electron in the metal and transfers all its energy to it. The electron energy is now equal to the photon energy $h\nu$. If this energy is greater than the work function $\phi$ the electron can escape the metal and will emerge with a kinetic energy: $$ \tfrac{1}{2}mv^2 = h\nu- \phi $$ However, if $h\nu \lt \phi$, the electron will in effect bounce off the metal-air interface back into the metal, and the electron will start rattling around inside the metal. The trouble is that the metal has some resistance to the motion of electrons and the electron will very quickly lose its energy and come to a halt. By very quickly I mean less than a nanosecond. So if a second photon strikes the electron before the electron has slowed to a halt, and while the electron is travelling in the right direction, then yes the second photon could add enough energy to eject the electron. So in that case we would have photoelectrons ejected by absorbing two photons. However this process is very unlikely as the two photons would have to be absorbed within a very short time. In practice the rate at which photoelectrons are ejected by two (or more) photon absorption is very slow though it can be observed in special cases. For example the paper Double-Quantum Photoelectric Emission from Sodium Metal by M. C. Teich, J. M. Schroeer, and G. J. Wolga, Phys. Rev. Lett. 13, 611, 1964 reports observation of exactly this effect in sodium.
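The single-photon bookkeeping in the answer, $\tfrac12 mv^2 = h\nu - \phi$, is easy to evaluate. For sodium the work function is about $2.28\ \mathrm{eV}$ (a commonly quoted textbook value, assumed here):

```python
# Kinetic energy of a photoelectron, 0.5*m*v^2 = h*nu - phi.
H_PLANCK = 6.62607015e-34   # J s
EV = 1.602176634e-19        # J per eV

def photoelectron_energy_ev(freq_hz, work_function_ev=2.28):
    e = H_PLANCK * freq_hz / EV - work_function_ev
    return max(e, 0.0)      # below threshold: no single-photon emission

print(photoelectron_energy_ev(6.0e14))   # green light: ejects electrons
print(photoelectron_energy_ev(4.0e14))   # red light: below threshold, 0.0
```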
{ "source": [ "https://physics.stackexchange.com/questions/353624", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/147876/" ] }
353,649
Suppose a polarised light has intensity $I$. When it is passed through a polariser (also known as Polaroid) whose transmission axis is at angle $\theta$ with E vector the intensity becomes $I\cos^2\theta$. According to law of Malus. So the intensity and the amplitude decreases so where does this energy go?
The missing energy is dissipated in the material of the polarizer, similar to where the energy goes to when you shine light on an opaque object.
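Malus's law makes the bookkeeping explicit: of the incident intensity $I$, a fraction $\cos^2\theta$ is transmitted and the remainder $I\sin^2\theta$ is dissipated in (or, for some polarizer types, reflected by) the material:

```python
import math

def malus(i0, theta_deg):
    """Split incident intensity into transmitted and dissipated parts."""
    c2 = math.cos(math.radians(theta_deg)) ** 2
    return i0 * c2, i0 * (1 - c2)

for angle in [0, 30, 45, 60, 90]:
    t, d = malus(1.0, angle)
    print(f"{angle:3} deg: transmitted {t:.3f}, dissipated {d:.3f}")
```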
{ "source": [ "https://physics.stackexchange.com/questions/353649", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/130203/" ] }
353,652
If I am not mistaken, during phase change/transition the internal energy is also changing, but for adiabatic free expansion the change in internal energy is zero. So if phase change can occur during adiabatic free expansion, how is that possible?
{ "source": [ "https://physics.stackexchange.com/questions/353652", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/167375/" ] }
353,795
Let's say that I have a motor that's spinning really fast. I really want to know the angular speed of the motor. Using a stopclock definitely won't work as no one can time such fast rotations. So how would I find the rotational frequency in such a case?
There's a very interesting way to find the angular velocity of a wheel that's spinning so fast that you can't measure it using a stopclock. We'll be using a strobe light (a light that flashes on and off repeatedly) and a very interesting concept known as the Wagon Wheel effect under stroboscopic conditions. A little bit about the concept: The Wagon Wheel effect is a phenomenon in which a spinning wheel may appear to be stationary under a strobe light. The reason why this happens is quite simple: the rotational frequency of the spinning wheel is an integral multiple of the strobe light's on-and-off frequency. As a result, every time the strobe light flashes, the wheel comes to the same position as before. This creates the illusion that the wheel is stationary. But how will we use this concept to find the rpm of a spinning wheel? Let's find out. The experiment: You'll only need a strobe light (you can download strobe light apps for Android and probably for iOS as well) and your spinning wheel. In this answer, I'll be using a fidget spinner to demonstrate. Keep the room as dark as possible, and set your wheel to motion. Turn on the strobe light and start with a high flash frequency and gradually lower the frequency until you see the wheel become stationary. We do this because we don't want other integral multiples of the frequency to match up with the spinning wheel. Take a note of the strobe light frequency $\nu$ . In my case, the fidget spinner appears stationary at a frequency of 13.3 Hz. As we mentioned before, the wheel appears stationary only when the frequencies match. So, the frequency of the strobe light is the rotational frequency of the wheel. So, I can say that my fidget spinner makes 13.3 revolutions per second. And of course, its rpm would be 798. I hope you enjoyed this fun experiment. If you have any queries, please drop them in the comments. If you have a better way to find the angular velocity, don't hesitate to write an answer.
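The conversion at the end is just rpm = flashes per second × 60. One subtlety worth encoding: the wheel also looks stationary whenever its frequency is an integer multiple of the strobe's, which is exactly why the procedure starts high and lowers the flash rate:

```python
def rpm_from_strobe(flash_hz):
    """Wheel rpm when the strobe and wheel frequencies match 1:1."""
    return flash_hz * 60.0

def aliased_wheel_frequencies(flash_hz, n_max=4):
    """Wheel frequencies (Hz) that would *also* appear stationary:
    integer multiples of the flash rate all freeze under the strobe."""
    return [n * flash_hz for n in range(1, n_max + 1)]

print(rpm_from_strobe(13.3))            # 798.0 rpm for the fidget spinner
print(aliased_wheel_frequencies(13.3))
```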
{ "source": [ "https://physics.stackexchange.com/questions/353795", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/148444/" ] }
354,098
Lets say we have a closed system with states in a Hilbert space $\mathcal{H}$. Every state can be expressed as a sum of energy eigenstates. In a closed system, like a box of atoms, entropy will increase until the entire system is in thermal equilibrium. However, when we write a state as a sum of energy eigenstates, time evolution merely contributes a phase to each basis state. It doesn't seem to me as though states get any more "disordered" over time, so how can the entropy increase? Let me phrase my question differently. I recall a professor once writing $$S = k \log (\dim \mathcal{H})$$ on the board. Certainly, the dimensionality of a Hilbert space does not change under time evolution. So how can entropy increase under time evolution? Is the above equation only correct under certain conditions? There must be a big flaw in my reasoning/memory.
The total entropy of an isolated system indeed does not change under Schrodinger time evolution. To see this, note that (assuming for simplicity that the Hamiltonian does not depend explicitly on time) the system's density matrix satisfies the Von Neumann equation $\rho(t) = e^{-i H t / \hbar}\, \rho(0)\, e^{i H t / \hbar}$, so $\rho(t)$ and $\rho(0)$ are always unitarily equivalent, and therefore have the same eigenvalue spectra. Therefore any entropy measure that depends only on the density matrix weights (which, practically speaking, is all of them), is constant in time. But the entanglement entropy between subsystems can indeed increase, because the subsystems are not isolated. So if the system, say, starts in a product state with no spatial entanglement between its subsystems, then generically Schrodinger time-evolution will lead to increasing entanglement between the subsystems, so the local entropy associated with each little piece of the whole system will indeed increase, even as the total entropy remains constant. This fact relies on a very non-classical feature of Von Neumann entropy, which is that the sum of the entropies of the subsystems can be greater than the entropy of the system as a whole. (Indeed, in studying the entanglement of the ground state, we often consider systems where the subsystems have very large entanglement entropy, but the system as a whole is in a pure state and so has zero entropy!) The subfields of "eigenstate thermalization", "entanglement propagation", and "many-body localization" - which are all under very active research today - study the ways in which the Schrodinger time-evolution of various systems do or do not lead to increasing entanglement entropy of subsystems, even as the entropy of the system as a whole always stays the same.
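A minimal numerical illustration with two qubits (the $\sigma_x \otimes \sigma_x$ coupling is an arbitrary toy Hamiltonian, chosen only because it entangles the pair): the global state stays pure under the unitary evolution, yet the entanglement entropy of either qubit grows from zero.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.kron(sx, sx)                    # toy interaction Hamiltonian
evals, evecs = np.linalg.eigh(H)

def evolve(psi0, t):
    """Schrodinger evolution psi(t) = exp(-i H t) psi(0), with hbar = 1."""
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    return U @ psi0

def entanglement_entropy(psi):
    """Von Neumann entropy of qubit A's reduced density matrix."""
    m = psi.reshape(2, 2)
    rho_a = m @ m.conj().T             # trace out qubit B
    p = np.linalg.eigvalsh(rho_a)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

psi0 = np.array([1, 0, 0, 0], dtype=complex)   # product state |00>, entropy 0
for t in [0.0, 0.4, np.pi / 4]:
    print(t, entanglement_entropy(evolve(psi0, t)))
# The subsystem entropy grows from 0 toward log(2), while the total state
# remains pure: its density matrix always has eigenvalues {1, 0, 0, 0}.
```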
{ "source": [ "https://physics.stackexchange.com/questions/354098", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/157704/" ] }
354,159
I've studied a bit of frequency analysis with FFT and optimal phase binning and was taught that we can represent any composite waveform as the sum of its component frequencies. I understand the maths works and gives meaningful results that we can use for design or to solve problems, but does this mean that sine waves are a natural 'element', like particles are for matter but in the time domain (4th dimension) - something that occurs in nature? Or are they a mathematical construct that helps us interpret nature? Do pure, single frequencies occur through natural phenomena or processes? I was taught about tuning forks but (without having tested it) I assume they will produce some harmonics as the straight bars have more than one mode of vibration. Then I thought about the rotation of the planets but they are not pure sinusoids either since the gravity of other planets affects their rotation. Finally I thought about light, but only lasers have a single frequency and as far as I know they don't occur naturally. I assume I'm not the first human asking this question. Are you aware of any academic work on this matter?
Since no phenomenon is completely periodic (nothing keeps repeating from minus infinity to infinity), you could say that sine waves never occur in nature. Still, they are a good approximation in many cases and that is usually enough to consider something physical. Or are they a mathematical construct that helps us interpret nature? I would even go further and say that it is reasonable that everything in physics is a mathematical construct that helps us interpret nature , but that would lead to the philosophical debate of what nature is and so on. After all, almost everything in physics breaks down or at least becomes problematic at some regime: the notion of particles in strongly-interacting theories, energy in general relativity, the notion of a sound wave at the atomic scale...
{ "source": [ "https://physics.stackexchange.com/questions/354159", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/166433/" ] }
354,160
Recently, I learned in my class about harmonic motion and the difference in phase. According to the wikipedia and many other sources, you find that difference by subtracting the phases. Take a look at this: $4cos(20t+10)$ $-4 \cdot 20sin(20t + 10)$ $-4 \cdot20^2cos(20t + 10)$ By changing the sine to cosine, the second one is $\pi /2$ different from the first, and the third one is a $\pi$ different. But it occurred to me that why would you need to change the sine to cosine, or minus cosine to cosine? Isn't only subtracting the phase enough, and whatever number before it doesn't matter?
{ "source": [ "https://physics.stackexchange.com/questions/354160", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/167623/" ] }