143,642
Acceleration is defined as the rate of change of velocity with time. Jerk is defined as the rate of change of acceleration with time. What is the jerk due to gravity during ascent?
The precise theorem is the following, cf. e.g. Ref. 1. Theorem 1: Given a non-positive (=attractive) potential $V\leq 0$ with negative spatial integral $$ v~:=~\int_{\mathbb{R}^n}\! d^n r~V({\bf r}) ~<~0 ,\tag{1} $$ then there exists a bound state $^1$ with energy $E<0$ for the Hamiltonian $$\begin{align} H~=~&K+V, \cr K~=~& -\frac{\hbar^2}{2m}{\bf \nabla}^2\end{align}\tag{2} $$ if the spatial dimension $\color{Red}{n\leq 2}$ is smaller than or equal to two. The theorem 1 does not hold for dimensions $n\geq3$ . E.g. it can be shown that already a spherically symmetric finite well potential does not $^2$ always have a bound state for $n\geq3$ . Proof of theorem 1: Here we essentially use the same proof as in Ref. 2, which relies on the variational method . We can for convenience use the constants $c$ , $\hbar$ and $m$ to render all physical variables dimensionless, e.g. $$\begin{align} V~\longrightarrow~& \tilde{V}~:=~\frac{V}{mc^2}, \cr {\bf r}~\longrightarrow~&\tilde{\bf r}~:=~ \frac{mc}{\hbar}{\bf r},\end{align}\tag{3} $$ and so forth. The tildes are dropped from the notation from now on. (This effectively corresponds to setting the constants $c$ , $\hbar$ and $m$ to 1.) Consider a 1-parameter family of trial wavefunctions $$\begin{align} \psi_{\varepsilon}(r)~=~&e^{-f_{\varepsilon}(r)}~\nearrow ~e^{-1}\cr &\text{for}\quad \varepsilon ~\searrow ~0^{+} , \end{align}\tag{4}$$ where $$\begin{align} f_{\varepsilon}(r)~:=~& (r+1)^{\varepsilon} ~\searrow ~1\cr &\text{for}\quad \varepsilon ~\searrow ~0^{+}\end{align} \tag{5} $$ $r$ -pointwise. Here the $\nearrow$ and $\searrow$ symbols denote increasing and decreasing limit processes, respectively. E.g. eq. (4) says in words that for each radius $r \geq 0$ , the function $\psi_{\varepsilon}(r)$ approaches monotonically the limit $e^{-1}$ from below when $\varepsilon$ approaches monotonically $0$ from above. It is easy to check that the wavefunction (4) is normalizable: $$\begin{align}0~\leq~~&\langle\psi_{\varepsilon}|\psi_{\varepsilon} \rangle\cr ~=~~& \int_{\mathbb{R}^n} d^nr~|\psi_{\varepsilon}(r)|^2 \cr ~\propto~~& \int_{0}^{\infty} \! dr ~r^{n-1} |\psi_{\varepsilon}(r)|^2\cr ~\leq~~& \int_{0}^{\infty} \! dr ~(r+1)^{n-1} e^{-2f_{\varepsilon}(r)} \cr ~\stackrel{f=(1+r)^{\varepsilon}}{=}&~ \frac{1}{\varepsilon} \int_{1}^{\infty}\!df~f^{\frac{n}{\varepsilon}-1} e^{-2f}\cr ~<~~&\infty,\qquad \varepsilon~> ~0.\end{align}\tag{6} $$ The kinetic energy vanishes $$\begin{align} 0~\leq~~&\langle\psi_{\varepsilon}|K|\psi_{\varepsilon} \rangle \cr ~=~~& \frac{1}{2}\int_{\mathbb{R}^n}\! d^nr~ |{\bf \nabla}\psi_{\varepsilon}(r) |^2\cr ~=~~& \frac{1}{2}\int_{\mathbb{R}^n}\! d^nr~ \left|\psi_{\varepsilon}(r)\frac{df_{\varepsilon}(r)}{dr} \right|^2 \cr ~\propto~~& \varepsilon^2\int_{0}^{\infty}\! dr~ r^{n-1} (r+1)^{2\varepsilon-2}|\psi_{\varepsilon}(r)|^2\cr ~\leq~~&\varepsilon^2 \int_{0}^{\infty} \!dr ~ (r+1)^{2\varepsilon+n-3}e^{-2f_{\varepsilon}(r)}\cr ~\stackrel{f=(1+r)^{\varepsilon}}{=}&~ \varepsilon \int_{1}^{\infty}\! 
df ~ f^{1+\frac{\color{Red}{n-2}}{\varepsilon}} e^{-2f}\cr ~\searrow ~~&0\quad\text{for}\quad \varepsilon ~\searrow ~0^{+},\end{align} \tag{7}$$ when $\color{Red}{n\leq 2}$, while the potential energy $$\begin{align}0~\geq~&\langle\psi_{\varepsilon}|V|\psi_{\varepsilon} \rangle\cr ~=~& \int_{\mathbb{R}^n} \!d^nr~|\psi_{\varepsilon}(r)|^2~V({\bf r}) \cr ~\searrow ~& e^{-2}\int_{\mathbb{R}^n} \!d^nr~V({\bf r})~<~0 \cr &\text{for}\quad \varepsilon ~\searrow ~0^{+} ,\end{align}\tag{8} $$ remains non-zero due to assumption (1) and Lebesgue's monotone convergence theorem. Thus by choosing $\varepsilon \searrow 0^{+}$ smaller and smaller, the negative potential energy (8) beats the positive kinetic energy (7), so that the average energy $\frac{\langle\psi_{\varepsilon}|H|\psi_{\varepsilon}\rangle}{\langle\psi_{\varepsilon}|\psi_{\varepsilon}\rangle}<0$ eventually becomes negative for the trial function $\psi_{\varepsilon}$. A bound state $^1$ can then be deduced from the variational method. Note in particular that it is absolutely crucial for the argument in the last line of eq. (7) that the dimension $\color{Red}{n\leq 2}$. $\Box$ Simpler proof for $\color{Red}{n<2}$: Consider an un-normalized (but normalizable) Gaussian test/trial wavefunction $$\psi(x)~:=~e^{-\frac{x^2}{2L^2}}, \qquad L~>~0.\tag{9}$$ Normalization must scale as $$||\psi|| ~\stackrel{(9)}{\propto}~ L^{\frac{n}{2}}.\tag{10}$$ The normalized kinetic energy scales as $$0~\leq~\frac{\langle\psi| K|\psi \rangle}{||\psi||^2} ~\propto ~ L^{-2}\tag{11}$$ for dimensional reasons. Hence the un-normalized kinetic energy scales as $$0~\leq~\langle\psi| K|\psi \rangle ~\stackrel{(10)+(11)}{\propto} ~ L^{\color{Red}{n-2}}.\tag{12}$$ Eq. (12) means that $$\begin{align}\exists L_0>0 \forall L\geq L_0:~~0~\leq~& \langle\psi|K|\psi\rangle\cr ~ \stackrel{(12)}{\leq} ~&-\frac{v}{3}~>~0\end{align}\tag{13}$$ if $\color{Red}{n<2}$. The un-normalized potential energy tends to a negative constant $$\begin{align}\langle\psi| V|\psi \rangle ~\searrow~&\int_{\mathbb{R}^n} \! \mathrm{d}^nx ~V(x)~=:~v~<~0\cr &\quad\text{for}\quad L~\to~ \infty.\end{align}\tag{14}$$ Eq. (14) means that $$\exists L_0>0 \forall L\geq L_0:~~ \langle\psi| V|\psi\rangle ~\stackrel{(14)}{\leq}~ \frac{2v}{3} ~<~ 0.\tag{15}$$ It follows that the average energy $$\begin{align}\frac{\langle\psi|H|\psi\rangle}{||\psi||^2} ~=~~&\frac{\langle\psi|K|\psi\rangle+\langle\psi|V|\psi\rangle}{||\psi||^2}\cr ~\stackrel{(13)+(15)}{\leq}&~ \frac{v}{3||\psi||^2}~<~0\end{align}\tag{16}$$ of the trial function must be negative for a sufficiently big finite $L\geq L_0$ if $\color{Red}{n<2}$. Hence the ground state energy must be negative (possibly $-\infty$). $\Box$ References: K. Chadan, N.N. Khuri, A. Martin and T.T. Wu, Bound States in one and two Spatial Dimensions, J. Math. Phys. 44 (2003) 406, arXiv:math-ph/0208011. K. Yang and M. de Llano, Simple variational proof that any two‐dimensional potential well supports at least one bound state, Am. J. Phys. 57 (1989) 85. -- $^1$ The spectrum could be unbounded from below. $^2$ Readers familiar with the correspondence $\psi_{1D}(r)=r\psi_{3D}(r)$ between 1D problems and 3D spherically symmetric $s$-wave problems in QM may wonder why the even bound state $\psi_{1D}(r)$ that always exists in the 1D finite well potential does not yield a corresponding bound state $\psi_{3D}(r)$ in the 3D case?
Well, it turns out that the corresponding solution $\psi_{3D}(r)=\frac{\psi_{1D}(r)}{r}$ is singular at $r=0$ (where the potential is constant), and hence must be discarded.
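To make the scaling in the simpler proof concrete, here is a small numerical sketch (in units $\hbar=m=1$; the well depth and width are made-up values) for $n=1$: with the Gaussian trial function (9) over a finite well $V=-V_0$ for $|x|<a$, one finds $\langle K\rangle/||\psi||^2 = 1/(4L^2)$ and $\langle V\rangle/||\psi||^2 = -V_0\,\mathrm{erf}(a/L)\sim -1/L$, so the average energy must turn negative for large enough $L$, exactly as eqs. (12)-(16) predict.

    import math

    # Variational sketch of the n = 1 case (hbar = m = 1). Assumed numbers:
    # an arbitrarily shallow, narrow square well V(x) = -V0 for |x| < a.
    V0, a = 0.01, 1.0

    def mean_energy(L):
        """<H> for the Gaussian trial function psi = exp(-x^2 / (2 L^2))."""
        kinetic = 1.0 / (4.0 * L**2)       # <psi|K|psi> / <psi|psi>, ~ L^(n-2)
        potential = -V0 * math.erf(a / L)  # <psi|V|psi> / <psi|psi>, ~ -1/L
        return kinetic + potential

    for L in (1.0, 10.0, 100.0):
        print(L, mean_energy(L))  # goes negative by L = 100: a bound state exists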
{ "source": [ "https://physics.stackexchange.com/questions/143642", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62965/" ] }
143,652
I have been having trouble distinguishing these two equations and figuring out which one is correct. I have watched a video that says that $E^2=(mc^2)^2+(pc)^2$ is correct, but I do not know why. It says that $E=mc^2$ is the equation for objects that are not moving and that $E^2=(mc^2)^2+(pc)^2$ is for objects that are moving. Here is the link to the video: http://www.youtube.com/watch?v=NnMIhxWRGNw
Let me clarify some confusions in the notation that other answers have alluded to but not clearly mentioned. Historically, physicists liked to talk about two different definitions of mass. The first is the rest mass of a particle $m_0$. This is the mass of the particle when it is at rest. For example, the rest mass of the electron is $(m_0)_{electron} = 9.1 \times 10^{-31}~\text{kg}$. This is an absolute constant that is independent of the speed of the particle. The second is the relativistic mass $m$. This is the apparent mass of the particle when it is moving with speed $v$. It is related to the rest mass via the relation $$ m = \gamma m_0 = \frac{m_0}{\sqrt{1-v^2/c^2}} $$ Note that the relativistic mass is NOT a constant. It depends on $v$. In this historical notation, Einstein's famous formula that is completely correct in all frames is $$ E = m c^2 $$ However, it turns out via a series of algebraic manipulations that this equation also implies $$ E^2 = ( m_0 c^2)^2 + (pc)^2 $$ Let us prove this. $p$ is the momentum of the particle defined by $p = m v = \gamma m_0 v$. Thus $$ (m_0 c^2)^2 + (pc)^2 = m_0^2 c^4 + \gamma^2 m_0^2 v^2 c^2 = m_0^2 c^4 \left( 1 + \frac{\gamma^2 v^2}{c^2} \right) $$ Now, we have the property $$ 1 + \frac{\gamma^2 v^2}{c^2} = 1 + \frac{\frac{v^2}{c^2}}{\left( 1 - \frac{v^2}{c^2} \right) } = \frac{1}{ \left( 1 - \frac{v^2}{c^2} \right) } = \gamma^2 $$ Thus $$ (m_0 c^2)^2 + (pc)^2 = m_0^2 c^4 \gamma^2 = (\gamma m_0)^2 c^4 = m^2 c^4 = (mc^2)^2 = E^2 $$ Thus, in summary, in the historical notation, we have two completely equivalent formulae $$ \boxed{ E^2 = (m c^2 )^2 = (m_0 c^2)^2 + (pc)^2} $$ In modern-day notation, physicists have decided to drop discussion of the relativistic mass $m$ since it is not an absolute constant and depends on the speed of the particle. Nowadays, we only talk about the rest mass, $m_0$. However, in a confusing notational change, physicists today decided to use $m$ for the rest mass (which in today's notation is not confusing at all, since we don't talk about relativistic mass, but it is often confusing to students who try to compare Einstein's original papers with books written today). Following modern-day notation then, we only have ONE equation, namely $$ \boxed{ E^2 = (m c^2)^2 + (pc)^2 } $$ where in the above equation $m$ is now the rest mass.
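As a quick numerical sanity check of the equivalence, here is a minimal Python sketch (the choice of an electron at $0.8c$ is just an illustrative assumption):

    import math

    c = 2.998e8    # speed of light, m/s
    m0 = 9.11e-31  # electron rest mass, kg
    v = 0.8 * c    # illustrative speed

    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    p = gamma * m0 * v  # momentum p = gamma * m0 * v

    E_historical = gamma * m0 * c**2                       # E = m c^2 with m = gamma m0
    E_modern = math.sqrt((m0 * c**2) ** 2 + (p * c) ** 2)  # E^2 = (m0 c^2)^2 + (pc)^2

    print(E_historical, E_modern)  # both ~1.36e-13 J: the two formulae agree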
{ "source": [ "https://physics.stackexchange.com/questions/143652", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/34331/" ] }
143,823
I've noticed that when I apply the front brakes on my bike it stops quite quickly. If I apply the back brakes at the same speed, it skids rather than stopping quickly. Why?
Using the brakes on the front of the bike causes your weight to shift forward. Additional weight allows more force before the tire will slip (skid). If you brake hard enough the back tire of your bike will lift up and at that point all of the mass is distributed on the front tire. Remember the maximum force is $F_{max} = \mu F_{normal}$ and $F_{normal}$ is proportional to the distribution of weight on the tire, so as weight shifts forward $F_{normal}$ increases and therefore the maximum stopping force $F_{max}$ increases. Braking with your back tire doesn't shift weight onto the back tire, so its stopping power doesn't increase. Wikipedia explains this in great detail in the Bicycle and motorcycle dynamics article.
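A minimal numerical sketch of the effect (all numbers here are assumptions for illustration: the friction coefficient, the total mass and the weight fractions):

    MU = 0.8        # assumed tire-road friction coefficient
    G = 9.81        # gravitational acceleration, m/s^2
    M_TOTAL = 90.0  # assumed rider + bike mass, kg

    def max_braking_force(weight_fraction_on_braking_tire):
        """F_max = mu * F_normal, with F_normal set by the weight on that tire."""
        normal_force = weight_fraction_on_braking_tire * M_TOTAL * G
        return MU * normal_force

    # Assumed static 50/50 weight split; hard front braking shifts weight forward.
    print(max_braking_force(0.5))  # rear brake: ~353 N before the tire skids
    print(max_braking_force(0.9))  # front brake with weight transfer: ~636 N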
{ "source": [ "https://physics.stackexchange.com/questions/143823", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/41158/" ] }
143,998
Why do helicopter blades make this pulsing, oscillating, slapping(?) sound? Since their movement is smooth, shouldn't the sound be a similar, constant shush, perhaps increasing or decreasing in frequency as the speed of the blades changes? update: Won't the sound heard depend on the location of the listener? Perhaps a person standing below (or above;) the helicopter would hear a more constant and uniform sound wave, while a listener standing to the side will hear the mentioned pulsing sound?
In start-up and hover each blade produces a more or less constant sound. But the sound is attenuated by distance and may not be the same in all directions. Therefore you hear it differently depending on the blade's position relative to you. So as the blades rotate, the sound you hear pulsates because the blades alternately get to positions where you hear them more or less strongly. In this video showing helicopter start-up from the cockpit you can clearly hear the swish of each blade as it passes overhead, with the pulsing increasing in frequency as the rotor spins up. The blade tips also move quite fast, often at more than half the speed of sound, so the Doppler effect adds more variation to the sound if you are standing to the side. In cruise flight the advancing blade additionally moves faster relative to the air than the retreating one, so even the generated sound changes as the rotor turns. This effect increases as the helicopter accelerates. If it overspeeds, blade tips on the advancing side may (depending on helicopter type) get close to the speed of sound and shockwaves start to form on that side that add even more pulsating sound. In some cases (turns at high speed, descent) the blades may also be hitting the wake vortex shed by the previous blade, resulting in a sharp increase in the pulsating sound called "blade slapping". The reason is that a blade only hits the vortex when it passes one particular place on the rotor disk, usually on the advancing side. Apparently it is rather complex; I found there is a paper about it (I have not read it; it is behind a paywall).
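For a feel of the numbers, here is a small sketch with assumed, roughly Huey-like rotor figures (two blades, 7.3 m radius, 324 RPM, 60 m/s cruise; all four values are illustrative assumptions):

    import math

    RPM = 324.0       # assumed main-rotor speed
    N_BLADES = 2      # assumed two-blade rotor (the classic "slapping" type)
    R = 7.3           # assumed blade radius, m
    V_FORWARD = 60.0  # assumed cruise speed, m/s
    C_SOUND = 340.0   # speed of sound, m/s

    omega = 2.0 * math.pi * RPM / 60.0
    tip_speed = omega * R

    print(N_BLADES * RPM / 60.0)              # blade-pass (pulsing) frequency: ~10.8 Hz
    print(tip_speed / C_SOUND)                # hover tip Mach number: ~0.73
    print((tip_speed + V_FORWARD) / C_SOUND)  # advancing-tip Mach in cruise: ~0.9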
{ "source": [ "https://physics.stackexchange.com/questions/143998", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63240/" ] }
144,294
All the introductions I've found to Pauli matrices so far simply state them and then start using them. Accompanying descriptions of their meaning seem frustratingly incomplete; I, at least, can't understand Pauli matrices after reading them at all. My current understanding and confusion is demonstrated below. I'd be ever so grateful if someone could fill in all the holes, or poke new ones where appropriate. Spinors look like column vectors, i.e. $$s = \left(\begin{matrix}1\\0\\1\end{matrix}\right)$$ and are used so that rotation in three dimensions (using complex numbers) can be linearly transformed. What does the example spinor above mean? A spin value of 1 in the x and z directions? How can spin-$\frac{1}{2}$ be represented with just 1s then? A three dimensional vector is used to construct the Pauli matrix for each dimension. E.g., for spin-$\frac{1}{2}$, the vectors used for x, y and z are $v_x =(1,0,0)$, $v_y=(0,1,0)$ and $v_z=(0,0,1)$. You transform them each to the relevant Pauli matrix by the following equation, using dimension x for demonstration, $$ P^x=\left(\begin{matrix} v_3^x&v_1^x - i v_2^x\\ v_1^x+i v_2^x&-v_3^x \end{matrix}\right) $$ where superscript denotes dimension, not power. Once you have these matrices, you operate on the spinors with them. What does this do? You can also find the eigenvalues and eigenvectors for the matrix, which can be used to find the probability that a particle, if measured to have a certain spin in one dimension, when measured next will have spin in another dimension that you choose. I don't understand how this works. What does the eigenvalue and eigenvector in this sense physically represent, and how does spin up and down fit into this? E.g. If you had a spin-1 particle that you knew was spin up in the x direction, what would you do to find the probability of it having spin up or down in the z or y dimension when next measured? Concrete examples would probably help my understanding a lot.
Let me first remind you of (or perhaps introduce you to) a couple of aspects of quantum mechanics in general as a model for physical systems. It seems to me that many of your questions can be answered with a better understanding of these general aspects followed by an appeal to how spin systems emerge as a special case. General remarks about quantum states and measurement. The state of a quantum system is modeled as a unit-length element $|\psi\rangle$ of a complex Hilbert space $\mathcal H$, a special kind of vector space with an inner product. Every observable quantity (like momentum or spin) associated with such a system whose value one might want to measure is represented by a self-adjoint operator $O$ on that space. If one builds a device to measure such an observable, and if one uses that device to make a measurement of that observable on the system, then the machine will output an eigenvalue $\lambda$ of that observable. Moreover, if the system is in a state $|\psi\rangle$, then the probability that the result of measuring that quantity will be the eigenvalue $\lambda$ is \begin{align} p(\lambda) = |\langle \lambda|\psi\rangle|^2 \end{align} where $|\lambda\rangle$ is the normalized eigenvector corresponding to the eigenvalue $\lambda$. Specialization to spin systems. Suppose, now, that the system we are considering consists of the spin of a particle. The Hilbert space that models the spin state of a system with spin $s$ is a $2s+1$ dimensional Hilbert space. Elements of this vector space are often called "spinors," but don't let this distract you, they are just like any other vector in a Hilbert space whose job it is to model the quantum state of the system. The primary observables whose measurement one usually discusses for spin systems are the cartesian components of the spin of the system. In other words, there are three self-adjoint operators conventionally called $S_x, S_y, S_z$ whose eigenvalues are the possible values one might get if one measures one of these components of the system's spin. The spectrum (set of eigenvalues) of each of these operators is the same. For a system of spin $s$, each of their spectra consists of the following values: \begin{align} \sigma(S_i) = \{m_i\hbar\,|\, m_i=-s,-s+1,\dots, s-1,s\} \end{align} where in my notation $i=x,y,z$. So for example, if you build a machine to measure the $z$ component of the spin of a spin-$1$ system, then the machine will yield one of the values in the set $\{-\hbar, 0, \hbar\}$ every time. Corresponding to each of these eigenvalues, each spin component operator has a normalized eigenvector $|S_i, m_i\rangle$. As indicated by the general remarks above, if the state of the system is $|\psi\rangle$, and one wants to know the probability that the measurement of the spin component $S_i$ will yield a certain value $m_i\hbar$, then one simply computes \begin{align} |\langle S_i, m_i |\psi\rangle|^2. \end{align} For example, if the system has spin-$1$, and if one wants to know the probability that a measurement of $S_y$ will yield the eigenvalue $-\hbar$, then one computes \begin{align} |\langle S_y, -1|\psi\rangle|^2 \end{align} Spinors. In the above context, spinors are simply the matrix representations of states of a particular spin system in a certain ordered basis, and the Pauli spin matrices are, up to a normalization, the matrix representations of the spin component operators in that basis specifically for a system with spin-$1/2$.
Matrix representations often facilitate computation and conceptual understanding, which is why we use them. More explicitly, suppose that one considers a spin-$1/2$ system, and one chooses to represent states and observables in the basis $B =(|S_z, +1/2\rangle, |S_z, -1/2\rangle)$ consisting of the normalized eigenvectors of the $z$ component of spin, then one would find the following matrix representations in that basis \begin{align} [S_x]_B &= \frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \frac{\hbar}{2}\sigma_x\\ [S_y]_B &= \frac{\hbar}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = \frac{\hbar}{2}\sigma_y\\ [S_z]_B &= \frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} =\frac{\hbar}{2}\sigma_z\\ \end{align} Notice that these representations are precisely the Pauli matrices up to the extra $\hbar/2$ factor. Moreover, each state of the system would be represented by a $2\times 1$ matrix, or "spinor" \begin{align} [|\psi\rangle]_B = \begin{pmatrix} a \\ b\end{pmatrix}. \end{align} And one could use these representations to carry out the computations referred to above.
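To make the probability recipe concrete, here is a minimal sketch with NumPy (the chosen state and the choice of measuring $S_x$ are illustrative assumptions; units of $\hbar = 1$):

    import numpy as np

    hbar = 1.0  # work in units where hbar = 1

    sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
    S_x = 0.5 * hbar * sigma_x

    # State: spin-up along z, i.e. (1, 0) in the basis (|S_z,+1/2>, |S_z,-1/2>)
    psi = np.array([1.0, 0.0], dtype=complex)

    # The eigen-decomposition of S_x gives the possible measurement outcomes
    eigvals, eigvecs = np.linalg.eigh(S_x)
    for lam, vec in zip(eigvals, eigvecs.T):  # columns of eigvecs are eigenvectors
        prob = abs(np.vdot(vec, psi)) ** 2    # p(lambda) = |<lambda|psi>|^2
        print(lam, prob)                      # each outcome +-1/2 has probability 0.5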
{ "source": [ "https://physics.stackexchange.com/questions/144294", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/58082/" ] }
144,393
Say it is winter and the outside temperature is 0 degrees F. I set my thermostat to 74 degrees. When the temperature inside my home reaches 72 degrees, the furnace will kick on and warm the house to 74. Now, if I set my thermostat to 68 degrees to save energy, when my house reaches 66 degrees, the furnace will kick on and warm the house to 68. In both instances, the same amount of energy was expended to raise the temperature by 2 degrees. So how is it that keeping your thermostat at 68 will save energy?
The rate at which your home loses heat is proportional to the difference of temperature between the inside and the outside. This is Newton's law of cooling . Hence, a higher temperature home will lose heat faster. This means that if your thermostat is set lower, the furnace will need to turn on less often.
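A minimal sketch of the steady-state energy bookkeeping (the loss coefficient K is a made-up value; real homes differ):

    # Newton's law of cooling: heat loss rate proportional to (T_in - T_out).
    K = 250.0    # assumed loss coefficient, W per deg F of indoor-outdoor difference
    T_OUT = 0.0  # outdoor temperature, deg F
    HOURS = 24.0

    def furnace_energy_kwh(t_set):
        # In steady state the furnace must replace exactly the heat being lost.
        loss_watts = K * (t_set - T_OUT)
        return loss_watts * HOURS / 1000.0

    print(furnace_energy_kwh(74.0))  # ~444 kWh/day at the higher setpoint
    print(furnace_energy_kwh(68.0))  # ~408 kWh/day: lower setpoint, less energy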
{ "source": [ "https://physics.stackexchange.com/questions/144393", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63260/" ] }
144,406
On Wikipedia I saw that the average orbital speed of planet Earth around the Sun is a whopping $29\,783\text{ m/s}$, and it made me wonder: are there bodies (planets, meteorites, asteroids) that move faster? My question is not about small photons or other (small-ish) particles and their speed (speed of light), or even about solar winds ($750\text{ km/s}$), but about meteorites, planets or other materials and their speed around the Sun or another fixed point.
The maximum speed of an object that orbits the Sun at a certain distance $r$ is known as the escape velocity : $$ v_\text{esc} = \sqrt{\frac{2GM_\odot}{r}}, $$ where $M_\odot$ is the mass of the Sun. If the object would have a greater speed, it would eventually leave the solar system. So I'd say that the absolute maximum possible speed of any object in the solar system would be the escape velocity at the radius of the Sun $R_\odot$: $$ v_\max = \sqrt{\frac{2GM_\odot}{R_\odot}}, $$ which, as you can find in the wiki article, is $617.5\;\text{km/s}$. A comet that slams into the Sun, which occasionally happens, would have a speed close to this maximum. Alas, it's also the last speed it'll have before it meets its doom :-) Update If you want to know the fastest object in the solar system that didn't crash into the Sun, then the best candidates are sungrazing comets , i.e. comets with very eccentric orbits that pass very close to the Sun. One particular group are the Kreutz Sungrazers . The comet C/2011 W3 (Lovejoy) mentioned by hobbs in the comments belongs to this group, but there was another of these comets that passed the Sun even closer: the Great Comet of 1843 . This comet has a perihelion of only 0.005460 AU (where 1 Astronomical Unit is 149 597 871 km). This means it came to within less than 121 000 km of the surface of the Sun, and amazingly it survived (most comets break up when they come this close). So what is its velocity at perihelion? The general formula is (see this link ) $$ v_p = \sqrt{\frac{\mu}{a}\frac{1+e}{1-e}}, $$ with $$ a = \frac{r_p + r_a}{2} $$ the semi-major axis, $r_p$ and $r_a$ the peri- and aphelion, $$ e = \frac{r_a-r_p}{r_a+r_p} $$ the eccentricity, and $\mu = GM_\odot$ the standard gravitational parameter of the Sun. So we can rewrite this as $$ v_p = \sqrt{\frac{2GM_\odot}{r_p}\left(\frac{r_a}{r_a+r_p}\right)}. $$ As you can see, this reduces indeed to the formula for the escape velocity if $r_a$ goes to infinity. For our comet, $r_p = 0.005460$ AU and $r_a = 156$ AU, and we find $$ v_p = 570\;\text{km/s}. $$
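The two numbers quoted above are easy to reproduce (a sketch; standard values for $GM_\odot$, $R_\odot$ and the AU are used):

    import math

    MU_SUN = 1.32712e20  # G * M_sun, m^3/s^2
    R_SUN = 6.96e8       # solar radius, m
    AU = 1.495978707e11  # astronomical unit, m

    # Escape speed at the solar surface -- the absolute ceiling quoted above
    print(math.sqrt(2.0 * MU_SUN / R_SUN) / 1000.0)  # ~617.5 km/s

    # Perihelion speed of the Great Comet of 1843 from the formula above
    r_p = 0.005460 * AU
    r_a = 156.0 * AU
    v_p = math.sqrt((2.0 * MU_SUN / r_p) * (r_a / (r_a + r_p)))
    print(v_p / 1000.0)                              # ~570 km/s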
{ "source": [ "https://physics.stackexchange.com/questions/144406", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/45414/" ] }
144,546
I know Newton's third law of motion might be the answer to this, but I am still wondering how a rocket can produce thrust in empty space and move in the opposite direction. I guess an astronaut wouldn't be able to push on empty space with his hands or legs to move himself, but with a rocket engine it's possible. How? What might be the explanation for this in General Relativity?
Newton's third law is pretty near to the mark. All of the phenomena you cite stem from the principle of conservation of momentum in an isolated system, itself ultimately a result (through Noether's theorem) of the fact that the physical description of that isolated system is unchanged if we shift the spatial origin of our co-ordinate system. So, if you're in deep space and you throw something with mass $m$ in one direction at a speed $v$, its momentum is $m\,v$ in that direction. The initial total momentum of the system (you + the thrown thing) is nought. So that means that the final total momentum for the system must be nought. Therefore, your own momentum must be $m\,v$ in the direction opposite to the thrown thing. If your mass is $M$, then your speed is $m\,v/M$ in the direction opposite to the thrown thing. Note that, even though you can't shift your centre of mass without throwing anything (and in any case, the centre of mass of the whole system i.e. you+the thrown thing stays put), you can rotate and shift your orientation without violating conservation of angular momentum by cyclically shifting your shape; this is the same method a cat uses to flip over as it falls. See my answer here to the question "Is there a way for an astronaut to rotate?" and also my article "Of Cats and Their Most Wonderful Righting Reflex" General relativity doesn't describe rockets (well, almost so, see my caveat below) in the way you might think. General relativity describes the locally freefalling frames and their so-called Lie dragging by the system of geodesics defined by the solutions to the Einstein Field Equations. In less jargon: General Relativity tells you what kinds of motions are in keeping with Newton's first law; it tells you the motions within spacetime that something "isolated" (not experiencing a force) will undergo: anything different to this calls for a force to accelerate something relative to these freefalling frames. Chemistry describes the burning of fuel and Newton's third law the production of force from throwing this fuel to allow your rocket to deviate from the freefalling motion given by General Relativity. To be precise, as the rocket throws its fuel out, the mass-energy distribution and the momentum fluxes (pressure distributions) of the system are changing, and this strictly speaking would need to be taken into account in the Einstein Field equations (this would alter the "source" term, the so-called stress-energy tensor). Thus the rocket's action would, to a fantastically small degree, alter the spacetime around it. But this is a tiny technicality. The main gig is simply that chemical energy allows you to throw fuel and produce a force to let you deviate from a locally freefalling (inertial) frame.
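The momentum bookkeeping in the second paragraph, as a minimal sketch (the masses and throw speed are made-up illustrative values):

    # Total momentum starts at zero and must stay zero: M * V + m * v = 0.
    M_ASTRONAUT = 100.0  # kg, astronaut plus suit (assumed)
    m_thrown = 0.5       # kg, thrown object (assumed)
    v_thrown = 10.0      # m/s, throw speed (assumed)

    V_recoil = m_thrown * v_thrown / M_ASTRONAUT
    print(V_recoil)  # 0.05 m/s, directed opposite to the throw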
{ "source": [ "https://physics.stackexchange.com/questions/144546", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/10612/" ] }
144,587
Assume we have a red hot cannonball in space. It starts off with mass M at 1000K. Later it has cooled by radiation to 100K. Has the mass decreased?
MSalters already said "yes". I would like to expand on that by computing the change. Let's take a 10 kg cannon ball, made of lead. Heat capacity of 0.16 J/g/K means that in dropping from 1000 K to 100 K it has lost $10000\cdot 900 \cdot 0.16 \approx 1.4 MJ$. This corresponds (by $E=mc^2$) to a mass of $1.6 \cdot 10^{-11} kg$ or one part in $6\cdot 10^{11}$. I cannot think of an experiment that will allow you to measure that mass change on an object in outer space. UPDATE if you think of temperature as "a bunch of atoms moving", I was wondering whether the relativistic mass increment would be sufficient to explain this mass change. The velocity of atoms in a solid is hard to compute - so I'm going to make the cannonball out of helium atoms (just because I can) in a thin shell. The mean kinetic energy is $\frac32 kT$, so mean velocity $v = \sqrt{\frac{3kT}{m}}\approx 2500 m/s$. When things cool down to 100 K, velocity will drop by $\sqrt{10}$ to about 800 m/s. Now at 2500 m/s the relativistic factor $\gamma = \frac{1}{\sqrt{1-\frac{v^2}{c^2}}} \approx 1 + \frac{v^2}{2c^2}$. It is encouraging that this scales with $v^2$, just as $T$ scales with $v^2$. Writing this all for one atom: $$\Delta m = m (\gamma - 1) = m \frac{v^2}{2c^2}$$ Now putting $\frac12 m v^2 = \frac32 kT$, we get $$\Delta m = \frac{3kT}{2c^2}\\ \Delta m c^2 = \frac32kT$$ The change in mass really does scale with temperature! So even though I was using the average velocity of the atoms, it seems that this is sufficient to explain a real (if hard to measure) change in mass... relativity works. I love it when that happens.
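The arithmetic above, as a short sketch (same numbers: a 10 kg lead ball with heat capacity 0.16 J/g/K cooling by 900 K):

    C = 2.998e8        # speed of light, m/s
    MASS_G = 10_000.0  # 10 kg cannonball, in grams
    C_P = 0.16         # heat capacity of lead, J/(g K)
    DELTA_T = 900.0    # cooling from 1000 K to 100 K

    energy_lost = MASS_G * C_P * DELTA_T  # ~1.44e6 J, the ~1.4 MJ above
    delta_m = energy_lost / C**2          # E = m c^2
    print(energy_lost, delta_m)           # delta_m ~1.6e-11 kg, one part in ~6e11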
{ "source": [ "https://physics.stackexchange.com/questions/144587", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
144,694
What is the most efficient way to store data that is currently hypothesized? Is it theoretically possible to store more than one bit per atom? Until Floris' comment, I hadn't considered that efficiency is dependent on what you're trying to optimize. When I wrote the question, I had matter efficiency in mind. Are there other orthogonal dimensions that one may want to optimize? And to be clear, by matter efficiency, I mean representing the most data (presumably bits) with the least amount of matter.
It sounds as though you may be groping for the Bekenstein Bound (see Wiki page of this name) from the field of Black Hole Thermodynamics. This bound postulates that the total maximum information storage capacity (in bits) for a spherical region in space of radius $R$ containing total energy $E$ is: $$I\leq \frac{2\,\pi\,R\,E}{\hbar\,c\,\log 2}\tag{1}$$ where $I$ is the number of bits contained in quantum states of that region of space. If you try to shove too much energy into a region of space, a Schwarzschild horizon and a black hole will form, so the Bekenstein bound implies a maximum information storage capacity for a region of space independent of $E$; the limit will be reached when $E$ is so high that $R$ becomes the Schwarzschild radius (radius of the event horizon) for that black hole; from this notion we get: $$E\leq \frac{R\,c^4}{2\,G}\tag{2}$$ to prevent a black hole forming, with equality holding at the threshold of creating a horizon. From (1) and (2) we find: $$I\leq \frac{\pi\,R^2\,c^3}{\hbar\,G\,\log\,2}\tag{3}$$ which is indeed the entropy of a black hole in bits: this is Hawking's famous formula $I = A/(4\,\log 2)$ bits, where $A$ is the black hole Schwarzschild horizon's area (but expressed in Planck units). Bekenstein derived these bounds by (1) postulating that the second law of thermodynamics stays true for systems containing black holes and then showing that (2) the second law can be made "safe" only if these bounds hold. Otherwise, one can imagine thought experiments that would violate the second law by throwing things into black holes. More on the grounding of the bounds can be found on the Scholarpedia page for the Bekenstein bound. One gets truly colossal storage capacities from these formulas. For a region of space of 5 centimetres radius (the size of a tennis ball), we get $4.3\times10^{67}$ bits from (3). This is to be compared with the estimated total storage of Earth's computer systems of about $10^{23}$ bits in 2013 (see the Wikipedia Zettabyte Page). A one and a half kilogram human brain can store about $3\times 10^{42}$ bits and the mass of Earth roughly $10^{75}$ bits. These last two are more indicative of "normal" matter because the tennis ball example assumes we've packed so much energy in that a black hole is about to form. So the tennis ball would be made of ultracompressed matter like neutron star material. From the human brain example, let's assume we have $(1500/12)\times 10^{24}$ atoms (roughly Avogadro's number times the number of moles of carbon in that mass). The informational content worked out above would amount to more like $10^{16}$ bits per atom. None of these bounds talk about the actual realisation of data storage. But it would be trivial to store more than one bit per atom theoretically by choosing an element with, say, three or four stable isotopes, and lining up the atoms in a lattice. You code your data by placing the appropriate isotope at each given position in the lattice, and retrieve your bits by reading which isotope is present at each position of the lattice. For example, Silicon has three stable isotopes: you code your message in a lattice of silicon like this, and your storage is $\log_2 3 \approx 1.58$ bits per atom. Edit in answer to question by OP: " since this is, as far as I can tell, relativistic/macro-scale physics, is there room for significant change when/if quantum physics is incorporated? (I.e. will this likely stay the same or change when the unifying theory is discovered?
Or is it entirely independent of the unifying theory problem?) " Yes it is macro-scale physics, but it will not improve when quantum effects are incorporated IF the second law of thermodynamics applies to black hole systems, and my feeling is that many physicists who study quantum gravity believe it does. Macroscopic ensembles of quantum systems still heed the second law when you measure the entropy of mixed states with the von Neumann entropy: this is the quantum extension of the Gibbs entropy. And, if you're talking about the second law, you are always talking about ensemble / large system behaviour: entropy fluctuates up and down: negligibly for macro systems but significantly for systems of small numbers of quantum particles. If you think about it though, it is the macro behaviour that is probably most interesting to you because you want to know how much information is stored on average per quantum particle. As I understand it, a great deal of quantum gravity theory is grounded on the assumption that black hole systems do indeed follow the second law. In causal set theory, for example, the assumed "atoms" of spacetime causally influence one another and you of course have pairs of these atoms that are entangled (causally influence one another) but which lie on either side of the Schwarzschild horizon: one of the pair is inside the black hole and therefore cannot be probed from the outside, whilst the other pair member is in our universe. It is entangled and thus causally linked to the pair member inside the black hole which we cannot see. The outside-horizon pair member observable in our universe therefore has "hidden" state variables, i.e. encoded in the state of the pair member inside the horizon, that add to its von Neumann entropy, as we would perceive it outside the horizon. This is why causal set theory foretells an entropy proportional to the horizon area (the famous Hawking equation $S = k\,A/4$) because it is the area that is proportional to the number of such pairs that straddle the horizon. Links with Jerry Schirmer's Answer after discussions with users Charles and user27542; see also Charles's question "How big is an excited hydrogen atom?" Jerry Schirmer correctly (IMO) states that one can theoretically encode an infinite number of bits in the eigenstates of an excited hydrogen atom; this is of course if we can measure energy infinitely precisely and tell the states apart; since the spacing between neighbouring energy levels varies as $1/n^3$ where $n$ is the principal quantum number, we'd need to be willing to wait longer and longer to read our code as we try to cram more and more data into our hydrogen atom. Even if we are willing to do this, the coding scheme does not even come close to violating the Bekenstein bound because the size of the higher energy state orbitals increases, theoretically without bound, with the principal quantum number. I calculate the mean radius $\bar{r}$ of an orbital with principal quantum number $n$ in my answer here and the answer is $\bar{r}\approx n\,(n+\frac{3}{2})\,a \sim n^2$. Also, the angular momentum quantum numbers are bounded by $\ell \in 0,\,1,\,\cdots,\,n-1$ and $m\in-\ell,\,-\ell+1,\,\cdots,\,\ell-1,\,\ell$, therefore the total number of eigenstates with principal quantum number $n$ is $1+3+5+\cdots+(2n-1) = n^2$ and so the total number $N(n)$ of energy eigenstates with principal quantum number $n$ or less is $N(n)=1^2+2^2+\cdots+n^2 \approx n^3/3$.
So $\bar{r}\propto n^2$ and $N \propto n^3$, thus $N\propto \sqrt{\bar{r}^3}$ and $I = \log_2 N \approx A + \frac{3}{2}\,\log_2\,\bar{r}$, where $I$ is the encodable information content in bits and $A$ is a constant.
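Equation (3) is easy to evaluate; here is a one-function sketch reproducing the tennis-ball figure quoted above:

    import math

    HBAR = 1.054571817e-34  # J s
    G = 6.67430e-11         # m^3 kg^-1 s^-2
    C = 2.998e8             # m/s

    def max_bits(radius_m):
        """Ceiling of eq. (3): I <= pi R^2 c^3 / (hbar G ln 2)."""
        return math.pi * radius_m**2 * C**3 / (HBAR * G * math.log(2))

    print(max_bits(0.05))  # tennis-ball radius: ~4.3e67 bits, as quoted above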
{ "source": [ "https://physics.stackexchange.com/questions/144694", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/42293/" ] }
144,736
I have read different speeds of Earth in different sources. $382\;{\rm km}/{\rm s}$, $12\;{\rm m}/{\rm s}$ and even $108,000\;{\rm km}/{\rm h}$. Basically, it's moving too fast around the Sun. And the Solar System is moving too. So why don't we feel it and why doesn't it harm us in any way? Inertia can only be a part of it. But what's the whole reason?
Speed doesn't kill us, but acceleration does. When astronauts go into space at launch and when fighter pilots turn very tight turns at high speed they experience 'high g forces' - their bodies are accelerated very fast as they gain speed to go into space or as the direction of their speed changes. One of the problems with this is that for fighter pilots the blood can rush to the feet (black out) or to the head (red out). Too much acceleration makes people pass out and could at extremes be fatal I guess. To go around the sun in (nearly) a circular path we are accelerated by the gravity from the sun. The acceleration can be calculated by $v^2/r$ where $v$ is our speed and $r$ is the distance to the centre of the sun. This acceleration turns out to be $\sim~0.006~m/s^2$. By contrast the acceleration that we feel here at the surface due to the gravitational pull of the earth on us is $\sim~10~m/s^2$. So the acceleration due to travelling around the sun is so small we don't notice it. We do notice the pull of gravity from the earth on us, but our bodies are used to it and can cope with it. To think about it another way, we can go very fast in a car on a motorway/highway without noticing it; the big danger is having to stop very quickly or crashing when we change speeds very rapidly - acceleration is the rate of change of speed, so changing speed very rapidly is equivalent to a very high acceleration - in a car we might call this deceleration. [for calculation above $v=3 \times 10^4~m/s$ and $r=1.5 \times 10^{11}m$] after good comment from hdhoundt - For astronauts in orbit (e.g. in the space station) they can cope with the acceleration they experience, which holds them in orbit around the earth. Indeed they feel weightless because they are not held by the gravity of earth on the surface. Instead they and their surroundings are in 'constant free fall'. The speed of the space station in orbit is $7.71 km/s$, which is $\sim~ 17,000 ~mph$. Full discussion of this topic might venture into relativity, but I think that is beyond the scope of the question. after good comment from Mooing Duck - Perhaps even more dangerous than acceleration is jerk, which is the rate of change of acceleration, and other higher-order terms. Jerk would be very severe in the case of car collisions. - But also if the driver of a car or bus has to 'brake' and slow down very suddenly it can be very uncomfortable for the passengers. After good comment from Jim (and Cory) - A good point was raised about acceleration and/or jerk on a human body. If every part (and every particle) of the body experiences the same acceleration or jerk then the body will suffer significantly less (possibly no) damage compared to when one part of the body is accelerated or jerked and the acceleration or jerk is transmitted to other parts of the body by the structure of the body. The classic example here is 'whiplash' neck injury, where a jerk on the body is transmitted to the head through the neck. To reduce the damage this may cause, seats in cars generally have head rests that will support the back of the head, and people who are involved in motor sports (e.g. car racing) may wear a neck brace/support that prevents the head from swinging backwards and forwards on the neck in the event of a collision. Another aspect of acceleration to all parts of the body concerns rocket launch for astronauts.
The rockets will be designed so that, as much as possible, all parts of the body are equally supported and the body lies 'flat with respect to the acceleration', so that the blood in the astronaut's body does not rush to the feet or head. This is a serious consideration, and Memory Foam came from research by NASA into safety for aircraft cushions and helped cushion astronauts in rockets.
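The bracketed calculation above, as a two-line check:

    # Numbers from the calculation above: Earth's orbital speed and radius
    v = 3.0e4   # m/s
    r = 1.5e11  # m

    print(v**2 / r)  # ~0.006 m/s^2, tiny next to the ~10 m/s^2 of surface gravity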
{ "source": [ "https://physics.stackexchange.com/questions/144736", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63438/" ] }
144,741
How fast does the electron move in the experiment? If the electron moves at nearly the speed of light with no other forces acting on it, it must move in a straight line because of Newton's first law of motion. If it doesn't, what other forces act on the electron? If the electron is shot in a straight line, how can it go through both slits? Does the distance between the electron gun and the slits matter in this experiment? What happens if the slits are closer to the electron gun? Thanks,
{ "source": [ "https://physics.stackexchange.com/questions/144741", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62684/" ] }
144,758
In parts per million in the Earth's crust, Uranium is around 1.8 ppm and Gold 0.003 ppm. Given that it takes far more energy to create Uranium than Gold, why is this?
Since gold is much more abundant in the universe than is uranium (by a factor of about 20:1)$^1$, why is the situation reversed in the Earth's crust (by a factor of about 1:600)$^2$? The answer lies in chemistry. Uranium is chemically active. It readily oxidizes (pitchblende) and it readily combines with silicates. Uranium is a lithophile (literally, rock-loving) element$^3$. It does not dissolve all that well in molten iron, and thus tended not to migrate to the center of the Earth when the Earth differentiated. Uranium is a "high field strength element", one of the two classes of trace elements that are incompatible with the minerals that form the upper mantle$^4$. When upper mantle rock undergoes a partial melt, incompatible elements such as uranium preferentially join the silicate melt rather than staying with the solid minerals. Over time, this magnifies the amount of uranium in the crust compared to that in the upper mantle$^5$. Gold on the other hand is rather inert chemically. It has little affinity to oxygen or sulfur. It does however readily dissolve in molten iron. Gold is a siderophile (literally, iron-loving) element$^3$. Of the tiny bit of gold currently found in the crust, hardly any is primordial. Almost all of the primordial gold sank to the Earth's core when the planet differentiated. The gold currently found in the crust instead arrived in meteors that hit the Earth after the Earth had finished forming$^6$. The above assumes that the Bulk Silicate Earth (BSE) models of the Earth are basically correct, that the Earth formed from protoplanets and planetary embryos that had formed from material in the inner solar system, and that the proto-Earth differentiated into a core and primitive mantle. One prediction of these models is that the differentiation that created the Earth's core made the core strongly enhanced in siderophile elements and strongly depleted of lithophile elements, particularly so with regard to highly refractory lithophile elements such as thorium and uranium. An opposing (not well accepted) model says that rather than being depleted of uranium, the Earth's core is uranium-enhanced, and to such an extent that there is a large georeactor at the very center of the Earth. These are testable hypotheses. Recent studies of geo-neutrinos are consistent with the BSE hypothesis, and simultaneously reject the possibility of a large georeactor at the center of the Earth$^7$. Footnotes $^1$ Based on Lodders, "Solar system abundances of the elements." Principles and Perspectives in Cosmochemistry, Springer Berlin Heidelberg, 379-417 (2010), the abundance of gold to uranium by mass in chondritic meteorites is 18.1:1, 25:1 for the sun's photosphere. To one significant digit, this ratio becomes 20:1. $^2$ From Lide, editor, CRC Handbook of Chemistry and Physics, 88th edition, the crustal ratio of uranium to gold is 675:1. From online resources such as webelements.com, I get ratios ranging from over 400:1 to over 600:1. I used 600:1. $^3$ Victor M. Goldschmidt developed the concept of classifying elements as siderophile ("iron loving"), lithophile ("rock loving"), chalcophile (literally "ore loving", but Goldschmidt implied "sulphur loving"), and atmophiles ("air loving") in the 1920s. While Goldschmidt's initial concept of a siderophilic core surrounded by a chalcophilic layer surrounded in turn by a lithophilic outer layer didn't pan out, his classification scheme lives on. That uranium is a lithophile and gold is a siderophile is basic chemistry.
$^4$ There are two key classes of "incompatible elements": those with an abnormally large ionic radius, and those with an abnormally large field strength. Uranium and thorium fall into the latter class. $^5$ While the "incompatible elements" are lithophiles based on chemistry, they don't fit nicely in the crystalline structures that comprise typical rock. In rock undergoing a partial melt, incompatible elements such as uranium tend to migrate to the melt because of this structural incompatibility. Over time, plate tectonics has made the incompatible elements migrate to the Earth's crust. $^6$ This is the conclusion of Willbold, et al., "The tungsten isotopic composition of the Earth's mantle before the terminal bombardment." Nature 477.7363: 195-198 (2011). Others disagree. One thing is certain: Gold is an extremely rare element in the Earth's crust. $^7$ For example, see Bellini, et al., "Observation of geo-neutrinos." Physics Letters B 687.4:299-304 (2010), Fiorentini, et al., "Geo-neutrinos and earth's interior." Physics Reports 453.5:117-172 (2007), and a host of other recent papers on this topic.
{ "source": [ "https://physics.stackexchange.com/questions/144758", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
144,936
The portion of the electromagnetic spectrum that is visible to humans covers wavelengths between 380 and 750 nanometers. I am aware that animals have different capacities than humans, but the EM band where they see colors is very close, e.g. 300-590 nm for bees. I am aware that some humans can see in quadrichromy, but what they see is actually two greens rather than one. As all animals see around this visible light, it implies that it is in this EM band that the most information about matter can be gathered. This band is therefore the best to distinguish between objects. Even color-blind people see shades of gray at these very wavelengths. So it seems that matter has some special properties at the wavelengths of visible light that it doesn't at higher or lower frequencies. It therefore seems plausible that there is a physical phenomenon behind this, e.g. imperfections in compound matter could have sizes mostly corresponding to the visible wavelengths. Is that actually the case? Edit: added a few complementary questions to help break down all the different aspects. Do most energy level transitions in the matter of everyday objects correspond quite precisely to the wavelengths of visible light? If no electronic transitions happened in the band of visible light, would we still be able to use this band to see? If not, what would be the most efficient ways to see?
Mostly, you see things because they reflect light. They absorb some of it, which gives them their color, but you will also see them if you shine infrared or ultraviolet light at them. So: whatever light you shine at them, a large part of this light will be reflected and you can detect this light to "see" matter. Your argumentation therefore seems backward. The most plausible idea is that most light on earth is of a given wavelength and therefore most animals' eyes adapted to this wavelength. More precisely, have a look at the sun's spectrum: As you can see (yellow part), the radiation is most intense in the area of the visible light. This is due to the fact that the sun is a near ideal black body of the temperature of its surface. Now, the light that reaches the surface is not all of the light of the sun, since some wavelengths are blocked by the atmosphere (red remains), which is due to the fact that elements absorb certain levels of radiation. Now, light detection is more difficult if there is less light (you can't see very well in the dark), hence it's easier to detect intense light - thus it's a good idea to adjust your eyes to the area where light is most intense. There are a few other aspects worth mentioning though: Note also that higher energy "light" can create other difficulties. Much of organic matter becomes transparent to gamma radiation (some even to x-rays - that's why tomography works), which also means that it is much harder to detect x-rays with organic material, so it would be even harder to build an organic eye to "see" and make use of low intensities of gamma radiation. Still: with a good detector and enough intense x-rays, I could probably also see a good picture of my surroundings. The same holds in the other direction: radio waves have very long wavelengths. A simple eye is not big enough to see them. The upshot of all of this is: seeing the whole spectrum requires a much larger variety of detectors; one type of "eye" will simply not be enough. Light on earth is most abundant in a narrow band of the electromagnetic spectrum. This does not explain why we only see a certain band of the electromagnetic spectrum, unless you want biological economy. EDIT: So why do some animals see UV and none see IR light? Unlike what I previously claimed, this seems to be more a biological problem: you'd probably need a very different "eye", similarly to what I hinted at when saying we need a larger variety of detectors: The only animals with really confirmed IR vision are snakes, which don't use their eyes to "see" IR light. On the other hand, all animals with confirmed UV senses use their eyes; they have just a slightly different window of visibility shifted to the ultraviolet, or simply another type of receptors (some birds apparently have up to five different color receptors, which also spread a larger band of wavelengths). I did not include a more complete survey of the biology - this is, after all, a question about the physics. See also Thomas' answer for a more complete discussion of some biological arguments showing that it is probably not beneficial to have multiple eyes. EDIT 2: There were some questions added for clarification, so let me try to answer those: Do most energy level transitions in the matter of everyday objects correspond quite precisely to the wavelengths of visible light? Answer: No they don't.
Let's have a look at the emission spectrum of hydrogen, the most abundant element in the universe and also very present on earth (albeit normally bound): Hydrogen spectrum and in particular this Wikipedia picture. We can see many lines, only a few of which are visible (four lines in the Balmer series). NIST has a database of spectral lines for every element (see http://physics.nist.gov/PhysRefData/ASD/lines_form.html ), where you can see that there is an abundance of lines that are not visible. However, I don't know how probable all those transitions are. The Balmer lines for hydrogen are of course very probable transitions. If no electronic transitions happened in the band of visible light, would we still be able to use this band to see? Assuming that we had a device to actually detect the light at these frequencies without using electron transitions (this is more a biophysical question and beyond my capabilities): We would be able to use this band, precisely because of what I said in my original answer: Most of the light we see is reflected sunlight, not absorbed and reemitted or just emitted light. Since sunlight is abundant precisely in the visible spectrum (and this has nothing to do with the emission spectra of atoms), we would see very well. However, colours will be problematic: Sunlight is white and the colours result from an absorption of certain parts of this light, while others are simply reflected. The absorption process is linked to the spectral lines, but I don't feel that I know enough to make this connection more precise. So it might be that the lack of any absorption in this part of the spectrum will make our world rather colourless - we'd see black and white.
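Two quick numerical illustrations of the points above (a sketch; the Rydberg formula for the Balmer series and Wien's displacement law are standard, and the 5778 K solar surface temperature is the usual textbook value):

    RYD = 1.0967758e7  # Rydberg constant for hydrogen, 1/m

    # Balmer series (transitions n -> 2): 1/lambda = R * (1/2^2 - 1/n^2)
    for n in range(3, 8):
        lam_nm = 1e9 / (RYD * (0.25 - 1.0 / n**2))
        print(n, round(lam_nm, 1), 380.0 <= lam_nm <= 750.0)
    # 656.5, 486.3, 434.2, 410.3 nm ... crowding toward the ~365 nm UV series limit

    # Wien's law: the solar black-body peak sits right in the visible band
    print(2.8977719e-3 / 5778.0 * 1e9)  # ~501 nm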
{ "source": [ "https://physics.stackexchange.com/questions/144936", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63633/" ] }
145,110
If the speed of light is always constant then light should escape from a black hole because if directed radially outwards it only needs to travel a finite distance to escape, and at a speed of $c$ it will do this in a finite time. Since light cannot escape from the black hole that means the speed of light must be less than $c$ near the black hole. How can a black hole reduce light's speed?
Have a look at the question Speed of light in a gravitational field? as this shows you in detail how to calculate the speed of light in a gravitational field. I haven't flagged this as a duplicate because I'd guess you're not so interested in the details but rather in how the speed of light can change at all. You've probably heard that the speed of light is a constant, so it's a fair question to ask why it isn't constant near a black hole. The answer turns out to be quite subtle. In special relativity the speed of light is a global constant, in that all observers everywhere will measure the same value of $c$. In general relativity this is still true only if spacetime is flat. If spacetime is curved then all observers everywhere will measure the same value of $c$ if the measurement is done locally . This means that if I measure the speed of light at my location I will always get the value $c$, and this is true whether I'm sitting still, riding around on a rocket, falling into a black hole or whatever. But if I measure the speed of light at some point that is distant from me I will generally get a value different from $c$. Specifically, if I'm sitting well away from a black hole and I measure the speed of light near its surface I will get a value less than $c$. So the answer to your question is that in GR the speed at which light travels isn't always $c$.
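To make the distant-observer statement quantitative, here is a minimal sketch of the standard result for a radial light ray in Schwarzschild coordinates, whose coordinate speed is $dr/dt = c\,(1 - r_s/r)$; the solar mass is just a convenient example value.

```python
c = 299792458.0   # speed of light, m/s
G = 6.674e-11     # Newton's constant, m^3 kg^-1 s^-2
M = 1.989e30      # one solar mass, kg

r_s = 2 * G * M / c**2   # Schwarzschild radius, roughly 2.95 km

# coordinate speed of a radial light ray as judged by a distant observer
for x in (1.001, 1.5, 10.0, 1000.0):      # radius r in units of r_s
    v = c * (1 - 1/x)
    print(f"r = {x:8.3f} r_s   dr/dt = {v:.4g} m/s")
```

The coordinate speed goes to zero at the horizon, which is the precise sense in which outgoing light "slows down" for the distant observer; a local measurement at any of these radii would still give exactly $c$.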
{ "source": [ "https://physics.stackexchange.com/questions/145110", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/25568/" ] }
145,158
I've just been watching a doco where some guys cut a hole in a frozen lake. The ice is 3 feet thick, and I was surprised that the water rose to the level of the ice. My friend assures me it's because water equalises/levels with ice in the same way it equalises with itself... Is this right, or is there another reason for the water rising to the same level as the surface of the ice?
Suppose you put an ice cube into water: it's going to float with about 92% of it underwater. This is shown in diagram (a) below: But now suppose I make my ice cube a different shape. I'm going to shape it like a disk with a hole cut out of the centre, or you could describe it as a flattened doughnut. When I put my oddly shaped ice cube into the water it's also going to float with about 92% of it underwater. This is shown in diagram (b) below: But (b) is just your frozen lake with a hole in it. So if you cut a hole in the ice on a frozen lake you should expect the water to come 92% of the way up the thickness of the ice, i.e. you should be left with a lip of about 8% of the ice thickness. An objection to my argument is that in (b) the ice isn't frozen to the sides of the bowl, while in a lake the ice sheet is frozen to the lake shore. However, ice is quite flexible, and over a large distance like the lake it will bend and act as if it's floating freely and not connected to the shore. If you took some small container like an oil drum or bath tub, with the surface frozen to the sides, and cut a hole in the ice, the water wouldn't come up so far.
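The 92% figure is just the ratio of the densities, by Archimedes' principle; here is a minimal sketch for the 3-foot ice sheet from the question, using typical densities near $0\,^\circ$C.

```python
rho_ice, rho_water = 917.0, 1000.0   # densities in kg/m^3, typical values near 0 C
thickness = 3 * 0.3048               # 3 feet of ice, in metres

submerged = rho_ice / rho_water      # Archimedes: fraction of the ice below the water line
lip = (1 - submerged) * thickness    # height of ice left above the water in the hole

print(f"submerged fraction: {submerged:.0%}")   # ~92%
print(f"lip height: {lip*100:.1f} cm")          # ~7.6 cm, about 3 inches
```

So for 3-foot-thick ice the water should stop roughly 8 cm below the top surface, which from a distance can easily look like it has risen "to the level of the ice".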
{ "source": [ "https://physics.stackexchange.com/questions/145158", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63717/" ] }
145,400
The situation is the following: I'm inside a vehicle (a plane or a car, it doesn't matter) and I need to know if the vehicle is moving at a constant speed, BUT I cannot perceive any external change like visual changes, vibration, etc. How can I know if the vehicle is moving? Can I really know? Additional question Can I know my speed?
You cannot tell movement at constant speed apart from standing still. This is the principle of Galilean relativity .
{ "source": [ "https://physics.stackexchange.com/questions/145400", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63818/" ] }
145,404
It seems that when capturing and emitting EM waves, matter proceeds differently depending on the wavelength. According to an answer from another question , EM energy is captured following three modes: electronic transitions (e.g. for visible light), rotational and vibrational absorption. To these we can at least add induction phenomena (e.g. radio waves), which work in an unrelated way. Moreover, the diffraction limit imposes restrictions on the finest details observable. How do the frequency and the energy-capture process used to observe an object affect the resolution at which we can observe the spatial features of that object? A corollary question is: are there regions of the EM spectrum that cannot be observed because no possible instrument is sensitive to those frequencies?
{ "source": [ "https://physics.stackexchange.com/questions/145404", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63633/" ] }
145,413
I am currently rather uneducated on the subject, but I was thinking of a general relativity thought experiment. Say I take a charge from infinity and give it the velocity to orbit a planet in a circle. Now, from the earth's perspective the charge is accelerating, and thus it should radiate EM waves. But Einstein says there is no experiment you can perform (locally) to determine if you are in free fall in a gravitational field as opposed to floating in free space. But if what I described were the case, then there would be an experiment you could perform: you could check for the emission of a photon. Where is the problem?
{ "source": [ "https://physics.stackexchange.com/questions/145413", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
145,416
Does there exist a scalar that can describe how anisotropic the elasticity of a crystal is? What about other tensors such as the permittivity or susceptibility? I found a Wikipedia article that was particularly illuminating: Fractional anisotropy is a scalar value between zero and one that describes the degree of anisotropy of a diffusion process. A value of zero means that diffusion is isotropic, i.e. it is unrestricted (or equally restricted) in all directions. A value of one means that diffusion occurs only along one axis and is fully restricted along all other directions. Could this be extended to $C_{ijkl}$? If so, how do I construct this parameter that is between 0 and 1? I'm assuming it starts by somehow contracting the elastic tensor. This can be very useful if you have a bimaterial system in which a particular physical phenomenon emerges from the mismatch of this anisotropy parameter.
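For a rank-2 tensor, the fractional anisotropy quoted above has a standard closed form in terms of the eigenvalues, $FA=\sqrt{3/2}\,\lVert\lambda-\bar\lambda\rVert/\lVert\lambda\rVert$; here is a minimal sketch. Extending this to the rank-4 elastic tensor $C_{ijkl}$ is less standard; for cubic crystals, for instance, one usually quotes the Zener ratio $2C_{44}/(C_{11}-C_{12})$ instead, which is not confined to $[0,1]$.

```python
import numpy as np

def fractional_anisotropy(T):
    """FA of a symmetric rank-2 tensor: 0 for isotropic, 1 for fully one-axis."""
    lam = np.linalg.eigvalsh(T)
    dev = lam - lam.mean()
    return np.sqrt(1.5 * np.dot(dev, dev) / np.dot(lam, lam))

print(fractional_anisotropy(np.eye(3)))                 # 0.0  (isotropic)
print(fractional_anisotropy(np.diag([1.0, 0.0, 0.0])))  # 1.0  (one axis only)
print(fractional_anisotropy(np.diag([3.0, 2.0, 1.0])))  # ~0.46, somewhere in between
```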
{ "source": [ "https://physics.stackexchange.com/questions/145416", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/15936/" ] }
146,003
In popular physics articles and even some physics classes I've been to, the vacuum of space is described as being constantly full of quantum fluctuations. Supposedly, all sorts of particle-antiparticle pairs at all scales are constantly appearing and disappearing. We end up with a mental image of the vacuum as a roiling, choppy sea with all sorts of things going on, rather than a calm, placid background. However, the vacuum, being the lowest-energy state of a theory, should be an energy eigenstate—which means it is time-invariant (except for a physically-irrelevant phase factor). So it seems the vacuum really should not be seen as a dynamic entity with all kinds of stuff happening in it, as we're led to believe. Jess Riedel wrote in a blog post that A “vacuum fluctuation” is when the ground state of a system is measured in a basis that does not include the ground state; it’s merely a special case of a quantum fluctuation. So it sounds as if the existence of vacuum fluctuations is contingent on measuring the vacuum—in particular, measuring something that doesn't commute with energy (such as, I guess, the value of a field at a point). How much truth is there to the idea that vacuum fluctuations are constantly happening everywhere, all the time? Is that really a useful way to think about it, or just a myth that has been propagated by popularizations of physics?
Particles do not constantly appear out of nothing and disappear shortly after that. This is simply a picture that emerged from taking Feynman diagrams literally. Calculating the energy of the ground state of the field, i.e. the vacuum, involves calculating its so-called vacuum expectation value. In perturbation theory, you achieve this by adding up Feynman diagrams. The Feynman diagrams involved in this process contain internal lines, which are often referred to as "virtual particles". This however does not mean that one should view this as an actual picture of reality. See my answer to this question for a discussion of the nature of virtual particles in general.
{ "source": [ "https://physics.stackexchange.com/questions/146003", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/19917/" ] }
146,063
If you could boil water in a sealed container until it became vapor, and you still kept applying heat to it, would something happen? Maybe gas to super-gas? This has been on my mind for a long time; I really hope someone can help out.
Plasma is described as the 4th state of matter, which is what you get if you supply so much heat energy that the molecules begin to break up and ionize into positively and negatively charged fragments. Another claim on the title '4th state of matter' is a 'supercritical fluid' . Sometimes people draw phase diagrams with it to show this '4th state of matter'. (Strictly speaking, supercritical fluids may be a different phase from liquid, solid and gas, and more a '4th phase of matter' than a '4th state of matter', as pointed out in the good comment from Jim - the state of a supercritical fluid is gas - whether a supercritical fluid is a phase or a state may be something that is up for debate at the moment.) phase diagram of carbon dioxide showing supercritical phase (from wikipedia) As can be seen from the diagram, the supercritical state exists above the critical temperature and pressure. Supercritical fluids have many interesting properties. For example, supercritical water dissolves organic molecules (such as organic solvents), which would not normally dissolve in water. It is also very acidic and very alkaline at the same time, because there is a high concentration in the liquid of both H $^+$ and OH $^-$ ions. Supercritical carbon dioxide is used to remove caffeine from coffee beans. Finally, to go back to your question: if you took water and heated it in a sealed container, it would 'go supercritical' if it did not break the container first - please don't do this at home, because if the container breaks then you would have a lot of very hot, dangerous steam - people who research this tend to work with very small volumes for safety.
{ "source": [ "https://physics.stackexchange.com/questions/146063", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/59078/" ] }
146,166
What would happen if we decided to build a giant ring that managed to wrap around the whole world, end to end, supported by pillars all along the ring, and then all the supports were suddenly removed? Would the ring float in place? Or if it fell, in what direction would it fall?
Why not try this at home by using the Coulomb force or magnetism instead of gravity? Although all those forces are different in nature, mathematically they are the same: $F(r) = -\frac{const.}{r^2}$ Edit: Magnetism is different, as pointed out in the comments, but it still works for illustrating the problem. In fact, the result is the same for any $\frac{-1}{r^c}$ force with $c>1$. Take a small magnet and place it inside a keyring on top of a table. You will find that even despite the friction helping you to keep both parts concentric, the slightest asymmetry will result in a non-zero force and make them snap together. To compute the force on a ring, I drop all constants, set the ring radius to 1 and place the earth at the origin; I also use complex numbers to represent the vectors. The ring is parametrized as $e^{i \varphi}, \varphi \in [0,2 \pi]$. The force on a concentric ring is then given by $$F = \int_0^{2 \pi} \frac{e^{i \varphi}}{|e^{i \varphi}|^3} d\varphi$$ This is trivial and the solution is zero. Now we shift the ring slightly to the right: $$F(\varepsilon) = \int_0^{2 \pi} \frac{\varepsilon + e^{i \varphi}}{|\varepsilon +e^{i \varphi}|^3} d\varphi$$ I imagined this to be not that much harder to solve, but 20 minutes later Maple was up to 400 MB of memory usage and still hadn't produced a result, so I evaluated it numerically instead. As soon as the ring is displaced, it experiences a force in the same direction, which results in further displacement (up to $\varepsilon<1$, when it hits the ground - earlier if the earth has a non-zero radius).
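For anyone who wants to reproduce the numerical result without a computer-algebra system, here is a minimal quadrature sketch. It includes the attractive sign, $F \propto -\mathbf r/|\mathbf r|^3$, so a positive output means the net force points along the displacement; to first order it grows like $\pi\varepsilon$, confirming the instability.

```python
import numpy as np

def net_force(eps, n=200_000):
    """Net dimensionless force on a unit ring whose centre is displaced
    by eps from the attracting point mass at the origin."""
    phi = np.linspace(0.0, 2*np.pi, n, endpoint=False)
    z = eps + np.exp(1j * phi)       # positions of the ring elements
    integrand = -z / np.abs(z)**3    # attractive 1/r^2 force per ring element
    return np.sum(integrand).real * (2*np.pi / n)   # imaginary part vanishes by symmetry

for eps in (0.0, 0.01, 0.1, 0.3, 0.6, 0.9):
    print(f"eps = {eps:4.2f}   F = {net_force(eps):+8.4f}")
```

The force vanishes only for the perfectly concentric ring, is positive (destabilizing) for any displacement, and diverges as $\varepsilon\to1$, when part of the ring approaches the point mass.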
{ "source": [ "https://physics.stackexchange.com/questions/146166", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/40606/" ] }
146,427
For example, would it be possible to excite a hydrogen atom so that it's the size of a tennis ball? I'm thinking the electron would break free at some point, or it just gets practically harder to keep the electron at higher states as it gets more unstable. What about in theoretical sense? What I know is that the atomic radius is related to the principal quantum number $n$. There seems to be no upper limit as to what $n$ could be (?), which is what led me to this question.
Atoms with electrons at very large principal quantum number ($n$) are called Rydberg atoms . Just by coincidence, the most recent Physics Today reports on a paper about the detection of extra-galactic Rydberg atoms with $n$ as high as 508(!), which makes them roughly 250,000 times the size of the same atom in the ground state. That is larger than a micrometer. The paper is Astrophys. J. Lett. 795, L33 (2014), and the abstract reads Carbon radio recombination lines (RRLs) at low frequencies ($\lesssim 500 \,\mathrm{MHz}$) trace the cold, diffuse phase of the interstellar medium, which is otherwise difficult to observe. We present the detection of carbon RRLs in absorption in M82 with the Low Frequency Array in the frequency range of $48-64 \,\mathrm{MHz}$. This is the first extragalactic detection of RRLs from a species other than hydrogen, and below $1\,\mathrm{GHz}$. Since the carbon RRLs are not detected individually, we cross-correlated the observed spectrum with a template spectrum of carbon RRLs to determine a radial velocity of $219 \,\mathrm{km \,s^{-1}}$. Using this radial velocity, we stack 22 carbon-$\alpha$ transitions from quantum levels $n = 468$–$508$ to achieve an $8.5\sigma$ detection. The absorption line profile exhibits a narrow feature with peak optical depth of $3 \times 10^{-3}$ and FWHM of $31 \,\mathrm{km \, s^{-1}}$. Closer inspection suggests that the narrow feature is superimposed on a broad, shallow component. The total line profile appears to be correlated with the 21 cm H I line profile reconstructed from H I absorption in the direction of supernova remnants in the nucleus. The narrow width and centroid velocity of the feature suggests that it is associated with the nuclear starburst region. It is therefore likely that the carbon RRLs are associated with cold atomic gas in the direction of the nucleus of M82.
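The size claim follows from the hydrogen-like scaling of the orbital radius, $r_n \simeq n^2 a_0$; a minimal check:

```python
a0 = 5.29177e-11   # Bohr radius, m

for n in (1, 100, 508):
    r = n**2 * a0   # hydrogen-like radius scaling r_n ~ n^2 a0
    print(f"n = {n:3d}   r = {r:.3g} m   ({n**2:,}x the ground state)")
```

For $n=508$ this gives $r \approx 1.4\times10^{-5}\:\mathrm{m}$, i.e. about fourteen micrometers, matching the "roughly 250,000 times" and "larger than a micrometer" statements above.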
{ "source": [ "https://physics.stackexchange.com/questions/146427", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/64231/" ] }
146,585
As I study Jackson, I am getting really confused with some of its key definitions. Here is what I am getting confused about. When we substituted the electric and magnetic fields in terms of the scalar and vector potentials in the inhomogeneous Maxwell's equations, we got two coupled inhomogeneous wave equations in terms of $\mathbf{A}$ and $\phi$. So the book states that to uncouple them, which definitely makes our equations simpler to solve, we introduce gauge transformations, as adding a gradient to $\mathbf{A}$ and adding a constant to $\phi$ would not affect their meaning. My question is: which object is the gauge, and why, in the expression for a gauge transformation $$\mathbf{A'}=\mathbf{A}+\nabla \gamma~?$$ Somewhere on the internet I read that $\gamma$ is a gauge function. So, is $\gamma$ a gauge? If yes, then why? Basically: What is a gauge?
In normal usage, a gauge is a particular choice, or specification, of vector and scalar potentials $\mathbf A$ and $\phi$ which will generate a given set of physical force fields $\mathbf E$ and $\mathbf B$. More specifically, a physical situation is specified by the electric and magnetic fields, $\mathbf E$ and $\mathbf B$. A set of potentials $\mathbf A$ and $\phi$ generates the force fields if it obeys the equations \begin{align} \mathbf B & =\nabla\times\mathbf A \\ \mathbf E & = -\nabla\phi-\frac{\partial \mathbf A}{\partial t}. \end{align} As you know, for a given set of force fields, the potentials are not unique. A gauge is a specific, additional requirement on the potentials. One good example of a gauge is the Coulomb gauge, which is mostly embodied by the requirement that $\mathbf A$ also be divergenceless, $$\nabla \cdot\mathbf A=0.$$ "The Coulomb gauge" refers to the set of potentials which satisfy this. Gauges are usually thought of as specifying the potentials uniquely. This is not really true, but they do tend to specify the potentials "uniquely up to reasonable physical assumptions". The Coulomb gauge is a good example of this: the gauge transformation to \begin{align} \mathbf A'&=\mathbf A+\nabla \chi(\mathbf r)\\ \phi'&=\phi \end{align} preserves the physical fields, and if $$\nabla^2 \chi(\mathbf r)=0$$ then it also preserves the gauge condition that $\nabla \cdot\mathbf A'=0$. This is not great for unicity, because there are a lot of harmonic functions that satisfy the above condition. However, for a function to really be harmonic throughout all of space - with no exceptions and no singularities - then it must diverge at infinity, which is not really palatable in most cases. Because of that, saying that $\mathbf A$ is the vector potential in the Coulomb gauge usually means that $\nabla \cdot\mathbf A=0$ and that such 'infinite-self-energy' terms have been set to zero; this is usually a unique set of potentials in situations where the energy in the physical fields themselves is not infinite. It is worth noting that, in certain situations, the word gauge can be naturally free of this ambiguity. In my field, strong-field physics, the words 'length gauge' and 'velocity gauge' are taken to mean that the total energy of an electron interacting with a laser field, at position $\mathbf r$ and with momentum $\mathbf p$, is of the form $$E=\tfrac1{2m}\mathbf p^2-e\mathbf r\cdot \mathbf E$$ and $$E=\tfrac1{2m}\left(\mathbf p-e\mathbf A\right)^2,$$ respectively. For a uniform field (i.e. in the 'dipole approximation') the two energies are equivalent via a gauge transformation. However, here the word 'gauge' is completely unambiguous except for a total constant energy which can very safely be ignored. Thus far for technical matters. I think, though, that a lot of what worries you is the word 'gauge' itself, which is indeed a weird choice. In everyday usage, a gauge is a generic form of meter or dial. The phrase 'gauge invariance' seems to have come into physics via German, in Hermann Weyl's use of the word 'Eichinvarianz', which loosely means 'scale invariance' or 'gauge invariance' (in the sense that a choice of measuring instrument (gauge) determines the measured physical values in a given setting, i.e. determines the scale). This invariance under changes of scale is exactly (part of) the (technical) gauge invariance in general relativity, which is invariant under coordinate transformations. 
Note, though, that my source for this history is Wikipedia , so if someone can chime in with a better source it would be fantastic.
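Since the whole discussion rests on the fact that $\mathbf{A}\to\mathbf{A}+\nabla\chi$ leaves $\mathbf{B}=\nabla\times\mathbf{A}$ untouched, here is a minimal symbolic check of that identity for a static gauge function; the sample potential is arbitrary.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
chi = sp.Function('chi')(x, y, z)          # arbitrary (static) gauge function

def curl(F):
    Fx, Fy, Fz = F
    return (sp.diff(Fz, y) - sp.diff(Fy, z),
            sp.diff(Fx, z) - sp.diff(Fz, x),
            sp.diff(Fy, x) - sp.diff(Fx, y))

A = (y*z, x**2, sp.sin(x*y))               # some sample vector potential
grad_chi = (sp.diff(chi, x), sp.diff(chi, y), sp.diff(chi, z))
A_new = tuple(a + g for a, g in zip(A, grad_chi))

# B' - B component-wise: the mixed partials of chi cancel, so this prints (0, 0, 0)
print(tuple(sp.simplify(bn - b) for bn, b in zip(curl(A_new), curl(A))))
```

For a time-dependent gauge function one must also shift $\phi\to\phi-\partial\chi/\partial t$ to keep $\mathbf E$ unchanged, which reduces to the transformation quoted in the answer above when $\chi=\chi(\mathbf r)$.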
{ "source": [ "https://physics.stackexchange.com/questions/146585", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/14087/" ] }
147,341
I am trying to write the Lagrangian and Hamiltonian for the forced Harmonic oscillator before quantizing it to get to the quantum picture. For EOM $$m\ddot{q}+\beta\dot{q}+kq=f(t),$$ I write the Lagrangian $$ L=\frac{1}{2}m\dot{q}^{2}-\frac{1}{2}kq^{2}+f(t)q$$ with Rayleigh dissipation function as $$ D=\frac{1}{2}\beta\dot{q}^{2}$$ to put in Lagrangian EOM $$0 = \frac{\mathrm{d}}{\mathrm{d}t} \left ( \frac {\partial L}{\partial \dot{q}_j} \right ) - \frac {\partial L}{\partial q_j} + \frac {\partial D}{\partial \dot{q}_j}. $$ On Legendre transform of $L$, I get $$H=\frac{1}{2m}{p}^{2}+\frac{1}{2}kq^{2}-f(t)q.$$ How do I include the dissipative term to get the correct EOM from the Hamiltonian's EOM?
Problem: Given Newton's second law $$\begin{align} m\ddot{q}^j~=~&-\beta\dot{q}^j-\frac{\partial V(q,t)}{\partial q^j}, \cr j~\in~&\{1,\ldots, n\}, \end{align}\tag{1} $$ for a non-relativistic point particle in $n$ dimensions, subjected to a friction force, and also subjected to various forces that have a total potential $V(q,t)$ , which may depend explicitly on time. I) Conventional approach: There is a non-variational formulation of Lagrange equations $$\begin{align} \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{q}^j}\right)-\frac{\partial L}{\partial q^j}~=~&Q_j, \cr j~\in~&\{1,\ldots, n\},\end{align}\tag{2} $$ where $Q_j$ are the generalized forces that do not have generalized potentials. In our case (1), the Lagrangian in eq. (2) is $L=T-V$ , with $T=\frac{1}{2}m\dot{q}^2$ ; and the force $$ Q_j~=~-\beta\dot{q}^j\tag{3} $$ is the friction force. It is shown in e.g. this Phys.SE post that the friction force (3) does not have a potential. As OP mentions, one may introduce the Rayleigh dissipative function , but this is not a genuine potential. Conventionally, we additionally demand that the Lagrangian is of the form $L=T-U$ , where $T=\frac{1}{2}m\dot{q}^2$ is related to the LHS of EOMs (1) (i.e. the kinematic side), while the potential $U$ is related to the RHS of EOMs (1) (i.e. the dynamical side). With these additional requirements, the EOM (1) does not have a variational formulation of Lagrange equations $$\begin{align} \frac{d}{dt} \left(\frac{\partial L}{\partial \dot{q}^j}\right)-\frac{\partial L}{\partial q^j}~=~&0,\cr j~\in~&\{1,\ldots, n\},\end{align}\tag{4} $$ i.e. Euler-Lagrange equations . The Legendre transformation to the Hamiltonian formulation is traditionally only defined for a variational formulation (4). So there is no conventional Hamiltonian formulation of the EOM (1). II) Unconventional approaches: Trick with exponential factor $^1$ : Define for later convenience the function $$ e(t)~:=~\exp(\frac{\beta t}{m}). \tag{5}$$ A possible variational formulation (4) of Lagrange equations is then given by the Lagrangian $$\begin{align} L(q,\dot{q},t)~:=~&e(t)L_0(q,\dot{q},t), \cr L_0(q,\dot{q},t)~:=~&\frac{m}{2}\dot{q}^2-V(q,t).\end{align}\tag{6}$$ The corresponding Hamiltonian is $$ H(q,p,t)~:=~\frac{p^2}{2me(t)}+e(t)V(q,t).\tag{7}$$ One caveat is that the Hamiltonian (7) does not represent the traditional notion of total energy. Another caveat is that this unconventional approach cannot be generalized to the case where two coupled sectors of the theory require different factors (5), e.g. where each coordinate $q^j$ has individual friction-over-mass-ratios $\frac{\beta_j}{m_j}$ , $j\in\{1, \ldots, n\}$ . For this unconventional approach to work, it is crucial that the factor (5) is an overall common multiplicative factor for the Lagrangian (6). This is an unnatural requirement from a physics perspective. Imposing EOMs via Lagrange multipliers $\lambda^j$ : A variational principle for the EOMs (1) is $$\begin{align}L ~=~& m\sum_{j=1}^n\dot{q}^j\dot{\lambda}^j\cr &-\sum_{j=1}^n\left(\beta\dot{q}^j+\frac{\partial V(q,t)}{\partial q^j}\right)\lambda^j.\end{align}\tag{8}$$ (Here we have for convenience "integrated the kinetic term by parts" to avoid higher time derivatives.) Classical Schwinger/Keldysh "in-in" formalism : The variables are doubled up. See e.g. eq. (20) in C.R. Galley, arXiv:1210.2745 . Ignoring boundary terms $^2$ the Lagrangian reads $$\begin{align} \widetilde{L}(q_{\pm},\dot{q}_{\pm},t) ~=~&\left. 
L(q_1,\dot{q}_1,t)\right|_{q_1=q_+ + q_-/2}\cr ~-~&\left. L(q_2,\dot{q}_2,t)\right|_{q_2=q_+ - q_-/2}\cr ~+~&Q_j(q_+,\dot{q}_+,t)q^j_-\end{align}\tag{9}. $$ The initial conditions $$\left\{\begin{array}{rcl} q^j_+(t_i)&=&q^j_i,\cr\dot{q}^j_+(t_i)&=&\dot{q}^j_i\end{array}\right.\tag{10} $$ implement the system's underlying initial values. The final conditions $$\begin{align}\left\{\begin{array}{rcl} q^j_-(t_f)&=&0\cr \dot{q}^j_-(t_f)&=&0 \end{array}\right. & \cr\cr\qquad\Downarrow&\qquad\cr\cr \left.\frac{\partial \widetilde{L}}{\partial \dot{q}^j_+}\right|_{t=t_f}~=~&0 \end{align}\tag{11} $$ implement the physical limit solution $q_-^j= 0$ . The doubling trick (9) is often effectively the same as introducing Lagrange multipliers (8). Gurtin-Tonti bi-local method: See e.g. this Phys.SE post. -- $^1$ Hat tip: Valter Moretti . $^2$ The variational problem (9)+(10)+(11) needs an appropriate initial term, which might not always exist! In particular, since we already imposed $4n$ boundary conditions (10)+(11), it would be too much to also impose the initial condition $$ q^j_-(t_i)~=~0. \qquad (\leftarrow\text{Wrong!})$$ Example: If $L=\frac{1}{2}m\dot{q}^2$ , then $\widetilde{L}=m\dot{q}_+\dot{q}_-$ , and one should add an initial term $m\dot{q}_+(t_i)q_-(t_i)$ to the action $\widetilde{S}$ .
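As a quick check that the exponential-factor Lagrangian (6) really reproduces the damped EOM, here is a sketch specialized to the harmonic potential with driving from the question, $V(q,t)=\frac12 kq^2 - f(t)q$ (this exponential form is often called a Caldirola-Kanai-type Lagrangian).

```python
import sympy as sp

t, m, beta, k = sp.symbols('t m beta k', positive=True)
q = sp.Function('q')(t)
f = sp.Function('f')(t)      # driving force

# Lagrangian (6) with V(q,t) = k*q**2/2 - f(t)*q
L = sp.exp(beta*t/m) * (m*sp.diff(q, t)**2/2 - k*q**2/2 + f*q)

# Euler-Lagrange expression: d/dt (dL/dq') - dL/dq
EL = sp.diff(L, sp.diff(q, t)).diff(t) - sp.diff(L, q)
print(sp.simplify(EL * sp.exp(-beta*t/m)))
# -> m*q'' + beta*q' + k*q - f(t), so EL = 0 recovers the damped, driven EOM
```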
{ "source": [ "https://physics.stackexchange.com/questions/147341", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
147,346
This idea came to me while playing Kerbal Space Program. I noticed that the larger my parachute was, the slower my rocket would fall back down to Kerbin. I would like to know if it is possible to create a parachute so large in the real world that it might stop all velocity, essentially making whatever is attached to it float in mid-air. Common sense is telling me "no," but I could always be wrong, and I would love some explanation behind whether or not it is possible.
No. All parachutes, whether they are drag-only (round) or airfoil (rectangular) will sink. Some airflow is needed to stay inflated, and that airflow comes from the steady descent. Whether your net descent rate is positive or negative is a different question. It is quite easy to be under a parachute and end up rising (I have done it myself), you just need an updraft in excess of your descent rate. Never lasts though, as a permanently floating parachute would violate a couple of laws of nature.
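The "bigger canopy, slower descent" observation from the question follows from balancing weight against quadratic drag: the steady descent rate is $v=\sqrt{2mg/(\rho C_d A)}$, which shrinks with area but never reaches zero for any finite canopy. A minimal sketch, with an assumed drag coefficient typical of a round canopy:

```python
import math

def descent_rate(m, A, Cd=1.5, rho=1.225, g=9.81):
    """Steady-state descent speed where drag balances weight (m in kg, A in m^2)."""
    return math.sqrt(2 * m * g / (rho * Cd * A))

for A in (25, 100, 400, 1600):   # canopy areas in m^2, for a 90 kg load
    print(f"A = {A:5d} m^2   v = {descent_rate(90, A):5.2f} m/s")
```

Quadrupling the area only halves the sink rate, so "large enough to float" would require an infinite canopy; whether the net descent is positive then depends on updrafts, exactly as described above.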
{ "source": [ "https://physics.stackexchange.com/questions/147346", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/64624/" ] }
147,433
The metrology world is currently in the middle of overhauling the definitions of the SI units to reflect the recent technological advances that enable us to get much more precise values for the fundamental constants of nature than were possible when the SI was drawn up. This has already happened to the second and the meter, which are defined in terms of a caesium transition and the speed of light, and it is being extended to the other units. Thus, in the new system, known as the 'new SI' , four of the SI base units, namely the kilogram, the ampere, the kelvin and the mole, will be redefined in terms of invariants of nature; the new definitions will be based on fixed numerical values of the Planck constant ( $h$ ), the elementary charge ( $e$ ), the Boltzmann constant ( $k$ ), and the Avogadro constant ( $N_A$ ), respectively. The proposed draft of the SI brochure gives more details, but it stops short of describing the recommended mises en pratique . For example, for the kilogram, the definition goes The SI unit of mass, the kilogram The kilogram, symbol kg, is the SI unit of mass; its magnitude is set by fixing the numerical value of the Planck constant to be exactly $6.626\,069\,57\times10^{-34}$ when it is expressed in the SI unit for action $\mathrm{J\,s} = \mathrm{kg\,m^2\,s^{-1}}$ . Thus we have the exact relation $h = 6.626\,069\,57\times10^{-34}\:\mathrm{kg\,m^2\,s^{-1}} = 6.626\,069\,57\times10^{-34}\:\mathrm{J\,s}$ . Inverting this equation gives an exact expression for the kilogram in terms of the three defining constants $h$ , $\Delta \nu(^{133}\mathrm{Cs})_\mathrm{hfs}$ and $c$ : $$ \mathrm{kg} =\left(\frac{h}{6.626\,069\,57\times10^{-34}}\right)\mathrm{m}^{2}\:\mathrm s^{-1} =1.475521\ldots\times 10^{40}\frac{h \Delta \nu\left(^{133}\mathrm{Cs}\right)_\mathrm{hfs}}{c^2} $$ The Planck constant is a constant of nature, whose value may be expressed as the product of a number and the unit joule second, where $\mathrm{J\,s} = \mathrm{kg\,m^2\,s^{-1}}$ . The effect of this definition is to define the unit $\mathrm{kg\,m^2\,s^{-1}}$ (the unit of both the physical quantities action and angular momentum), and thus together with the definitions of the second and the metre this leads to a definition of the unit of mass expressed in terms of the value of the Planck constant $h$ . Note that macroscopic masses can be measured in terms of $h$ , using the Josephson and quantum-Hall effects together with the watt balance apparatus, or in terms of the mass of a silicon atom, which is accurately known in terms of $h$ using the x-ray crystal density approach. However, the brochure is pretty scant as to what the specific realizations through watt balances actually imply in terms of a route from measured physical quantities to values of fundamental constants or to inferred masses. For the specific case of the watt balance, for example, the physical constants at play are much more naturally the Josephson and von Klitzing constants, $K_J=2e/h$ and $R_K=h/e^2$ , if I understand correctly, so there is some re-shuffling of experimental results to be done. The SI brochure is similarly vague for the other three base unit / fundamental constant pairs. This brings me, then, to my specific questions. For each of these four base unit / fundamental constant pairs, what are the proposed experimental realizations, what are the basics of their operation, and what physical effects do they rely on? what other fundamental constants are used to go from experimentally measured values to inferred parameters? (i.e. the meter depends on the second. Does the kilogram depend on the value of the electric charge?)
what specific natural constants are measured by the experiment, and how are they reshuffled to obtain final results? Additionally, what is the dependency tree between the different definitions of the base units? What units depend on what others, either directly or indirectly?
So the BIPM has now released drafts for the mises en pratique of the new SI units, and it's rather more clear what the deal is. The drafts are in the New SI page at the BIPM, under the draft documents tab. These are drafts and they are liable to change until the new definitions are finalized at some point in 2018 . At the present stage the mises en pratique have only recently cleared consultative committee stage, and the SI brochure draft does not yet include any of that information. The first thing to note is that the dependency graph is substantially altered from what it was in the old SI, with significantly more connections. A short summary of the dependency graph, both new and old, is below. In the following I will explore the new definitions, unit by unit, and the dependency graph will fill itself as we go along.

The second

The second will remain unchanged in its essence, but it is likely that the specific reference transition will get changed from the microwave to the optical domain. The current definition of the second reads The second, symbol $\mathrm{s}$ , is the SI unit of time. It is defined by taking the fixed numerical value of the caesium frequency $\Delta\nu_\mathrm{Cs}$ , the unperturbed ground-state hyperfine splitting frequency of the caesium 133 atom, to be $9\,192\,631\,770\:\mathrm{Hz}$ , where the SI unit $\mathrm{Hz}$ is equal to $\mathrm{s}^{-1}$ for periodic phenomena. That is, the second is actually implemented as a frequency standard: we use the resonance frequency of a stream of caesium atoms to calibrate microwave oscillators, and then to measure time we use electronics to count cycles at that frequency. In the new SI, as I understand it the second will not change, but on a slightly longer timescale it will change from a microwave transition to an optical one, with the precise transition yet to be decided. The reason for the change is that optical clocks work at higher frequencies and therefore require less time for comparable accuracies, as explained here , and they are becoming so much more stable than microwave clocks that the fundamental limitation to using them to measure frequencies is the uncertainty in the standard itself, as explained here . In terms of practical use, the second will change slightly, because now the frequency standard is in the optical regime, whilst most of the clocks we use tend to want electronics that operate at microwave or radio frequencies which are easier to control, so you want a way to compare your clock's MHz oscillator with the ~500 THz standard. This is done using a frequency comb : a stable source of sharp, periodic laser pulses, whose spectrum is a series of sharp lines at precise spacings that can be recovered from interferometric measurements at the repetition frequency. One then calibrates the frequency comb to the optical frequency standard, and the clock oscillator against the interferometric measurements. For more details see e.g. NIST or RP photonics .

The meter

The meter will be left completely unchanged, at its old definition: The metre, symbol $\mathrm{m}$ , is the SI unit of length. It is defined by taking the fixed numerical value of the speed of light in vacuum $c$ to be $299\,792\,458\:\mathrm{m/s}$ . The meter therefore depends on the second, and cannot be implemented without access to a frequency standard.
It's important to note here that the meter was originally defined independently, through the international prototype meter , until 1960, and it was to this standard that the speed of light of ${\sim}299\,792\,458 \:\mathrm{m/s}$ was measured. In 1983, when laser ranging and similar light-based technologies became the most precise ways of measuring distances, the speed of light was fixed to make the standard more accurate and easier to implement, and it was fixed to the old value to maintain consistency with previous measurements. It would have been tempting, for example, to fix the speed of light at a round $300\,000\,000 \:\mathrm{m/s}$ , a mere 0.07% faster and much more convenient, but this would have the effect of making all previous measurements that depend on the meter incompatible with newer instruments beyond their fourth significant figure. This process - replacing an old standard by fixing a constant at its current value - is precisely what is happening to the rest of the SI, and any concerns about that process can be directly mapped to the redefinition of the meter (which, I might add, went rather well).

The ampere

The ampere is getting a complete re-working, and it will be defined (essentially) by fixing the electron charge $e$ at (roughly) $1.602\,176\,620\times 10^{-19}\:\mathrm C$ , so right off the cuff the ampere depends on the second and nothing else. The current definition is couched on the magnetic forces between parallel wires: more specifically, two infinite wires separated by $1\:\mathrm m$ carrying $1\:\mathrm{A}$ each will attract each other (by definition) by $2\times10^{-7}\:\mathrm{N}$ per meter of length, which corresponds to fixing the value of the vacuum permeability at $\mu_0=4\pi\times 10^{-7}\mathrm{N/A^2}$ ; the old standard depends on all three MKS dynamical standards, with the meter and kilogram dropped in the new scheme. The new definition also shifts back to a charge-based standard, but for some reason (probably to not shake things up too much, but also because current measurements are much more useful for applications) the BIPM has decided to keep the ampere as the base unit. The BIPM mise en pratique proposals are a varied range. One of them implements the definition directly, by using a single-electron tunnelling device and simply counting the electrons that go through. However, this is unlikely to work beyond very small currents, and to go to higher currents one needs to involve some new physics. In particular, the proposed standards at reasonable currents also make use of the fact that the Planck constant $h$ will also have a fixed value of (roughly) $6.626\,069\times 10^{-34}\:\mathrm{kg\:m^2\:s^{-1}}$ , and this fixes the value of two important constants. One is the Josephson constant $K_J=2e/h=483\,597.890\,893\:\mathrm{GHz/V}$ , which is the inverse of the magnetic flux quantum $\Phi_0$ . This constant is crucial for Josephson junctions , which are thin links between superconductors that, among other things, when subjected to an AC voltage of frequency $\nu$ will produce discrete jumps (called Shapiro steps) at the voltages $V_n=n\, \nu/K_J$ in the DC current-voltage characteristic: that is, as one sweeps a DC voltage $V_\mathrm{DC}$ past $V_n$ , the resulting current $I_\mathrm{DC}$ has a discrete jump. (For further reading see here , here or here .) Moreover, this constant gives way directly to a voltage standard that depends only on a frequency standard, as opposed to a dependence on the four MKSA standards as in the old SI. This is a standard feature of the new SI, with the dependency graph completely shaken for the entire set of base plus derived units, with some links added but some removed. The current mise en pratique proposals include stabs at most derived units, like the farad, henry, and so on.
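To get a feel for the numbers in such a Josephson voltage standard, here is a minimal sketch; the $70\:\mathrm{GHz}$ drive is an assumed, but typical, microwave frequency for such systems, and real standards chain many junctions in series to reach practical voltages.

```python
K_J = 483597.890893e9   # Josephson constant 2e/h in Hz/V (fixed in the new SI)
f = 70e9                # microwave drive frequency in Hz (assumed typical value)

for n in range(1, 4):
    V_n = n * f / K_J   # Shapiro-step voltages V_n = n*f/K_J
    print(f"n = {n}:  V = {V_n*1e6:.2f} uV")
# a single junction gives ~145 uV per step, hence arrays of thousands of junctions
```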
The second constant is the von Klitzing constant $R_K = h/e^2= 25\,812.807\,557 \:\Omega$ , which comes up in the quantum Hall effect : at low temperatures, in an electron gas confined to a surface in a strong magnetic field, the system's conductance becomes quantized, and it must come as integer (or possibly fractional) multiples of the conductance quantum $G_0=1/R_K$ . A system in the quantum Hall regime therefore provides a natural resistance standard (and, with some work and a frequency standard, inductance and capacitance standards). These two constants can be combined to give $e=2/(K_JR_K)$ , or in more practical terms one can implement voltage and resistance standards and then take the ampere as the current that will flow across a $1\:\Omega$ resistor when subjected to a potential difference of $1\:\mathrm V$ . In more wordy language, this current is produced at the first Shapiro voltage step of a Josephson junction driven at frequency $483.597\,890\,893\:\mathrm{THz}$ , when it is applied to a resistor of conductance $G=25\,812.807\,557\,G_0$ . (The numbers here are unrealistic, of course - that frequency is in the visible range, at $620\:\mathrm{nm}$ - so you need to rescale some things, but it's the essentials that matter.) It's important to note that, while this is a bit of a roundabout way to define a current standard, it does not depend on any additional standards beyond the second. It looks like it depends on the Planck constant $h$ , but as long as the Josephson and von Klitzing constants are varied accordingly then this definition of the current does not actually depend on $h$ . Finally, it is also important to remark that as far as precision metrology goes, the redefinition will change relatively little, and in fact it represents a conceptual simplification of how accurate standards are currently implemented. For example, NPL is quite upfront in stating that, in the current metrological chain, All electrical measurements below 10 MHz at NPL are traceable to two quantum standards: the quantum Hall effect (QHE) resistance standard and the Josephson voltage standard (JVS). That is, modern practical electrical metrology has essentially been implementing conventional electrical units all along - units based on fixed 'conventional' values of $K_J$ and $R_K$ that were set in 1990, denoted as $K_{J\text{-}90}$ and $R_{K\text{-}90}$ and which have the fixed values $K_{J\text{-}90} = 483.597\,9\:\mathrm{THz/V}$ and $R_{K\text{-}90} = 25\,812.807\:\Omega$ . The new SI will actually heal this rift, by providing a sounder conceptual foundation to the pragmatic metrological approach that is already in use.

The kilogram

The kilogram is also getting a complete re-working. The current kilogram - the mass $M_\mathrm{IPK}$ of the international prototype kilogram - has been drifting slightly for some time , for a variety of reasons. A physical-constant-based definition (as opposed to an artefact-based definition) has been desired for some time, but only now does technology really permit a constant-based definition to work as an accurate standard.
The kilogram, as mentioned in the question, is defined so that the Planck constant $h$ has a fixed value of (roughly) $6.626\,069\times 10^{-34}\:\mathrm{kg\:m^2\:s^{-1}}$ , so as such the SI kilogram will depend on the second and the meter, and will require standards for both to make a mass standard. (In practice, since the meter depends directly on the second, one only needs a time standard, such as a laser whose wavelength is known, to make this calibration.) The current proposed mise en pratique for the kilogram contemplates two possible implementations of this standard, of which the main one is via a watt balance . This is a device which uses magnetic forces to hold up the weight to be calibrated, and then measures the electrical power it's using to determine the weight. For an interesting implementation, see this LEGO watt balance built by NIST . To see how these devices can work, consider the following sketch, with the "weighing mode" on the right. Image source: arXiv:1412.1699 . Good place to advertise their facebook page . Here the weight is attached to a circular coil of wire of length $L$ that is immersed in a magnetic field of uniform magnitude $B$ that points radially outwards, with a current $I$ flowing through the wire, so at equilibrium $$mg=F_g=F_e=BLI.$$ This gives us the weight in terms of an electrical measurement of $I$ - except that we need an accurate value of $B$ . This can be measured by removing the weight and running the balance in "velocity mode", shown on the left of the figure, by moving the plate at velocity $v$ and measuring the voltage $V=BLv$ that this movement induces. The product $BL$ can then be cancelled out, giving the weight as $$mg=\frac{IV}{v},$$ purely in terms of electrical and dynamical measurements. (This requires a measurement of the local value of $g$ , but that is easy to measure locally using length and time standards.) So, on one level, it's great that we've got this nifty non-artefact balance that can measure arbitrary weights, but how come it depends on electrical quantities, when the new SI kilogram is meant to only depend on the kinematic standards for length and time? As noted in the question, this requires a bit of reshuffling in the same spirit as for the ampere. In particular, the Josephson effect gives a natural voltage standard and the quantum Hall effect gives a natural resistance standard, and these can be combined to give a power standard, something like the power dissipated over a resistor of conductance $G=25\,812.807\,557\,G_0$ by a voltage that will produce an AC current of frequency $483.597\,890\,893\:\mathrm{THz}$ when it is applied to a Josephson junction (with the same caveats on the actual numbers as before), and as before this power will actually be independent of the chosen value of $e$ as long as $K_J$ and $R_K$ are changed appropriately. Going back shortly to our NIST-style watt balance, we're faced with measuring a voltage $V$ and a current $I$ . The current $I$ is most easily measured by passing it through some reference resistor $R_0$ and measuring the voltage $V_2=IR_0$ it creates; the voltages will then produce frequencies $f=K_JV$ and $f_2=K_JV_2$ when passed over Josephson junctions, and the reference resistor can be compared to a quantum Hall standard to give $R_0=rR_K$ , in which case $$ m =\frac{1}{rR_KK_J^{2}}\frac{ff_2}{gv} =\frac{h}{4}\frac{ff_2}{rgv}, $$ i.e. a measurement of the mass in terms of Planck's constant, kinematic measurements, and a resistance ratio, with the measurements including two "artefacts" - a Josephson junction and a quantum Hall resistor - which are universally realizable.
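As a minimal sketch of that final formula (with invented, order-of-magnitude inputs only, since realistic watt-balance frequencies require the rescalings mentioned above):

```python
h = 6.62606957e-34   # Planck constant, J s (fixed in the new SI)

def watt_balance_mass(f, f2, r, g, v):
    """m = (h/4) * f * f2 / (r * g * v): mass from two Josephson frequencies,
    a resistance ratio r = R0/R_K, local gravity g and coil velocity v."""
    return (h / 4) * f * f2 / (r * g * v)

# invented example numbers, chosen only so the result comes out near 1 kg:
print(watt_balance_mass(f=2.44e17, f2=2.44e17, r=500.0, g=9.81, v=2e-3))
```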
The Mole

The mole has always seemed a bit of an odd one to me as a base unit, and the redefined SI makes it somewhat weirder. The old definition reads The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in $12\:\mathrm{g}$ of carbon 12 with the caveat that when the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles. The mole is definitely a useful unit in chemistry, or in any activity where you measure macroscopic quantities (such as energy released in a reaction) and you want to relate them to the molecular (or other) species you're using, in the abstract, and to do that, you need to know how many moles you were using. To a first approximation, to get the number of moles in a sample of, say, benzene ( $\mathrm{ {}^{12}C_6H_6}$ ) you would weigh the sample in grams and divide by $12\times 6+6=78$ . However, this fails because the mass of each hydrogen atom is bigger than $1/12$ that of the carbon atoms by about 0.7% , mostly because of the mass defect of carbon. This would make amount-of-substance measurements inaccurate beyond their third significant figure, and it would taint all measurements based on those. To fix that, you invoke the molecular mass of the species you're using, which is in turn calculated from the relative atomic mass of its components, and that includes both isotopic effects and mass-defect effects. The question, though, is how does one measure these masses, and how accurately can one do so? To determine that the relative atomic mass of ${}^{16}\mathrm O$ is $15.994\,914\,619\,56 \:\mathrm{Da}$ , for example, one needs to get a hold of one mole of oxygen as given by the definition above, i.e. as many oxygen atoms as there are carbon atoms in $12\:\mathrm g$ of carbon. This one is relatively easy: burn the carbon in an isotopically pure oxygen atmosphere, separate the uncombusted oxygen, and weigh the resulting carbon dioxide. However, doing this to thirteen significant figures is absolutely heroic, and going beyond this to populate the entire periodic table is obviously going to be a long exercise in bleeding accuracy to long chemical metrology traceability chains. Now, as it happens, there can in fact be more accurate ways to do this, and they are all to do with the Avogadro project : the creation of a shiny sphere of silicon with a precisely determined number of ${}^{28}\mathrm{Si}$ atoms. This is done by finding the volume (by measuring the diameter, and making sure that the sphere is really round via optical interferometry), and by finding out the spacing between individual atoms in the crystal. The cool part happens in that last bit, because the spacing is found via x-ray diffraction measurements, and those measure naturally not the spacing but instead the constant $$\frac{h}{m({}^{28}\mathrm{Si})}$$ where $h$ is Planck's constant. And to top this up, the $h/m(X)$ combination can be measured directly, for example by measuring the recoil shift in atomic spectroscopy experiments (as reported e.g. here ).
This then lets you count the number of silicon atoms in the sphere without weighing it, or alternatively it lets you measure the mass of the sphere directly in terms of $h$ (which is itself measured via the prototype kilogram). This gives a mise en pratique of the new SI kilogram (where the measured value of $h$ is replaced by its new, fixed value) but that one seems rather impractical to me. More importantly, though, this gives you a good determination of the Avogadro constant: the number $N_A$ of elementary entities in a mole. And this is what enables you to redefine the mole directly as $N_A$ elementary entities, with a fixed value for $N_A$ , while keeping a connection to the old standard: by weighing the silicon sphere you can measure the relative atomic mass of silicon, and this connects you back to the old chemical-metrological chain of weighing different species as they react with each other. In addition to that, a fixed value of $N_A$ enables a bunch of ways to measure the amount of substance by coupling it with the newly-fixated values of other constants, which are detailed in the proposed mises en pratique . For example, you can couple it with $e$ to get the exactly-known value of the electrical charge of one mole of electrons, $eN_A$ , and then do electrolysis experiments against a current standard to get accurate counts on electrons and therefore on the aggregated ions. Alternatively, you can phrase the ideal gas law as $pV=nRT=n(N_Ak_B)T$ and use the newly-fixed value of the Boltzmann constant (see below) and a temperature measurement to get a fix on the number of moles in the chamber. More directly, the number $n$ of moles of a substance $\mathrm X$ in a high-purity sample of mass $m$ can still be determined via $$n=\frac{m}{Ar(\mathrm X)M_u}$$ where $Ar(\mathrm X)$ is the relative mass of the species (determined as before, by chemical means, but unaffected because it's a mass ratio) and $$M_u=m_uN_A$$ is the molar mass constant, which ceases to be fixed and obtains the same uncertainty as $m_u$ , equal to $1/12$ of the mass of $N_A$ carbon-12 atoms. As to the dependency of the standards, it's clear that the mole depends only on the chosen value of $N_A$ . However, to actually implement it one needs a bunch of additional technology, which brings in a whole host of metrological issues and dependence on additional standards, but which ones come in depends exactly on which way you want to measure things. Finally, in terms of why the mole is retained as a dimensional base unit - I'm personally even more lost than before. Under the new definition, saying "one mole of X" is exactly equivalent to saying "about 602,214,085 quadrillion entities of X", saying "one joule per mole" is the same as "one joule per 602,214,085 quadrillion particles", and so on, so to me it feels like the radian and the steradian: a useful unit, worth its salt and worthy of SIness, but still commensurate with unity. But BIPM probably have their reasons.

The kelvin

Continuing with the radical do-overs, the kelvin gets completely redefined. Originally defined, in the current SI, as $1/273.16$ of the thermodynamic temperature $T_\mathrm{TPW}$ of the triple point of water , in the new SI the kelvin will be defined by fixing the value of the Boltzmann constant to (roughly) $k_B=1.380\,6\times 10^{-23}\mathrm{J/K}$ . In practice, the shift will be mostly semantic in many areas.
At reasonable temperatures near $T_\mathrm{TPW}$ , for example, the proposed mises en pratique state that The CCT is not aware of any thermometry technology likely to provide a significantly improved uncertainty on $T_\mathrm{TPW}$ . Consequently, there is unlikely to be any change in the value of $T_\mathrm{TPW}$ in the foreseeable future. On the other hand, the reproducibility of $T_\mathrm{TPW}$ , realised in water triple point cells with isotopic corrections applied, is better than $50\:\mathrm{µK}$ . Experiments requiring ultimate accuracy at or close to $T_\mathrm{TPW}$ will continue to rely on the reproducibility of the triple point of water. In other words, nothing much changes, except a shift in the uncertainty from the determination of $k_B$ to the determination of $T_\mathrm{TPW}$ . It seems that this is currently the case across the board of temperature ranges , and the move seems to be to future-proof against the emergence of accurate primary thermometers , defined as follows: Primary thermometry is performed using a thermometer based on a well-understood physical system, for which the equation of state describing the relation between thermodynamic temperature $T$ and other independent quantities, such as the ideal gas law or Planck's equation, can be written down explicitly without unknown or significantly temperature-dependent constants. Some examples of this are acoustic gas thermometry, where the speed of sound $u$ in a gas is related to the average mass $m$ and the heat capacity ratio $\gamma$ as $u^2=\gamma k_BT/m$ , so characterizing the gas and measuring the speed of sound yields the thermodynamic temperature, or radiometric thermometry, using optical pyrometers and Planck's law to deduce the temperature of a body from its blackbody radiation. Both of these are direct measurements of $k_BT$ , and therefore yield directly the temperature in the new kelvin. However, the latter is the only standard in use in ITS-90 , so it seems that the only direct effect of the shift is that pyrometers no longer need to be calibrated against temperature sources. Since the definition depends on the joule, the new kelvin obviously depends on the full dynamical MKS triplet. Metrologically, of course, matters are much more complicated - thermometry is by far the hardest branch of metrology, and it leans on a huge range of technologies and systems, and on a bunch of empirical models which are not entirely understood.

The candela

Thankfully, the candela remains completely untouched. Given that it depends on the radiated power of the standard candle, it depends on the full dynamical MKS triplet. I do want to take this opportunity, however, to remark that the candela has full rights to be an SI base unit, as I've explained before . The definition looks very innocuous: The candela, symbol $\mathrm{cd}$ , is the SI unit of luminous intensity in a given direction. It is defined by taking the fixed numerical value of the luminous efficacy $K_\mathrm{cd}$ of monochromatic radiation of vacuum wavelength $555\:\mathrm{nm}$ to be $K_\mathrm{cd}=683\:\mathrm{cd/(W\:sr^{-1})}$ . However, the thing that slips past most people is that luminous intensity is as perceived by a (standardized) human eye , ditto for luminous efficacy , and more generally that photometry and radiometry are very different beasts.
Photometric quantities require access to a human eye, in the same way that dynamical quantities like force, energy and power are inaccessible to kinematical measurements that only implement the meter and the second.

Further reading

The current SI seen from the perspective of the proposed New SI. Barry N Taylor. J. Res. Natl. Inst. Stand. Technol. 116, 797-807 (2011); NIST eprint .
{ "source": [ "https://physics.stackexchange.com/questions/147433", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/8563/" ] }
147,437
EDIT: Some answerers pointed out that the title of this question is in contrast with what is actually being asked in the body of the question, so I changed the title accordingly. The original title was "How to theoretically define a measure of length?". This formulation has a mathematical answer, namely that the length of an object is the maximum distance between two of its points, where the notion of distance derives from the particular metric the space is equipped with. The more physically flavoured question follows. Is there a way of defining the "length" of some object? We agree that the length of an object is the distance between two points $A$ and $B$. The naive approach would be to use a meter stick in the usual way: this method does not take into account a possible problem, that of the simultaneity of the measurement; that is, information from point $A$ could be obtained at a different time than information from point $B$. So the next logical thing to do is to set up a detector at some point $C$ such that information travelling from points $A$ and $B$ will be received at $C$ at the same moment. Is there a way of doing this? In relativity such measurements are obtained via flashes of light. How do we know that this is the best method? I am really confused, to the point that I am not sure what exact meaning to attach to the term "length".
So the BIPM has now released drafts for the mises en pratique of the new SI units, and it's rather more clear what the deal is. The drafts are in the New SI page at the BIPM, under the draft documents tab. These are drafts and they are liable to change until the new definitions are finalized at some point in 2018 . At the present stage the mises en pratique have only recently cleared consultative committee stage, and the SI brochure draft does not yet include any of that information. The first thing to note is that the dependency graph is substantially altered from what it was in the old SI, with significantly more connections. A short summary of the dependency graph, both new and old, is below. $\ \ $ In the following I will explore the new definitions, unit by unit, and the dependency graph will fill itself as we go along. The second The second will remain unchanged in its essence, but it is likely that the specific reference transition will get changed from the microwave to the optical domain. The current definition of the second reads The second, symbol $\mathrm{s}$ , is the SI unit of time. It is defined by taking the fixed numerical value of the caesium frequency $\Delta\nu_\mathrm{Cs}$ , the unperturbed ground-state hyperfine splitting frequency of the caesium 133 atom, to be $9\,192\,631\,770\:\mathrm{Hz}$ , where the SI unit $\mathrm{Hz}$ is equal to $\mathrm{s}^{–1}$ for periodic phenomena. That is, the second is actually implemented as a frequency standard: we use the resonance frequency of a stream of caesium atoms to calibrate microwave oscillators, and then to measure time we use electronics to count cycles at that frequency. In the new SI, as I understand it the second will not change, but on a slightly longer timescale it will change from a microwave transition to an optical one, with the precise transition yet to be decided. The reason for the change is that optical clocks work at higher frequencies and therefore require less time for comparable accuracies, as explained here , and they are becoming so much more stable than microwave clocks that the fundamental limitation to using them to measure frequencies is the uncertainty in the standard itself, as explained here . In terms of practical use, the second will change slightly, because now the frequency standard is in the optical regime, whilst most of the clocks we use tend to want electronics that operate at microwave or radio frequencies which are easier to control, so you want a way to compare your clock's MHz oscillator with the ~500 THz standard. This is done using a frequency comb : a stable source of sharp, periodic laser pulses, whose spectrum is a series of sharp lines at precise spacings that can be recovered from interferometric measurements at the repetition frequency. One then calibrates the frequency comb to the optical frequency standard, and the clock oscillator against the interferometric measurements. For more details see e.g. NIST or RP photonics . The meter The meter will be left completely unchanged, at its old definition: The metre, symbol $\mathrm{m}$ , is the SI unit of length. It is defined by taking the fixed numerical value of the speed of light in vacuum $c$ to be $299\,792\,458\:\mathrm{m/s}$ . The meter therefore depends on the second, and cannot be implemented without access to a frequency standard. 
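As a toy illustration of that dependency - not a real ranging protocol, and the round-trip time below is a made-up number of roughly lunar-laser-ranging size - fixing $c$ means that any time-of-flight measurement is automatically a length measurement:

```python
# Time-of-flight ranging: with c fixed by definition, a distance
# measurement reduces to a time measurement against a frequency standard.
C = 299_792_458  # m/s, exact by definition

def distance_from_round_trip(t_round_trip: float) -> float:
    """Distance to a reflector given the round-trip light travel time (s)."""
    return C * t_round_trip / 2

# A ~2.56 s round trip corresponds roughly to the Earth-Moon distance:
print(distance_from_round_trip(2.563))  # ~3.84e8 m
```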
It's important to note here that the meter was originally defined independently, through the international prototype meter , until 1960, and it was to this standard that the speed of light of ${\sim}299\,792\,458 \:\mathrm{m/s}$ was measured. In 1983, when laser ranging and similar light-based technologies became the most precise ways of measuring distances, the speed of light was fixed to make the standard more accurate and easier to implement, and it was fixed to the old value to maintain consistency with previous measurements. It would have been tempting, for example, to fix the speed of light at a round $300\,000\,000 \:\mathrm{m/s}$ , a mere 0.07% faster and much more convenient, but this would have the effect of making all previous measurements that depend on the meter incompatible with newer instruments beyond their fourth significant figure. This process - replacing an old standard by fixing a constant at its current value - is precisely what is happening to the rest of the SI, and any concerns about that process can be directly mapped to the redefinition of the meter (which, I might add, went rather well). The ampere The ampere is getting a complete re-working, and it will be defined (essentially) by fixing the electron charge $e$ at (roughly) $1.602\,176\,620\times 10^{–19}\:\mathrm C$ , so right off the cuff the ampere depends on the second and nothing else. The current definition is couched on the magnetic forces between parallel wires: more specifically, two infinite wires separated by $1\:\mathrm m$ carrying $1\:\mathrm{A}$ each will attract each other (by definition) by $2\times10^{-7}\:\mathrm{N}$ per meter of length, which corresponds to fixing the value of the vacuum permeability at $\mu_0=4\pi\times 10^{-7}\mathrm{N/A^2}$ ; the old standard depends on all three MKS dynamical standards, with the meter and kilogram dropped in the new scheme. The new definition also shifts back to a charge-based standard, but for some reason (probably to not shake things up too much, but also because current measurements are much more useful for applications) the BIPM has decided to keep the ampere as the base unit. The BIPM mise en pratique proposals are a varied range. One of them implements the definition directly, by using a single-electron tunnelling device and simply counting the electrons that go through. However, this is unlikely to work beyond very small currents, and to go to higher currents one needs to involve some new physics. In particular, the proposed standards at reasonable currents also make use of the fact that the Planck constant $h$ will also have a fixed value of (roughly) $6.626\,069\times 10^{−34}\:\mathrm{kg\:m^2\:s^{-1}}$ , and this fixes the value of two important constants. One is the Josephson constant $K_J=2e/h=483\,597.890\,893\:\mathrm{GHz/V}$ , which is the inverse of the magnetic flux quantum $\Phi_0$ . This constant is crucial for Josephson junctions , which are thin links between superconductors that, among other things, when subjected to an AC voltage of frequency $\nu$ will produce discrete jumps (called Shapiro steps) at the voltages $V_n=n\, \nu/K_J$ in the DC current-voltage characteristic: that is, as one sweeps a DC voltage $V_\mathrm{DC}$ past $V_n$ , the resulting current $I_\mathrm{DC}$ has a discrete jump. (For further reading see here , here or here .) Moreover, this constant gives way directly to a voltage standard that depends only on a frequency standard, as opposed to a dependence on the four MKSA standards as in the old SI. 
This is a standard feature of the new SI, with the dependency graph completely shaken for the entire set of base plus derived units, with some links added but some removed. The current mise en pratique proposals include stabs at most derived units, like the farad, henry, and so on. The second constant is the von Klitzing constant $R_K = h/e^2 = 25\,812.807\,557\:\Omega$ , which comes up in the quantum Hall effect : at low temperatures, in an electron gas confined to a surface in a strong magnetic field, the conductance becomes quantized, and it must come as integer (or possibly fractional) multiples of the conductance quantum $G_0=1/R_K$ . A system in the quantum Hall regime therefore provides a natural resistance standard (and, with some work and a frequency standard, inductance and capacitance standards). These two constants can be combined to give $e=K_J/2R_K$ , or in more practical terms one can implement voltage and resistance standards and then take the ampere as the current that will flow across a $1\:\Omega$ resistor when subjected to a potential difference of $1\:\mathrm V$ . In more wordy language, this current is produced at the first Shapiro voltage step of a Josephson junction driven at frequency $483.597\,890\,893\:\mathrm{THz}$, when that voltage is applied to a resistor of conductance $G=25\,812.807\,557\,G_0$. (The numbers here are unrealistic, of course - that frequency is in the visible range, at $620\:\mathrm{nm}$ - so you need to rescale some things, but it's the essentials that matter.) It's important to note that, while this is a bit of a roundabout way to define a current standard, it does not depend on any additional standards beyond the second. It looks like it depends on the Planck constant $h$ , but as long as the Josephson and von Klitzing constants are varied accordingly then this definition of the current does not actually depend on $h$ . Finally, it is also important to remark that as far as precision metrology goes, the redefinition will change relatively little, and in fact it represents a conceptual simplification of how accurate standards are currently implemented. For example, NPL is quite upfront in stating that, in the current metrological chain, All electrical measurements below 10 MHz at NPL are traceable to two quantum standards: the quantum Hall effect (QHE) resistance standard and the Josephson voltage standard (JVS). That is, modern practical electrical metrology has essentially been implementing conventional electrical units all along - units based on fixed 'conventional' values of $K_J$ and $R_K$ that were set in 1990, denoted as $K_{J\text{-}90}$ and $R_{K\text{-}90}$ , and which have the fixed values $K_{J\text{-}90} = 483.597\,9\:\mathrm{THz/V}$ and $R_{K\text{-}90} = 25\,812.807\:\Omega$ . The new SI will actually heal this rift, by providing a sounder conceptual foundation to the pragmatic metrological approach that is already in use. The kilogram The kilogram is also getting a complete re-working. The current kilogram - the mass $M_\mathrm{IPK}$ of the international prototype kilogram - has been drifting slightly for some time , for a variety of reasons. A physical-constant-based definition (as opposed to an artefact-based definition) has been desired for some time, but only now does technology really permit a constant-based definition to work as an accurate standard.
The kilogram, as mentioned in the question, is defined so that the Planck constant $h$ has a fixed value of (roughly) $6.626\,069\times 10^{−34}\:\mathrm{kg\:m^2\:s^{-1}}$ , so as such the SI kilogram will depend on the second and the meter, and will require standards for both to make a mass standard. (In practice, since the meter depends directly on the second, one only needs a time standard, such as a laser whose wavelength is known, to make this calibration.) The current proposed mise en pratique for the kilogram contemplates two possible implementations of this standard, of which the main one is via a watt balance . This is a device which uses magnetic forces to hold up the weight to be calibrated, and then measures the electrical power it's using to determine the weight. For an interesting implementation, see this LEGO watt balance built by NIST . To see how these devices can work, consider the following sketch, with the "weighing mode" on the right. Image source: arXiv:1412.1699 . Good place to advertise their facebook page . Here the weight is attached to a circular coil of wire of length $L$ that is immersed in a magnetic field of uniform magnitude $B$ that points radially outwards, with a current $I$ flowing through the wire, so at equilibrium $$mg=F_g=F_e=BLI.$$ This gives us the weight in terms of an electrical measurement of $I$ - except that we need an accurate value of $B$ . This can be measured by removing the weight and running the balance in "velocity mode", shown on the left of the figure, by moving the plate at velocity $v$ and measuring the voltage $V=BLv$ that this movement induces. The product $BL$ can then be cancelled out, giving the weight as $$mg=\frac{IV}{v},$$ purely in terms of electrical and dynamical measurements. (This requires a measurement of the local value of $g$ , but that is easy to measure locally using length and time standards.) So, on one level, it's great that we've got this nifty non-artefact balance that can measure arbitrary weights, but how come it depends on electrical quantities, when the new SI kilogram is meant to only depend on the kinematic standards for length and time? As noted in the question, this requires a bit of reshuffling in the same spirit as for the ampere. In particular, the Josephson effect gives a natural voltage standard and the quantum Hall effect gives a natural resistance standard, and these can be combined to give a power standard, something like the power dissipated over a resistor of conductance $G=25\,812.807\,557\,G_0$ by a voltage that will produce an AC current of frequency $483.597\,890\,893\:\mathrm{THz}$ when it is applied to a Josephson junction (with the same caveats on the actual numbers as before), and as before this power will actually be independent of the chosen value of $e$ as long as $K_J$ and $R_K$ are changed appropriately. Going back shortly to our NIST-style watt balance, we're faced with measuring a voltage $V$ and a current $I$ . The current $I$ is most easily measured by passing it through some reference resistor $R_0$ and measuring the voltage $V_2=IR_0$ it creates; the voltages will then produce frequencies $f=K_JV$ and $f_2=K_JV_2$ when passed over Josephson junctions, and the reference resistor can be compared to a quantum Hall standard to give $R_0=rR_K$ , in which case $$ m =\frac{1}{rR_KK_J^{2}}\frac{ff_2}{gv} =\frac{h}{4}\frac{ff_2}{rgv}, $$ i.e.
a measurement of the mass in terms of Planck's constant, kinematic measurements, and a resistance ratio, with the measurements including two "artefacts" - a Josephson junction and a quantum Hall resistor - which are universally realizable. The Mole The mole has always seemed a bit of an odd one to me as a base unit, and the redefined SI makes it somewhat weirder. The old definition reads The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in $12\:\mathrm{g}$ of carbon 12 with the caveat that when the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles. The mole is definitely a useful unit in chemistry, or in any activity where you measure macroscopic quantities (such as energy released in a reaction) and you want to relate them to the molecular (or other) species you're using, in the abstract, and to do that, you need to know how many moles you were using. To a first approximation, to get the number of moles in a sample of, say, benzene ( $\mathrm{ {}^{12}C_6H_6}$ ) you would weigh the sample in grams and divide by $12\times 6+6=78$ . However, this fails because the mass of each hydrogen atom is bigger than $1/12$ of the mass of the carbon atoms by about 0.7% , mostly because of the mass defect of carbon. This would make amount-of-substance measurements inaccurate beyond their third significant figure, and it would taint all measurements based on those. To fix that, you invoke the molecular mass of the species you're using, which is in turn calculated from the relative atomic mass of its components, and that includes both isotopic effects and mass defect effects. The question, though, is how does one measure these masses, and how accurately can one do so? To determine that the relative atomic mass of ${}^{16}\mathrm O$ is $15.994\,914\,619\,56\:\mathrm{Da}$ , for example, one needs to get a hold of one mole of oxygen as given by the definition above, i.e. as many oxygen atoms as there are carbon atoms in $12\:\mathrm g$ of carbon. This one is relatively easy: burn the carbon in an isotopically pure oxygen atmosphere, separate the uncombusted oxygen, and weigh the resulting carbon dioxide. However, doing this to thirteen significant figures is absolutely heroic, and going beyond this to populate the entire periodic table is obviously going to be a long exercise in bleeding accuracy to long chemical metrology traceability chains. Now, as it happens, there can in fact be more accurate ways to do this, and they are all to do with the Avogadro project : the creation of a shiny sphere of silicon with a precisely determined number of ${}^{28}\mathrm{Si}$ atoms. This is done by finding the volume (by measuring the diameter, and making sure that the sphere is really round via optical interferometry), and by finding out the spacing between individual atoms in the crystal. The cool part happens in that last bit, because the spacing is found via x-ray diffraction measurements, and those measure naturally not the spacing but instead the constant $$\frac{h}{m({}^{28}\mathrm{Si})}$$ where $h$ is Planck's constant. And to top this up, the $h/m(X)$ combination can be measured directly, for example by measuring the recoil shift in atomic spectroscopy experiments (as reported e.g. here ).
This then lets you count the number of silicon atoms in the sphere without weighing it, or alternatively it lets you measure the mass of the sphere directly in terms of $h$ (which is itself measured via the prototype kilogram). This gives a mise en pratique of the new SI kilogram (where the measured value of $h$ is replaced by its new, fixed value) but that one seems rather impractical to me. More importantly, though, this gives you a good determination of the Avogadro constant: the number $N_A$ of elementary entities in a mole. And this is what enables you to redefine the mole directly as $N_A$ elementary entities, with a fixed value for $N_A$ , while keeping a connection to the old standard: by weighing the silicon sphere you can measure the relative atomic mass of silicon, and this connects you back to the old chemical-metrological chain of weighing different species as they react with each other. In addition to that, a fixed value of $N_A$ enables a bunch of ways to measure the amount of substance by coupling it with the newly-fixated values of other constants, which are detailed in the proposed mises en pratique . For example, you can couple it with $e$ to get the exactly-known value of the electrical charge of one mole of electrons, $eN_A$ , and then do electrolysis experiments against a current standard to get accurate counts on electrons and therefore on the aggregated ions. Alternatively, you can phrase the ideal gas law as $pV=nRT=n(N_Ak_B)T$ and use the newly-fixed value of the Boltzmann constant (see below) and a temperature measurement to get a fix on the number of moles in the chamber. More directly, the number $n$ of moles of a substance $\mathrm X$ in a high-purity sample of mass $m$ can still be determined via $$n=\frac{m}{Ar(\mathrm X)M_u}$$ where $Ar(\mathrm X)$ is the relative mass of the species (determined as before, by chemical means, but unaffected because it's a mass ratio) and $$M_u=m_uN_A$$ is the molar mass constant, which ceases to be fixed and obtains the same uncertainty as $m_u$ , equal to $1/12$ of the mass of $N_A$ carbon-12 atoms. As to the dependency of the standards, it's clear that the mole depends only on the chosen value of $N_A$ . However, to actually implement it one needs a bunch of additional technology, which brings in a whole host of metrological issues and dependence on additional standards, but which ones come in depends exactly on which way you want to measure things. Finally, in terms of why the mole is retained as a dimensional base unit - I'm personally even more lost than before. Under the new definition, saying "one mole of X" is exactly equivalent to saying "about 602,214,085 quadrillion entities of X", saying "one joule per mole" is the same as "one joule per 602,214,085 quadrillion particles", and so on, so to me it feels like the radian and the steradian: a useful unit, worth its salt and worthy of SIness, but still commensurate with unity. But BIPM probably have their reasons. The kelvin Continuing with the radical do-overs, the kelvin gets completely redefined. Originally defined, in the current SI, as $1/273.16$ of the thermodynamic temperature $T_\mathrm{TPW}$ of the triple point of water , in the new SI the kelvin will be defined by fixing the value of the Boltzmann constant to (roughly) $k_B=1.380\,6\times 10^{-23}\mathrm{J/K}$ . In practice, the shift will be mostly semantic in many areas. 
At reasonable temperatures near $T_\mathrm{TPW}$ , for example, the proposed mises en pratique state that The CCT is not aware of any thermometry technology likely to provide a significantly improved uncertainty on $T_\mathrm{TPW}$ . Consequently, there is unlikely to be any change in the value of $T_\mathrm{TPW}$ in the foreseeable future. On the other hand, the reproducibility of $T_\mathrm{TPW}$ , realised in water triple point cells with isotopic corrections applied, is better than $50\:\mathrm{µK}$ . Experiments requiring ultimate accuracy at or close to $T_\mathrm{TPW}$ will continue to rely on the reproducibility of the triple point of water. In other words, nothing much changes, except a shift in the uncertainty from the determination of $k_B$ to the determination of $T_\mathrm{TPW}$ . It seems that this is currently the case across the board of temperature ranges , and the move seems to be to future-proof against the emergence of accurate primary thermometers , defined as follows: Primary thermometry is performed using a thermometer based on a well-understood physical system, for which the equation of state describing the relation between thermodynamic temperature $T$ and other independent quantities, such as the ideal gas law or Planck's equation, can be written down explicitly without unknown or significantly temperature-dependent constants. Some examples of this are acoustic gas thermometry, where the speed of sound $u$ in a gas is related to the average mass $m$ and the heat capacity ratio $\gamma$ as $u^2=\gamma k_BT/m$ , so characterizing the gas and measuring the speed of sound yields the thermodynamic temperature, or radiometric thermometry, using optical pyrometers and Planck's law to deduce the temperature of a body from its blackbody radiation. Both of these are direct measurements of $k_BT$ , and therefore yield directly the temperature in the new kelvin. However, the latter is the only standard in use in ITS-90 , so it seems that the only direct effect of the shift is that pyrometers no longer need to be calibrated against temperature sources. Since the definition depends on the joule, the new kelvin obviously depends on the full dynamical MKS triplet. Metrologically, of course, matters are much more complicated - thermometry is by far the hardest branch of metrology, and it leans on a huge range of technologies and systems, and on a bunch of empirical models which are not entirely understood. The candela Thankfully, the candela remains completely untouched. Given that it depends on the radiated power of the standard candle, it depends on the full dynamical MKS triplet. I do want to take this opportunity, however, to remark that the candela has full rights to be an SI base unit, as I've explained before . The definition looks very innocuous: The candela, symbol $\mathrm{cd}$ , is the SI unit of luminous intensity in a given direction. It is defined by taking the fixed numerical value of the luminous efficacy $K_\mathrm{cd}$ of monochromatic radiation of vacuum wavelength $555\:\mathrm{nm}$ to be $K_\mathrm{cd}=683\:\mathrm{cd/(W\:sr^{-1})}$ . However, the thing that slips past most people is that luminous intensity is as perceived by a (standardized) human eye , ditto for luminous efficacy , and more generally that photometry and radiometry are very different beasts.
Photometric quantities require access to a human eye, in the same way that dynamical quantities like force, energy and power are inaccessible to kinematical measurements that only implement the meter and the second. Further reading The current SI seen from the perspective of the proposed New SI. Barry N Taylor. J. Res. Natl. Inst. Stand. Technol. 116 , 797-807 (2011) ; NIST eprint .
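As a concrete (if simplified) illustration of what the fixed $K_\mathrm{cd}$ buys you, here is a minimal sketch converting radiant power at $555\:\mathrm{nm}$ into photometric quantities; the laser power and beam solid angle below are my own illustrative numbers:

```python
# Photometry sketch: the fixed luminous efficacy K_cd = 683 lm/W at 555 nm
# converts radiant power (radiometric) into luminous flux (photometric).
K_CD = 683.0  # lm/W for monochromatic 555 nm radiation, fixed by definition

def luminous_flux_555nm(radiant_power_w: float) -> float:
    """Luminous flux in lumens for monochromatic 555 nm light."""
    return K_CD * radiant_power_w

def luminous_intensity(flux_lm: float, solid_angle_sr: float) -> float:
    """Luminous intensity in candela (lm/sr)."""
    return flux_lm / solid_angle_sr

# A 1 mW laser at 555 nm carries 0.683 lm; squeezed into a 1e-6 sr beam
# it has a luminous intensity of ~683,000 cd.
flux = luminous_flux_555nm(1e-3)
print(flux, luminous_intensity(flux, 1e-6))
```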
{ "source": [ "https://physics.stackexchange.com/questions/147437", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63093/" ] }
147,539
On the question why certain velocities (i.e. phase velocity) can be greater than the speed of light, people will say something like: since no matter or "information" is transferred, therefore the law of relativity is not violated. What does information mean exactly in this context? It may help to consider the following scenarios: If a laser is swept across a distant object, the spot of laser light can easily be made to move across the object at a speed greater than $c$. Similarly, a shadow projected onto a distant object can be made to move across the object faster than $c$. In neither case does the light travel from the source to the object faster than $c$, nor does any information travel faster than light. Read more: https://www.physicsforums.com/threads/phase-velocity-and-group-velocity.693782/
In the case of relativity, "information" refers to a signal that enforces causality. That is, if event A causes event B, then some signal must travel from A to B. Otherwise, how would B "know" that A had occurred? Some examples: Light (signal) from a candle (A) hits your eye (B), causing you to see it. Electricity (signal) flows from a connected switch (A) to a light bulb (B), turning it on. Your friend (A) throws a wad of paper (signal) that hits you (B) in the back of the head, causing you to turn around to see who's trying to get your attention. In all of these cases, the effect (B) comes after the cause (A) because there must be some signal from A that interacts with B to cause B to happen. The technical term for this is "locality." Over the centuries of studying how the universe works, scientists have found that all causes are local to their effects; nothing happens at a distance without something (light, sound, matter, etc.) acting as a go-between. [1] If you want to interact with some distant object (a friend, a planet, an enemy target), you either have to go there yourself or send something in your place (a letter, a satellite, a missile). Let's consider the case of a laser beam swept across the face of the Moon. Let's further imagine that there are two astronauts, Alice and Bob, on the surface of the Moon with a large distance between them. The laser spot sweeps across the Moon and falls upon both Alice and later Bob, with the spot moving at faster than the speed of light. So, the question is, does that spot constitute a causality signal from Alice to Bob? The answer is no, because nothing Alice does will affect how the spot moves or when it moves or even if it moves. The cause of the light is on Earth and is not local to Alice. Nothing Alice does will change the spot that Bob sees. There is a way that Alice can use the spot. She can hold up a mirror and reflect the laser beam towards Bob. The reflected laser beam is a causality signal because its origin is local to Alice. Alice can choose whether or not to reflect the beam at Bob. But, notice that this signal travels at the speed of light. It will arrive after the laser spot sweeping across the surface. [1] This is why Einstein and others objected to quantum entanglement weirdness. It looks like signaling at a distance, but it's really not. Various mathematical and experimental discoveries show that not even entanglement's "spooky action at a distance" can transmit information faster than light. Quantum teleportation has been demonstrated in the lab, but there must be a slower-than-light signal between the sender and receiver to make the system work. There's far too much detail to go into here.
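To put a number on the swept-spot scenario - with my own toy sweep rate, and using the small-angle relation between angular rate and transverse speed - the spot exceeds $c$ with very modest hardware:

```python
# Toy calculation: speed of a laser spot swept across the Moon. The spot
# can exceed c because nothing physical travels along the lunar surface;
# no causal signal is carried from one point of the spot's path to another.
import math

C = 299_792_458       # m/s
EARTH_MOON = 3.844e8  # mean Earth-Moon distance, m

def spot_speed(sweep_rate_rad_s: float, distance_m: float) -> float:
    # For small angles the spot moves at (angular rate) * (distance).
    return sweep_rate_rad_s * distance_m

v = spot_speed(math.pi / 2, EARTH_MOON)  # sweep 90 degrees in one second
print(v / C)  # ~2.0: the spot "moves" at about twice the speed of light
```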
{ "source": [ "https://physics.stackexchange.com/questions/147539", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/56452/" ] }
147,826
I'm reading a book where in one scene a wizard/alchemist teleports a scroll after reading it. He folded the parchment carefully and muttered a single cantrip. The note vanished with a small plop of displaced air , joining the others in a safe place. That made me wonder: how much air would need to be displaced for the air rushing in to create any sound at all? Is an audible "plop" sound even possible?
Sound intensity is measured on the dB scale, which is a logarithmic scale of pressure. The "threshold of hearing" is given by the graph below: which tells you (approximately) that 0 dB is about "as low as you go" - the "threshold of hearing". Note that sound signal drops off with distance - we will have to take that into account in what follows. If you suddenly create a vacuum of a certain volume $V$, then air rushing in to fill the void will create a (negative) pressure wave traveling out - for simplicity's sake let's make the void spherical, and "listen" to the plop at a distance of 1 m (where the observer might be standing when the parchment disappears). The problem we run into is that the pressure "step" is not a single frequency tone, it's in effect the sum of many frequencies (think Fourier transform) - so we would need to estimate what percentage of the energy is in the audible range. That's hard to do, and we are talking about magic here - so I am going to simplify. A pressure level of 0 dB corresponds to $2\times 10^{-5}\:\mathrm{Pa}$ - that's a really small pressure. Parchment is thick - let's say 0.2 mm, or about double the thickness of conventional paper (a stack of 500 sheets is about 5 cm thick, so I estimate that at 0.1 mm per sheet). For a letter size piece of paper, $30 \times 20\:\mathrm{cm^2}$, the volume is $12\:\mathrm{cm^3}$. If that was a sphere, that sphere would have a radius of $\left(\frac{12\:\mathrm{cm^3}}{(4/3)\pi}\right)^{1/3} = 1.4\:\mathrm{cm}$. If that sphere was suddenly "gone", an equal volume of air would have to rush in. At a distance of 1 m, the apparent pressure drop would be $$\begin{align} \Delta P &= \frac{r_1^3}{r_0^3}\times P_\mathrm{ambient}\\ &= 0.3\:\mathrm{Pa} \end{align}$$ That is a Very Loud Pop - about 80 dB. Even if we argue that only a small fraction of this pressure ends up in the audible range there is no doubt in my mind you would hear "something". So yes, you can hear that parchment disappearing. No problem. Even if some of my approximations are off by a factor 10 or greater. We have about 5 orders of magnitude spare. AFTERTHOUGHT If you have ever played with a "naked" loudspeaker (I mean outside of the enclosure, so something like this one from greatplainsaudio.com): you will have noticed that the membrane moves visibly when music is playing - and as you turn the volume down, the movement becomes imperceptible while you can still hear the sound. That, in essence, is what you are doing here. The sound level you are getting would be similar to the sound level recorded when you move a loudspeaker membrane by about 0.2 mm. I can guarantee you would hear it. Might be fun to do the experiment... I'll have to see if I have an old one lying around and I might try it myself. UPDATE no time to play with loudspeakers, but thought I would do the calculation "what is the smallest movement of air that results in a sound the human ear can hear?". Again this is going to be approximate. Let's assume an in-ear headphone with an 8 mm membrane coupling into a 3 mm ear hole. Just from the ratio of areas, we can see that sound levels will amplify - a movement of $x$ by the membrane will move the air in the earhole by $x\left(\frac{8}{3}\right)^2$. The equation that connects the movement of the membrane to the pressure produced is: $$\Delta p = (c\rho\omega)s$$ In words: the change in pressure is the product of speed of sound, density of air, frequency, and amplitude of vibration.
Using $c = 340\:\mathrm{m/s}$, $\rho = 1.3\:\mathrm{kg/m^3}$, $\omega = 2\pi\times 1\:\mathrm{kHz}$, and $\Delta p = 2\times10^{-5}\:\mathrm{Pa}$ (the limit of audible sound at 1 kHz), we find that $$s = 7.2\times10^{-12}\:\mathrm{m}$$ And that's before I take the factor $\left(\frac{8}{3}\right)^2$ into account, which would lower the required amplitude to a staggering $1.0\times10^{-12}\:\mathrm{m}$ - that's smaller than the movement of an atom. You can see the derivation of the above at http://www.insula.com.au/physics/1279/L14.html and if you look for problem # W4 on that page you will find the calculation for a pressure level of 28 mPa at 1 kHz giving 11 nm displacement amplitude. Given that the limit of detectable sound level is about 1000x smaller, my numbers above are quite reasonable. So the real answer to your "headline" question ("how much air needs to be displaced to generate an audible sound") is The equivalent of one layer of atoms is more than enough Impressive, how sensitive the ear is. And bats and dogs have even better hearing, I'm told.
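For anyone who wants to check the two headline numbers in this answer, here is a minimal script that reproduces them under the same approximations (spherical void, 1 m listening distance, 1 kHz threshold):

```python
# Reproduce the estimates above: the ~80 dB "plop" of a vanishing
# parchment-sized volume, and the displacement at the hearing threshold.
import math

P_AMBIENT = 101_325.0  # Pa
P_REF = 2e-5           # Pa, the 0 dB reference at 1 kHz

# 1) Vanishing parchment: 12 cm^3 -> equivalent sphere radius, then the
#    pressure "step" at 1 m scales as (r1/r0)^3.
volume = 12e-6                                   # m^3
r1 = (volume / ((4 / 3) * math.pi)) ** (1 / 3)   # ~0.014 m
dp = (r1 / 1.0) ** 3 * P_AMBIENT                 # ~0.3 Pa
print(dp, 20 * math.log10(dp / P_REF))           # ~0.3 Pa, ~84 dB

# 2) Threshold displacement: dp = c*rho*omega*s  =>  s = dp/(c*rho*omega)
c, rho, omega = 340.0, 1.3, 2 * math.pi * 1e3
s = P_REF / (c * rho * omega)
print(s, s / (8 / 3) ** 2)  # ~7.2e-12 m; ~1.0e-12 m with the ear-canal gain
```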
{ "source": [ "https://physics.stackexchange.com/questions/147826", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/64864/" ] }
147,896
Is it possible to melt diamond? And if it is, will it become diamond again when it cools?
While I agree in principle with David Lynch's answer, I think it's good to take a closer look at the phase diagram (adapted from http://upload.wikimedia.org/wikipedia/commons/4/46/Carbon_basic_phase_diagram.png ): I added the arrows to show possible paths you might follow. Red path: diamond would become graphite before melting; the molten carbon becomes diamond just above 10 GPa, and you cool it down while maintaining the pressure. Once the diamond is cool enough it can be depressurised slowly without changing phase (the hashed region has to be traversed carefully). Blue path: if you just heat your diamond, it will turn to graphite and then vaporize (sublimate) around 4000 K. The green path shows the only "sure" way to melt diamond - starting at a very high pressure, then raising the temperature; above 5000 K one could either continue raising the temperature, or lower the pressure. Note that there is a real problem with doing this - there are no containers that I know of that will allow that combination of temperature and pressure to be maintained. Synthetic diamonds have been made, but typically not by growing from the melt...
{ "source": [ "https://physics.stackexchange.com/questions/147896", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62310/" ] }
147,906
Suppose you have an interaction term of the form $$\mathcal{L}_{int} = \frac{h g}{3!}\phi^3\partial^2\phi$$ where $h$ and $g$ are both couplings. Now suppose I draw a diagram of the form given in the figure (please ignore the $\lambda$s and $J$'s), and suppose that the propagator is the field with the derivative on it: what would the amplitude be in this case? I know how to compute vertex factors, but this one confuses me. Any suggestions? Had there been no propagator involved I would simply write $$S_{int} = i\frac{hg}{3!}\int d^4x\, \phi^3(x)\partial^2\phi(x)= i\frac{hg}{3!}\int d^4x\, DK\,\widetilde{\phi_{1234}}\exp[i(k_1+\dots +k_4)\cdot x](ik_4)^2$$ which would then yield the vertex factor $V$ of (taking care of the $3!$ by permuting the three $\phi$'s) $$V = -i hg k_4^2.$$ I've taken all momenta as incoming into the vertex, $DK = \frac{d^4 k_1\,d^4 k_2\,d^4 k_3\,d^4 k_4}{(2\pi)^{16}}$ , and the tilde denotes the Fourier transform of each separate field.
{ "source": [ "https://physics.stackexchange.com/questions/147906", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/31965/" ] }
147,908
Talking about gravity with my 9 y/o, she asked when we start "falling upward" to the Moon. What is the distance at which the Moon's gravitational attraction becomes higher than that of the Earth, thus making you accelerate towards it, and how do you get to that answer?
The main plot below shows the potential energy of a mass in the Earth-Moon system under the unrealistic assumption that the system is not rotating ; i.e. it mirrors (at present) all but one of the 4 answers given, in assuming that this point is defined where the gravitational forces on a mass due to the Earth and the Moon are equal and opposite (i.e. at the point where the total potential energy [red curve] is at a maximum, because force is of course the gradient of the potential, and I show this as a black line). This is wrong , because it neglects the centrifugal potential caused by the orbital motion. Whilst the inclusion of this potential only changes the third significant figure of the amount of energy it takes to get something to the moon, it moves the point at which a co-rotating object starts to fall towards the moon significantly closer to the earth. In the plot I used the mean Earth-Moon distance of 384,000 km. The point P where the force (neglecting centrifugal force) is zero is about 344,000 km . Including the centrifugal potential (see the plot below: credit NASA) in the co-rotating frame and calculating the "L1 point" where the potential is actually maximised is described here and involves solving a quintic equation. However as the moon mass is much less than the Earth mass we can use the "Hill sphere" approximation, that the L1 point is separated from the moon by $r= R (M_2/3M_1)^{1/3}$, where $R$ is the Earth-Moon separation and $M_2/M_1$ is the Moon/Earth mass ratio. Putting in the numbers gives $R-r=$ 323,000 km , so this is not a small correction. Note however that a body that was previously orbiting the earth and passes through the L1 point cannot simply fall onto the moon. It has too much angular momentum. The L1 point marks the point where it stops orbiting the earth and starts orbiting the moon. In that sense it is "falling" towards the moon. Edit: Final complications are that (i) the Earth-Moon distance is not constant and so neither is the L1 point. In fact a better way to quote the solution is that gravitational force balance is achieved at 90% of the Earth-Moon distance, whilst the distance at which the object falls towards the moon is about 84% of the Earth-Moon distance. (ii) The Earth-Moon system is not isolated and the gravity of the Sun plays a role. I also note that this was part of the mission concept for the SMART-1 mission to the moon, where an orbit was designed so that the satellite spiralled outward from the Earth to the L1 point and was then captured by the moon. It "passed through a position 310,000 km from the Earth and 90,000 km from the Moon in free drift". Including the effects of the centrifugal potential.
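A short script, under the same approximations used above (two point masses at the mean separation, and the Hill-sphere formula for L1), reproduces both distances; the mass ratio 0.0123 is the standard Moon/Earth value:

```python
# Reproduce the two distances quoted above: the static force-balance
# point and the Hill-sphere estimate of the L1 point.
R = 384_000.0             # mean Earth-Moon distance, km
MOON_OVER_EARTH = 0.0123  # mass ratio M2/M1

# Static balance (ignoring rotation): G*M1/d^2 = G*M2/(R-d)^2
d_balance = R / (1 + MOON_OVER_EARTH ** 0.5)
print(d_balance, d_balance / R)  # ~346,000 km (cf. ~344,000 read off the
                                 # plot), i.e. ~90% of the way to the Moon

# Hill-sphere approximation for L1: r = R * (M2/(3*M1))**(1/3)
r_hill = R * (MOON_OVER_EARTH / 3) ** (1 / 3)
print(R - r_hill, (R - r_hill) / R)  # ~323,000 km, ~84% of the way
```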
{ "source": [ "https://physics.stackexchange.com/questions/147908", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/34663/" ] }
147,927
Suppose we have a Gaussian wave function and amplitude distribution function $$\psi(x) = (\frac{2}{\pi a^{2}})^{1/4}e^{-x^{2}/a^{2}}e^{ik_{0}x}, \qquad \phi(k) = (\frac{a^{2}}{2\pi})^{1/4}e^{-a^{2} (k-k_{0})^{2}/4}.$$ Now, according to my textbook, when $x$ and $k$ vary from $0$ and $k_{0}$ to $\pm \Delta x$ and $k_{0} \pm \Delta k$, the functions $|\psi(x)|^{2}$ and $|\phi(k)|^{2}$ drop to $e^{-1/2}$. I'm having trouble seeing why that is the case, as $e^{-1/2}$ is clearly not half the amplitude, which I would expect to be 0.5.
{ "source": [ "https://physics.stackexchange.com/questions/147927", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63858/" ] }
148,028
After reading many questions, like this and this , I wonder: Is it possible to consider the other fundamental forces too - the electroweak interaction and the strong interaction, or ultimately the unification of these - to be fictitious forces like gravity in the framework of general relativity? If we want a final unification of all fundamental forces, doesn't this feature of gravity have to become a feature of the other forces as well?
The classical theory of electrodynamics can indeed be written as a geometrical theory in a similar way to general relativity. As it happens there is a question and answer addressing just this, but it's in the Maths SE: Electrodynamics in general spacetime . Classical electrodynamics is an example of a class of theories called classical Yang-Mills gauge theories, though Maxwell didn't realise this as the Yang-Mills theories were first described in 1954 . These are geometric theories like general relativity, though note that GR is not a Yang-Mills theory - if it was we'd probably have quantised it by now. There are various introductions to Yang-Mills theory around, and a quick Google found this introduction (350KB PDF) that seems pretty good. The theories use a curvature tensor, though this is different to the Riemann tensor used in GR. Quantising the Yang-Mills classical theory gives quantum electrodynamics i.e. the quantum field theory describing electrodynamics. The weak and strong forces are also quantum Yang-Mills theories, though in these two cases there is no useful classical theory. Christoph points out in a comment that there is an alternative route to a geometric theory of electrodynamics. In 1919 Theodor Kaluza pointed out that if general relativity was formulated in 5 dimensions (4 space and 1 time) the theory incorporated electrodynamics. This approach was built upon by Oskar Klein and is now known as Kaluza-Klein theory . However the theory requires there to be extra dimensions of space, and in any case the electrodynamic bit of the theory is really a Yang-Mills theory in disguise.
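For concreteness, the curvature tensor mentioned above can be written down explicitly; with sign and normalisation conventions that vary from text to text, the Yang-Mills field strength is:

```latex
% Yang-Mills field strength (the "curvature"); conventions differ
% between texts.
F_{\mu\nu}^a = \partial_\mu A_\nu^a - \partial_\nu A_\mu^a
             + g\, f^{abc} A_\mu^b A_\nu^c
% In the abelian case the structure constants f^{abc} vanish and this
% reduces to the familiar Maxwell field tensor:
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
```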
{ "source": [ "https://physics.stackexchange.com/questions/148028", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/26127/" ] }
148,116
My question is regarding a vector space and a Lie algebra . Why is it that whenever I read advanced physics texts I always hear about Lie algebras? What does it mean to "endow a vector space with a Lie algebra"? I'm assuming it is the same Lie as in Lie groups ? My current knowledge is that Lie groups are "to do with rotations of molecules". I'm not after lots of detail but would like a basic understanding of what this means and why it is such a prevalent idea.
It's an enormous subject, but briefly: Lie groups are smooth groups. Technically , Lie groups are sets that are both a smooth manifold, like a sphere for instance, and also have a group structure (multiplication operator, inverses, and an identity). The group multiplication and inverse must be smooth (differentiable) functions on the manifold. As you mentioned, the group of rotations in 3-dimensional space, called SO(3), is an example of a Lie group. Rotations have a group structure because you can compose or invert rotations and get other rotations, and they are also a smooth manifold because you can smoothly vary the axis or angle and so move continuously from one rotation to another. There are many other examples of Lie groups. Many types of geometric transformations on different spaces form Lie groups. They show up in physics in transformations of spacetime (the Poincaré group , most generally), and in so-called "internal symmetries" that transform different quantum fields into each other (often special unitary groups of various dimensions). Another example is diffeomorphism groups , which show up in general relativity and string theory and are also Lie groups. As for Lie algebras , they are closely related to Lie groups. A Lie algebra basically consists of the "infinitesimal elements" of a Lie group, i.e. the "elements infinitesimally close to the identity". (I put that in scare quotes because in standard analysis, infinitesimal elements don't really exist—technically, a Lie algebra is defined on the tangent space of the Lie group at the identity. Still, the picture of infinitesimal elements is a useful and intuitive way of thinking about this.) For example, in the case of rotations, we would be talking about rotations about any axis by infinitesimal angles. When you multiply two group elements that are very close to the identity, the group multiplication looks like a vector sum—basically the same way that $(1+\delta)(1+\epsilon) \approx 1 + (\delta + \epsilon)$ when $\delta$ and $\epsilon$ are small. Similarly, $(1+\epsilon)^{-1} \approx 1 - \epsilon$ and so group inversion looks like vector negation. So the Lie algebra inherits its operations from those of the underlying Lie group, but it doesn't itself look like a group—instead, it looks like a vector space. The Lie algebra also has, in addition to the standard vector space operations, a bilinear operation called the Lie bracket, denoted $[x, y]$ (where $x, y$ are two vectors in the Lie algebra and $[x, y]$ generates another vector). This operation measures "how noncommutative" the Lie group is; roughly speaking, it corresponds to the commutator $[A, B] = AB - BA$ of the Lie group. Now, the funny and interesting thing about a Lie algebra is that even though it's derived from just an infinitesimal slice of a Lie group, it contains, encoded within it, almost everything there is to know about the group that it came from! You can actually reconstruct the entire Lie group, starting from just the Lie algebra, by using the exponential map —a generalization of the ordinary exponential function. (It's almost because there are some cases where different Lie groups have the same Lie algebra, but different global structures—for example, SO(3) and SU(2).) And since Lie algebras are vector spaces, and the Lie bracket is a bilinear operation, all you really need to have is a set of basis vectors for the Lie algebra and know what the Lie bracket does to each pair of basis vectors. 
Such a set of basis vectors is called a set of generators of the group. If you apply the Lie bracket to each pair of generators and write down the resulting vectors as coordinates in the same basis, the set of numbers you obtain are called structure constants . From the generators and the structure constants, you can generate the Lie algebra and thence the entire Lie group (except for ambiguities of global structure as mentioned above)! This makes Lie algebras a very powerful tool for understanding the Lie groups that show up in physics. For example, in particle physics, the gauge bosons (photon, W, Z, gluons) are closely related to the generators of internal symmetry groups; momentum and angular momentum are related to the generators of the Poincaré group, and so on. There's much more that could be said about this—I haven't even mentioned representations!—but this is probably enough for one answer, so I'll stop here. :)
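A concrete way to see the exponential map in action is to exponentiate a generator of rotations; here is a small numerical sketch (assuming NumPy and SciPy are available):

```python
# Exponential map from the Lie algebra so(3) to the Lie group SO(3):
# exponentiating an antisymmetric generator yields a rotation matrix.
import numpy as np
from scipy.linalg import expm

# Generator of rotations about the z-axis (a basis vector of so(3)).
Lz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 0.0]])

theta = np.pi / 2
R = expm(theta * Lz)  # rotation by 90 degrees about the z-axis

print(np.round(R, 6))                     # the familiar rotation matrix
print(np.allclose(R @ R.T, np.eye(3)))    # True: R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))  # True: det = +1, so R is in SO(3)
```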
{ "source": [ "https://physics.stackexchange.com/questions/148116", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63886/" ] }
148,177
I'm trying to understand quantum physics. I'm fairly familiar with it, but I can't decide what counts as an observation that makes a particle behave like one (at least where light is concerned). So the question is: what do we see with our eyeballs? We point a laser (or any kind of light source) at the wall. We see its path from point A to B. Do I "see" a particle or a wave? Now take an ordinary everyday object: looking at it seems to amount to observing its "pieces", which should influence their behaviour. Does this mean that while we're watching light, it acts like a particle at the quantum level?
You are seeing particles. However there's more to this than meets the eye so I need to explain exactly what I mean by this. Light is neither a particle nor a wave. Instead it is a quantum field. As a general rule while light is travelling it appears as a wave, but when the light quantum field is exchanging energy with anything it does so in quanta that appear as particles i.e. photons. You see because light excites electrons in rhodopsin molecules in the cells in your retina. Since this is an energy exchange (from the quantum field to the rhodopsin molecule) the interaction looks like absorption of a photon. So you are seeing particles.
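To put a rough number on the quanta involved - the 500 nm wavelength below is my own illustrative choice for visible light - each absorption event deposits an energy $E=hc/\lambda$:

```python
# Energy of a single visible photon: the size of one "quantum" of energy
# exchange between the light field and a rhodopsin molecule in the retina.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # J per electronvolt

wavelength = 500e-9  # m, green light (illustrative choice)
energy = H * C / wavelength
print(energy, energy / EV)  # ~4.0e-19 J, i.e. ~2.5 eV per photon
```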
{ "source": [ "https://physics.stackexchange.com/questions/148177", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/65015/" ] }
148,216
I was in an argument with someone who claims that length contraction is not "real" but only "apparent", that the measurement of a solid rod in its rest reference frame is the "real length" of the rod and all other measurements are somehow just "artificial" and "apparent". It seemed to me like a bad conversation about poorly defined words and not really relevant to physics, but then some quotes were offered: ...so that the back appears closer to the front. But of course nothing has happened to the rod itself. (Rindler) The effects are apparent (that is, caused by the motion) in the same sense that proper quantities have not changed. (Resnick & Halliday) At the same time, the IEP entry on SR insists that length contraction and time dilation are "real", with observable consequences: Time and space dilation are often referred to as ‘perspective effects’ in discussions of STR. Objects and processes are said to ‘look’ shorter or longer when viewed in one inertial frame rather than in another. It is common to regard this effect as a purely ‘conventional’ feature, which merely reflects a conventional choice of reference frame. But this is rather misleading, because time and space dilation are very real physical effects, and they lead to completely different types of physical predictions than classical physics. [...] However, this does not mean that time and space dilation are not real effects. They are displayed in other situations where there is no ambiguity. One example is the twins' paradox, where proper time slows down in an absolute way for a moving twin. And there are equally real physical effects resulting from space dilation. It is just that these effects cannot be used to determine an absolute frame of rest. I went through a lot of original material by Einstein, Minkowski and Lorentz, and didn't find anything about what is "real" and what is not. Finally, I know about muons, where the effects of SR seem to be very real (from Wikipedia , but I had seen it in a physics class before): When a cosmic ray proton impacts atomic nuclei in the upper atmosphere, pions are created. These decay within a relatively short distance (meters) into muons (their preferred decay product), and muon neutrinos. The muons from these high energy cosmic rays generally continue in about the same direction as the original proton, at a velocity near the speed of light. Although their lifetime without relativistic effects would allow a half-survival distance of only about 456 m (2.197 µs × ln(2) × 0.9997 × c) at most (as seen from Earth) the time dilation effect of special relativity (from the viewpoint of the Earth) allows cosmic ray secondary muons to survive the flight to the Earth's surface, since in the Earth frame, the muons have a longer half life due to their velocity. From the viewpoint (inertial frame) of the muon, on the other hand, it is the length contraction effect of special relativity which allows this penetration, since in the muon frame, its lifetime is unaffected, but the length contraction causes distances through the atmosphere and Earth to be far shorter than these distances in the Earth rest-frame. Both effects are equally valid ways of explaining the fast muon's unusual survival over distances. So which is which? Why do Rindler, Resnick & Halliday use the word "apparent"?
The laws of physics have the same form for all, but there are different measurements which are equally "real"? Correct. Having said that, it is often sensible to differentiate between 'apparent' and 'proper' (or 'intrinsic') values, the latter normally measured in the rest frame of the object in question and giving an upper or lower bound for an observable that varies continuously from frame to frame. However, this does not imply that 'apparent' values are less real: For example, arguably all massive objects have zero proper (3-)momentum, but if you get hit by a train, its apparent momentum will feel quite real to you ;) Also, proper values need not always exist, in particular in case of light. Eg, there's no way to decide on physical grounds which wavelength should be considered the intrinsic one of a photon: The one at time of emission, or the Doppler-shifted one at time of absorption? The process is time-symmetric and as there is no rest frame for light-like particles, basically the whole continuum of wavelengths is equally (im)proper.
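To see how an "apparent" quantity carries real physical consequences, here is a quick calculation along the lines of the muon passage quoted in the question; the production altitude is a typical illustrative value, and the speed is the $0.9997c$ figure from that quote:

```python
# Muon survival: the same physics, described either as time dilation
# (Earth frame) or length contraction (muon frame). Illustrative numbers.
import math

C = 2.998e8          # m/s
TAU = 2.197e-6       # s, muon mean lifetime at rest
ALTITUDE = 15_000.0  # m, typical production altitude (illustrative)

v = 0.9997 * C
gamma = 1 / math.sqrt(1 - (v / C) ** 2)  # ~41

# Earth frame: the dilated lifetime lets the muon cover the full altitude.
print(v * gamma * TAU)  # ~2.7e4 m mean path -- enough to reach the ground

# Muon frame: the atmosphere itself is contracted by the same factor.
print(ALTITUDE / gamma)  # ~370 m of atmosphere left to cross
print(v * TAU)           # ~660 m covered in one rest-frame lifetime
```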
{ "source": [ "https://physics.stackexchange.com/questions/148216", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/9604/" ] }
148,418
According to the Wikipedia page on Galaxy Types , there are four main kinds of galaxies: Spirals - as the name implies, these look like huge spinning spirals with curved "arms" branching out Ellipticals - look like a big disk of stars and other matter Lenticulars - those that are somewhere in between the above two Irregulars - galaxies that lack any sort of defined shape or form; pretty much everything else Now, from what I can tell, these all appear to be 2D, that is, each galaxy's shape appears to be confined within some sort of invisible plane. But why couldn't a galaxy take a more 3D form? So why aren't there spherical galaxies (ie: the stars and other objects are distributed within a 3D sphere, more or less even across all axes)? Or if there are, why aren't they more common?
This whole question rests on a mistaken premise. There are spherical (or at least nearly spherical) galaxies! They fall into two basic categories - those elliptical galaxies that are pseudo-spherical in shape and the much smaller, so-called "dwarf spheroidal galaxies" that are found associated with our own Galaxy and other large galaxies in the "Local Group". Of course when you look at a galaxy on the sky it is just a two dimensional projection of the true distribution, but one can still deduce (approximate) sphericity from the surface brightness distribution and large line of sight velocity distribution for many ellipticals and dwarf spheroidals. Dwarf spheroidal galaxies may actually be the most common type of galaxy in the universe. These galaxies are roughly spherical because the stars move in orbits with quite random orientations, many on almost radial (highly eccentric) orbits with no strongly preferred axes. The velocity dispersion is usually much bigger than any rotation signature. There is an excellent answer to a related question at Why the galaxies form 2D planes (or spiral-like) instead of 3D balls (or spherical-like)? Pretty pictures: UK Schmidt picture of the Sculptor dwarf spheroidal galaxy (credit: David Malin, AAO) The E0 elliptical galaxy M89 (credit: Sloan Digital Sky Survey). Details: I have found a couple of papers that put some more flesh onto the argument that many elliptical galaxies are close to spherical. These papers are by Rodriguez & Padilla (2013) and Weijmans et al. (2014) . Both of these papers look at the distribution of apparent ellipticities of galaxies in the "Galaxy Zoo" and the Sloan Digital Sky Survey respectively. Then, with a statistical model and with various assumptions (including that galaxies are randomly oriented), they invert this distribution to obtain the distribution of true ellipticity $\epsilon = 1 - B/A$ and an oblate/prolate parameter $\gamma = C/A$, where the three axes of the ellipsoid are $A\geq B \geq C$. I.e., it is impossible to say whether a circular looking individual galaxy seen in projection is spherical, but you can say something about the distribution of 3D shapes if you have a large sample. Rodriguez & Padilla conclude that the mean value of $\epsilon$ is 0.12 with a dispersion of about 0.1 (see picture below), whilst $\gamma$ has a mean of 0.58 with a broader (Gaussian) dispersion of 0.16, covering the whole range from zero to 1. Given that $C/A$ must be less than $B/A$ by definition, this means many ellipticals must be very close to spherical (you cannot say anything is exactly spherical), though the "average elliptical" galaxy is of course not. This picture shows the observed distribution of 2D ellipticities for a large sample of spiral and elliptical galaxies. The lines are what you would predict to observe from the 3D shape distributions found in the paper. This picture from Rodriguez and Padilla shows the deduced true distributions of $\epsilon$ and $\gamma$. The solid red line represents ellipticals. Means of the distributions are shown with vertical lines. Note how the dotted line for spirals has a much smaller $\gamma$ value - because they are flattened. Weijmans et al. (2014) perform similar analyses, but they split their elliptical sample into those that have evidence for significant systematic rotation and those that don't. As you might expect, the rotating ones look more flattened and "oblate". The slow-rotating ones can also be modelled as oblate galaxies, though are more likely to be "tri-axial".
The slow rotators have an average $\epsilon$ of about 0.15 and average $\gamma$ of about 0.6 (in good agreement with Rodriguez & Padilla), but the samples are much smaller.
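As a rough illustration of the inversion technique these papers use, here is a Monte Carlo sketch. It assumes purely oblate spheroids (the real fits allow triaxial shapes) with an intrinsic $C/A$ drawn from a clipped Gaussian loosely modeled on the Rodriguez & Padilla numbers quoted above, views them at random orientations, and looks at the resulting apparent ellipticities:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed intrinsic axis ratios q = C/A (clipped Gaussian, toy model):
q = np.clip(rng.normal(0.58, 0.16, n), 0.05, 0.99)

# Isotropic viewing angles: cos(i) uniform on [0, 1].
cos_i = rng.uniform(0.0, 1.0, n)

# Projected axis ratio of an oblate spheroid seen at inclination i
# (i = 90 degrees is edge-on):
q_obs = np.sqrt(q**2 * (1.0 - cos_i**2) + cos_i**2)
eps_obs = 1.0 - q_obs

print(f"mean apparent ellipticity: {eps_obs.mean():.2f}")
print(f"fraction that look nearly round (eps < 0.1): {(eps_obs < 0.1).mean():.2f}")
```

Comparing such simulated apparent distributions with the observed one is, in essence, how the 3D shape distribution is constrained.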
{ "source": [ "https://physics.stackexchange.com/questions/148418", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/37607/" ] }
148,567
I was expecting a whirlpool in 3D, with the matter glowing from friction as it nears the center, since I expected the event horizon to be visually negligible. How does this depiction work? How big is the central sphere? I am puzzled by the perpendicular circles. Are they the event horizons, if they are both visible? What would be the path of a particle as it gets swallowed into the singularity?
First note that this is a fictional movie and the image is an artist's impression , not a detailed simulation. The public seems to think the movie is some sort of fictionalized documentary, which it never claimed to be. That said, the image is qualitatively conveying some of what happens near a black hole. The diagonal disk is the accretion disk -- this is where matter is spiraling inward due to gravity, friction, and electromagnetic forces too. It glows because it is very hot. The circular ring is the result of gravity bending the light emitted from the far side of the accretion disk into our line of sight. A similar effect happens when the light source is much farther behind the black hole, as seen in this Wikimedia image : As for event horizons, no, you would only see the outer one by definition. (An event horizon is nothing more or less than a surface which delineates which regions of spacetime can communicate with each other.) Since the image in question is made for primarily artistic purposes, I wouldn't try to read too much into details that you see.
{ "source": [ "https://physics.stackexchange.com/questions/148567", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/9052/" ] }
148,569
I have been reading the excellent Command and Control by Eric Schlosser and discovered more about Louis Slotin's experiment with "tickling the dragon's tail" and the infamous Demon Core. What I don't understand (and please excuse my naivety) is what happened when the accident occurred on May 21st, 1946, and Slotin's screwdriver slipped, causing the reflector to close around the core and the core to become supercritical: On the day of the accident, Slotin's screwdriver slipped outward a fraction of an inch while lowering the top reflector, allowing the reflector to fall into place around the core. Instantly there was a flash of blue light and a wave of heat across Slotin's skin; the core had become supercritical, releasing a massive burst of neutron radiation estimated to have lasted about a half second.[10] He quickly flipped the top shell to the floor. The heating of the core and shells stopped the criticality within seconds of its initiation,[11] but Slotin's reaction prevented a recurrence and ended the accident. Slotin's body's positioning over the apparatus also shielded the others from much of the neutron radiation. He received a lethal dose of 1000 rads neutron/114 rads gamma[5] in under a second and died nine days later from acute radiation poisoning. Why did the core not explode/react? My understanding is that the core 'is' the bomb, and once the chain reaction begins, occurring in trillionths of a second, it could not possibly be stopped in time by the reactions of Slotin alone. I have tried to source information to fill my knowledge gap, but am perhaps using the wrong terminology to ask the question. Can anyone assist? Finally, and once again, please excuse the naivety of my question.
Your understanding is pretty much correct and your question quite a natural one. The core did react: the release of energy heated the core and shells quickly, thus changing the neutron capture cross section for the plutonium in the core. A plutonium (or any fissionable) atom's ability to capture a neutron and undergo fission is weakly dependent on temperature: decreasing with increasing temperature. As the core heated, the lowered ability to capture neutrons meant that the core actually became subcritical pretty quickly, thus quenching the chain reaction. Had Slotin not flipped the top shell off, the subcritical core would then have begun to cool and become critical again, repeating the process. Generally, and thankfully, big bangs are very hard to provoke with nuclear chain reactions. The immense release of energy in a small space means that the critical mass is going to heat up and quench the reaction very quickly. If you do get an explosion, the mass blows itself apart, quenching the reaction even more quickly, so unless there are very particular conditions, the bang is not going to be a big one: just enough to break apart the apparatus and drench every living thing nearby in a lethal dose of neutrons. Big nuclear explosions only happen when the process of assembling the critical mass is so fast, and the process of crushing the critical mass and keeping it confined lasts long enough, that a huge amount of material has time to undergo fission before the core blows itself apart and puts an end to the whole process. In a plutonium bomb, this is done by crushing the hollow core with shaped explosives that produce a highly symmetrical, inward moving shock wave.
{ "source": [ "https://physics.stackexchange.com/questions/148569", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/52712/" ] }
148,602
Do the front tires of a car act like gyroscopes, such that a car could steer on a frictionless surface?
No, a car cannot steer on a frictionless surface. This has little to do with gyroscopic action and more to do with conservation of momentum: to turn, even when conserving its speed, the car needs to accelerate at right angles to its motion, which changes the total momentum of the motion. This change in momentum requires a force which, in normal roads, is ultimately provided by the friction between the tyres and the road. In the absence of friction, the car tyres would skid sideways with respect to their rotation (i.e. along the axis) without being able to influence the car's inertia. It's important to note that, because of the gyroscopic effect, the car can indeed change the direction it's facing pretty much arbitrarily. The easiest way to accomplish this is to have a big flywheel, with a horizontal axis, inside the car, with a mass that's at least comparable to the car's. If you then try to turn the flywheel's axis within the car, you will instead turn the car around the flywheel, because of conservation of the large amount of angular momentum in the flywheel. (This will also cause a torque on the car about a horizontal axis, but this can be cancelled by the normal force from the surface.) However, even if you manage to turn the car 90° from its direction of motion, it will continue to move in the same direction as before, with its wheels skidding perpendicularly across the ice. Also, as other answers have mentioned, if the car can interact with the air in any meaningful fashion - either by its air intake and exhaust, or by using its bulge as a sail, or by propping up an actual sail - then it will indeed be susceptible to external forces and it will be able to change its direction of motion. Similarly, the car would be able to steer if it could chuck rocks, bump off of other cars, or use rocket thrusters. I don't think this directly answers the core of the question, though.
{ "source": [ "https://physics.stackexchange.com/questions/148602", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/10024/" ] }
148,656
In a comment to Rob Jeffries' answer to this question on spherical galaxies, Incnis Mrsi commented: There should exist the entire range of orbits in a spherically symmetric system: near-circular, highly eccentric (but not elliptical, because galactic orbits are not Keplerian!) Why shouldn't the orbits of stars be Keplerian? I can think of a few reasons (but I don't know if they're correct): the influence of dark matter, and interactions with gas and dust in the galactic arms. Is either of these explanations correct?
Why shouldn't the orbits of stars be Keplerian? The answer is simple. Keplerian orbits are predicated on a single central point mass. That assumption fails to some extent even in a solar system. It fails massively in a galaxy. A galaxy is not a point mass.
{ "source": [ "https://physics.stackexchange.com/questions/148656", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/56299/" ] }
149,487
You have likely read in books that tides are mainly caused by the Moon. When the Moon is high in the sky, it pulls the water on the Earth upward and a high tide happens. There is some similar effect causing low tides. They also say that the Sun does the same as well, but has a smaller effect compared to the Moon. Here's my question: Why is the Moon the major cause of tides? Why not the Sun? The Sun is extremely massive compared to the Moon. One might say, well, the Sun is much farther than the Moon. But I've got a simple answer: just substitute those numbers in $a=\frac{GM}{d^2}$ and find the gravitational acceleration for the Moon and then for the Sun (on Earth, by the way). You'll find something around $3.38 \times 10^{-6}\,g$ for the Moon and $6.05 \times 10^{-4}\,g$ for the Sun - I double checked it to make sure. As you can see, the Sun pulls about $180$ times more strongly on the Earth. Can anyone explain this? Thanks in advance.
What is important for tidal forces is not the absolute gravity, but the differential gravity across the planet; that is, how different the force of gravity is at a point on the Earth's surface near the Sun compared with a point on the Earth's surface far from the Sun. If you compare it with the Moon, the result will be that the tidal force from the Sun is about 0.43 that of the Moon. Suppose two different bodies in the sky have the same apparent size. Because the mass $M$ of the object will grow as $r^3$ (because $M=\frac{4}{3}\pi\rho R^3$ and $R=\theta r$), the gravitational force will actually grow linearly with $r$, where $r$ is the distance and $R$ is the radius of the object. The tidal force, however, scales as $M/r^3 \propto \rho\,\theta^3$, which does not depend on the distance at all. So if two bodies have the same apparent size (such as the Moon and the Sun) and the same density, the tidal force would be the same. The density of the Moon is about 2.3 times larger than that of the Sun, which is why the Moon's tidal force is larger by roughly that factor.
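A quick numerical check of this argument, using round-number masses and mean distances, compares the direct pull (which the question computed) with the differential pull $2GMR_\oplus/d^3$ across the Earth; the Sun wins on direct gravity by a factor of about 180 but loses on tides:

```python
G = 6.674e-11          # m^3 kg^-1 s^-2
R_earth = 6.371e6      # m

bodies = {
    # name: (mass in kg, mean distance from Earth in m)
    "Moon": (7.35e22, 3.84e8),
    "Sun":  (1.989e30, 1.496e11),
}

for name, (M, d) in bodies.items():
    direct = G * M / d**2                  # absolute acceleration at Earth
    tidal = 2 * G * M * R_earth / d**3     # differential pull across Earth
    print(f"{name}: direct = {direct:.2e} m/s^2, tidal = {tidal:.2e} m/s^2")

M_moon, d_moon = bodies["Moon"]
M_sun, d_sun = bodies["Sun"]
print(f"Sun/Moon tidal ratio = {(M_sun / M_moon) * (d_moon / d_sun)**3:.2f}")
```

With these round inputs the ratio comes out near 0.46, close to the 0.43 quoted above (the exact value depends on which mean distances one adopts).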
{ "source": [ "https://physics.stackexchange.com/questions/149487", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/7787/" ] }
149,648
What happens to Protons and Electrons when a Neutron star forms? At some point gravity overcomes the Pauli Exclusion Principle ( I assume) and they are all forced together. What happens in the process?
It is the Pauli Exclusion Principle that actually allows the formation of a "neutron" star. In an "ordinary" gas of protons and electrons, nothing would happen - we call that ionized hydrogen! However, when you squeeze, lots of interesting things happen. The first is that the electrons become "degenerate". The Pauli exclusion principle forbids more than two electrons (one spin up the other spin down) from occupying the same momentum eigenstate (particles in a box occupy quantised momentum states). In that case what happens is that the electrons "fill up" the low momentum/low energy states and are then forced to fill increasingly higher momentum/energy states. The electrons with large momentum consequently exert a degeneracy pressure, and it is this pressure that supports white dwarf stars. If the density is increased even further - the energies of degenerate electrons at the top of the momentum/energy distribution get so large that they are capable of interacting with protons (via the weak nuclear force) in a process called inverse beta decay (sometimes referred to as electron capture when the proton is part of a nucleus) to produce a neutron and a neutrino. $$p + e \rightarrow n + \nu_e$$ Ordinarily, this endothermic process does not occur, or if it does, the free neutron decays back into a proton and electron. However at the densities in a neutron star, not only can the degenerate electrons have sufficient energy to instigate this reaction, their degeneracy also blocks neutrons from decaying back into electrons and protons. The same is also true of the protons (also fermions), which are also degenerate at neutron star densities. The net result is an equilibrium between inverse beta decay and beta decay. If too many neutrons are produced, the drop in electron/proton densities leaves holes at the top of their respective energy distributions that can be filled by decaying neutrons. However if too many neutrons decay, the electrons and protons at the tops of their respective energy distributions have sufficient energies to create new neutrons. Mathematically, this equilibrium is expressed as $$E_{F,p} + E_{F,e} = E_{F,n},$$ where these are the "Fermi energies" of the degenerate protons, electrons and neutrons respectively, and we have the additional constraint that the Fermi momenta of the electrons and protons are identical (since their number densities would be the same). At neutron star densities (a few $\times 10^{17}$ kg/m $^{3}$ ) the ratio of neutrons to protons is of order 100. (The number of protons equals the number of electrons). This calculation assumes ideal (non-interacting) fermion gases. At even higher densities (cores of neutron stars) the strong interaction between nucleons in the asymmetric nuclear matter alters the equilibrium above and causes the n/p ratio to decrease to closer to 10.
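For the ideal (non-interacting) gas equilibrium described above, the condition $E_{F,p} + E_{F,e} = E_{F,n}$ can be solved numerically. The sketch below uses relativistic Fermi energies (rest mass included) and an assumed baryon density of $\sim 3\times 10^{17}$ kg/m$^3$; real neutron-star matter is strongly interacting, so treat this only as the ideal-gas estimate, which indeed gives an n/p ratio of order 100:

```python
import numpy as np
from scipy.optimize import brentq

hbar, c = 1.0546e-34, 2.998e8
m_n, m_p, m_e = 1.6749e-27, 1.6726e-27, 9.109e-31   # kg

def fermi_energy(n, m):
    """Relativistic Fermi energy (rest mass included) of an ideal fermion gas."""
    p_F = hbar * (3.0 * np.pi**2 * n) ** (1.0 / 3.0)
    return np.sqrt((p_F * c) ** 2 + (m * c**2) ** 2)

def beta_balance(x, n_b):
    """E_F,p + E_F,e - E_F,n with charge neutrality n_e = n_p = x * n_b."""
    return (fermi_energy(x * n_b, m_p) + fermi_energy(x * n_b, m_e)
            - fermi_energy((1.0 - x) * n_b, m_n))

n_b = 1.8e44   # baryons per m^3 (~3e17 kg/m^3, an assumed density)
x = brentq(beta_balance, 1e-8, 0.4, args=(n_b,))
print(f"proton fraction = {x:.4f}, n/p ratio = {(1 - x) / x:.0f}")
```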
{ "source": [ "https://physics.stackexchange.com/questions/149648", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
149,744
When an electron absorbs a photon, it gets into a higher energy state and goes into an upper orbit/shell. Does (or rather, should) this absorption of energy also have an impact on its mass (although incredibly small)? Can we even measure the mass of an electron while it is still bound to the nucleus?
This is really an extended comment to Geoffrey's answer, so please upvote Geoffrey's answer rather than this. The mass of a hydrogen atom is $1.67353270 \times 10^{-27}$ kg. If you add the masses of a proton and electron together then they come to $1.67353272 \times 10^{-27}$ kg. The difference is about 13.6eV, which is the ionisation energy of hydrogen (though note that the experimental error in the masses isn't much less than the difference so this is only approximate). This shouldn't surprise you because you have to add energy (in the form of a 13.6eV photon) to dissociate a hydrogen atom into a free proton and electron, and this increases the mass in accordance with Einstein's famous equation $E = mc^2$ . So this is a direct example of the sort of mass increase you describe. However you can't say this is an increase of mass of the electron or the proton. It's an increase in mass of the combined system. The invariant masses of the electron and proton are constants and not affected by whether they're in atoms or roaming freely. The change in mass is coming from a change in the binding energy of the system.
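The arithmetic is easy to reproduce. This sketch (using CODATA-style constants, so the last digits may differ slightly from the values quoted above) converts the 13.6 eV binding energy into a mass via $E = mc^2$ and subtracts it from the free proton plus electron masses:

```python
m_p = 1.67262192e-27      # proton mass, kg
m_e = 9.1093837e-31       # electron mass, kg
c = 2.99792458e8          # speed of light, m/s
eV = 1.602176634e-19      # joules per electronvolt

binding = 13.6 * eV             # hydrogen ionisation energy
dm = binding / c**2             # its mass equivalent, ~2.4e-35 kg

m_free = m_p + m_e              # free proton + free electron
m_atom = m_free - dm            # predicted mass of a bound hydrogen atom

print(f"mass deficit     = {dm:.2e} kg")
print(f"free p + e       = {m_free:.9e} kg")
print(f"predicted H atom = {m_atom:.9e} kg")
```

The deficit, about $2.4 \times 10^{-35}$ kg, matches the difference between the measured masses to within their experimental error.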
{ "source": [ "https://physics.stackexchange.com/questions/149744", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/65672/" ] }
149,832
Satellites are isolated systems; the only way for them to transfer heat to outer space is thermal radiation. There are solar panels, so there is a continuous energy flow into the system, and there is no airflow to carry the accumulated heat away to outer space. What kind of cooling systems are used in satellites?
Typically, satellites use radiative cooling to maintain thermal equilibrium at a desired temperature. How they do this depends greatly on the specifics of the satellite's orbit around Earth. For instance, sun-synchronous satellites typically always have one side in sunlight and one side in darkness. These are particularly easy to keep cool because you can apply a white coating to the Sunward side and and black coating to the dark side. The white coating has a low value for radiative absorption while the black coating has a high value for radiative emission. This means it can absorb as little light as possible while emitting more thermal radiation. Different types of satellites have different strategies for cooling, but in general, cooling is achieved by applying functional coatings to the spacecraft that lower or raise the absorptivity/emissivity/reflectivity of its different surfaces. While designing a satellite, the space engineers perform thermal analyses and lots of calculations to determine which surfaces need to have what absorption values in order for the satellite to maintain the desired temperature. It's hard for me to be more specific than this. But this is the reason any good space engineer knows how to find a coating with the desired absorptivity/emissivity values within a day or two.
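The underlying balance is absorbed sunlight versus emitted thermal (Stefan-Boltzmann) radiation. Here is a minimal sketch of the resulting equilibrium temperature; the coating values are illustrative assumptions, and real spacecraft thermal models track many surfaces plus internal dissipation:

```python
sigma = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0             # solar constant near Earth, W/m^2

def equilibrium_T(alpha, epsilon, ratio=0.25):
    """
    Equilibrium temperature of a body absorbing alpha * S over its sunlit
    cross section and radiating from its whole surface; ratio is the
    cross-section-to-radiating-area ratio (1/4 for a sphere).
    """
    return (alpha * S * ratio / (epsilon * sigma)) ** 0.25

# Grey sphere (absorptivity = emissivity): the classic ~278 K
print(f"grey sphere:   {equilibrium_T(1.0, 1.0):.0f} K")
# White coating: low solar absorptivity, high infrared emissivity
print(f"white coating: {equilibrium_T(0.2, 0.9):.0f} K")
```

Swapping the coefficients shows why the choice of coating, not any active machinery, does most of the cooling work.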
{ "source": [ "https://physics.stackexchange.com/questions/149832", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/65722/" ] }
149,977
Why is space a vacuum? Why is space free from air molecules? I heard that even space has a small but finite number of molecules. If so, won't there be drag in space?
Why is space a vacuum? Because, given enough time, gravity tends to make matter clump together. Events like supernovae that spread it out again are relatively rare. Also, space is big. Maybe someone could calculate the density if visible matter were evenly distributed in visible space. I imagine it would be pretty thin. (Later) Space is big. Really big. You just won't believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it's a long way down the road to the chemist's, but that's just peanuts to space. Douglas Adams, The Hitchhiker's Guide to the Galaxy. According to Wikipedia, the observable universe has a radius of 46.6 billion light years and contains about $10^{53}$ kg of matter. One light year is about $9.5 \times 10^{15}\,m$ - so that is a radius of roughly $4.4 \times 10^{26}\,m$ and a volume of roughly $3.6 \times 10^{80}\,m^3$. So that means a density of about $2.8 \times 10^{-28}\,kg/m^3$. If that matter were all Hydrogen, which has $6 \times 10^{26}$ atoms per kg, that would give us around $0.2$ atoms per $m^3$. So if my horrible calculations are any guide (and I'm very likely to have made an error), space is a vacuum mostly because the amount of matter in the observable universe is negligible. Why is space free from air molecules? Well, air is what we call the mix of gases in Earth's atmosphere, so this is a question about space near Earth specifically. Air is mostly molecular Nitrogen and Oxygen - $N_2$ and $O_2$. These are heavy enough that not many of them escape Earth's gravity. Also, space is big. I heard that even space has a small but finite number of molecules. If so, won't there be a drag in space? According to Wikipedia: Intergalactic space contains a few hydrogen atoms per cubic meter. By comparison, the air we breathe contains about $10^{25}$ molecules per cubic meter. That is such a large difference that space is effectively frictionless (at least for typical space vehicles constructed by humans). Bumping against 1 hydrogen atom is very different to bumping against 10000000000000000000000000 Nitrogen molecules.
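For anyone who wants to redo the horrible calculation, here it is as a short script with the same round inputs:

```python
import math

ly = 9.5e15                        # metres per light year
R = 46.6e9 * ly                    # radius of observable universe, ~4.4e26 m
V = 4.0 / 3.0 * math.pi * R**3     # ~3.6e80 m^3
M = 1e53                           # rough mass of ordinary matter, kg

rho = M / V
atoms_per_m3 = rho * 6e26          # ~6e26 hydrogen atoms per kg

print(f"volume  = {V:.2e} m^3")
print(f"density = {rho:.2e} kg/m^3")
print(f"atoms   = {atoms_per_m3:.2f} per m^3")
```

(It prints about 0.17 atoms per cubic metre with these inputs, which rounds to the 0.2 quoted above.)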
{ "source": [ "https://physics.stackexchange.com/questions/149977", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63394/" ] }
149,978
I am possessed! Yes, with the thought that if there actually is a Maxwell's Demon, then it would open the negligibly weighted door, which would ultimately make the second law invalid. But can the second law really be invalid? This is not my question. It is a universal law. So, by what logic would the Demon fail? Please don't say that work must be done to open the door. The door is so light that negligible work is done.
The resolution to Maxwell's demon paradox is mostly understood to be through Landauer's principle , and it is one of the most compelling applications of information science to physics. Landauer's principle asserts that erasing information from a physical system will always require performing work, and particularly will require at least $$k_B T \ln(2)$$ of energy to be spent and eventually released as heat. The concept of 'erasing information' is relatively tricky, but there are some pretty solid foundations to think that this principle is right. To apply it to the demon, you should realize that the demon consists of (at least) two parts: a sensor to detect when particles are coming, and an actuator to actually move the door. For the demon to work correctly, the actuator must act on the current instruction from the sensor, instead of the previous one, so it must forget instructions as soon as a new one comes in. This takes some work: there is some physical system encoding a bit and it will take some energy cost to flip it. Now, there are some criticisms of Landauer's principle, and it is not completely clear whether it is dependent on the Second Law of Thermodynamics or if it can be proved independently; for an example see this paper ( doi ). Nevertheless, even if it is a restatement of the Second Law, it carries considerable explanatory power, in that it clarifies how the Second Law forbids the demon from operating.
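For scale, the bound itself is tiny at room temperature; a quick evaluation:

```python
import math

k_B = 1.380649e-23        # Boltzmann constant, J/K

def landauer_bound(T):
    """Minimum heat released per erased bit at temperature T (kelvin)."""
    return k_B * T * math.log(2)

E_bit = landauer_bound(300.0)                       # room temperature
print(f"{E_bit:.2e} J per bit")                     # ~2.9e-21 J
print(f"{E_bit * 8e9:.1e} J per gigabyte erased")   # ~2e-11 J
```

The point is not the magnitude but the principle: the demon cannot drive this cost to zero, and $k_B T \ln 2$ per forgotten bit is exactly what is needed to save the Second Law.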
{ "source": [ "https://physics.stackexchange.com/questions/149978", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
149,983
How can an electron revolve in a circular orbit? Circular motion is accelerated motion, and an accelerating charged particle is a source of electromagnetic waves. So it should radiate energy and hence lose energy; where does it get this energy from?
{ "source": [ "https://physics.stackexchange.com/questions/149983", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/65797/" ] }
150,128
Today I learnt that microwaves heat food by blasting electromagnetic waves through the water molecules found in the food. Does that mean food with 0% moisture (if such a thing exists - dried spices?) will never receive heat from a microwave oven? And how in that case is a microwave able to melt plastics etc., which contain no obvious water?
Microwave heating is largely caused by the changing electric and magnetic fields (i.e. the "microwaves") which are emitted by your microwave oven affecting polar molecules . As the direction of the electric field changes over time, the polar molecules (often, of water) attempt to follow the field by changing their orientation inside the material to line up along the field lines in an energetically favorable configuration (namely, with the positive side pointing in the same direction as the field lines). As these molecules change direction rapidly ( millions of times per second at least ), they gain energy - which increases the temperature of the material. This process is called dielectric heating . However, water is not the only polar molecule in the world. You can test for yourself that most plastics don't heat in a microwave while most glass and ceramic objects do. So, a microwave oven melting your plastic bowl has more to do with it over-heating your food than over-heating that food's container. EDIT: After doing some research to address some questions brought up in the comments to this post, I've found some very interesting information about why glass and ceramics heat up in the microwave which I will share here. First of all, according to this article from the Royal Society of Chemistry so-called "earthenware" ceramics are fired at categorically lower temperatures than "stoneware." As a result, a non-negligible quantity of water molecules remain inside the now-seemingly-dry "earthenware," while the vast preponderance of water molecules in "stoneware" have been removed as a result of the higher firing temperature. The conclusion is that earthenware ceramics heat up in the microwave because they have the polar water molecules in them which undergo dielectric heating. On the other hand, stoneware (and apparently porcelain) will not heat in the microwave due to their respective lack of water molecules. Either way, I still wouldn't recommend microwaving your grandmother's porcelain china to find out. Second, glass' molecular structure is apparently locally tetrahedral but without long-range order (i.e. it is an amorphous solid) which means that there tend to be spaces in the molecular structure of the glass to accommodate ionic impurities (mostly sodium, see this explanation of how glass is made to get an idea of the chemicals that go into the final product). These impurities are only loosely bound and are able to move around within the amorphous structure of the glass. These ions of sodium or other elements have a net charge (they are ions after all) which means that the oscillating electric field produced by the microwave oven causes the ions to jostle back and forth, gaining energy. The idea is very similar to the rotations of polar molecules (which have an electric dipole but no net charge), but the mechanism is different (namely, translational energy rather than rotational energy). So in summary, ceramics apparently heat up because they still contain some water, while glass heats up mostly because of the presence of semi-free, charged ions.
{ "source": [ "https://physics.stackexchange.com/questions/150128", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/65873/" ] }
150,387
I'm a software / maths guy who would like to build a physical setup for generating quantum random numbers. I have no physics background, so bear with me. Background The project is for a public exhibit, so it's important to me that the setup should look cool, or at least interesting, and should reveal something about how it works. API calls to a QRNG on the internet wouldn't serve that purpose. It also need to be safe to have around people, albeit perhaps in a glass case or whatever. I don't have a big budget or any physics or engineering skills, so I need something that doesn't require much skill or specialist equipment to build. I don't mind learning on the job, though, and I might be able to enlist the help of a physicist further down the line. I don't actually need the numbers generated to exhibit "true randomness", and I expect larger-scale effects (temperature, vibration etc) will interfere with the quantum ones; but I do want to be able to honestly say that photon emission times (or whatever) are influencing the results. My very rough understanding is that this can be done by pointing a laser at a photocell; the resistance of the cell ought to vary slightly as the number of photons arriving varies. So if my software asks the photocell for its current value, and looks at the last few digits of that value, in theory the fluctuations ought to have a quantum origin. Questions Is the setup I've described feasible and likely to work with low-cost components? If it's not, can you suggest another approach? If it is, how would I find out what strength of laser and sensitivity of photocell I need to get the effect? (That is, how "big" is the fluctuation in power?) What other technical things do I need to consider? Note Inevitably this question is a mixture of physics and engineering -- I'm primarily looking for (in)validation of the general approach here, and will take specific engineering questions elsewhere if necessary. I'm hoping, perhaps in vain, that with a good definition of the physics, the engineering part can be accomplished by assembling a few stock components.
An alternative way to generate random numbers, that truly is quantum, and also quite easy: put a small radioactive source near a Geiger counter. Radioactive decay is a truly random event in the quantum sense, and is basically not subject to thermal noise at all. For maximum visual impact, replace the Geiger counter with a cloud chamber . That way you can literally see the consequences of quantumly-random events. You could make random numbers from it using a web cam and some basic image processing.
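A sketch of how the bit extraction might look in software; real hardware would supply the decay timestamps, so the exponential simulation below is only a stand-in, and the pairwise interval comparison (a common de-biasing trick for Poisson sources) is one choice among several:

```python
import random

def interarrival_times(rate_hz, n):
    """Stand-in for hardware: radioactive decay gaps are exponential."""
    return [random.expovariate(rate_hz) for _ in range(n)]

def bits_from_intervals(ts):
    """
    Compare successive pairs of gaps: t1 < t2 -> 0, t1 > t2 -> 1.
    Approximately unbiased even if the source rate drifts slowly.
    """
    bits = []
    for t1, t2 in zip(ts[0::2], ts[1::2]):
        if t1 != t2:                   # discard (vanishingly rare) ties
            bits.append(0 if t1 < t2 else 1)
    return bits

bits = bits_from_intervals(interarrival_times(20.0, 10_000))
print(f"{len(bits)} bits, mean = {sum(bits) / len(bits):.3f}")   # ~0.5
```

For the exhibit, the cloud chamber plus webcam version could feed frame timestamps of detected tracks into the same comparison scheme.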
{ "source": [ "https://physics.stackexchange.com/questions/150387", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/65823/" ] }
150,503
A moment ago, I was emptying bottles filled with water (2 liters) that are on the terrace of my house. As I did so, I remembered something I saw on TV some time ago (I don't remember when or where or exactly what), so I swirled the bottle in the horizontal plane several times, as if drawing a circle. To my surprise, after several laps a vortex was formed, and the bottle emptied in half the time of those simply put upside down. I repeated the experiment 20 times to be sure, and always got the same results. Bottle upside down: 20 seconds (approx). Bottle upside down with rotation to form a vortex: 10 seconds (approx). Clearly the vortex allows air to enter the bottle faster than in the other case. I would like to understand what happens in physical terms. EDIT: Another interesting fact is that in the first 6-7 seconds only the first liter is gone, and then in just two seconds (after the vortex is formed) the remaining water goes fast, very fast.
When water leaves the bottle, the pressure above it drops. This reduces the net force pushing the water out of the opening, until it stops and a bubble can rise up. When the bubble has left the mouth of the bottle, the water can start flowing again. The stop-start of the water, and the reduced pressure inside the bottle, contribute to the lower flow rate in the bubbling case. We can actually estimate the difference in efficiency. Case 1: vortex. Simplifying assumptions:
- the water comes down half the aperture and the air comes up the other half;
- air pressure is maintained inside the bottle at atmospheric pressure;
- the tangential water flow velocity was generated by swirling the bottle initially, and we only concern ourselves with vertical velocity.
For water height in the bottle of $h$, the vertical velocity is given (from conservation of energy) by $$v = \sqrt{2gh}$$ And the mass flow rate $M = \rho v \frac{A}{2}$ where $\rho$ is the density and $A$ the full area of the bottle opening. Case 2: bubbles. When the bottle is bubbling, the water will keep stopping and starting - but when it flows, it has the whole aperture available. But since the water needs to accelerate, then decelerate (as the pressure above the water drops, the velocity goes back down to zero), we can see that the mean velocity will be quite a bit lower. In fact, if the time between bubbles is $T$ and we assume that the pressure above the water is lowered by the weight of the column of water below it when the next bubble is formed, we can write an approximate expression for the net pressure: $$P_{net} = \rho g h (1 - \frac{t}{T})$$ On average, this is half the pressure experienced by the free flowing water. And using this mean pressure, the water needs to first accelerate, then decelerate. This means that the average velocity of the water will be half what it would otherwise be (if it didn't keep stopping and starting), and half again since the mean pressure difference is halved. This gives a mean flow velocity that is $\frac14$ of the free flowing value - but we have twice the aperture. The net result is that the flow with bubbles is about half as fast as with the vortex. Which, coincidentally, is exactly what you observed. Note that the above uses many simplifying assumptions - but the basic mechanics I described is plausible. If anyone has a more complete mathematical description, I invite them to offer it up.
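Integrating the two drain laws numerically makes the factor of two explicit, and even lands near the 10 s / 20 s observation. The geometry below (bottle cross-section, neck area, initial water height) is assumed, chosen to resemble a 2-litre bottle:

```python
import math

g = 9.81
A_bottle = 7.0e-3    # bottle cross-section, m^2 (assumed)
A_neck = 3.5e-4      # neck opening area, m^2 (assumed)
h0 = 0.25            # initial water height, m (assumed)

def drain_time(area_factor, speed_factor, dt=1e-4):
    """Integrate dh/dt = -(a_eff / A_bottle) * v(h) until the bottle empties."""
    h, t = h0, 0.0
    while h > 1e-6:
        v = speed_factor * math.sqrt(2.0 * g * h)
        h -= (area_factor * A_neck / A_bottle) * v * dt
        t += dt
    return t

# Vortex: half the aperture carries water at full Torricelli speed.
t_vortex = drain_time(area_factor=0.5, speed_factor=1.0)
# Bubbling: full aperture, but mean speed ~1/4 of free flow (see above).
t_bubble = drain_time(area_factor=1.0, speed_factor=0.25)

print(f"vortex: {t_vortex:.1f} s, bubbling: {t_bubble:.1f} s, "
      f"ratio = {t_bubble / t_vortex:.2f}")    # ratio -> 2.0
```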
{ "source": [ "https://physics.stackexchange.com/questions/150503", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57823/" ] }
150,548
An essential part of a guitar is its hollow body. Without it, the strings wouldn't be very loud; as far as I know, the purpose of the body is to set up some sort of resonance and make the sound louder. How does this work? How can an isolated system amplify sound? Where is the energy coming from?
It is not amplification! The purpose of the guitar body is to impedance and mode match between the string and the surrounding air. Intuition When an object vibrates, it pushes on the surrounding air, creating pressure waves which we hear as sound. A string vibrating alone without the body of the instrument doesn't make a very loud sound because the exchange of energy from the vibrating string to the air pressure waves is inefficient. Why is it inefficient? The fundamental reason is that the string is a lot stiffer than the surrounding air and has a small cross sectional area. This means that as the string vibrates with a given amount of energy, it doesn't actually displace much air. With the same energy in the motion, a larger, more mechanically compliant object (e.g. the acoustic guitar body) would do a better job at transferring the energy into the air and thus into your ears. Analysis The equation of motion of a vibrating string with vertical displacement $u$ and horizontal position $x$ is $$\frac{\partial^2 u}{\partial x^2} = \frac{1}{v^2} \frac{\partial^2 u}{\partial t^2}$$ where $v = \sqrt{T/\mu}$ is the velocity of sound in the string, $\mu$ is the linear mass density, and $T$ is the tension. If you consider, for example, the fundamental mode, then the solution is of the form $$u(x,t) = \sin(k x) f(t)\,.$$ Plugging this into the equation of motion gives you $$ \begin{align} -k^2 f &= \frac{1}{v^2} \ddot{f} \\ 0 &= \ddot{f} + \omega_0^2 f \end{align} $$ where $\omega_0^2 = (vk)^2 = (2\pi)^2T/(\mu \lambda^2)$ and $\lambda$ is the wavelength of the mode ($k \equiv 2\pi / \lambda$). This is just the equation of a harmonic oscillator. Now suppose we add air friction. We define a drag coefficient $\gamma$ by saying that the friction force on a piece of the moving string of length $\delta x$ is $$F_{\text{friction}} = -\delta x \, \gamma \, \dot{u} \, .$$ Note that $\gamma$ has dimensions of force per velocity per length. Drag coefficients are usually force per velocity; the extra "per length" comes in because we defined $\gamma$ as the friction force per length of string. Adding this drag term, re-deriving the equation of motion, and again specializing to a single mode, we wind up with $$0 = \ddot{f} + \omega_0^2 f + \frac{\gamma}{\mu} \dot{f} \, .$$ Now we have a damped harmonic oscillator. The rate at which this damped oscillator decays tells you how fast (i.e. how efficiently) that oscillator transfers energy into the air. The energy loss rate $\kappa$ for a damped harmonic oscillator is just the coefficient of the $\dot{f}$ term, which for us is $\kappa = \gamma / \mu$.$^{[1]}$ The quality factor $Q$ of the resonator, which is the number of radians of oscillation that happen before the energy decays to $1/e$ of its initial value, is $$Q = \omega_0 / \kappa = \frac{2\pi}{\lambda} \frac{\sqrt{T \mu}}{\gamma} \, .$$ Lower $Q$ means fewer oscillations before the string's energy has dissipated away as sound. In other words, lower $Q$ means a louder instrument. As we can see, $Q$ decreases if either $\mu$ or $T$ decreases. This is in perfect agreement with our intuitive discussion above: lower tension would allow the string to deflect more for a given amount of vibrational energy, thus pushing more air around and more quickly delivering its energy to the air. Impedance and mode matching Alright, so what's going on when we attach the string to a guitar body? We argued above that the lower tension string has more efficient sound production because it can move farther to push more air.
However, you know this isn't the whole story with the guitar body, because you plainly see with your eyes that the guitar body surface does not deflect even nearly as much as the string does. Note that the guitar surface has much more area than the string. This means that for a given velocity, the frictional force is much higher, i.e. $\gamma$ is larger than for the string. So there you have it: the guitar body has lower $T$ and higher $\gamma$ than the string. These both contribute to making the $Q$ lower, which means that the vibrating guitar body more efficiently transfers energy to the surrounding air than does the bare string. Lowering the $T$ to be more mechanically compliant like the air is "impedance matching". In general, two modes with similar response to external force (or voltage, or whatever) more efficiently transfer energy between themselves. This is precisely the same principle at work when you use index matching fluid in an immersion microscope to prevent diffraction, or an impedance matching network in a microwave circuit to prevent reflections. Increasing the area to get larger $\gamma$ is "mode matching". It's called mode matching because you're taking a vibrational mode with a small cross section (the string) and transferring the energy to one with a larger cross section (the guitar body), which better matches the waves you're trying to get the energy into (the concert hall). This is the same reason horn instruments flare from a tiny, mouth-sized aperture at one end to a large, "concert hall" sized aperture at the other end. [1] I may have messed up a factor of 2 here. It doesn't matter for the point of this calculation.
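To put rough numbers on the $Q$ formula above, here is a sketch; the tension, mass density and wavelength are plausible guitar-string values, while the two drag coefficients are pure illustrative assumptions, since $\gamma$ for a real string or soundboard is hard to pin down:

```python
import math

# Illustrative steel-string numbers (all assumed):
T = 70.0      # tension, N
mu = 5e-3     # linear mass density, kg/m
lam = 1.3     # wavelength of the fundamental (2 x scale length), m

def Q_factor(gamma):
    """Q = omega_0 / kappa = (2*pi / lam) * sqrt(T * mu) / gamma."""
    return (2.0 * math.pi / lam) * math.sqrt(T * mu) / gamma

# A bare string couples weakly to air (tiny drag per unit length)...
print(f"bare string (gamma = 1e-3): Q ~ {Q_factor(1e-3):.0f}")
# ...while the soundboard's large area behaves like a much bigger gamma:
print(f"via the body (gamma = 5e-2): Q ~ {Q_factor(5e-2):.0f}")
```

The absolute numbers are not to be trusted, but the trend is the point: a factor of 50 in $\gamma$ is a factor of 50 in how quickly the stored energy turns into sound.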
{ "source": [ "https://physics.stackexchange.com/questions/150548", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5788/" ] }
150,553
Consider a particle in two dimensions with position vector $r(t)=<x(t),y(t)>$, where the shape of the path is described by a function $y(t)=f(x(t))$ (thus $r(t)$ is a parametrization of $f$ with respect to time). Given some function $f$ and a speed $s$, how do we find the position vector $r(t)$ such that the particle always moves with constant speed $s$, given also some starting point? For example, consider the path shape described by $y=x^2$, and suppose we have a particle following this path beginning at the origin at $t=0$, always moving with constant speed $s=1$. How do we find the parametrization $r(t)$?
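One standard numerical route (sketched below with arbitrary grid choices, not offered as the only method) is arc-length reparametrization: the arc length along $y=f(x)$ is $s(x)=\int_0^x \sqrt{1+f'(u)^2}\,du$, and moving at constant speed means the arc length grows linearly in time, so one tabulates $s(x)$ and inverts it numerically:

```python
import numpy as np

f_prime = lambda x: 2.0 * x            # derivative of f(x) = x^2

# Tabulate arc length s(x) on a uniform grid (range/resolution arbitrary):
x_grid = np.linspace(0.0, 3.0, 10_001)
dx = x_grid[1] - x_grid[0]
ds = np.sqrt(1.0 + f_prime(x_grid) ** 2)
s_grid = np.concatenate(([0.0], np.cumsum((ds[1:] + ds[:-1]) / 2.0 * dx)))

def position(t, speed=1.0):
    """x(t), y(t) for a particle tracing y = x^2 at the given constant speed."""
    x = np.interp(speed * t, s_grid, x_grid)    # invert s(x) = speed * t
    return x, x**2

for t in (0.0, 0.5, 1.0, 2.0):
    x, y = position(t)
    print(f"t = {t:.1f}: r = ({x:.4f}, {y:.4f})")
```

For $y=x^2$ the arc-length integral has a closed form involving $\operatorname{arcsinh}$, but its inverse does not, which is why the numerical inversion is the practical approach.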
{ "source": [ "https://physics.stackexchange.com/questions/150553", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63691/" ] }
150,847
Graphene is always in the news nowadays, and its key features are that it is very strong, conductive, and transparent. It is so transparent that each layer of graphene will only absorb 2% of the light passing through it. But what is it about the structure of graphene which makes it (almost) transparent?
Graphene is only transparent because it is very thin (one atom thick). If it absorbs 2% per layer then just a few hundred layers would absorb almost all light and that would still be a very thin sheet of graphite. The question should be why does graphene absorb so much light compared to diamond which really is transparent? A simplified answer is that graphene is a very good conductor because it has only three covalent bonds per atom compared to the full four in diamond. This makes it possible for electrons to move freely over a sheet of graphene to conduct electricity. Like metals this means it will absorb or reflect light because the free electrons can absorb the small amount of energy in the photon. In diamond the photons would need to have enough energy to release an electron from the covalent bonds. For visible light this is not possible so the photons pass right through the diamond and are only stopped or deflected by impurities.
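The "few hundred layers" claim is just the 2% per-layer absorption compounding; a tiny script makes it vivid (300 layers is only about 0.1 µm of graphite):

```python
per_layer = 0.98     # each layer transmits ~98% (absorbs ~2%)

for n in (1, 10, 100, 300):
    # Layers stack multiplicatively, so N layers transmit 0.98**N.
    print(f"{n:4d} layers: {per_layer ** n:7.2%} transmitted")
```

By 300 layers, still far thinner than a sheet of paper, less than 0.3% of the light gets through.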
{ "source": [ "https://physics.stackexchange.com/questions/150847", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62834/" ] }
150,899
This was recently brought up, and I haven't been able to conclude a solid answer. Let's say we have two identical boxes (A and B) on Earth, both capable of holding a vacuum and withstanding 1 atm outside acting upon them. A is holding a vacuum, while B is filled with air (at 1 atm). Shouldn't they weigh the same as measured by a scale? Current thought process The following thought experiment suggests they'd have the same weight, but I haven't formulaically shown this, and everyone has disagreed so far. Take a box like B (so it's full of 1 atm air) and place it on a scale. Here's a cross section:

+------------+
|            |
|            |
|            |   <-- B box
|            |
+------------+
***********************
         |  |   <-- scale

Now, taking note of the scale readings, start gradually pushing down the top "side" (rectangle/square) of the box (assume the air can somehow escape out of the box as we push down):

|            |
+------------+
|            |
|            |
|            |
+------------+
***********************
         |  |

Then:

|            |
|            |
+------------+
|            |
|            |
+------------+
***********************
         |  |

etc., until the top side is touching the bottom of the box (so the box no longer has any air between the top and bottom sides):

|            |
|            |
|            |
|            |
+------------+
+------------+
***********************
         |  |

It seems to me that: 1) pushing the top of the box down wouldn't change the weight measured by the scale, and 2) the state above (where the top touches the bottom) is equivalent to having a box like A (just a box holding a vacuum). Which is how I arrived at my conclusion that they should weigh the same. What am I missing, if anything? What's a simple-ish way to model this?
The buoyant force on a body immersed in a fluid is equal to the weight of the fluid it displaces. In other words, $$ F_B = -\rho_{\text{fluid}} V_{\text{body}} ~g $$ The force of gravity on the body is equal to $$ F_g = m_{\rm body} ~g $$ The apparent weight of this body will therefore be equal to the sum of these two forces. $$ W_{\rm app} = -\rho_{\rm fluid} V_{\rm body}~g + m_{\rm body} ~g $$ When you add air into an evacuated box, the mass of the body (which is now $m_{\rm box} + m_{\rm air}$ ) increases, but the volume still stays the same. Therefore $W_{\rm app}$ must increase.
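Plugging in numbers makes the conclusion concrete. For an assumed 1-litre box, the apparent weights differ by exactly the weight of the enclosed air, about 0.012 N:

```python
g = 9.81
rho_air = 1.2          # kg/m^3 at roughly 20 C and 1 atm

V = 1e-3               # box volume: 1 litre (assumed)
m_box = 0.10           # mass of the empty shell, kg (assumed)

def apparent_weight(m_contents):
    m_body = m_box + m_contents
    buoyancy = rho_air * V * g          # weight of the displaced air
    return m_body * g - buoyancy

W_vacuum = apparent_weight(0.0)              # box A: vacuum inside
W_air = apparent_weight(rho_air * V)         # box B: 1 atm air inside

print(f"A (vacuum): {W_vacuum:.5f} N")
print(f"B (air):    {W_air:.5f} N")
print(f"difference: {W_air - W_vacuum:.5f} N")   # = rho_air * V * g
```

This also points at where the thought experiment's step 1 goes wrong: as the lid descends and air escapes, the contents on the scale lose the escaping air's weight bit by bit.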
{ "source": [ "https://physics.stackexchange.com/questions/150899", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/66174/" ] }
151,331
Suppose a ball is flying towards me at a speed of 10m/s and that, on impact, I feel "x" amount of pain. If, instead, it was me flying towards the ball at the same speed, with all other conditions being the same, would I feel the same amount of pain?
Look at it this way: Suppose you are in a train travelling at 10 m/s. Somebody inside the train throws a ball at you in the opposite direction at 10 m/s. You feel the pain belonging to your first experiment. However, somebody looking at this experiment from outside the train would say that the ball is standing still and you are travelling towards the ball at 10 m/s (= your second experiment). Since it is the same experiment, you will feel exactly the same pain.
{ "source": [ "https://physics.stackexchange.com/questions/151331", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57948/" ] }
151,402
My book says: for most small objects, both are the same; but for mammoth ones, they are really different; and in a gravity-less environment, the COG is absent while the COM still exists. OK, what's the big deal between small and big things? How do these two, the centre of mass and the centre of gravity, actually differ?
Both values are computed as a position weighted average. For the center of mass we average the mass in this way, while for the center of gravity we average the effect of gravity on the body (i.e. the weight). \begin{align} x_\text{com} &= \dfrac{\int x \, \rho(x) \,\mathrm{d}x}{\int \rho(x) \, \mathrm{d}x} \\ \\ x_\text{cog} &= \frac{\int x \, \rho(x)\, g(x) \,\mathrm{d}x}{\int \rho(x) \,g(x) \,\mathrm{d}x} \end{align} Now, in the usual Physics 101 "near the surface of the Earth" convention $g(x)$ is constant so these two are equivalent. However, if the body is big enough that we need to account for either the changing strength or changing direction of gravity then they are no longer the same thing.
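A numerical example shows the split for a deliberately "mammoth" object: a uniform rod standing 1000 km tall in Earth's inverse-square field (all numbers assumed for illustration). Its centre of mass sits at the midpoint, 500 km, but gravity is stronger near the ground, so the centre of gravity sits lower:

```python
import numpy as np

G, M_E, R_E = 6.674e-11, 5.972e24, 6.371e6

def g_at(h):
    """Local gravitational acceleration a height h above Earth's surface."""
    return G * M_E / (R_E + h) ** 2

h = np.linspace(0.0, 1.0e6, 200_001)   # heights along a 1000 km rod
w_mass = np.ones_like(h)               # uniform linear mass density
w_grav = w_mass * g_at(h)              # local weight per unit length

# Position-weighted averages (uniform grid, so the dx factors cancel):
x_com = np.sum(h * w_mass) / np.sum(w_mass)
x_cog = np.sum(h * w_grav) / np.sum(w_grav)

print(f"centre of mass:    {x_com / 1e3:.1f} km")    # 500.0
print(f"centre of gravity: {x_cog / 1e3:.1f} km")    # ~475, lower down
```

For a metre-sized object the same calculation puts the two centres a vanishing fraction apart, which is exactly the book's "for most small objects, both are the same".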
{ "source": [ "https://physics.stackexchange.com/questions/151402", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
152,927
Why is stress a tensor quantity? Why is pressure not a tensor? According to what I know, pressure is an internal force whereas stress is external, so why aren't both quantities tensors? I am basically confused about stress, pressure, and tensors. I am still in school, so please give a very basic answer.
Stress is a tensor 1 because it describes things happening in two directions simultaneously. You can have an $x$-directed force pushing along an interface of constant $y$; this would be $\sigma_{xy}$. If we assemble all such combinations $\sigma_{ij}$, the collection of them is the stress tensor. Pressure is part of the stress tensor. The diagonal elements form the pressure. For example, $\sigma_{xx}$ measures how much $x$-force pushes in the $x$-direction. Think of your hand pressing against the wall, i.e. applying pressure. Given that pressure is one type of stress, we should have a name for the other type (the off-diagonal elements of the tensor), and we do: shear . Both pressure and shear can be internal or external -- actually, I'm not sure I can think of a real distinction between internal and external. A gas in a box has a pressure (and in fact $\sigma_{xx} = \sigma_{yy} = \sigma_{zz}$, as is often the case), and I suppose this could be called "internal." But you could squeeze the box, applying more pressure from an external source. Perhaps when people say "pressure is internal" they mean the following. $\sigma$ has some nice properties, including being symmetric and diagonalizable. Diagonalizability means we can transform our coordinates such that all shear vanishes, at least at a point. But we cannot get rid of all pressure by coordinate transformations. In fact, the trace $\sigma_{xx} + \sigma_{yy} + \sigma_{zz}$ is invariant under such transformations, and so we often define the scalar $p$ as $1/3$ this sum, even when the three components are different. 1 Now the word "tensor" has a very precise meaning in linear algebra and differential geometry and tensors are very beautiful things when fully understood. But here I'll just use it as a synonym for "matrix."
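A small numpy sketch (the stress values are made up) shows both facts at once: the scalar pressure from the trace, and diagonalization removing all shear while leaving that trace invariant:

```python
import numpy as np

# An assumed symmetric stress state (Pa): normal stresses on the
# diagonal, shear stresses off the diagonal.
sigma = np.array([[-5.0,  2.0,  0.0],
                  [ 2.0, -3.0,  1.0],
                  [ 0.0,  1.0, -4.0]])

# Scalar pressure defined, as above, from one third of the trace
# (sign conventions vary between fields):
p = np.trace(sigma) / 3.0
print(f"p = {p:.2f} Pa")

# Diagonalise: in the principal-axis frame every shear component vanishes.
principal, axes = np.linalg.eigh(sigma)
print("principal stresses:", np.round(principal, 3))
print("trace is invariant:", np.isclose(principal.sum(), np.trace(sigma)))
```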
{ "source": [ "https://physics.stackexchange.com/questions/152927", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/56263/" ] }
152,932
Can anyone tell me why elastic collisions occur between atomic particles, inelastic collisions occur between ordinary objects, perfectly inelastic collisions occur during shooting, and super-elastic collisions occur during explosions? What is the reason for such differences?
{ "source": [ "https://physics.stackexchange.com/questions/152932", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/60433/" ] }
153,361
Suppose a wagon is moving at constant velocity on a friction-less surface, and rain begins to fill the wagon. The net force on the wagon is zero, so momentum is conserved; as the mass of the wagon increases, the speed decreases. But if the velocity of the wagon changes, the net force can't be zero, right? There has to be some force opposing the motion of the wagon to slow it down. How do we reconcile this?
To deal with this type of problem, you must be careful to define exactly what system you are dealing with, and then not change that system part way through the problem. This definition allows you to be very clear about whether the "system" has any external forces acting, and thus whether the momentum of the system is constant or not. In this case, you seem to be defining the wagon itself as the system, but then talk about the wagon as gaining weight, implying that the definition of what constitutes the wagon system is changing. Let's try this: the system is the wagon itself, without any stray mass that may be added. The wagon has a certain amount of momentum, and since there is no outside force of friction, that momentum is constant. But then the rain starts to fall. None of this rain is included in the system, even though it gets trapped inside the wagon. But in being trapped, the vertically falling rain also exerts a horizontal force on the system: either impacting the back of the wagon in the air, or hitting the bottom and flowing towards the back of the wagon. All this means that there is an external force exerted by the rain on the system, and the momentum of the system is not conserved. We can start over: the system now is defined as including the wagon and all the vertically falling water. Since the rain initially has no horizontal velocity, the total momentum of this new system is just that of the wagon. Now the rain starts hitting the wagon. But under our new definition, all of the rain impacting is an internal force, and cannot change the total momentum. This new system is isolated and momentum is conserved. So now, since more and more of the system is travelling with the wagon, the wagon must slow down. Internally, momentum is being transferred from the wagon part to the rain part of the overall system.
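A quantitative consequence of the second (isolated) system choice: the total horizontal momentum $p_0 = m_0 v_0$ is fixed, so the wagon's speed is simply $v(t) = p_0/(m_0 + \dot m t)$. A tiny sketch, with all numbers invented:

```python
m0, v0 = 100.0, 2.0      # initial wagon mass (kg) and speed (m/s) -- invented values
mdot = 0.5               # rain accumulation rate (kg/s), also invented
p0 = m0 * v0             # horizontal momentum of the wagon + captured-rain system

for t in (0, 60, 120, 300):
    m = m0 + mdot * t
    print(t, p0 / m)     # 2.0, ~1.54, 1.25, 0.8 m/s: speed falls, momentum stays p0
```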
{ "source": [ "https://physics.stackexchange.com/questions/153361", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63691/" ] }
153,538
This morning I saw an ant and suddenly a question came to my mind: how do ants actually carry items much heavier than themselves? What's the difference (in physics) between us and them?
Strength relative to body weight scales like surface area divided by volume, but since volume is directly proportional to mass and I can't get an accurate density (I am guessing approximately both for mass and size), I will use mass instead. According to Wolfram Alpha, the average mass of the human body is 70 kilograms. The surface area of a person weighing 70 kg with a height of 170 cm is 1.818 square meters. This gives us a mass/surface-area ratio of about $38.5 \frac{kg}{m^2}$. So now, how much does an ant weigh? This article provides a variety of different numbers, varying from 1 mg to 60 mg. Since the biggest ants will be soldiers, I assume that the approximation will be slightly smaller than 30 mg. Say 25 mg or 0.000025 kilograms. Now comes the interesting part. Not Wolfram, not even uncle Google knows the surface area of an ant. This Britannica page says that ants range from 2 mm to 25 mm. Let's eliminate the soldiers since they are huge. (A big worker would be as long as 8 mm.) That gives an approximation of 5 mm. I gave the animation industry a shot and tried to measure the surface area of this free ant model . The length of the ant is now 0.005 - let's call the units meters. This gives us a surface area of about $4.87\cdot 10^{-5}$, or $0.0000487$ square meters. So an ant that weighs 0.000025 kg with a length of 5 millimetres has a surface area of about $0.0000487 m^2$. This gives a mass/surface-area ratio of about $0.51 \frac{kg}{m^2}$. So, by this crude measure, the relative strength of an ant is about 75 times a human's. How much can a human carry while walking a long distance, maybe even climbing? Maximum 20 kilograms for most people. That is slightly more than a quarter of our weight (about 0.28). How much can an ant carry? About 1 gram - the weight of a leaf, or 40x the weight of an average ant. 40 divided by 0.28 ≈ 143, so by carrying capacity relative to body mass, ants are roughly 140 times stronger than we are.
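For the record, here is a sketch that just redoes the arithmetic above (every input number is the rough estimate quoted in the answer, not a measurement):

```python
human = 70 / 1.818              # kg of body mass per m^2 of surface area: ~38.5
ant = 0.000025 / 0.0000487      # ~0.51 kg/m^2

print(human / ant)              # ~75: the ant's scaling advantage by this measure
print(40 / 0.28)                # ~143: ratio of carrying capacity per body mass
```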
{ "source": [ "https://physics.stackexchange.com/questions/153538", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/59444/" ] }
153,543
The title pretty much asks the question. I was reminded of the Kinetic Theory of Gases recently and I am puzzled as to why gas molecules would have any translational motion. Is it because collisions are presumed to be perfectly elastic, like billiard balls bouncing around on a billiard table forever after a break? If so, where does the initial momentum come from?
{ "source": [ "https://physics.stackexchange.com/questions/153543", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/47612/" ] }
153,791
What is the exact use of the symbols $\partial$, $\delta$ and $\mathrm{d}$ in derivatives in physics? How are they different and when are they used? It would be nice to get that settled once and for all. $$\frac{\partial y}{\partial x}, \frac{\delta y}{\delta x}, \frac{\mathrm{d} y}{\mathrm{d} x}$$ From what I know, $\mathrm{d}$ is used for a small infinitesimal change (and I guess the upright letter $\mathrm{d}$ is the usual notation instead of italic $d$, simply to tell the difference from a variable). Of course we also have the big delta $\Delta$ to describe a finite (non-negligible) difference. And I have some vague idea that $\partial$ is used for partial derivatives in case of e.g. three-dimensional variables. Same goes for $\delta$, which I would have sworn was the same as $\partial$ until reading this answer on Math.SE: https://math.stackexchange.com/q/317338/ Then to make the confusion total I noticed an equation like $\delta Q=\mathrm{d}U+\delta W$ and read in a physics text book that: The fact that the amount of heat [added between two states] is dependent on the path is indicated by the symbol $\delta$... So it seems $\delta$ means something more? The text book continues and says that: a function [like the change in internal energy] is called a state function and its change is indicated by the symbol $\mathrm{d}$... Here I am unsure of exactly why a $\mathrm d$ refers to a state function. So to sum it up: down to the bone of it, what are $\delta$, $\partial$ and $\mathrm{d}$ exactly, when we are talking about derivatives in physics? Addition Especially when reading a mathematical process on a physical equation like this procedure: $$\delta Q=\mathrm{d}U+p\mathrm{d}V \Rightarrow\\ Q=\Delta U+\int_1^2 p \mathrm{d}V$$ It appears that $\delta$ and $\mathrm{d}$ are the same thing. An integral operation handles them the same way, apparently?
Typically: $\rm d$ denotes the total derivative (sometimes called the exact differential):$$\frac{{\rm d}}{{\rm d}t}f(x,t)=\frac{\partial f}{\partial t}+\frac{\partial f}{\partial x}\frac{{\rm d}x}{{\rm d}t}$$This is also sometimes denoted via $$\frac{Df}{Dt},\,D_tf$$ $\partial$ represents the partial derivative (derivative of $f(x,y)$ with respect to $x$ at constant $y$). This is sometimes denoted by $$f_{,x},\,f_x,\,\partial_xf$$ $\delta$ is for small changes of a variable, for example minimizing the action $$\delta S=0$$ For larger differences, one uses $\Delta$, e.g.: $$\Delta y=y_2-y_1$$ NB: These definitions are not necessarily uniform across all subfields of physics, so take care to note the author's intent. Some counter-examples (out of many more): $D$ can denote the directional derivative of a multivariate function $f$ in the direction of $\mathbf{v}$: $$D_\mathbf{v}f(\mathbf{x}) = \nabla_\mathbf{v}f(\mathbf{x}) = \mathbf{v} \cdot \frac{\partial f(\mathbf{x})}{\partial\mathbf{x}}$$ More generally $D_tT$ can be used to denote the covariant derivative of a tensor field $T$ along a curve $\gamma(t)$: $$D_tT=\nabla_{\dot\gamma(t)}T $$ $\delta$ can also represent the functional derivative : $$\delta F(\rho,\phi)=\int\frac{\delta F}{\delta\rho}(x)\delta\rho(x)\,dx$$ The symbol $\mathrm{d}$ may denote the exterior derivative , which acts on differential forms; on a $p$-form, $$\mathrm{d} \omega_p = \frac{1}{p!} \partial_{[a} \omega_{a_1 \dots a_p]} \mathrm{d}x^a \wedge \mathrm{d}x^{a_1} \wedge \dots \wedge \mathrm{d}x^{a_p}$$ which maps it to a $(p+1)$-form, though combinatorial factors may vary based on convention. The $\delta$ symbol can also denote the inexact differential , which is found in your thermodynamics relation$${\rm d}U=\delta Q-\delta W$$ This relation shows that the change of energy $\Delta U$ is path-independent (only dependent on end points of integration) while the changes in heat and work $\Delta Q=\int\delta Q$ and $\Delta W=\int\delta W$ are path-dependent because they are not state functions.
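As a quick check of the first two bullets, here is a small SymPy sketch for a made-up function $f(x,t)=x^2 t$ with $x=x(t)$, contrasting the total derivative (which picks up the chain-rule term) with the partial derivative (where $x$ is held fixed):

```python
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')(t)          # x depends on t

f = x**2 * t
print(sp.diff(f, t))             # total derivative: x(t)**2 + 2*t*x(t)*Derivative(x(t), t)

X, T = sp.symbols('X T')         # now treat x as an independent variable
print(sp.diff(X**2 * T, T))      # partial derivative: X**2 (x held fixed)
```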
{ "source": [ "https://physics.stackexchange.com/questions/153791", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4962/" ] }
153,904
As I learned today in school, my teacher told me that when light enters a glass slab it slows down due to the change in density, and it speeds up as it goes out of the glass slab. This causes a lateral shift, and the light emerges from a point different from the one it would otherwise have emerged from. Okay, so what I mean to ask is: when light enters at point A on the glass slab and emerges from point C, why does the light speed up? Where does it get back the energy it lost when it entered the glass slab? P.S.: Also, if I place a very, very large glass slab and make a beam of light pass through it, will the light never come out, since all its energy is lost as heat?
When light is propagating in glass or other medium, it isn't really true, pure light. It is what (you'll learn about this later) we call a quantum superposition of excited matter states and pure photons, and the latter always move at the speed of light $c$. You can think, for a rough mind picture, of light propagating through a medium as somewhat like a game of Chinese Whispers. A photon is absorbed by one of the dielectric molecules, so, for a fantastically fleeting moment, it is gone. The absorbing molecule lingers for a time of the order of $10^{-15}{\rm s}$ in its excited state, then emits a new photon. The new photon travels a short distance before being absorbed and re-emitted again, and so the cycle repeats. Each cycle is lossless : the emitted photon has precisely the same energy, momentum and phase as the absorbed one. Unless the material is birefringent , angular momentum is perfectly conserved too. For birefringent mediums, the photon stream exerts a small torque on the medium. Free photons always travel at $c$, never at any other speed. It is the fact that the energy spends a short time each cycle absorbed, and thus effectively still, that makes the process have a net velocity less than $c$. So the photon, on leaving the medium, isn't so much being accelerated but replaced. Answer to a Comment Question: But how does the ray of light maintain its direction? After it is absorbed by the first atom, how does it later know where to shoot the new photon? Where is this information preserved? A very good question. This happens by conservation of momentum. The interaction is so short that the absorber interacts with nothing else, so the emitted photon must bear the same momentum as the incident one. Also take heed that this is NOT full absorption in the sense of forcing a transition between bound states of the atom (which gives the sharp spectral notches typical of the phenomenon), which is what David Richerby is talking about. It is a transition between virtual states - the kind of thing that enables two-photon absorption, for example - and these can be essentially anywhere, not at the strict, bound state levels. As I said, this is a rough analogy: it originated with Richard Feynman and is the best I can do for a high school student who likely has not dealt with quantum superposition before. The absorption and free propagation happen in quantum superposition , not strictly in sequence, so information is not being lost, and when you write down the superposition of free photon states and excited matter states, you get something equivalent to Maxwell's equations (in the sense I describe in my answer here or here ) and the phase and group velocities naturally drop out of these. Another way of qualitatively saying my last sentence is that the absorber can indeed emit in any direction, but because the whole lot is in superposition, the amplitude for this to happen in superposition with free photons is very small unless the emission direction closely matches the free photon direction, because the phases of the amplitudes of the two processes only interfere constructively when they are nearly in phase, i.e. when the emission is in the same direction as the incoming light. All this is to be contrasted with fluorescence , where the absorption lasts much longer, and both momentum and energy are transferred to the medium, so there is a distribution of propagation directions and the wavelength is shifted. Another comment: There was a book which said the mass of a photon increases when it enters glass...
I think that book was badly misleading. If you are careful, the book's comment may have some validity. We're talking about a superposition of photon and excited matter states when the light is propagating in the slab, and this superposition can indeed be construed to have a nonzero rest mass, because it propagates at less than $c$. The free photons themselves always propagate at $c$ and always have zero rest mass. You actually touch on something quite controversial: these ideas lead into the unresolved Abraham-Minkowski Controversy .
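To put rough numbers on the "Chinese Whispers" picture, here is a toy estimate, not a real derivation: if each free flight covers a mean distance $\ell$ at $c$ and each absorb/re-emit cycle pauses the energy for a dwell time $\tau$, the net speed is $v_{\rm eff}=\ell/(\ell/c+\tau)$. Both numbers below are invented, chosen only so the result lands near glass's $n\approx 1.5$:

```python
c = 3.0e8          # free-photon speed (m/s)
tau = 1e-15        # dwell time per event (s), the order of magnitude quoted above
ell = 6e-7         # assumed mean distance between events (m) -- an invented number

v_eff = ell / (ell / c + tau)
print(v_eff, c / v_eff)    # ~2e8 m/s, implied refractive index ~1.5
```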
{ "source": [ "https://physics.stackexchange.com/questions/153904", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/64348/" ] }
153,926
The Earth moves through space at 67,000 MPH. The Milky Way moves through the Local Group at 2,237,000 MPH. Wouldn't you need a fixed point to be able to measure velocity against? After all, compared to the total speed of our Milky Way, the Earth isn't moving through space. What fixed points do we compare against? When we say the speed of light is $c$, what is that relative to?
There are two separate questions there. The easiest one to answer is how we measure the velocity of the Earth, Milky Way etc, because we measure it relative to the cosmic microwave background (or CMB ). If you measure the CMB in all directions and find it's the same in all directions then you are stationary in comoving coordinates . However if you find the CMB is blue shifted in one direction and red shifted in the opposite direction then you know you're moving relative to the comoving frame, and the change in the CMB is due to the Doppler shift. From the size of this change you can calculate your velocity. Measurements of the CMB from the Earth show exactly this Doppler shift , and that's how we can work out the velocity of the Earth. Having got this we can convert velocities measured relative to the Earth into velocities measured relative to the comoving frame. There are traps for the unwary here, because all velocities are relative and the comoving frame is in no sense an absolute way to measure velocity. It just happens to be a useful reference and one that tallies with our instinctive interpretation of velocity relative to the rest of the universe. This is discussed in the question Assuming that the Cosmological Principle is correct, does this imply that the universe possess an empirically privileged reference frame? . Lastly back to light. The velocity of light is special because every observer who makes a local measurement of the speed of light will always get the same value of $c$, regardless of what their velocity is. This is one of the building blocks of special relativity.
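Extracting the velocity from the dipole is a one-liner: to first order the Doppler shift gives $v \approx c\,\Delta T/T$. Using the measured dipole amplitude of about 3.36 mK on the 2.725 K monopole:

```python
c = 299_792.458      # speed of light (km/s)
T = 2.725            # CMB monopole temperature (K)
dT = 3.36e-3         # measured dipole amplitude (K)

print(c * dT / T)    # ~370 km/s: the solar system's speed through the comoving frame
```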
{ "source": [ "https://physics.stackexchange.com/questions/153926", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/67953/" ] }
153,937
What is currently stopping us from having a theory of everything? I.e. what mathematical barriers, or others, are stopping us from unifying GR and QM? I have read that string theory is a means to unify both; in this case, is it a lack of evidence that is stopping us, and is the theory mathematically sound?
One thing that stops us from having a theory of everything is actually quite simple. Gravity as we understand it, thanks to the strong equivalence principle, is not a force. It is entirely geometrizable because there is actually no coupling constant between a physical object and the "gravitational field". This means that there is no a priori way to discriminate the action of "gravity" on different objects: it acts the same for everybody (obviously, I'm not speaking about the interaction of EM with gravity and stuff here). On the contrary, quantum fields as we know them are defined on space-time, and therein exist coupling constants that tell you how the dynamics of an object are influenced by the value of the field on a given space-time point. In this respect, one can easily see that the question "if usual fields with coupling constants happen on space-time, where does space-time interaction happen?" hardly makes sense. This shows that a theory of everything has to treat space-time as something other than just a usual quantum field. Let's stick to Newtonian mechanics in order to understand what I mean by "no coupling constant". Let me remind you that in some inertial frame, the second law is $F = m_I a$, for some object of inertial mass $m_I$. Now, call $\phi(x,t)$ some potential. A physical object is said to interact with $\phi$ with a coupling constant $q_\phi$ if $F = - q_\phi \nabla \phi$. Now, what happens if the quotient $m_I/q_\phi = G$ is the same constant for all physical objects? Newton's second law shows the acceleration of an object that interacts with such a potential is the same for everyone, that is, $G a(t) = -\nabla \phi(x,t)$. This means that there's no way to discriminate physical objects by looking only at how they interact with $\phi$. Hence, we are always free to follow a "generalized" strong equivalence principle, which would stipulate that to be inertial is to be in "free fall" in the potential $\phi$. This would lead us to a geometric formulation of $\phi$ as a metric theory of space-time. There is therefore no need to introduce a coupling constant $q_\phi$ and to see the $\phi$-interaction as a force. Now, notice that this is exactly what happens for gravitation.
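A trivial numerical illustration of the "no coupling constant" point: integrating $F = m_I a$ with $F = -m_I g$, the inertial mass cancels identically, so bodies of any mass trace the same trajectory, which is the hallmark that lets gravity be geometrized. All values below are arbitrary:

```python
def fall(m, x0=10.0, v0=0.0, g=9.81, dt=1e-3, steps=1000):
    x, v = x0, v0
    for _ in range(steps):
        a = (-m * g) / m      # force / inertial mass: m cancels for every object
        v += a * dt
        x += v * dt
    return x

print(fall(1.0), fall(1000.0))   # identical final positions, whatever the mass
```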
{ "source": [ "https://physics.stackexchange.com/questions/153937", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/45874/" ] }
153,965
I want to develop a game where time runs backwards, based on the idea that physical laws are reversible in time. However, when I have objects at rest on the Earth, having gravity run backwards would mean that the objects are repelled and fall upwards. Obviously, that is not how things happen in the real world (objects stay at rest for a long time without having just fallen down), so what is my logic mistake? How can I start planning a simple physics engine with a reversed time flow and gravity without that happening?
The direction of the gravitational force would not change under time reversal. Your object would feel a force downward, just as it does usually. It might be easier to imagine you had a movie of an object under the influence of gravity. Drop the ball from rest some distance above the floor. You'll see it move downward and speed up. You'd interpret this as a gravitational force downward. Then, upon playing the movie in reverse, you'll see the ball move upward with decreasing speed . This observation is still consistent with a gravitational force downward.
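You can check this with a few lines of numerical integration: run a drop forward, flip the sign of the velocity (time reversal), and integrate again with gravity still pointing down. The ball retraces its path, just as the reversed movie suggests. A minimal sketch:

```python
g, dt, steps = 9.81, 1e-4, 20_000
x, v = 5.0, 0.0              # released from rest at 5 m

for _ in range(steps):       # forward in time
    v -= g * dt
    x += v * dt

v = -v                       # "play the movie backwards": reverse the velocity only

for _ in range(steps):       # gravity is unchanged, still downward
    v -= g * dt
    x += v * dt

print(round(x, 2), round(v, 2))   # back near (5.0, 0.0)
```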
{ "source": [ "https://physics.stackexchange.com/questions/153965", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/41007/" ] }
154,227
I touched a tree that was touching an electric fence and got an electric shock. How was this possible if wood is an insulator? The tree wasn't wet either, and it was a pretty strong shock too.
Trees are not as good an insulator as you might think. This source suggests a typical conductivity of living tree sap is 0.01 S/m with a relative permittivity of 80. So not an insulator, though a poor conductor. Typical advice when using electric fencing is that you do not use wooden posts! Presumably because wet wood is also conductive to some extent. In any case, all that is required is that the tree acquired an electric potential and that you were more resistive than the path between the fence and you through the tree. It's the "volts that jolt". The current flow through the tree and you, would have been very small. I would expect that the jolt would be maximised if you touched the tree near where it touched the fence or at least at the same height as where it touched the fence - thus minimising the resistance along the path to you. EDIT: Oven dried wood has a conductivity of $\sim 10^{-15}$ S/m (i.e. 13 orders of magnitude lower), so it would be fair enough to call that an insulator for most practical purposes.
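For a rough feel of the numbers, treat the path through the tree as a resistor, $R = L/(\sigma A)$. Everything below is an assumed order-of-magnitude guess (path length, sap cross-section, fence voltage, body resistance), not data about the actual incident:

```python
sigma = 0.01           # conductivity of living sap, S/m (figure quoted above)
L, A = 2.0, 1e-3       # guessed path length (m) and conducting cross-section (m^2)
V = 5000.0             # a typical fence-energizer pulse voltage, assumed
R_body = 1e3           # rough human body resistance (ohm)

R_tree = L / (sigma * A)
I = V / (R_tree + R_body)
print(R_tree, I)       # ~200 kOhm and ~25 mA: a small, brief, but very noticeable jolt
```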
{ "source": [ "https://physics.stackexchange.com/questions/154227", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57434/" ] }
154,443
The general equation for the force of friction (static or kinetic) is $F_f = \mu F_N$, where $F_f$ is the force of friction and $\mu$ is the coefficient of friction (its value depends on the pair of surfaces in contact). Why is this equation so simple, and why does it not contain any other variables that account for the force of friction on an object? Does anyone know how this equation was developed?
It's so simple because it's only a first order approximation model to how friction actually works. There are several other models , but to use them you usually need more parameters or other pieces of information about the system (for example, if there are fluid lubricants involved, the pattern of the surface, the materials involved, etc). The model $F_f = \mu F_N$ is called Coulomb model of friction . It assumes 3 important laws: 1. Amonton's first law of friction The magnitude of the friction force is independent of the area of contact. This law dates back to Leonardo da Vinci: 2. Amonton's second law of friction The magnitude of the friction force is proportional to the magnitude of the normal force. Here is an example of experimental data showing the dependence of friction with normal force: The slope gives the friction coefficient: $\mu = F_f/F_N$. This also dates back to Leonardo da Vinci, who noticed that if the load of an object was doubled, its friction would also be doubled. 3. Coulomb's law of friction The kinetic friction is independent of the sliding velocity. This is only somewhat true for small changes in velocity. Some models account for this dependence: a) Coulomb model (without static friction) b) Coulomb model + viscosity (without static friction) c) Coulomb model + viscosity d) Coulomb model + viscosity + Stribeck effect Limitations Here is an example of experimental data showing the dependence of friction with velocity: Here is an example showing non linearity with respect to the normal force: The author comments on the graph above: What’s going on here? Let’s look at the data for the teflon (the blue data). I fit a linear function to the first 4 data points and you can see it is very linear. The slope of this line gives a coefficient of static friction with a value of 0.235. However, as I add more and more mass to the friction box, the normal force keeps increasing but the friction force doesn’t increase as much. The same thing happens for friction box with felt on the bottom. This shows that the “standard” friction model is just that – a model. Models were meant to be broken. Here is another simple article about the limitations of the Coulomb model of friction.
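For anyone wanting to play with these, here is a sketch of model (d), Coulomb + viscosity + Stribeck, in one common functional form; all parameter values are illustrative, not fitted to any data:

```python
import numpy as np

def friction(v, Fc=1.0, Fs=1.5, vs=0.1, b=0.4):
    """Coulomb + viscous + Stribeck friction force as a function of sliding speed."""
    stribeck = (Fc + (Fs - Fc) * np.exp(-(v / vs)**2)) * np.sign(v)
    return stribeck + b * v    # static overshoot at low v, then a viscous rise

for v in (0.01, 0.1, 0.5, 2.0):
    print(v, round(float(friction(v)), 3))   # ~1.5, 1.22, 1.2, 1.8: dip then rise
```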
{ "source": [ "https://physics.stackexchange.com/questions/154443", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/68179/" ] }
154,461
I have had this question in my mind for a long time, I thought you guys might enlighten me easily. I am confused about some space photographs and claims like "this galaxy is 13 billion light years away from us.": how can we take a photograph of something that far? If it is 13 billion light years away wouldn't it take 26 billion light years to take those pictures? Today this post led me to ask this question at last: a space picture There certainly is something I don't know about photography or light years; if you could tell me the logic behind this, I would appreciate it. I am not a physicist or any science guy, so please tolerate my ignorance.
The error is probably in this statement if it is 13 billion light years away wouldn't it take 26 billion light years to take those pictures? I think you are imagining that cameras send out light to the objects, and when this light comes back, record it as an image. Not really. Cameras merely record the light they see from that area. So if that area is 13 billion light years away (not sure how credible the source is) then all that means is that the light you are capturing today is the light that galaxy emitted 13 billion years ago. Imagine for instance Anna and Bob are playing catch with a ball. Anna throws the ball to Bob. Bob receives the ball, and says the ball came at 3:00pm sharp. But the ball was in the air for 1 minute (Anna is a slow thrower). That means Anna threw the ball at 2:59, even if Bob recorded it at 3:00. In this scenario, Bob is acting much like a camera acts, by receiving information (in this case a ball; in a camera's case it would be light from galaxies). The reason that Hubble took photos for 4 months (this might be wrong, I'm no good with photography) is that the longer it receives the information, the more 'background' light that we don't want to capture can be removed. Hope this makes sense. P.S. I may have misunderstood the question. You say if it is 13 billion light years away wouldn't it take 26 billion light years to take those pictures? as if light years are a measure of time. A light year is a measure of distance, the distance light travels in a year in a vacuum.
{ "source": [ "https://physics.stackexchange.com/questions/154461", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/68194/" ] }
154,651
Context Why is it so easy to create audible sounds in life with basically anything? Putting your cup of coffee on a table comes with a sound Turning a page of your book comes with a sound Even something as soft as a towel creates sound when you move it or unfold it. When leaves on the ground are moved by wind, one hears the sound! Of course the list doesn't have an end, but you get the point. My own guess is that: it must be related to the average density of air around us, which as it so happens, makes air compression caused by our day-to-day activities audible. The fact that our hearing frequency range extends from 20 Hz to 20,000 Hz is most probably due to evolutionary reasons, namely that those with poorer hearing range had a harder time to survive. But that's another story. Question All that aside, what criteria need to be fulfilled for an acoustic sound to be audible? i.e. fit into our hearing range. I would imagine that for a complete picture of the problem, there are many factors to consider e.g. : Density of the object $\rho_o$ creating the sound Density of air $\rho_{\rm air}$, for simplicity let's assume it is constant, i.e. fixed latitude! Speed $v$ of the moving object The object's cross section $S$ (probably a very crucial factor as it goes hand in hand with the intensity of the acoustic wave I'd imagine) The object's surface details: rough, soft, hard, flat etc. … Any back of the envelope estimation with the minimum necessary number of factors to take into account will do fine!
We can consider four aspects of your question: Why do most events generate sound? What sounds get propagated? What does it take for sound to be detected? Has evolution got anything to do with this? 1 - generating sound Most of the sounds you describe are "broad band". Remember that a delta pulse (short sharp shock) is basically "all frequencies", although in reality a pulse of finite duration will not contain the very highest frequencies. Now it turns out (see for example my earlier answer on this topic ) that it takes an absolutely TINY motion (less than an atom's width) to generate an audible sound pulse - so we can safely say "every motion makes a sound; most motions make audible sound". 2 - propagation of sound Like all finite-sized sources of energy, once you are a reasonable distance (reasonable compared to the size of the object generating the sound) away, sound intensity falls off as the inverse square of the distance (barring mechanisms to contain the direction of propagation: tunnels, mountains etc). This means that sound will typically remain audible for roughly the same distance as the object making it remains visible/interesting. Certain very loud sources (e.g. crickets) are an exception to this rule - but they are deliberately trying to be heard a long way off (see point 4). Sound is also attenuated by air - according to Stokes's Law, the attenuation coefficient $\alpha \propto \omega^2$, meaning that higher frequencies are absorbed more strongly (because of viscous interactions in the air). From the Bruell & Kjaer website: Low frequencies really only get attenuated according to the inverse square law, but higher frequencies are attenuated more strongly. 3 - detecting sound In order to detect sound, a membrane needs to be moved. This motion then has to somehow be conveyed to the nervous system, which is water-based and therefore has a very different acoustic impedance than air ($z_0 = \rho c$ - so when density increases by 1000x and speed of sound by 4x, you have a mismatch...). The mechanism in the ear (tympanic membrane, malleus, incus, stapes, oval window, cochlea) is a beautiful piece of engineering to create something of an acoustic match, and works quite well over a range of frequencies. Unfortunately, for very low or very high frequencies, bits of that mechanism stop working so well - the finite mass (inertia) of the components makes them more reluctant to move at high frequencies. This again puts an upper limit on the frequency we can hear. However, the "amplification" that the entire organ provides is exquisite - as I computed in the answer linked above this means you can hear tiny, tiny vibrations. 4 - evolution The human body is a wonderful machine, refined by aeons of evolution - "she who hears the approaching predator lives to procreate another day". The combination of "everything disturbs the air around it" and "we are designed to detect the slightest sound" is the answer to your question.
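A back-of-the-envelope companion to point 2: combine spherical spreading with an absorption term scaling as $f^2$, per the Stokes-law behaviour above. The absorption is normalised to an invented 5 dB/km at 1 kHz purely for illustration; real atmospheric values depend strongly on humidity and temperature:

```python
import numpy as np

def loss_dB(r, f, r0=1.0, a_1kHz=5.0):
    spreading = 20 * np.log10(r / r0)                    # inverse-square law
    absorption = a_1kHz * (f / 1000.0)**2 * r / 1000.0   # ~f^2 viscous absorption
    return spreading + absorption

for f in (100, 1_000, 10_000):
    print(f, round(loss_dB(1000.0, f), 1))   # 60.1, 65, 560 dB at 1 km: highs die first
```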
{ "source": [ "https://physics.stackexchange.com/questions/154651", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62173/" ] }
154,700
Would this be because tidal deceleration causes the Earth to spin faster, or are there other actions in play that I haven't considered? Would the Earth even spin faster because of tidal deceleration?
Suppose the Moon didn't orbit the Earth at all, so it just stayed at some fixed point while the Earth rotated underneath it: In this case every point on the equator would pass directly under the Moon every 24 hours, and we'd get a high tide every 24 hours. (There's another high tide when we're exactly on the opposite side of the Earth to the Moon, but let's ignore that for now.) But the Moon does orbit the Earth in a prograde orbit, and that means after 24 hours the Earth has revolved once but the Moon has also moved on a bit: So to get directly under the Moon again takes a bit longer than 24 hours. From memory it's about 45 minutes longer, so the interval between every two high tides is about 24 hours and 45 minutes. Now suppose the Moon moved in a retrograde orbit: This time as you're revolving around with the Earth the Moon moves towards you, so it takes about 45 minutes less to get directly under the Moon again. If the Moon were in a retrograde orbit every two high tides would be separated by about 23 hours and 15 minutes. That's why tides would be more frequent if the Moon were in a retrograde orbit. The difference between the prograde and retrograde orbit tide timings is about 90 minutes per 24 hours, which is about 7%. That's where your teacher gets the 7% figure from.
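The timing claim is easy to check with round numbers. With Earth's sidereal rotation period of about 23.93 h and the Moon's sidereal orbital period of about 27.32 days, the time to pass under the Moon again is $1/(\omega_E \mp \omega_M)$ for prograde/retrograde orbits. With these inputs the offset comes out nearer 50 minutes each way than the remembered 45, and the total spread is about 7% of a day, matching the teacher's figure:

```python
sidereal_day = 23.934          # hours
lunar_month = 27.32 * 24       # hours

w_E = 1 / sidereal_day         # rotation/orbit rates in revolutions per hour
w_M = 1 / lunar_month

prograde = 1 / (w_E - w_M)     # Moon moving with the rotation
retrograde = 1 / (w_E + w_M)   # Moon moving against it
print(prograde, retrograde)              # ~24.8 h vs ~23.1 h
print((prograde - retrograde) * 60)      # ~105 minutes difference, ~7% of a day
```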
{ "source": [ "https://physics.stackexchange.com/questions/154700", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/68298/" ] }
154,951
Consider this picture of sun beams streaming onto the valley through the clouds. Given that the valley is only (at a guess) 3 km wide, simple trigonometry on the angles of the beams puts the light source a few tens of km away at most. What is wrong with this analysis?
This picture ( source ) should pretty much answer your question: The train's destination is not above the ground, but rather far away, and perspective means that the tracks appear not to be parallel but instead to converge to the vanishing point. The same applies to the beams of light above them. The Sun is very far away and the beams are pretty much parallel, but they're pointing towards you, and perspective makes them appear to converge towards the vanishing point - which in this case is the Sun's location in the sky. The technical term for these beams is "crepuscular rays." Occasionally, when the Sun is very low on the horizon, you can see "anticrepuscular rays," where the beams seem to converge to a different point on the opposite side of the sky to the Sun. Here's an example: ( source ) This happens for the same reason - the rays are really parallel, and there's another vanishing point in the opposite direction from the Sun.
{ "source": [ "https://physics.stackexchange.com/questions/154951", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
155,539
From school physics I know that material objects bounce off a plane surface at the same angle, losing some kinetic energy. In the same school I was taught that light (and waves in general) obeys this principle too. Obviously, in the case of light the plane surface should be a perfect mirror. But I can't understand how this should work from the quantum point of view. Let's assume that our mirror consists of a single silver atom. Why should the electrons of this atom re-emit consumed photons at some specific angle?
That's a good question. Without realising it you have stumbled across the Huygens-Fresnel principle . The starting point is that a single silver atom is far smaller than the wavelength of light, so any scattering from it will be isotropic i.e. it will scatter the light equally in all directions. But suppose we have two silver atoms side by side. Each atom will scatter isotropically, so in effect we have two closely spaced emitters of light and the system behaves like a Young's slits setup . Now the light isn't simply isotropically scattered, but instead it's scattered into preferred directions. (I'm oversimplifying because two atoms would be too closely spaced to act as Young's slits, but bear with me.) Now add lots of atoms in a row, and you get something like a diffraction grating . Add lots more to make a 2D surface, then add more layers of silver atoms below, and you're building up a system where the overall light scattering is the sum of individual scattering from huge numbers of individual silver atoms. This is basically the Huygens construction, and if you do the sums for a surface you can show that the overall scattering is only non-zero when the angle of reflection is equal to the angle of incidence. Any optics textbook should have the calculation, or a quick Google found an example here .
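You can watch the Huygens sum select the specular angle numerically. Below, a row of identical isotropic scatterers is driven with the phase delivered by a plane wave at 30°; summing the re-radiated amplitudes over outgoing directions, the intensity peaks at 30° again. This is a schematic toy (atom spacing and count are invented), not a model of a real silver surface:

```python
import numpy as np

lam = 1.0                       # wavelength (arbitrary units)
k = 2 * np.pi / lam
N, d = 200, 0.2 * lam           # N scatterers, sub-wavelength spacing (no grating lobes)
theta_i = np.radians(30)

xs = np.arange(N) * d
phase_in = k * xs * np.sin(theta_i)       # phase the incident wave hands each atom

angles = np.radians(np.linspace(-89, 89, 3561))
amp = [abs(np.sum(np.exp(1j * (phase_in - k * xs * np.sin(th))))) for th in angles]
print(np.degrees(angles[int(np.argmax(amp))]))   # ~30.0: reflection angle = incidence
```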
{ "source": [ "https://physics.stackexchange.com/questions/155539", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/37179/" ] }
155,606
What does it mean that the Fermi level for some semiconductors lies in the band gap? Is the definition of the Fermi level here different from the usual one? We define the Fermi level as the highest energy level occupied by electrons. Then what is missing here?
No, the definition of the "Fermi level" doesn't change; it is only its relative position in the band structure that changes from material to material. Definition : The Fermi energy or Fermi level $E_F$ is the chemical potential of electrons at $T=0.$ The states that are filled at $T=0$ are called the Fermi sea. In a solid the energy eigenvalues are filled in accordance with the Fermi-Dirac distribution, which says that the probability of a state of energy $E$ being occupied by an electron is given by: $$f(E)=\frac{1}{e^{(E-E_F)/k_BT}+1}.$$ Now what does it mean when the Fermi level lies in the band gap or instead in one of the bands? Here's an overview using a nice diagram from wikipedia : The y-axis is the energy and the x-axis the density of states (of electrons) available for the band structures of different materials. Metals : are very good conductors because the Fermi level lies inside one of the bands, meaning there's no energy gap to overcome for an electron in the valence band to pass on to the conduction band, which explains why metals are populated by quasi-free electrons in the conduction band. Semi-metals : Very similarly to metals, $E_F$ lies in one of the bands, but unlike metals, here the overlap between valence and conduction bands is very small. But still no gap! Semiconductors : For semiconductors, there's a small energy gap between the valence and conduction bands, and the states in the gap are not available to the electrons. So in order to have moving charge carriers, electrons and holes, the system first has to be excited (thermally, or with an applied E-field) to overcome the gap $E_g$ and be able to occupy the available states in the conduction band. For semiconductors the gap is usually very narrow, $\leq 4\ \mathrm{eV}$ or so, meaning a specific minimum amount of energy is required for the transition. Because this gap is so small, the properties of semiconductors lie between those of conductors and insulators, but unlike metals, semiconductors increase their conductivity with temperature, because more electrons are thermally excited across the gap with increasing T. Finally, as you see in the above diagram, $E_F$ lies in the gap for semiconductors, which means occupied and unoccupied states in the valence and conduction bands respectively are energetically separated. The neat thing about semiconductors is that they can be n and p doped, which respectively causes the Fermi level to shift towards the conduction band and the valence band. Finally for insulators , the energy gap between the two bands is so large that it becomes very difficult to excite electrons towards the conduction band, hence their poor conductivity. A final point: one may ask where do these gaps come from anyway? For solids, when you take the nearly free electron model, add a periodic potential $V(\mathbf{r+R})=V(\mathbf{r})$ to the picture ($\mathbf{R}$ the lattice vector), and solve the Schrödinger equation using degenerate perturbation theory, you see that a gap opens up between the energy eigenvalues at the zone boundary in momentum space $\mathbf{k}$, i.e. $$E_{\pm}=\epsilon_0(\mathbf{k})\pm |V_G|$$ $V_G$ being the Fourier component of the added periodic potential at the zone boundary. In conclusion, when electrons are subject to a periodic potential, gaps arise in their dispersion relation. Thus the electron spectrum breaks into bands, with a forbidden energy gap between the two bands.
Clarifications added in regard to the question asked by Calmarius in comments: Question: From this answer it's still not clear to me why would the fermi level lie in a forbidden gap. Based on what I have found on the internet they say fermi level is the maximum energy level occupied by electrons on 0K "the top of the Fermi sea". For semiconductors this mean all electrons are in the valence band. So why don't the fermi level lies right at the top of the valence band instead? It's not clear to me. Don't worry, your confusion is well placed and your question is rather common. Note that some of the things I describe here may or may not already be clear to you, but I have to make sure they're mentioned. First things first, the statement that "the Fermi level is the maximum energy level occupied by electrons at 0 K" is completely wrong (it would actually be a more valid definition for the valence band energy), and for some reason seems to be a common misunderstanding, as you have realised yourself by looking through the web. You could add a correction to that definition, by saying that "the maximum energy level occupied by electrons in the valence band always lies below the Fermi level", and this would say something more accurate about the Fermi level. The most general definition though, which is always correct, is: "the chemical potential at 0 Kelvin". Frequently the Fermi level is wrongly defined as the energy of the most energetic occupied electron state in a system; this definition leads to errors as soon as you have discrete energy eigenstates, in other words when there's a gap between the most energetic occupied state and the least energetic unoccupied state in the system. The correct definition is the chemical potential at $T=0\ K$, which will lie halfway between the minimum of the conduction band and the maximum of the valence band. It is exactly in the middle, i.e. $E_F = \tfrac{1}{2}(E_c+E_v),$ when you have e.g. an intrinsic semiconductor, meaning equal densities of conduction band electrons $n_0$ and free valence band holes $p_0$, usually written as $n_0 = p_0 = n_i$ (i for intrinsic). This equation should already resolve some of your confusion about what defines $E_F.$ As you dope the semiconductor, the latter turns into an inequality, e.g. $n_0>n_i>p_0$ for n-doped. To simplify the picture, imagine a metallic bulk made of a periodic array of atomic cores, without the electron cloud necessary for electrostatic neutrality. Suppose now electrons are added to this array of cores until neutrality is reached. The first electrons added go into the lowest possible kinetic energy states, or as seen in k-space, states of smallest wave-vectors. The next electrons, due to the Pauli exclusion principle, must build up occupancy of successive shells in k-space, of ever higher energy (the arrangement of these levels depends on both the dimensionality of the system and the boundary conditions). As soon as enough electrons are supplied to reach neutrality, a chemical potential is established (assuming this neutrality was reached at 0 K), which corresponds to the Fermi level; note again that this chemical potential will not be equal to the energy of the highest occupied k-state.
Now an important remark: if you kept trying to continuously add electrons to the system, the available momentum states allowed in the system do not form a continuum (because of the periodic arrangement of the cores, in simple terms), and when you solve the Schrödinger equation for the simplest such model, you immediately come across an energetic separation, the band gap, between the valence band (where the electrons are locked in fixed k-states) and the conduction band, which must be overcome for any further addition of electrons to the system. In a nutshell, in any system the density of free holes in the valence band and of electrons in the conduction band determines the Fermi level. The Fermi level for semiconductors is always between the two bands, its exact position determined by the doping concentration. Recommended references: The Oxford Solid State Basics, by Steven H. Simon. Fundamentals of Semiconductors, 4th edition, by Peter Y. Yu and Manuel Cardona.
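A numeric illustration of why midgap works for an intrinsic semiconductor: with the Fermi-Dirac function from above and a silicon-like 1.12 eV gap, placing $E_F$ at midgap makes the electron occupation at the conduction-band edge equal to the hole occupation at the valence-band edge. (Exactly equal here because this sketch implicitly assumes symmetric bands; real materials shift $E_F$ slightly off midgap via the effective densities of states.)

```python
import numpy as np

kB = 8.617e-5                          # Boltzmann constant, eV/K

def f(E, EF, T):                       # Fermi-Dirac occupation probability
    return 1.0 / (np.exp((E - EF) / (kB * T)) + 1.0)

Ev, Ec = 0.0, 1.12                     # valence/conduction band edges (eV), Si-like
EF = 0.5 * (Ev + Ec)                   # midgap Fermi level

for T in (100, 300, 600):
    print(T, f(Ec, EF, T), 1 - f(Ev, EF, T))   # electrons at Ec == holes at Ev
```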
{ "source": [ "https://physics.stackexchange.com/questions/155606", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/60620/" ] }
155,989
I woke up recently to find the following structures on my lawn; they resemble bubbles, but are formed from ice (we had a moderate frost overnight). There were eight of these 'bubbles' on my lawn and one smashed one on the adjacent path. They were only present in my garden, and nowhere else. What processes could have allowed these 'bubbles' to form in this shape? 'Bubbles' on grass: Damaged 'bubble' This was found on the paving slab and was not touched:
I think this is a frost flower , crystallofolia , closely related to hair ice . They can appear quite similar to the above photos. Pluchea odorata — Marsh Fleabane — Mown Stem on Path December 25, 2007 — 26° F — N. Hays County, Texas Capillary action sucks up water from cracks in plant stems or just water present inside wood, freezing as it gets exposed to air. This makes more water arrive, displacing the ice outwards forming long petals along cracks, ridges or other structures. A lot depends on cracks forming in the right directions and supplying water at the right rate.
{ "source": [ "https://physics.stackexchange.com/questions/155989", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/68794/" ] }
156,339
When a drop of water falls into a reservoir of water from a high enough altitude, water droplets will splash ( image credit ): My question: Does the water in those droplets come from the original drop or from the reservoir?
When a drop of water impacts onto a liquid surface 'pool' of water, we can observe one or more phenomena: The drop 'bounces' then 'floats' on the surface of the pool. The drop 'coalesces' into the pool. The drop 'splashes' on the pool, creating a 'crown' around a 'crater'. Which will occur depends upon the size and velocity of the droplet. Specifically, the collision can be characterised by the ratio of the inertial forces to the surface force, given by the (dimensionless) Weber number, $We$, defined as: $$We=\frac{\rho v^2 d}{\sigma}$$ where $\rho$ is the density of the droplet, $v$ is the velocity, $d$ is the droplet diameter and $\sigma$ is the surface tension of the droplet fluid. When $We$ is above a threshold of $\approx 84$, the impact is characterised as 'splashing', whereby a 'crown' is formed around the crater and a column of water rises from the middle. 1 In order to determine whether the original drop is present in the 'recoiled' water, a series of photos were taken at successive time intervals 0.0003 s apart, using a 4.83 mm diameter drop (with coloured dye) dropped from a height of 175 mm into a transparent pool of water. The number in the corner of each photo represents the sequence number of the photo (at 0.0003 s intervals). As can be seen from these photos, the coloured dye is present in the water jets which recoil off the surface of the transparent pool water. However, not all of the water in the jets is from the coloured drop. Some of the original drop is trapped in a pocket below the surface, with the rebounding 'jets' having a 'coating' of the original drop material. The way we know this is because in the experiment, the coloured drop was made from water mixed with thymol blue, an indicator which is dark orange in colour at neutral-to-acidic pH. The pool water contained 0.1% sodium carbonate (alkali), which is transparent in colour, but when the two combined, the mixture turned blue in colour. Some fascinating insight into the phenomenon can be gained by examining some high speed video footage. If you look at this high speed video , you will see that when the water droplet falls into the water, it appears to bounce back out! An even better example of the 'bouncing' phenomenon can be found in this video : when the drop is released gently from close to the surface of the water, it appears that after the drop is 'coalesced' into the water, part of it 'bounces' back out as a smaller droplet, which then falls back and floats on the surface of the water. The explanation offered is that a layer of air gets trapped beneath the droplet as it hits the surface of the water. Some of the water in the droplet gets coalesced into the pool by the water tension, releasing a smaller droplet back out.
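Plugging the experiment's own numbers into the Weber number confirms it sits well inside the splashing regime (impact speed estimated from free fall, air drag neglected):

```python
import math

rho, sigma = 1000.0, 0.072    # water density (kg/m^3) and surface tension (N/m)
d, h = 4.83e-3, 0.175         # drop diameter and release height (m), from the text

v = math.sqrt(2 * 9.81 * h)   # impact speed assuming free fall
We = rho * v**2 * d / sigma
print(round(v, 2), round(We))  # ~1.85 m/s and We ~ 230, comfortably above ~84
```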
{ "source": [ "https://physics.stackexchange.com/questions/156339", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
156,529
So I saw this tip, but I'm not sure it's true. Wouldn't it rather be that your leg or shoe is more flexible than a hard floor, so the momentum change would be slower?
You are correct. It's all about reducing the maximum force experienced by the phone. The phone has gained a certain amount of momentum when it gets to the level of the floor, and if it's going to come to a stop, it will lose all that momentum because of a force, either from your shoe or the ground. This graph illustrates the force your phone would experience over time if it hit the ground (purple curve) or if it hit your shoe (red curve). The momentum change is the same in both cases - that is, if you integrate (add up) all the momentum that the force gives the phone in each case, you get the same momentum change - the phone comes to a stop. However, in the case where it hits the ground, the ground is, as you said, much less flexible than your shoe, so the ground delivers a very large force, stopping the phone in a tiny period of time. This large force is what shatters your screen. In the case of the shoe, your shoe is nice and soft, so it delivers a smaller force, stopping the phone over a longer period of time. In fact, if you move your foot downward as the phone hits your shoe (known as "trapping" in soccer), you can stretch out that period of time over which the phone comes to a stop even longer, making the maximum force even smaller. Since the maximum force in the shoe impact is much smaller than the maximum force in the ground impact, the phone is more likely to break if it hits the ground rather than your shoe. The whole "transferring to lateral motion" thing is a red herring. If it loses all its vertical momentum, then a vertical force was applied to it. The only question is, was it a high force for a short amount of time (bad) or a low force for a longer period of time (good). In fact, if you actually "kick" it, you're irrelevantly adding momentum in the horizontal direction, which just introduces an unnecessary extra force onto your precious phone.
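The point is captured by the impulse-momentum theorem, $F_{\rm avg} = \Delta p/\Delta t$. A sketch with made-up but plausible numbers (a 180 g phone dropped from 1 m, stopping times guessed for each surface):

```python
m, h = 0.18, 1.0                   # phone mass (kg) and drop height (m) -- assumed
v = (2 * 9.81 * h) ** 0.5          # ~4.4 m/s at impact
p = m * v                          # momentum to be removed in either case

for surface, dt in (("hard floor", 0.001), ("shoe trap", 0.05)):
    print(surface, round(p / dt), "N")   # ~800 N vs ~16 N average stopping force
```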
{ "source": [ "https://physics.stackexchange.com/questions/156529", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63057/" ] }
156,907
Hilbert space and rays: In a very general sense, we say that quantum states of a quantum mechanical system correspond to rays in the Hilbert space $\mathcal{H}$, such that for any $c∈ℂ$ the state $\psi$ and $c\psi$ map to the same ray and hence are taken as equivalent states. How should one interpret the above in order to understand why $\psi$ and $c\psi$ are the same states? Clearly for $c= 0$ it doesn't hold, and for $c=1$ it is trivial, but why should this equivalence hold for any other $c$? Knowing that Hilbert space is a complex vector space with an inner product, is a ray just another way of saying vector? In the case that $c$ just corresponds to a phase factor of type $e^{i\phi}$ with $\phi \in \mathbb{R},$ then obviously $|\psi|=|e^{i\phi}\psi|,$ i.e. the norm didn't change, so what is the influence of $e^{i\phi}$ at all? In other words, what does the phase $\phi$ added to the default phase of $\psi$ change in terms of the state of the system? Projective Hilbert space: Furthermore, through a process of projectivization of the Hilbert space $\mathcal{H},$ it is possible to obtain a finite dimensional projective Hilbert space $P(\mathcal{H}).$ In the projective Hilbert space, every point corresponds to a distinct state and one cannot talk in terms of rays anymore. What does such projectivization entail in a conceptual sense? I guess in other words, how are rays projected to single points in the process, and what implies the distinctness? Is such a process in any way analogous to the Gram–Schmidt process used to orthonormalise a set of vectors in linear algebra? When one limits the Hilbert space to that of a certain observable of the system at hand, e.g. momentum or spin space (in order to measure the momentum and spin of a system respectively), does that mean we're talking about projective spaces already? (e.g. is the spin space spanned by the up $\left|\uparrow\rangle\right.$ and down $\left|\downarrow\rangle\right.$ spin states of a system referred to as a projective spin Hilbert space ?) The aim is to develop a better and clearer understanding of such fundamental concepts in quantum mechanics.
Why are states rays? (Answer to OP's 1. and 2.) One of the fundamental tenets of quantum mechanics is that states of a physical system correspond (not necessarily uniquely - this is what projective spaces in QM are all about!) to vectors in a Hilbert space $\mathcal{H}$, and that the Born rule gives the probability for a system in state $\lvert \psi \rangle$ to be found in state $\lvert \phi \rangle$ by $$ P(\psi,\phi) = \frac{\lvert\langle \psi \vert \phi \rangle \rvert^2}{\lvert \langle \psi \vert \psi \rangle \langle \phi \vert \phi \rangle \rvert}$$ (Note that the habit of talking about normalised state vectors exists because then the denominator of the Born rule is simply unity, and the formula is simpler to evaluate. This is all there is to normalisation.) Now, for any $c \in \mathbb{C} - \{0\}$, $P(c\psi,\phi) = P(\psi,c\phi) = P(\psi,\phi)$, as may be easily checked. Therefore, in particular, $P(\psi,\psi) = P(\psi,c\psi) = 1$ holds, and hence $c\lvert \psi \rangle$ is the same state as $\lvert \psi \rangle$, since that is what having probability 1 to be in a state means. A ray is now the set of all vectors describing the same state by this logic - it is just the one-dimensional subspace spanned by any of them: For $\lvert \psi \rangle$, the associated ray is the set $$ R_\psi := \{\lvert \phi \rangle \in \mathcal{H} \vert \exists c \in\mathbb{C}: \lvert \phi \rangle = c\lvert \psi \rangle \}$$ Any member of this set will yield the same results when we use it in the Born rule, hence they are physically indistinguishable. Why are phases still relevant? (Answer to OP's 3.) For a single state, a phase $\mathrm{e}^{\mathrm{i}\alpha},\alpha \in \mathbb{R}$ therefore has no effect on the system; it stays the same. Observe, though, that "phases" are essentially the dynamics of the system, since the Schrödinger equation tells you that every energy eigenstate $\lvert E_i \rangle$ evolves with the phase $\mathrm{e}^{-\mathrm{i}E_i t/\hbar}$. Obviously, this means energy eigenstates don't change, which is why they are called stationary states. The picture changes when we have sums of such states, though: $\lvert E_1 \rangle + \lvert E_2 \rangle$ will, if $E_1 \neq E_2$, evolve differently from an overall multiplication by a complex phase (or indeed any complex number), and hence leave its ray in the course of the dynamics! It is worthwhile to convince yourself that the evolution does not depend on the representative of the ray we chose: for any non-zero complex $c$, $c \cdot (\lvert E_1 \rangle + \lvert E_2 \rangle)$ will visit exactly the same rays at exactly the same times as any other multiple, again showing that rays are the proper notion of state. The projective space is the space of rays (Answer to OP's 4. and 5. as well as some further remarks) After noting, again and again, that the physically relevant entities are the rays, and not the vectors themselves, one is naturally led to the idea of considering the space of rays. Fortunately, it is easy to construct: "belonging to a ray" is an equivalence relation on the Hilbert space, and hence can be divided out in the sense that we simply say two vectors are the same object in the space of rays if they lie in the same ray - the rays are the equivalence classes. Formally, we set up the relation $$ \psi \sim \phi \Leftrightarrow \psi \in R_\phi$$ and define the space of rays or projective Hilbert space to be $$ \mathcal{P}(\mathcal{H}) := (\mathcal{H} - \{0\}) / \sim$$ This has nothing to do with the Gram-Schmidt way of finding a new basis for a vector space!
This isn't even a vector space anymore! (Note that, in particular, it has no zero.) The nice thing is, though, that we can now be sure that every element of this space represents a distinct state, since every element is actually a different ray.$^1$ (Side note (see also orbifold's answer): A direct, and important, consequence is that we need to revisit our notion of what kinds of representations we seek for symmetry groups - initially, on the Hilbert space, we would have sought unitary representations, since we want to preserve the vector structure of the space as well as the inner product structure (since the Born rule relies on it). Now, we know it is enough to seek projective representations, which are, for many Lie groups, in bijection with the linear representations of their universal cover, which is how, quantumly, $\mathrm{SU}(2)$ as the "spin group" arises from the classical rotation group $\mathrm{SO}(3)$.) OP's fifth question When one limits the Hilbert space to that of a certain observable of the system at hand, e.g. momentum or spin space (in order to measure the momentum and spin of a system respectively), does that mean we're talking about projective spaces already? (e.g. is the spin space spanned by up |↑⟩ and down |↓⟩ spins states of a system referred to as projective spin Hilbert space?) is not very well posed, but strikes at the heart of what the projectivization does for us: When we talk of "momentum space" $\mathcal{H}_p$ and "spin space" $\mathcal{H}_s$, it is implicitly understood that the "total space" is the tensor product $\mathcal{H}_p \otimes \mathcal{H}_s$. That the total/combined space is the tensor product and not the ordinary product follows from the fact that the categorical notion of a product (let's call it $\times_\text{cat}$) for projective spaces is $$ \mathcal{P}(\mathcal{H}_1) \times_\text{cat} \mathcal{P}(\mathcal{H}_2) = \mathcal{P}(\mathcal{H}_1\otimes\mathcal{H}_2)$$ For motivations why this is a sensible notion of product to consider, see some other questions/answers (e.g. this answer of mine or this question and its answers). Let us stress again that the projective space is not a vector space, and hence not "spanned" by anything, as the fifth question seems to think. $^1$ The inquiring reader may protest, and rightly so: If our description of the system on the Hilbert space has an additional gauge symmetry, it will occur that there are distinct rays representing the same physical state, but this shall not concern us here.
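To make the first point above concrete, here is a quick numerical check (my own sketch with arbitrary random vectors, not part of the original answer) that the Born-rule probability is unchanged under $\psi \to c\psi$ for any nonzero complex $c$:

```python
import numpy as np

rng = np.random.default_rng(0)

def born(psi, phi):
    """Born-rule probability for arbitrary (unnormalised) vectors."""
    overlap = np.vdot(psi, phi)   # <psi|phi>, with the conjugation on psi
    return abs(overlap)**2 / (np.vdot(psi, psi).real * np.vdot(phi, phi).real)

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
phi = rng.normal(size=4) + 1j * rng.normal(size=4)
c   = 2.7 * np.exp(1j * 0.8)      # arbitrary nonzero complex number

print(born(psi, phi))
print(born(c * psi, phi))         # identical: psi and c*psi lie on the same ray
```

Both printed probabilities agree to machine precision, which is precisely the statement that $\psi$ and $c\psi$ belong to the same ray.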
{ "source": [ "https://physics.stackexchange.com/questions/156907", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62173/" ] }
156,943
In my physics book, after this solved example: A child of mass $m$ is initially at rest on top of a water slide at height h = 8.5 m above the bottom of the slide. Assuming that the slide is frictionless because of water, find the speed of the child at the bottom of the slide. a comment was written: If we were asked to find the time taken for the child to reach the bottom of the slide, methods would be of no use; we would need to know the shape of the slide and we would have a difficult problem. Why does the author say that we would need to know the shape of the slide to find the time taken for the child to reach the bottom of the slide? Can't we use the kinematic equations for uniform acceleration to find the time? We can find the velocity at the bottom: $v = \sqrt{2gh} \approx 13\ \mathrm{m/s}$. Then, using $v = u + at$: $13 = 0 + 9.8t$, so $t = 13/9.8$.
Why does the author say that we would need to know the shape of the slide to find the time taken for the child to reach the bottom of the slide? As you've discovered, the speed going down a frictionless slide only depends on the vertical distance. This speed is not the vertical component of velocity. It is the magnitude of the velocity. The vertical component of velocity will be less than this on an inclined slide. To make the geometry as simple as possible, I'll look at inclined ramps (no bumps, no curves; just a ramp inclined at some angle with respect to horizontal). To keep the numbers simpler, I'll use g = 10 m/s$^2$ rather than 9.80665 m/s$^2$. Suppose the slide has a vertical drop of 5 meters. That means the velocity at the bottom of the slide is 10 m/s. The average velocity is half that, 5 m/s. Now let's put different length slides in place. A slide that is 5 meters long means you are falling rather than sliding. It takes one second to drop 5 meters. What if we used a ten meter long slide (i.e., inclined at a 30 degree angle with respect to horizontal)? The final speed hasn't changed, but the distance has doubled. It takes two seconds to slide down this slide; twice as long as the vertical drop. Use an even longer slide, but still a 5 meter vertical drop, and it takes even longer to get to the bottom. With a 50 meter long slide (5.74 degrees with respect to horizontal), it takes ten seconds, or ten times as long to get to the bottom compared to the vertical drop. In general, the time needed to reach the bottom of a frictionless inclined ramp is given by $t_\text{slide}=\frac{l}{h} t_\text{vert}$, where $l$ is the length of the ramp, $h$ is the vertical drop, and $t_\text{vert}$ is the time it takes to fall that same vertical distance.
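A small sketch (using the answer's own numbers) that evaluates $t_\text{slide} = (l/h)\, t_\text{vert}$ for the three ramps discussed above:

```python
import math

g = 10.0   # m/s^2, as in the answer
h = 5.0    # vertical drop in m

t_vert = math.sqrt(2 * h / g)        # free-fall time for the same drop: 1 s

for length in [5.0, 10.0, 50.0]:     # ramp lengths from the answer, in m
    t_slide = (length / h) * t_vert  # frictionless-ramp result
    print(f"{length:>4.0f} m ramp: {t_slide:.0f} s to reach the bottom")
```

This reproduces the 1 s, 2 s and 10 s figures quoted in the answer.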
{ "source": [ "https://physics.stackexchange.com/questions/156943", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/60433/" ] }
158,246
I'm a physics (and electronics and astronomy, etc.) enthusiast. As I learn and research topics, I notice that many SI units are often expressed using a variety of prefixes, such as in electronics where we use microvolts, millivolts, volts, kilovolts, and sometimes megavolts. In computer storage, the prefixes kilo, mega, giga, and tera are very familiar. In physics, however, for large quantities of mass, I usually just see kilograms used with scientific notation: $$2\times10^6\ \mathrm{kg}$$ This could also be expressed as 2 gigagrams, but I've never heard anyone use that particular unit (which might be why it sounds silly). I understand that it is impractical to use a prefix for something as large as the mass of the sun, $2\times10^{30}$ kg, but wouldn't it be more appropriate to use grams, as in $2\times10^{33}$ g? Is this simply out of convention, or is there a more logical reason?
It's a weird quirk of the SI system that the base unit of mass is the kilogram, not the gram. So you'll see a lot of things expressed in kilograms. Of course, scientists in a given field tend to standardize on certain choices of units without any regard to the SI recommendations. And this makes sense; the units you use should be the ones that make your values most understandable for the intended audience. SI is only intended as a fallback to enable unambiguous communication between groups that don't otherwise have a shared convention (especially between experimentalists and theorists). So sometimes you'll see quantities expressed in grams or tons or solar masses or whatever because that is the standard in the context you're looking at.
{ "source": [ "https://physics.stackexchange.com/questions/158246", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1638/" ] }
158,283
I am not able to understand the definition of the density operator. I know that if $V$ is a vector space and if I have $k$ states belonging to this vector space, say $|\psi_{i}\rangle$ for $1\le i\le k$, and the probability of the system being in $|\psi_{i}\rangle$ is $p_i$, then the density operator for the system is given by $$\rho=\sum_{i=1}^{k}p_i|\psi_i\rangle\langle\psi_i |. $$ Now what I am unable to understand is: Does this mean the system is in exactly one of the $|\psi_i\rangle$ states and we don't know which state it is in, only the probability? Or is it in a superposition of these $k$ states, with the probabilities being interpreted as weights?
Firstly, what is a state? A state gives you the complete description of a system. Let's label the state of a system $\lvert \psi \rangle$. This is a normalised state vector which belongs in the vector space of states. Keep in mind that we are talking about the full state; I haven't decomposed it into basis states, and I will not. This is not what the density matrix is all about. The state vector description is a powerful one, but it is not the most general. There are some quantum experiments for which no single state vector can give a complete description. These are experiments that have additional randomness or uncertainty, which might mean that either state $\lvert \psi_1\rangle$ or $\lvert \psi_2\rangle$ is prepared. This additional randomness or uncertainty arises from imperfect devices used in experiments, which inevitably introduce classical randomness, or from correlations of states due to quantum entanglement. In this case, then, it is convenient to introduce the density matrix formalism. Since in quantum mechanics all we calculate are expectation values, how would you go about calculating the expectation value of an experiment where, in addition to the intrinsic quantum mechanical randomness, you also have this classical randomness arising from imperfections in your experiment? Recall that $$Tr(\lvert\phi_1\rangle\langle\phi_2\rvert)=Tr(\lvert\phi_1\rangle\otimes\langle\phi_2\rvert)=\langle\phi_2\mid\phi_1\rangle,$$ and $$\hat{O}\circ(\lvert\phi_1\rangle\langle\phi_2\rvert)=(\hat{O}\lvert\phi_1\rangle)\otimes\langle\phi_2\rvert$$ Now, using the linearity of the trace, we can compute the expectation value as: $$ \langle \hat{O} \rangle = p_1\langle \psi_1 \rvert \hat{O} \lvert \psi_1 \rangle + p_2\langle \psi_2 \rvert \hat{O} \lvert \psi_2 \rangle$$ $${} = p_1Tr(\hat{O} \lvert \psi_1 \rangle \langle \psi_1 \rvert) + p_2Tr(\hat{O} \lvert \psi_2 \rangle \langle \psi_2 \rvert) $$ $${} =Tr\big(\hat{O} (p_1 \lvert \psi_1 \rangle \langle \psi_1 \rvert + p_2 \lvert \psi_2 \rangle \langle \psi_2 \rvert)\big) = Tr(\hat{O} \rho)$$ where $p_1$ and $p_2$ are the corresponding classical probabilities of each state being prepared, and $\rho$ is what we call the density matrix (aka density operator): it contains all the information needed to calculate any expectation value for the experiment. So your suggestion 1 is correct, but suggestion 2 is not, as this is not a superposition. The system is definitely in one state; we just don't know which one due to a classical probability.
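Here is a small numerical illustration of the $\langle \hat{O} \rangle = Tr(\hat{O} \rho)$ recipe, and of why suggestion 1 and suggestion 2 are physically different (a sketch of my own; the spin-1/2 states are chosen just for illustration):

```python
import numpy as np

up   = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
plus = (up + down) / np.sqrt(2)          # a coherent superposition

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

def proj(psi):
    return np.outer(psi, psi.conj())     # the projector |psi><psi|

# Suggestion 1: classical mixture, |up> or |down> each prepared with p = 1/2
rho_mixed = 0.5 * proj(up) + 0.5 * proj(down)
# Suggestion 2: the pure superposition state (|up> + |down>)/sqrt(2)
rho_super = proj(plus)

for name, rho in [("mixture", rho_mixed), ("superposition", rho_super)]:
    exp_x = np.trace(sigma_x @ rho).real   # <O> = Tr(O rho)
    print(f"{name}: <sigma_x> = {exp_x:+.1f}")
```

Both preparations give the same statistics for $\sigma_z$, but the mixture gives $\langle\sigma_x\rangle = 0$ while the superposition gives $\langle\sigma_x\rangle = 1$: a measurement in the $x$ basis distinguishes them, which is why a classical mixture is not the same thing as a superposition.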
{ "source": [ "https://physics.stackexchange.com/questions/158283", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/58482/" ] }
158,557
If a magnet is rotating around an axis perpendicular to the north-south axis of the magnet (which I assume to be cylindrically symmetric), in space (so in free fall, with no friction), should it still slow down because it emits electromagnetic radiation/photons? I would think so, due to conservation of energy. The power output of the oscillating magnetic field should mean a decrease of the rotational energy of the magnet. But what causes the torque that gradually slows the magnet's rotation? One way of looking at it would be conservation of (angular) momentum and the fact that photons have momentum. But how would you express the torque in terms of electromagnetism/Maxwell's equations?
This question can be addressed using classical electrodynamics, with "electromagnetic radiation" in place of "photons." The question has two parts: Part 1: Does a rotating magnet produce EM radiation? Part 2: We know that the magnet must lose rotational kinetic energy to balance the energy that is carried away as radiation, because the total energy is conserved. But what mechanism slows the rotation? Radiation from a rotating magnet The answer to Part 1 is yes. This is well-known, so I'll just quote a result that quantifies the rate at which energy is radiated away. Treat the magnet as a dipole of negligible size but with a finite magnetic dipole moment $\mathbf{m}$. Suppose that the dipole is rotating about an axis perpendicular to $\mathbf{m}$, as described in the OP, and let $\omega$ denote the angular velocity of the rotation. The radiated power, integrated over all directions and averaged over a rotational period, is $$ P = \frac{\mu_0 |\mathbf{m}|^2\omega^4}{6\pi c^3} \tag{1} $$ where $c$ is the speed of light and $\mu_0$ is the vacuum magnetic permeability. The condition that the magnet have "negligible size" means that its size is small compared to the wavelength $c/\omega$. This is essentially equation (9.53) in Griffiths (1989), Introduction to Electrodynamics (Second Edition), except that (1) is larger by a factor of $2$. That's because Griffiths considers a dipole that oscillates along the direction of $\mathbf{m}$ instead of a rotating dipole. To deduce (1) from Griffiths' result, a rotating dipole may be regarded as a pair of orthogonal oscillating dipoles, out of phase by $\pi/2$, so that the magnitude of the total magnetic moment remains constant. The source of the torque: intuition We know that the magnet must lose rotational kinetic energy to balance the energy that is carried away as radiation, because the total energy is conserved. But what mechanism slows the rotation? I didn't find a detailed treatment in the literature, so I'm posting a relatively detailed answer here. I'll start with an intuitive preview, and then I'll show that the intuition has a solid theoretical foundation using an explicit model that includes Maxwell's equations. We can think of the magnet as being made of lots of little magnetic dipoles distributed throughout the volume of the magnet. Consider one of these dipoles, say the one at $\mathbf{x}$. These magnetic dipoles "want" to be aligned with each other. This can be verified empirically, using a pair of bar magnets. Therefore, any misalignment between the dipole at $\mathbf{x}$ and the magnetic field produced by the other dipoles results in a torque on the dipole at $\mathbf{x}$ that tends to restore its alignment with the others. The dipoles interact with each other via the magnetic field. In the limit of a continuum of infinitesimal dipoles, it becomes correct to think of the magnetic field at $\mathbf{x}$ as being the superposition of the magnetic fields from all of the other dipoles. The model described below uses this idealization. Here's the key: If the magnet is rotating, then the magnetic field at $\mathbf{x}$ is the superposition of delayed versions of the magnetic fields from the other dipoles, because any change in the field propagates at a finite speed (the speed of light). Thus the torque on the dipole at $\mathbf{x}$ tends to restore its alignment with where the rest of the magnet was at some time slightly in the past, a time that increases with the distance from $\mathbf{x}$.
In other words, the torque tends to counteract the magnet's rotation, gradually slowing it down. Intuitively, we expect the effect to be very small because the delay is very small, but does it have precisely the right magnitude to balance the energy that is carried away as EM radiation? Below, I'll show that it does. I'll do this by deriving the torque from a model that explicitly includes Maxwell's equations and that explicitly conserves the total energy. Does the torque have the correct magnitude? To derive the torque mathematically and show that it has the correct magnitude, I'll use a model defined by a lagrangian. The lagrangian describes a rigid magnet that can rotate about a given axis, and it describes the dynamics of the EM field, in such a way that the total energy is conserved. In this model, the rotating magnet produces EM radiation via Maxwell's equations, so it must gradually lose rotational kinetic energy. The model will tell us exactly how this happens. The model used here is non-relativistic (not Lorentz symmetric). In particular, it treats the magnet as a perfectly rigid object. This makes the math easier. Also, the non-relativistic approximation is consistent with the approximation that was used to derive (1): the rotation is assumed to be slow enough so that the wavelength is very long compared to the size of the magnet. In units with $\epsilon_0=\mu_0=1$, the model is defined by the lagrangian \begin{align} L =\frac{1}{2}I\dot\theta^2 & + \int d^3x\ \frac{\mathbf{E}^2(t,\mathbf{x}) -\mathbf{B}^2(t,\mathbf{x})}{2} \\ & +\int d^3x\ \mathbf{M}\big(\theta(t),\mathbf{x}\big) \cdot\mathbf{B}(t,\mathbf{x}) \tag{2} \end{align} with this notation: $I$ is the magnet's moment of inertia. The rotation axis is such that $I$ is independent of $\theta$. $\theta$ describes the magnet's time-dependent orientation, with time-derivative $\dot\theta$. $\mathbf{M}(\theta,\mathbf{x})$ is (proportional to) the magnetic moment density at $\mathbf{x}$ when the magnet's orientation is $\theta$. $\mathbf{B}=\nabla\times \mathbf{A}$ is the magnetic field in terms of the vector potential $\mathbf{A}(t,\mathbf{x})$. $\mathbf{E}=-\mathbf{\dot A}$ is the electric field in the temporal gauge. This lagrangian describes a perfectly rigid magnet that can move only by rotating about a given axis. The dynamic variables are the magnet's time-dependent orientation $\theta(t)$ and the electromagnetic vector potential $\mathbf{A}(t,\mathbf{x})$. The model doesn't have any prescribed time-dependent coefficients, so Noether's theorem ensures that it has a conserved total energy, which is given by \begin{equation} U = \frac{1}{2}I\dot\theta^2 + \int d^3x\ \frac{\mathbf{E}^2+\mathbf{B}^2}{2} - \int d^3x\ \mathbf{M}\cdot\mathbf{B}. \tag{3} \end{equation} The equations of motion derived from the lagrangian (2) are: One equation that describes how the EM field reacts to the magnet: \begin{equation} \mathbf{\dot E}=\nabla\times (\mathbf{B}-\mathbf{M}) \tag{4a} \end{equation} This is one of Maxwell's equations, with current density $\mathbf{J}\propto \nabla\times\mathbf{M}$. To derive (4a), I assumed that $\mathbf{M}$ falls smoothly to zero at the "boundary" of the magnet, so that the boundary is not abrupt. This simplifies the equations without changing the insight. The definitions of $\mathbf{E}$ and $\mathbf{B}$ in terms of $\mathbf{A}$ imply another of Maxwell's equations: \begin{equation} \mathbf{\dot B}=-\nabla\times\mathbf{E}.
\tag{4b} \end{equation} Equations (4a)-(4b) together imply that EM waves propagate at a finite speed, which is equal to $1$ in the units I'm using here. One equation that describes how the magnet's orientation reacts to the EM field: \begin{equation} I\ddot\theta = \int d^3x\ \frac{\partial\mathbf{M}}{\partial\theta}\big(\theta(t),\mathbf{x}\big) \cdot \mathbf{B}(t,\mathbf{x}). \tag{5} \end{equation} The right-hand side of equation (5) is the source of the torque. The total energy (3) is independent of time if equations (4) and (5) are satisfied. The purpose of deriving these equations from the lagrangian (2) is to ensure that this conservation law holds, so that the rate at which the torque (5) drains the rotational kinetic energy precisely balances the rate at which energy is carried away as EM radiation through Maxwell's equations (4). The next section explains how to relate these equations to the intuition that I described earlier. Recovering the intuition from the math We can think of the magnetic moment density $\mathbf{M}(\theta,\mathbf{x})$ as a continuous distribution of infinitesimal magnetic dipoles. As before, consider one of these dipoles, say the one at $\mathbf{x}$. The right-hand side of equation (5) says that any misalignment between the dipole at $\mathbf{x}$ and the magnetic field at $\mathbf{x}$ contributes to a torque that tends to restore their alignment. To see this, recall that the magnitude of $\mathbf{M}$ at each point is fixed because the magnet is rigid. This implies that $\partial \mathbf{M}/\partial\theta$ is orthogonal to $\mathbf{M}$. Therefore, if $\theta$ is such that $\mathbf{M}$ is already aligned with $\mathbf{B}$, then the right-hand side of (5) is zero (no torque). On the other hand, if $\theta$ is slightly larger than this equilibrium value, then the right-hand side of (5) is negative, because increasing $\theta$ even further would only make the alignment worse. (Think about the definition of the derivative.) We can use that picture to confirm that the sign of the right-hand side of (5) is what we need it to be: the torque tends to restore the alignment between $\mathbf{M}$ and $\mathbf{B}$. Equation (4a) implies that the magnetic field at $\mathbf{x}$ is the superposition (integral) of the magnetic fields from all of the other dipoles. In a situation with no motion, this means that the torque in equation (5) tends to align the dipole at $\mathbf{x}$ with the other dipoles. This is consistent with our experience with macroscopic bar magnets. The different internal torques at different points $\mathbf{x}$ cancel each other, so that the net torque on the whole magnet is zero. Equations (4) imply that any change in the field due to a change in the magnet's orientation propagates at a constant finite speed. Therefore, if the magnet is rotating, the magnetic field at $\mathbf{x}$ is the superposition of delayed versions of the magnetic fields from the other dipoles, with each delay proportional to the distance to $\mathbf{x}$. This completes the justification of the intuition that I described before, and it shows that the effect has precisely the right magnitude to balance the energy that is carried away by EM radiation. This conclusion follows because we derived the effect from equations (4)-(5), which manifestly conserve the total energy (3).
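To get a feel for the size of the effect, here is a small sketch (with illustrative numbers of my own choosing, not from the answer) that evaluates the radiated power of eq. (1) in SI units:

```python
import math

mu0 = 4 * math.pi * 1e-7         # vacuum permeability, T m/A
c   = 2.998e8                    # speed of light, m/s

m_dip = 1.0                      # magnetic moment in A m^2 (assumed)
omega = 2 * math.pi * 100.0      # 100 revolutions per second (assumed)

P = mu0 * m_dip**2 * omega**4 / (6 * math.pi * c**3)   # eq. (1)
print(f"radiated power: {P:.2e} W")                    # ~4e-22 W
```

The resulting power is around $10^{-22}$ W, which illustrates why the radiative braking of an ordinary spinning magnet is utterly negligible in practice, even though it is nonzero in principle.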
{ "source": [ "https://physics.stackexchange.com/questions/158557", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/23741/" ] }
158,589
In a physics text book I read the following: $$e/m=1.758820150(44)\times10^{11}\ \mathrm{C/kg}$$ In this expression, $(44)$ indicates the likely uncertainty in the last two digits, $50$. How should I understand this uncertainty? Does it mean $\pm 44$ on the last two digits?
The digits in parentheses are the uncertainty, to the precision of the same number of least significant digits. (The meaning of the uncertainty is context-dependent but generally represents a standard deviation, or a 95% confidence interval.) So: $$e/m=1.758\,820\,1\color{blue}{50}\,\color{magenta}{(44)}×10^{11} \ \mathrm{C/kg}=\left(1.758\,820\,1\color{blue}{50}×10^{11} \pm 0.000\,000\,0\color{magenta}{44}×10^{11}\right) \ \mathrm{C/kg}$$
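A tiny sketch (my own, just to mechanize the notation) that expands the parenthesis form into an explicit value-plus-uncertainty pair:

```python
from decimal import Decimal

def expand(value_str, unc_digits):
    """Turn '1.758820150' and '44' into (value, absolute uncertainty)."""
    value = Decimal(value_str)
    n_frac = len(value_str.split(".")[1])       # digits after the decimal point
    unc = Decimal(unc_digits).scaleb(-n_frac)   # align with the last digits
    return value, unc

v, u = expand("1.758820150", "44")
print(f"({v} +/- {u}) x 10^11 C/kg")
# -> (1.758820150 +/- 4.4E-8) x 10^11 C/kg
```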
{ "source": [ "https://physics.stackexchange.com/questions/158589", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4962/" ] }
158,938
If the spring in Figure A is stretched a distance d, how far will the spring in Figure B stretch? The spring constants are the same. The answer is "by half". I don't get it; to me it's the same.
Think about the equilibrium position: in figure A you have a force $mg$ exerted at the lower end and an identical force, exerted by the wall, pushing in the opposite direction (if this force didn't exist, the spring with the mass attached would just fall under gravity). In figure B you have a force $mg/2$ exerted on the right end of the spring and an identical force of $mg/2$ on the other end.
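A tiny worked check (illustrative values of mine; the figure details follow the description above). The stretch of an ideal spring is set by the tension in it, $x = F/k$:

```python
k = 100.0   # spring constant in N/m (assumed)
m = 1.0     # mass in kg (assumed)
g = 9.8     # m/s^2

# Figure A: the wall holds one end, a weight mg pulls the other -> tension mg
x_a = m * g / k

# Figure B: a force of mg/2 pulls on *each* end -> tension is only mg/2
x_b = (m * g / 2) / k

print(x_a, x_b, x_b / x_a)   # 0.098 0.049 0.5 -- figure B stretches half as far
```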
{ "source": [ "https://physics.stackexchange.com/questions/158938", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/46995/" ] }
158,943
Baryon number is directly violated through the electroweak anomaly, and so is lepton number, for each transition from one vacuum to another. The two violations are of equal amount, $\Delta B=\Delta L=N_f[N_{CS}(t_i)-N_{CS}(t_f)]$, since $(B-L)$ is anomaly free. Both violations (i.e., $\Delta B$, $\Delta L$) occur simultaneously for each transition. But in this manner $B$ is violated directly, and I do not think that the $L$ violation is inducing the $B$ violation. Is this true? When the standard model is extended with right-handed neutrinos, decays of heavy Majorana neutrinos give rise to additional $L$-violation. How can this leptogenesis induce baryogenesis? If the question is not clear enough I can clarify it further.
{ "source": [ "https://physics.stackexchange.com/questions/158943", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/36793/" ] }
159,021
I've been wondering what makes the harmonic oscillator such an important model. What I came up with: It is a (relatively) simple system, making it a perfect example for physics students to learn principles of classical and quantum mechanics. The harmonic oscillator potential can be used as a model to approximate many physical phenomena quite well. The first point is sort of meaningless though; I think the real reason is my second point. I'm looking for some materials to read about the different applications of the HO in different areas of physics.
The harmonic oscillator is important because it's a good approximation to nearly every system near a minimum of potential energy. The reasoning comes from Taylor expansion. Consider a system with potential energy given by $U(x)$. You can approximate $U$ near $x=x_0$ by $$ U(x) = U(x_0) + (x-x_0) \left.\frac{dU}{dx}\right|_{x_0} + \frac{(x-x_0)^2}{2!} \left.\frac{d^2U}{dx^2}\right|_{x_0} + \cdots $$ The system will tend to settle into the configuration where $U(x)$ has a minimum --- but, by definition, that's where the first derivative vanishes, $dU/dx = 0$. Also, a constant offset to the potential energy usually does not affect the physics. That leaves us with $$ U(x) = \frac{(x-x_0)^2}{2!} \left.\frac{d^2U}{dx^2}\right|_{x_0} + \mathcal O(x-x_0)^3 \approx \frac12 k (x-x_0)^2 $$ which is the harmonic oscillator potential for small oscillations around $x_0$.
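As a concrete illustration (my own sketch; the Morse potential is just a stand-in for "some potential with a minimum"), compare a real potential with its harmonic Taylor approximation near the minimum:

```python
import numpy as np

D, a, x0 = 1.0, 1.0, 0.0                 # Morse parameters (assumed)
U  = lambda x: D * (1 - np.exp(-a * (x - x0)))**2
k  = 2 * D * a**2                        # U''(x0): the effective spring constant
Uh = lambda x: 0.5 * k * (x - x0)**2     # harmonic (Taylor) approximation

for dx in [0.05, 0.2, 0.5]:
    print(f"dx = {dx}: exact = {U(x0 + dx):.4f}, harmonic = {Uh(x0 + dx):.4f}")
```

Close to the minimum the two agree to a few percent, and the agreement degrades as the displacement grows - exactly the "small oscillations" caveat above.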
{ "source": [ "https://physics.stackexchange.com/questions/159021", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/23937/" ] }
159,022
From what I understand, the magnetic force is just a relativistic effect of the electric force, and I understand how this can be the case when considering the magnetic field generated by a current-carrying wire (length contraction of the space between the electrons, so they appear to be more concentrated than the positively charged ions to a 'stationary' observer, and so the observer sees the wire as being negatively charged). However, I do not understand why there would be a magnetic field around a permanent magnet. My teacher's explanation of permanent magnets involved there being permanent dipoles inside the magnet. However, if you have two oppositely charged particles close to each other, you will have an electric field around them, and not a magnetic field. I don't quite understand how lined-up dipoles will produce a magnetic field if there is no motion of charges.
{ "source": [ "https://physics.stackexchange.com/questions/159022", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/57218/" ] }
159,101
I hope that this is a fun question for you physicists to answer. Say you had a perfect piston - it's infinitely strong, infinitely dense, has infinite compression ... you get the idea. Then you fill it with some type of matter, like water or dirt or something. What would happen to the matter as you compressed it indefinitely? Edit: I'm getting some responses that it would form a black hole. For this question I was looking for something a little deeper, if you don't mind. Like, if water kept getting compressed, would it eventually turn into a solid, then some sort of energy fireball cloud? I'm not as concerned about the end result, the black hole, as I am with the sequence.
You asked for process. I'm assuming infinite material strength here, as in the piston cannot be stopped (infinite force on an infinite-strength material that can resist infinite temperature). Solids will be compressed, resulting in lots of heat as this happens (with infinite pressure, and infinitely strong materials and thus force, the matter will give), until they reach a liquid state, a gaseous state, or start losing electrons and ionizing, or just stay solid all the way up to Electron Degeneracy - what happens here depends greatly on the substance. With current realistic materials, the piston would break. Since it doesn't break, and there's infinite force behind it, the substance gets compressed and heated anyway. Liquids will be compressed, resulting in lots of heat as this happens (with infinite pressure, and infinitely strong materials and force, the matter will give) into a gas, a plasma, or Electron Degeneracy (depends on the substance). With current realistic materials, the piston would break. Since it doesn't break, and there's infinite force behind it, the substance gets compressed and heated anyway. Gaseous substances will then easily compress, resulting in lots of heating as they do, until they heat up enough that the electrons freely float among the nuclei, and you have just made a Plasma. Now, as a Plasma, the matter is slightly ionized (+1, +2) as the outermost electrons will have escaped, resulting in positive charges. The matter will continue to compress and heat. More compression, resulting in more heat. More electrons are too energetic to orbit the nuclei, resulting in higher positive charges (+3, +4 as allowable...). More compression, resulting in more heat. More electrons are too energetic to orbit the nuclei, resulting in higher positive charges (+5, +6 as allowable...). More compression, resulting in more heat. More electrons are too energetic to orbit the nuclei, resulting in higher positive charges (+7, +8 as allowable... until they're all gone). At some point you will surpass electron degeneracy pressure and form: Electron Degenerate matter, where no electron can orbit the nuclei, but they now freely traverse the highly positively charged nuclei 'soup'. Keep adding pressure, and you'll form: Proton Degenerate matter, where only the repulsion of the protons is holding the nuclei apart. Keep adding pressure, and you'll form: Neutron Degenerate matter, where the electrons and protons join and cancel, leaving you with basically a huge neutral atom full of mostly neutrons, being held apart by the quarks. Keep adding pressure, and you'll (in theory) form: Quark Degenerate matter, where the quarks, or at least the standard up/down quarks, can no longer hold the pressure and perhaps combine/change form. Keep adding pressure, and in theory you might form: Preon Degenerate matter, which would sort of be like one big subatomic particle (though you might skip this one), and finally: A singularity, aka a Black Hole.
{ "source": [ "https://physics.stackexchange.com/questions/159101", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/64329/" ] }
159,937
It is common in popular science culture to assume that Hawking radiation causes black holes to vaporize. And, in the end, the black hole would explode. I also remember it being mentioned in A Brief History of Time. Why would a black hole explode? Why can't it gradually vanish to zero? What is the exact mechanism or theory which causes a black hole to explode?
The expression for the power emitted as Hawking radiation is $$ P = \frac{\hbar c^6}{15360 \pi G^2 M^2} = 3.6\times10^{32} M^{-2}\ \text{W} = -c^2 \frac{dM}{dt},$$ where the term on the far right hand side expresses the rate at which the black hole mass decreases due to the emission of Hawking radiation. You can see that what happens is that the power emitted actually increases as $M$ decreases. At the same time, the rate at which the mass decreases also increases. So as the black hole gets less massive, the rate at which it gets less massive increases rapidly, and hence the power it emits increases very, very rapidly. By solving this differential equation it can be shown that the time to evaporate to nothing is given by $$ t = 8.4\times10^{-17} M^3\ \text{s},$$ so for example a 100 tonne black hole would evaporate in $8.4 \times10^{-2}\ \text{s}$, emitting approximately $E = Mc^2 = 9\times 10^{21}$ joules of energy as it does so – equivalent to more than a million megatons of TNT. I guess you could call this an explosion! This will be the fate of all evaporating black holes, but most will take a very long time to get to this stage (even supposing they do not accrete any matter). The evaporation time is only less than the age of the universe for $M <$ a few $10^{11}\ \text{kg}$. A 1 solar mass black hole takes $2\times10^{67}$ years to evaporate. EDIT: The Hawking radiation temperature is given by $$ kT = \frac{\hbar c^3}{8 \pi GM}.$$ Unless this temperature is well above the ambient temperature (at a minimum, the cosmic microwave background temperature), the black hole will always absorb more energy than it radiates, and get bigger. That is, to evaporate we need $$ \frac{\hbar c^3}{8 \pi GM} > kT_{\rm ambient}$$ $$ M < \frac{1.2\times10^{23}}{T_{\rm ambient}}\ {\rm kg}$$ Therefore, unless I've made a mistake, this proviso is of no practical importance other than for evaporating black holes (i.e. those with $M<10^{11}$ kg) in the early universe. The temperature of a black hole scales with its evaporation timescale as $t_{\rm evap}^{-1/3}$. The temperature of the early, radiation-dominated universe scales as $t^{-1/2}$. Thus it appears that, early enough in the past, even a black hole with an evaporation timescale shorter than the age of the universe would have been unable to evaporate, because the ambient temperature exceeded its Hawking temperature.
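A small sketch evaluating the answer's two formulas (the masses are illustrative: 100 tonnes, roughly the primordial threshold, and one solar mass):

```python
def hawking_power(M):        # W; P = 3.6e32 / M^2 from the answer
    return 3.6e32 / M**2

def evaporation_time(M):     # s; t = 8.4e-17 * M^3 from the answer
    return 8.4e-17 * M**3

year = 3.15e7                # seconds per year (approximate)

for M in [1e5, 1e11, 2e30]:
    t = evaporation_time(M)
    print(f"M = {M:.0e} kg: P = {hawking_power(M):.1e} W, "
          f"t = {t:.1e} s = {t/year:.1e} yr")
```

This reproduces the numbers quoted above: $8.4\times10^{-2}$ s for 100 tonnes, an evaporation time within an order of magnitude of the age of the universe near $10^{11}$ kg, and about $2\times10^{67}$ years for a solar mass.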
{ "source": [ "https://physics.stackexchange.com/questions/159937", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2170/" ] }
160,380
Consider a formula like $y = mx + b$. For instance, $y = 2x + 3$. It is simple to check that $(1,5)$ is a solution, as is $(2,7)$, as is $(3,9)$, etc. So it's easy to see that $y = 2x + 3$ is a useful equation in this case, because it accurately describes a pattern between a bunch of numbers. But if it's so hard to calculate exact solutions of the Einstein Field Equations, how did he verify they were correct? How did he even know to write them down without first doing calculations and then identifying the general formula? To use my initial analogy, if I begin with a bunch of pairs of numbers, I then derive the equation $y = 2x + 3$ as the equation that describes the pattern. But if solutions to the EFEs are so hard to find, how did Einstein find the EFEs in the first place?
It's not uncommon that the equations to describe a system are fairly simple but finding solutions is very hard. The Navier-Stokes equations are a good example - there's a million dollars waiting for the first person to make progress in finding solutions. In the case of relativity, it became clear to Einstein fairly quickly that a metric theory was required, so the equation needed was one that gave the metric as a solution. Einstein tried several variations before settling on the GR field equation. I believe one of the factors that influenced him was when Hilbert pointed out that the GR field equations followed from an obvious choice for the gravitational action. I'm not sure if Einstein himself ever found an analytic solution to his own equations. However, he used a linearised form of the equation to calculate the precession of Mercury and to calculate the deflection of light. The precession of Mercury was already known by then, so he knew (linearised) GR gave the correct answer there, but he had to wait a few years for Eddington's measurement of the deflection of light (though to modern eyes it seems likely that Eddington got the answer he wanted!). The first analytic solution was Schwarzschild's solution for a spherically symmetric mass. General relativity is one of the very few cases in science where a successful theory was devised purely on intellectual grounds rather than as a response to experimental data. Anyone who has suffered the pain of trying to learn GR can appreciate what an astonishing accomplishment this was, and why Einstein deserves every bit of the fame associated with him.
{ "source": [ "https://physics.stackexchange.com/questions/160380", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/66165/" ] }
160,548
How does adding one more particle to the nucleus of an atom give that atom new properties? I can see how it changes its mass - that's obvious... But how does it give that new atom different properties like color? A good example would be: start with a copper atom (Cu), with the atomic number 29, thus Cu has 29 protons; if you add one proton to the nucleus you are left with an atom of zinc (Zn), with the atomic number 30, thus 30 protons. The first element mentioned is a totally different color than the second, conducts electricity better, etc. Not only protons but also neutrons, which are the same type of particle (baryons), affect the properties of the element, though in a much different and much less important manner. Adding a neutron only creates an isotope of that element, not a different one altogether, unlike adding a proton. Also, it is obvious that adding (or subtracting) electrons does not make a difference. For example, if you remove 28 of the electrons (I know that would take huge amounts of energy, but let's ignore that) that "orbit" the copper atom, we are still left with a copper atom - albeit an ion, but still a copper atom. So, it's apparent that only protons play a major role in "making" elements different from each other. How and why? Also, the same can be asked about the protons themselves and quark flavor.
You are not correct in the latter part of your analysis; the chemical properties (which are mostly what matters in ordinary matter) depend almost only on the electron shell, and in particular on the outermost electrons (called the valence electrons). So more protons mean more electrons and a different electron shell, meaning different chemical properties. Why there is such a diversity of properties just from changing the electron shell is one of the wonders of chemistry! Due to quantum mechanics, the electrons don't simply spin around the nucleus like planets around the sun, but arrange themselves in particular, complicated patterns. By having different patterns, you can achieve a lot of different atom<->atom binding geometries, at a lot of different energies. This is what gives the diversity of chemical properties of matter (see the periodic table). You can add or remove electrons to make an atom's electron shells look more like the shells of another atom (with a different number of protons), but then the atom as a whole is no longer electrically neutral, and due to the strength of the electromagnetic force, the resulting ion does not imitate the other atom type very well (I'm not a chemist - I'm sure there are properties that indeed could become similar). Many physical properties are also mostly due to the electron shells, like photon interactions, including color. Mass, obviously, is almost only due to the nucleus, though, and I should add that in many chemical processes the mass of the atoms is important for the dynamics of processes, even if it isn't directly related to the chemical bindings. This was just a small introduction to chemistry and nuclear physics ;)
{ "source": [ "https://physics.stackexchange.com/questions/160548", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62989/" ] }
160,585
Suppose I build a machine which will be given Rubik's cubes that have been scrambled to one of the $\sim 2^{65}$ possible positions of the cube, chosen uniformly at random. Is it possible for the machine to solve the cubes without giving off any heat? Solving the cube might be thought to consist of destroying about 65 bits of information because it takes 65 bits to describe the cube's state before entering the machine, but zero bits to describe it afterwards (since it is known to be solved). If the information stored on a Rubik's cube is equivalent to any other type of physically-stored information, then by Landauer's principle we might expect the machine to have to give off a heat of $T \mathrm{d}S \sim 65 T k_B \ln(2)$, but is it valid to apply Landauer's principle to the information stored in such a manner? What sort of argument does it take to say that a certain type of information is physically meaningful, such that destroying it necessitates paying an entropy cost somewhere else?
Let's suppose you have a Rubik's cube that's made of a small number of atoms at a low temperature, so that you can make moves without any frictional dissipation at all, and let's suppose that the cube is initialised to a random one of its $\sim 2^{65}$ possible states. Now if you want to solve this cube you will have to measure its state. In principle you can do this without dissipating any energy. Once you know the moves you need to make to solve the cube, these can also be made without dissipating any energy. So now you decide to build a machine that will solve the cube without dissipating any energy. First it measures the state and stores it in some digital memory. Then it calculates the moves required to solve the cube from this position. (In principle this needn't generate any heat either.) Then it makes those moves, solving the cube. None of these steps need to give off any heat in principle, but your machine ends in a different state than the state it starts in. At the end of the process, 65 bits' worth of the machine's state has effectively been randomised, because it still contains the information about the cube's initial state. If you want to reset the machine so that it can solve another cube, you will have to reset those bits of state back to their initial conditions, and that's what has to dissipate energy according to Landauer's principle. In the end the answer is just that you have to pay an entropy cost to erase information in all cases where you actually need to erase that information. If you only wanted to solve a finite number of cubes then you can just make the memory big enough to store all the resulting information, so there's no need to erase it, and no heat needs to be generated. But if you want to build a finite-sized machine that can keep on solving cubes indefinitely then eventually dumping entropy into the environment will be a necessity. This is the case for Maxwell's demon as well: if the demon is allowed to have an infinite memory, all initialised to a known state, then it need not ever dissipate any energy. But giving it an infinite memory is very much the same thing as giving it an infinite source of energy; it's able to indefinitely reduce the thermodynamic entropy of its surroundings only by indefinitely increasing the information entropy of its own internal state.
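For scale, a quick evaluation of the Landauer bound for resetting those 65 bits (room temperature is my own assumption here):

```python
import math

k_B  = 1.380649e-23   # Boltzmann constant, J/K
T    = 300.0          # ambient temperature in K (assumed)
bits = 65             # cube-state information, per the question

E_min = bits * k_B * T * math.log(2)
print(f"minimum heat per solved cube: {E_min:.2e} J")   # ~1.9e-19 J
```

At about $2\times10^{-19}$ J per cube, the bound is far below anything measurable for a macroscopic machine, but it is strictly nonzero for any finite-memory solver that must reset itself.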
{ "source": [ "https://physics.stackexchange.com/questions/160585", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/74/" ] }
161,013
If air cannot conduct electricity, how can lightning happen?
This is due to the principle of dielectric breakdown . During thunderstorms, the air between the cloud and the ground acts like the dielectric of a capacitor, with the cloud base and the ground as the plates. When the electric field is high enough, the air partially ionizes, at which point there are free electrons to carry current and the air becomes, essentially, conductive.
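As a rough back-of-envelope sketch (the breakdown field of dry air at atmospheric pressure is roughly $3\times10^6$ V/m; the gap length is an illustrative assumption):

```python
E_breakdown = 3e6   # V/m, approximate breakdown field of dry air at 1 atm
gap = 1e3           # m, illustrative cloud-to-ground distance

print(f"naive breakdown voltage over {gap/1e3:.0f} km: {E_breakdown * gap:.1e} V")
# ~3e9 V. Fields actually measured inside storms are lower than E_breakdown,
# so local field enhancement, humidity and precipitation all help initiate
# the discharge in practice.
```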
{ "source": [ "https://physics.stackexchange.com/questions/161013", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/71407/" ] }
161,055
Everyone is always talking about a photon's wavelength. But what about its dimensions? What are its length and width? And is there even a point in thinking about such things, or are those dimensions non-existent in such cases?
The fundamental particles we know today (of which the photon is one) are called fundamental exactly because they have no substructure - or indeed, spatial extent - that we know of. They are point-like when localized. Note that these "particles" are quantum objects, not classical particles, so you should not imagine them as points whizzing about in space - they possess delocalized states where they take no definite shape at all (for example, the "electron cloud" around atoms is such a delocalized state). The above is a short, non-relativistic view of "particles". When going to the relativistic description that is actually needed for the full description of fundamental particles, things get considerably more murky. For one, we lose the naive position operators, and the notion of "localization" becomes a bit ill-defined, because the new position operators, the Newton-Wigner operators, do not allow one to speak of localization in an observer-independent way. The generic particle state that is scattered in QFT calculations is usually a sharp momentum state, and therefore strongly delocalized, so any notion of "point-like" can't really rely on the localization of a particle state. In this picture, the proper notion of a "point-like" particle is one whose scattering behaviour indicates no substructure or spatial extent. For extended objects consisting of subobjects, the scattering behaviour will typically change when the energies/length scales of the scattering process reach their size, because then their internals get resolved and the individual subobjects start participating in the scattering. So our notion of being point-like becomes that the scattering behaviour is scale-independent. For more on this notion of size in QFT, see e.g. this answer by Bosoneando.
{ "source": [ "https://physics.stackexchange.com/questions/161055", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/46147/" ] }
161,406
I have watched light sources such as incandescent lamps and other lamps; they have always made shadows. But a fluorescent lamp doesn't make any shadow. What is the reason behind the non-appearance of a prominent shadow?
To complement Floris's answer, here's a quick animation showing how the shadow changes with the size of the light source. In this animation, I've forced the intensity of the light to vary inversely with the surface area, so the total power output is constant ($P \approx 62.83 \, \mathrm{W}$). This is why the object (Suzanne) doesn't appear to get any brighter or darker, but the shadow sharpness does change. In this scene, the spherical lamp is the only light source, except for the dim ambient lighting. This makes the shadows very easily visible. In a real-world scenario with other light sources (windows, for example), the effect would be less pronounced because the shadows would be more washed out. A second animation shows the scenario Floris described, with a rotating long object.
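The underlying geometry is just similar triangles: the penumbra (the fuzzy edge of the shadow) has a width of roughly the source size scaled by the ratio of the object-to-screen and source-to-object distances. A sketch with illustrative distances of my own:

```python
def penumbra_width(source_size, d_source_obj, d_obj_screen):
    """Fuzzy-edge width from similar triangles, for an extended source."""
    return source_size * d_obj_screen / d_source_obj

for s in [0.01, 0.1, 0.5]:   # source diameters in m (assumed)
    w = penumbra_width(s, d_source_obj=2.0, d_obj_screen=1.0)
    print(f"source {s*100:>4.0f} cm -> penumbra {w*100:4.1f} cm")
```

A long fluorescent tube acts like a very large source along its length, so the penumbra is wide and the shadow edge washes out, which is exactly what the animations show.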
{ "source": [ "https://physics.stackexchange.com/questions/161406", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/28197/" ] }
161,409
I was recently reviewing geometric optics, during which I read about Huygens' Principle and how it could be used to prove the Law of Reflection from the "light is a wave" viewpoint. I'll quote what I read in the book for Huygens' Principle (from Serway and Beichner, Physics for Scientists and Engineers with Modern Physics, 5th Ed., Vol. 2, pg. 1119): "In Huygens' construction, all points on a given wave front are taken as point sources for the production of spherical secondary waves, called wavelets, which propagate outward through a medium with speeds characteristic of waves in that medium. After some time has elapsed, the new position of the wave front is the surface tangent to the wavelets." The authors go on to show that using this principle for a plane wave shows that at any later time a plane wave is still a plane wave. Likewise, Huygens' Principle can be used to show that a spherical wave front remains a spherical wave front at any later time (again assuming the light is not interacting with any surface). (See, e.g., here for a couple of figures.) Next, the authors apply Huygens' Principle to prove the Law of Reflection. However, they don't actually draw the wavelets, so I tried it myself (and I failed the first couple of times). What I've learned by thinking about this for a while boils down to this: to understand reflection via Huygens' Principle, the wavelets you are to use for drawing the tangent wave front must all originate during the reflection process (see Figure 1 below). This seems quite reasonable to me. If I want to know what happens after a reflection, I should use wavelets that form during reflection. However, one thing that feels strange to me is that these secondary wavelets are all formed at different times, and then I'm using information from all of them at the same time to draw a reflected wave front. If, during the reflection process, it's ok to use wavelets that form at different times, then when is it ever not ok to do that? For example, if I try the same trick (of drawing secondary wavelets that originate at different times and then using all of those wavelets at the same time to find a new wave front) with a plane wave, I seem to get a result that makes sense (see Figure 2 below), even though the way I'm using Huygens' Principle in this case does not make sense (I have to have knowledge of how a plane wave propagates to make Figure 2, but the point of Huygens' Principle is to show us how the wave propagates in the first place). With these examples in mind, if I naively start drawing wavelets that originate at different times (some before and some during reflection), I can get the following figure, which doesn't show me how reflection works. Why is that? I must be missing something fundamental about Huygens' Principle, or maybe it's a problem related to the $180^\circ$ phase change that occurs during reflection (so that wavelets that formed before reflection are out of phase with wavelets that form during reflection)?
{ "source": [ "https://physics.stackexchange.com/questions/161409", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/40483/" ] }