208,367
I want to compute the Inverse Fourier transform of the following function (it appears as a certain correlation function in a physical model I am interested in): $$ \widetilde{f}(\omega) = \frac{2i}{\omega + \sqrt{\omega^2 - 1}} $$ $$ f(t) = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi} e^{-i \omega t} \widetilde{f}(\omega) = \,? $$ Can this be done explicitly? If yes, how does one do the integral and what is the answer? If it cannot be done explicitly, then how can one derive the fact that $f(t)$ will behave as $e^{\pm i t} t^{-3/2}$ for large $t$? This appears in Equation 11 of http://arxiv.org/abs/0801.3657 , "Matrix Models for Black Hole Thermalization" by Iizuka and Polchinski.
The symmetry of the Sun has got very little to do with any symmetry in its formation. The Sun has had plenty of time to reach an equilibrium between its self-gravity and its internal pressure gradient. Any departure from symmetry would imply a difference in pressure in regions at a similar radius but different polar or azimuthal angles. The resultant pressure gradient would trigger fluid flows that would erase the asymmetry. Possible sources of asymmetry in stars could include rapid rotation or the presence of a binary companion, both of which break the symmetry of the effective gravitational potential, even if the star were spherically symmetric. The Sun has neither of these (the centrifugal acceleration at the equator is only about 20 millionths of the surface gravity, and Jupiter is too small and far away to have an effect) and simply relaxes to an almost spherically symmetric configuration. The relationship between oblateness/ellipticity and rotation rate is treated in some detail here for a uniform-density, self-gravitating spheroid, and the following analytic approximation is obtained for the ratio of equatorial to polar radius $$ \frac{r_e}{r_p} = \frac{1 + \epsilon/3}{1-2\epsilon/3}, $$ where $\epsilon$, the ellipticity, is related to rotation and mass as $$\epsilon = \frac{5}{4}\frac{\Omega^2 a^3}{GM}$$ and $a$ is the mean radius, $\Omega$ the angular velocity. Putting in numbers for the Sun (using the equatorial rotation period), I get $\epsilon=2.8\times10^{-5}$ and hence $r_e/r_p =1.000028$ or $r_e-r_p = \epsilon a = 19.5$ km. Thus this simple calculation gives the observed value to within a small factor, but it is obviously only an approximation because (a) the Sun does not have a uniform density and (b) it rotates differentially with latitude in its outer envelope. A final thought. The oblateness of a single star like the Sun depends on its rotation. You might ask: how typical is the (small) rotation rate of the Sun that leads to a very small oblateness?
More rapidly rotating Sun-like (and especially more massive) stars do exist; very young stars can rotate up to about 100 times faster than the Sun, leading to significant oblateness. However, Sun-like stars spin down through a magnetised wind as they get older. The spin-down rate depends on the rotation rate, and this means that single stars (or at least stars that are not in close, tidally locked binary systems) converge to a close-to-unique rotation-age relationship at ages beyond a billion years. Thus we expect (it remains to be proven, since stellar ages are hard to estimate) that all Sun-like stars with a similar age to the Sun should have similar rotation rates and similarly small oblateness.
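The numbers quoted in this answer are easy to reproduce. The sketch below plugs standard solar values into the two uniform-density formulas above; using the 24.47-day equatorial sidereal rotation period is my assumption, since the answer only says "the equatorial rotation period".

```python
# Reproduce the uniform-density oblateness estimate for the Sun.
import math

G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
M = 1.989e30       # kg, solar mass
a = 6.957e8        # m, mean solar radius
P = 24.47 * 86400  # s, equatorial sidereal rotation period (assumption)

Omega = 2 * math.pi / P
eps = (5 / 4) * Omega**2 * a**3 / (G * M)   # ellipticity
ratio = (1 + eps / 3) / (1 - 2 * eps / 3)   # r_e / r_p
dr_km = eps * a / 1e3                       # r_e - r_p in km

print(f"eps = {eps:.1e}, r_e/r_p = {ratio:.6f}, r_e - r_p = {dr_km:.1f} km")
```

This recovers $\epsilon \approx 2.8\times10^{-5}$ and $r_e - r_p \approx 19.5$ km, as stated.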
{ "source": [ "https://physics.stackexchange.com/questions/208367", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/74971/" ] }
208,516
This is joked about all the time, but... Can tin foil hats actually block anything? If they can, what frequencies? Is there any research into tin or aluminum foil and radio blocking or amplifying abilities when shaped into a hat? If they really don't do anything, what would be better? Radio blocking in a hat design that is, not a Faraday cage suit (with matching tie). Edit: people seem to be missing the tags, so I’ll clarify the questions – What frequencies (mainly on the radio range) can tin foil, when shaped into a hat, block or at least attenuate? If not, are there any frequencies that resonate/amplify within the hat? What would work better for signal blocking in this regard (to the head that is)? Yes this is a funny subject but let's actually take a look at the physics. There are related questions to the hat, but I believe they fall outside the scope here. If someone wants to ask those on the exchange sites, go ahead.
A tin foil hat can block:

- alpha rays
- electromagnetic waves, where the wavelength is short enough to not diffract around the edges (counterexample: FM radio waves), but not short enough to punch through the foil (counterexample: gamma rays)
- ultrasound
- rain
- job offers
- marriage proposals

But, then again ...
{ "source": [ "https://physics.stackexchange.com/questions/208516", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/93530/" ] }
208,522
Are white noises always Markovian? I am a bit confused about it. As white noise always has a constant power spectrum, its autocorrelation function must contain a delta function of time. Thus the correlation time of the noise vanishes. But I don't know whether they can be called Markovian.
{ "source": [ "https://physics.stackexchange.com/questions/208522", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/93536/" ] }
208,537
The question is just as the title. It's said that an electron must receive a specific amount of energy in order to jump or drop to another energy level. So how can electrons of separate atoms, which have the same energy levels (the atoms are separate, so the Pauli exclusion principle is not violated), shift their energy levels to be slightly higher or lower than each other's to create an energy band?
{ "source": [ "https://physics.stackexchange.com/questions/208537", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/41974/" ] }
209,232
My friend noticed an interference-like pattern around the table leg. However, we do know that interference patterns of sunlight produce rainbow colours. What seems to be happening here?
These are probably caused by minute, periodic variations in the diameter of the table leg, formed by drawing through a die. Any vibration in the process would end up being circumferential waves in the surface of the tube. Changes in the diameter mean changes in the slope of the surface, and thus focus the reflected light to different rings around the base of the leg. You could probably confirm this by shining a laser pointer at the leg and slowly moving it down along the leg; the reflected point on the ground will move periodically, pausing when it's moving down across a concave (along the axis) portion of the leg, and moving quickly when it's moving down along a convex portion. Where the reflected laser pauses is where a broad beam of light would be focused and brighter; where the reflection moves quickly is where the beam of light would be diffused and darker.
{ "source": [ "https://physics.stackexchange.com/questions/209232", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/93877/" ] }
209,246
My study group is debating about which are the three lowest configurations of carbon. I've been arguing that the electron has to jump to the 3s level for the configuration to be different. Others have suggested that the two valence electrons just have to change their $m$ and $s$ numbers on the 2p level. We are using Morrison's Modern Physics and having trouble settling this issue within the text. We are aware of Hund's rule, so some of the problem is about exactly what is meant by "configuration." We want to understand this problem and do the work ourselves, but we are instilling doubt in one another. Can someone clarify "configuration" and maybe suggest the general approach appropriate here?
{ "source": [ "https://physics.stackexchange.com/questions/209246", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/93886/" ] }
209,522
In kick-boxing, when a fighter's leg hits an opponent's leg, the outcome, based on Newton's 3rd law, should be the same for each fighter. It's not even important who kicked whom, as in the moment of contact the attacker should feel more or less the same as the defender. Here is a catch: in most situations different parts of the fighters' bodies collide: the attacker typically contacts the front of his leg with the defender's side. The front is harder. Is it hardness that makes the difference? Some web pages inform me that, because of the 3rd law, a fighter should make powerful but very brief hits - retracting a kicking leg before it receives a reaction. But from what I know, if you don't feel a reaction, there was no action in the first place. How is it ever possible to gain an advantage with a hit in martial arts and win?
In kick-boxing, when a fighter's leg hits an opponent's leg, the outcome, based on Newton's 3rd law, should be the same for each fighter. It's not even important who kicked whom, as in the moment of contact the attacker should feel more or less the same as the defender. You are correct. As you noted, Newton's third law does indeed say that the force on each fighter's body is the same (but in the opposite direction) at each instant in time. This guarantees not only that the forces are equal, but also that the impulses delivered to the two colliding objects are equal and opposite. Denoting the impulse imparted to objects 1 and 2 as $J_1$ and $J_2$ we therefore have \begin{align} F_1(t) &= -F_2(t) \\ \text{and} \qquad J_1 \equiv \int dt \, F_1(t) &= -\int dt \, F_2(t) \equiv -J_2 \end{align} Impulse has the same dimensions as momentum, so really what we're saying is that in a collision both objects experience equal and opposite changes in momentum over the same period of time. Some web pages inform me that, because of the 3rd law, a fighter should make powerful but very brief hits - retracting a kicking leg before it receives a reaction. But from what I know, if you don't feel a reaction, there was no action in the first place. Again, you are absolutely right. Retracting the arm or leg does not reduce the force or impulse delivered to that arm or leg during the blow. How is it ever possible to gain an advantage with a hit in martial arts and win? You actually already partially got it when you mentioned the hardness of the objects involved in the collision. The technical words for describing this are stress and strain. Stress is essentially the inter-molecular or inter-atomic forces within a solid. Strain is the deformation of the solid from its usual shape. When an arm or leg hits a nose, both experience the same impulse, but because the nose is softer, it deforms more. Once the nose tissues move too much relative to one another, the nose breaks.
The elbow, on the other hand, is made largely of calcium and can support much larger internal stress while maintaining low enough strain that the tissues don't move too much relative to one another; i.e. the elbow doesn't break. Once the collision is over the bone molecules move back to where they were before. Of course, if the stress in the bone is too large, and consequently the strain exceeds a certain amount, the bone fractures.$^{[a]}$ You can think of the difference between pushing on a nose or an elbow in terms of pushing on springs with different spring constants. Supposing we have $F = k x$, then for a given force (stress) the displacement (strain) is $x = F/k$. A low $k$ means a large strain (like the nose) while a large $k$ means less strain (like the elbow). Of course, there are also biological factors (which are fundamentally physical, of course). Certain parts of the body are simply more important than others. An elbow-skull collision does not have equal damaging effects on the owners of said elbow and skull. The impulse imparted to the elbow causes compression of the bone, which transduces the force to articulating structures such as the shoulder. The skull, on the other hand, transduces the impulse to e.g. the brain. Rattling a shoulder around may hurt, but rattling a brain around leads to unconsciousness. $[a]$: There is fascinating data regarding the stress/strain curves for bone. At low stress the bone is essentially elastic. Past a threshold, the strain is a much steeper function of stress. Then at a critical point the bone fractures. P.S. I've left out a discussion of exactly how/why body tissues are destroyed by excessive strain. In other words, why doesn't your nose just always spring back to its original shape after being deflected by an elbow? A careful description of this process on the microscopic level would make an interesting subject for another question.
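The spring analogy in this answer can be made concrete with a two-line calculation. All the numbers below are invented for illustration; they are not physiological measurements.

```python
# Toy version of the stress/strain argument: the same force F applied to
# "springs" of different stiffness k gives strain x = F / k.
# All numbers are invented for illustration.
F = 100.0                        # N, same force on both (Newton's 3rd law)
k_nose, k_elbow = 1.0e4, 1.0e6   # N/m, hypothetical spring constants

x_nose = F / k_nose              # large deformation for the soft "spring"
x_elbow = F / k_elbow            # tiny deformation for the stiff one
print(x_nose, x_elbow)           # the nose deforms 100x more than the elbow
```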
{ "source": [ "https://physics.stackexchange.com/questions/209522", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/94009/" ] }
209,648
Let's say a black hole of mass $M$ and a very compact lump of anti-matter (not a singularity) also of mass $M$ are traveling toward each other. What does an outside observer see when they meet? Will they blow themselves apart in a matter/anti-matter reaction? Or will their masses combine, never quite meeting in the infinite time dilation at the event horizon?
Whether the infalling material is matter or antimatter makes no difference. Fundamentally, the confusion probably comes from thinking of black holes as normal substances (and thus retaining the properties of whatever matter went into making them). Really, a black hole is a region of spacetime with certain properties, notably the one-way surface we call an event horizon. That's it. Whatever you envision happening on the inside of a black hole, whether it be a singularity or angels dancing on the head of a pin, is completely irrelevant. The reason spacetime is curved enough to form an event horizon is essentially due to the density of mass and energy in the area. Antimatter counts just the same as matter when it comes to mass and energy. Anti-protons have the same, positive mass as normal protons, and at a given speed they have the same, positive kinetic energy too. Even if you wanted matter and antimatter to annihilate somewhere near/inside a black hole, the resulting photons would cause no less curvature of spacetime, as all particle physics reactions conserve energy and momentum. This is related to how you could form a black hole from nothing but radiation .
{ "source": [ "https://physics.stackexchange.com/questions/209648", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/52514/" ] }
210,087
In the following online article http://www.star.le.ac.uk/~sav2/stats/a.html I see the word "bin" used, in relation to x-ray spectroscopy, both as a verb and as a noun (people both "bin" things and talk about each "bin"). I've also heard it batted about in other places. There seems to be a very good explanation of why we would "bin", but the article doesn't seem to say what this process actually means. On searching up the term I see all sorts of definitions, but mostly to do with photometrics. Rather than quote the areas of the article which I don't understand, I simply ask: given the context of spectroscopy, what does "bin" mean?
Suppose you are analysing the weights of people in the UK to see what the distribution of weights looks like. Suppose also you can measure the weight to arbitrary precision, so that no two people's weights will be exactly the same. When you're finished you plot your data on a histogram, but the trouble is that because everyone has a different weight you get a histogram that looks like (the data is entirely fictional): And this is no use to anyone. You can see there's some clustering around the average weight, but it's impossible to get any detail from the graph. Now suppose you choose groups of weights e.g. 50-55 kg, 55-60 kg, 60-65 kg, etc, and now you count the number of people whose weights fall into each group. This time your histogram is going to look like: and you can see the good old bell shape emerging. So you can see the average and the width of the distribution. The groups are called bins , and the process of assigning each data point to a bin is called binning . You choose the bin size to best suit your data. If you make the bins small you get lots of points on your histogram but you'll have lots of statistical noise. Make the bins too big and you get excellent signal to noise but too few points on your histogram to be useful. I've used weights because that's a particularly simple example, but exactly the same applies to measuring spectra. Each bin would be a range of wavelengths, and you'd measure the integrated intensity for the range. You choose the bin size to make the signal to noise as good as possible while keeping the spectral resolution within acceptable limits.
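The weight example above can be sketched in a few lines of code. The sample size, the bin edges and the fictional Gaussian weights are invented to mirror the description in the answer.

```python
# Sketch of binning: generate fictional weights, assign each to a 5 kg bin,
# and count the occupants of each bin (i.e. build a histogram by hand).
import random

random.seed(0)
weights = [random.gauss(70, 12) for _ in range(10000)]  # fictional data, kg

edges = list(range(30, 115, 5))        # bin edges: 30-35 kg, 35-40 kg, ...
counts = [0] * (len(edges) - 1)
for w in weights:
    for i in range(len(counts)):
        if edges[i] <= w < edges[i + 1]:
            counts[i] += 1
            break

peak = edges[counts.index(max(counts))]  # left edge of the fullest bin
print(peak)                              # lies next to the 70 kg mean
```

Making the bins narrower increases the number of histogram points but also the statistical noise per bin, exactly the trade-off described above.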
{ "source": [ "https://physics.stackexchange.com/questions/210087", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/89073/" ] }
210,245
Why is boron so good at absorbing neutrons? Why does it have such a large target area compared to the size of its nucleus?
It's boron-10 that is the good neutron absorber. Boron-11 has a low cross section for neutron absorption. The size of the nucleus isn't terribly relevant because neutrons are quantum objects and don't have a precise position. The incident neutron will be delocalised and some part of it will almost always overlap the nucleus. What matters is the energy of the reaction: $$ ^{10}\text{B} + n \rightarrow ^{11}\text{B} $$ and the activation energy for the reaction. I'm not sure we understand nuclear structure well enough to give a quantitative answer to this. However neutrons, like all fermions, like to be paired, and $^{10}$B has 5 neutrons while $^{11}$B has 6 neutrons. So by adding a neutron we are pairing up the neutrons and completing a neutron shell. We would expect this to be energetically favourable. This argument would apply to any nucleus with an odd number of neutrons, but $^{10}$B is a light nucleus so we expect the effect to be particularly big. The lightest such nucleus is $^{3}$He, with one neutron, and that has an even bigger neutron absorption cross section. However practical considerations rule out the use of $^{3}$He as a neutron absorber. $^{6}$Li, with three neutrons, also has a reasonably high cross section, though it is less than that of boron or helium.
{ "source": [ "https://physics.stackexchange.com/questions/210245", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/43087/" ] }
210,552
Yesterday a friend asked me what a Dirac delta function really is. I tried to explain it but eventually confused myself. It seems that a Dirac delta is defined as a function that satisfies these constraints: $$ \delta (x-x') = 0 \quad\text{if}\quad x \neq x'$$ $$ \delta (x-x') = \infty \quad\text{if}\quad x = x'$$ $$\int_{-\infty} ^{+\infty} \delta(x-x') dx = 1 $$ I have seen approximations of the Dirac delta function as an infinitely peaked Gaussian. I have also seen an interpretation of the Dirac delta function via the Fourier transform, which is widely adopted in the study of quantum theory. So what really is a Dirac delta function? Is it a function? Or is it some kind of limit for a large class of functions? I am really confused.
It is a distribution. The easiest, cleanest way to think of it is as a linear functional $\mathscr{H}\to\mathbb{R}$ on the Hilbert space $\mathscr{H}$ of functions $\mathbb{R}^N\to\mathbb{R}$ that are $\mathbf{L}^2(\mathbb{R}^N)$. Input a function $f\in\mathbf{L}^2(\mathbb{R}^N)$, and DiracDelta spits out $f(0)$. It's a manifestly linear operator. Historically it was intuitively motivated by the need for a function $\delta:\mathbb{R}^N\to\mathbb{R}$, in the everyday sense, with precisely the properties you state. It's also motivated by the Riesz Representation Theorem: a Hilbert space is complete, so every continuous linear functional can be represented as an inner product in the space: for every continuous linear functional $f:\mathbf{L}^2(\mathbb{R}^N)\to\mathbb{R}$ there exists a unique function $f_0:\mathbb{R}^N\to\mathbb{R}$ in the everyday sense such that $f(g)$ is given by $$f(g)=\langle f_0,\,g\rangle = \int_{\mathbb{R}^N} f_0(U)\,g(U)\,\mathrm{d} U\tag{1}$$ In finite dimensional vector spaces over $\mathbb{R}$ kitted with an inner product, all linear functionals are also continuous. Thus, for example, the one forms make up all the linear functionals that there are: there are no linear functionals that cannot be written as the action $X\to\omega(X)$ of a one form $\omega$ on the vectors in a finite dimensional Hilbert space. In physicist's speak: all linear functionals can be written as vectors with their indices lowered by contraction with the metric tensor $g$. In infinite dimensional spaces, the notion of continuous linear functional is strictly more precise than simply linear functional: there are always linear functionals which are not continuous, and indeed DiracDelta is one. See, for example, my answer here (as well as the MathOverflow threads linked in it) on rigged Hilbert spaces for more information.
So we can discuss distributions through this notion of rigged Hilbert space: we kit the Hilbert space with a stronger topology than simply the Hilbert norm topology so that the new space's topology is precisely strong enough, but no stronger, to deem all linear functionals to be continuous with respect to this newer, stronger topology. We speak of the "topological dual space" of all continuous linear functionals to a Hilbert space as distinct from the larger, "algebraic dual" of linear, but not needfully continuous, functionals. It is the former that a Hilbert space, by definition, is equivalent to, not the latter. The Riesz Representation Theorem in this context is then the assertion that this definition of topological self-duality is the same as the Hilbert space definition as a complete, inner product space. So there is no everyday function with the properties you state, precisely because the notion of linear functional is strictly more general than the notion of continuous linear functional in infinite dimensional Hilbert space. We can also think of distributions as sequences of everyday functions. This is the approach you allude to in thinking of DiracDelta as a sequence of ever-peakier Gaussians. This is the approach that M. J. Lighthill, "An Introduction to Fourier Analysis and Generalised Functions" uses. It's a cute little book: rigorous, highly readable but quite off-beat (it uses the nomenclature "good functions" for the Schwartz functions, which I have not seen elsewhere, for example) and a little dated. But it is still a valid conception of the notion of distribution.
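The sequence picture is easy to see numerically. The sketch below is my own illustration, with an arbitrary smooth test function: smearing $f$ against ever-narrower normalized Gaussians converges to $f(0)$, which is exactly the output of the DiracDelta functional.

```python
# Illustrate DiracDelta as a limit of normalized Gaussians:
# the integral of g_eps(x) * f(x) tends to f(0) as eps -> 0.
import math

def gaussian(x, eps):
    # normalized Gaussian of width eps
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

def smear(f, eps, L=8.0, n=100001):
    # simple Riemann sum of g_eps * f over [-L, L]
    dx = 2 * L / (n - 1)
    return dx * sum(gaussian(-L + i * dx, eps) * f(-L + i * dx) for i in range(n))

f = lambda x: math.cos(x) + x * x   # arbitrary smooth test function, f(0) = 1
for eps in (1.0, 0.1, 0.01):
    print(eps, smear(f, eps))       # approaches f(0) = 1 as eps shrinks
```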
{ "source": [ "https://physics.stackexchange.com/questions/210552", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/80878/" ] }
211,246
Is there a physical reason behind the frequency and voltage in the mains electricity? I do not want to know why exactly a certain value was chosen; I am rather interested to know why that range/order of magnitude was selected. I.e., why 50 Hz and not 50000 Hz or 0.005 Hz? For example, is 50 Hz the actual frequency at which a turbine rotates, and is it not practical to build one that rotates much faster or slower?
Why is mains frequency 50Hz and not 500 or 5? Engine efficiency, rotational stress, flicker, the skin effect, and the limitations of 19th century material engineering. 50Hz corresponds to 3000 RPM (for a two-pole generator, one revolution per cycle). That range is a convenient, efficient speed for the steam turbine engines which power most generators, and thus avoids a lot of extra gearing. 3000 RPM is fast, but doesn't put too much mechanical stress on the rotating turbine or AC generator. 500Hz would be 30,000 RPM, and at that speed your generator would likely tear itself apart. Here's what happens when you spin a CD at that speed, and for funsies at 62,000 FPS and 170,000 FPS. Why not slower? Flicker. Even at 40Hz an incandescent bulb cools slightly on each half cycle, reducing brightness and producing a noticeable flicker. Transformer and motor size is also inversely related to frequency: higher frequency means smaller transformers and motors. Finally there is the skin effect. At higher frequencies AC power tends to travel at the surface of a conductor. This reduces the effective cross-section of the conductor and increases its resistance, causing more heating and power loss. There are ways to mitigate this effect, and they're used in high-tension wires, but they are more expensive and so are avoided in home wiring. Could we do it differently today? Probably. But these standards were laid down in the late 19th century and they were convenient and economical for the electrical and material knowledge of the time. Some systems do run at an order of magnitude higher frequency than 50Hz. Many enclosed systems such as ships, computer server farms, and aircraft use 400 Hz. They have their own generator, so the transmission loss due to the higher frequency is of less consequence. At higher frequencies transformers and motors can be made smaller and lighter, of great consequence in an enclosed space. Why is mains voltage 110-240V and not 10V or 2000V? Higher voltage means lower current for the same power.
Lower current means less loss due to resistance. So you want to get your voltage as high as possible for efficient power distribution and less heating with thinner (and cheaper) wires. For this reason, power is often distributed over long distances at dozens to hundreds of kilovolts. Why isn't it lower? The power an AC supply can deliver is directly related to its voltage. AC power at 10 volts would have trouble running your higher-energy household appliances like lights, heating or a refrigerator compressor motor. At the time this was being developed, the voltage choice was a compromise among the voltages needed to run lights, motors and appliances. Why isn't it higher? Insulation and safety. High voltage AC wires need additional insulation to make them both safe to touch and to avoid interference with other wiring or radio receivers. Cost of home wiring was a major concern in the early adoption of electricity. Higher voltages would make home wiring more bulky, expensive and more dangerous.
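The skin-effect point can be quantified with the standard formula $\delta = \sqrt{\rho/(\pi f \mu)}$. The sketch below uses copper's resistivity and treats the conductor as non-magnetic (the usual assumption); the frequencies are chosen just to show the trend.

```python
# Skin depth delta = sqrt(rho / (pi * f * mu)) for copper at various
# frequencies, illustrating why higher frequency increases transmission loss.
import math

rho = 1.68e-8          # ohm * m, resistivity of copper
mu = 4e-7 * math.pi    # H/m, permeability of a non-magnetic conductor

def skin_depth(f_hz):
    return math.sqrt(rho / (math.pi * f_hz * mu))

for f_hz in (50, 400, 5000):
    print(f"{f_hz:>5} Hz: skin depth {skin_depth(f_hz) * 1e3:.2f} mm")
```

At 50 Hz the skin depth in copper is about 9 mm, comfortably larger than household wire radii; at 400 Hz it is already down to about 3 mm, which is part of why 400 Hz systems keep their transmission runs short.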
{ "source": [ "https://physics.stackexchange.com/questions/211246", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/37677/" ] }
211,857
In Chapter 2 of David Tong's QFT notes, he uses the term "c-number" without ever defining it. Here is the first place. However, it's easy to check by direct substitution that the left-hand side is simply a c-number function with the integral expression $$\Delta(x - y) = \int {{d^3p}\over{(2\pi)^3}} {1\over{2E_{\vec{p}}}}(e^{-ip \cdot (x - y)} - e^{ip \cdot (x - y)}).$$ Here is the second place, on the same page (i.e. page 37). I should mention however that the fact that $[\phi(x), \phi(y)]$ is a c-number function, rather than an operator, is a property of free fields only. My question is, what does c-number function mean?
A c-number basically means a 'classical' number: any quantity that is not a quantum operator acting on elements of the Hilbert space of states of a quantum system. It is meant to distinguish them from q-numbers, or 'quantum' numbers, which are quantum operators. See http://wikipedia.org/wiki/C-number and the reference therein.
{ "source": [ "https://physics.stackexchange.com/questions/211857", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/92265/" ] }
211,930
The Newtonian description of gravity can be formulated in terms of a potential function $\phi$ whose partial derivatives give the acceleration: $$\frac{d^2\vec{x}}{dt^2}=\vec{g}=-\vec{\nabla}\phi(x)=-\left(\frac{\partial\phi}{\partial x}\hat{x}+\frac{\partial\phi}{\partial y}\hat{y}+\frac{\partial\phi}{\partial z}\hat{z}\right)$$ However, in general relativity, we describe gravity by means of the metric. This description is radically different from the Newtonian one, and I don't see how we can recover the latter from the former. Could someone explain how we can obtain the Newtonian potential from general relativity, starting from the metric $g_{\mu\nu}$?
Since general relativity is supposed to be a theory that supersedes Newtonian gravity, one certainly expects that it can reproduce the results of Newtonian gravity. However, it is only reasonable to expect such a thing to happen in an appropriate limit. Since general relativity is able to describe a large class of situations that Newtonian gravity cannot, it is not reasonable to expect to recover a Newtonian description for arbitrary spacetimes. However, under suitable assumptions, one does recover the Newtonian description of matter. This is called taking the Newtonian limit (for obvious reasons). In fact, it was used by Einstein himself to fix the constants that appear in the Einstein Field equations (note that I will be setting $c\equiv 1$ throughout). $$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\kappa T_{\mu\nu} $$ Requiring that general relativity reproduces Newtonian gravity in the appropriate limit uniquely fixes the constant $\kappa\equiv 8\pi G$. This procedure is described in most (introductory) books on general relativity, too. Now, let us see how to obtain the Newtonian potential from the metric. Defining the Newtonian limit We first need to establish in what situation we would expect to recover the Newtonian equation of motion for a particle. First of all, it is clear that we should require that the particle under consideration moves at velocities with magnitudes far below the speed of light. In equations, this is formalized by requiring $$\frac{\mathrm{d}x^i}{\mathrm{d}\tau}\ll \frac{\mathrm{d}x^0}{\mathrm{d}\tau} \tag{1}$$ where the spacetime coordinates of the particle are $x^\mu=(x^0,x^i)$ and $\tau$ is the proper time. Secondly, we have to consider situations where the gravitational field is "not too crazy", which in any case means that it should not be changing too quickly. We will make this more precise as $$\partial_0 g_{\mu\nu}=0\tag{2}$$ i.e. the metric is stationary.
Furthermore we will require that the gravitational field is weak to ensure that we stay in the Newtonian regime. This means that the metric is "almost flat", that is: $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$ where $h_{\mu\nu}$ is a small perturbation, and $\eta_{\mu\nu}:=\operatorname{diag}(-1,1,1,1)$ is the Minkowski metric. The condition $g_{\mu\nu}g^{\nu\rho}=\delta^\rho_\mu$ implies that $g^{\mu\nu}=\eta^{\mu\nu}-h^{\mu\nu}$, to first order in $h$, where we have defined $h^{\mu\nu}:=\eta^{\mu\rho}\eta^{\nu\sigma}h_{\rho\sigma}$$^1$. This can be easily checked by "plug-and-chug". Taking the Newtonian limit Now, if we want to recover the equation of motion of a particle, we should look at the corresponding equation in general relativity. That is the geodesic equation $$\frac{\mathrm{d}^2x^\mu}{\mathrm{d}\tau^2}+\Gamma^\mu_{\nu\rho}\frac{\mathrm{d}x^\nu}{\mathrm{d}\tau}\frac{\mathrm{d}x^\rho}{\mathrm{d}\tau}=0 $$ Now, all we need to do is use our assumptions. First, we use equation $(1)$ and see that only the $00$-component of the second term contributes. 
We obtain $$\frac{\mathrm{d}^2x^\mu}{\mathrm{d}\tau^2}+\Gamma^\mu_{00}\frac{\mathrm{d}x^0}{\mathrm{d}\tau}\frac{\mathrm{d}x^0}{\mathrm{d}\tau}=0 $$ From the definition of the Christoffel symbols $$\Gamma^{\mu}_{\nu\rho}:=\frac{1}{2}g^{\mu\sigma}(\partial_{\nu}g_{\rho\sigma}+\partial_\rho g_{\nu\sigma}-\partial_\sigma g_{\nu\rho}) $$ we see that, after we use equation $(2)$, the only relevant symbols are $$\Gamma^{\mu}_{00}=-\frac{1}{2}g^{\mu\sigma}\partial_\sigma g_{00} \textrm{.}$$ Using the weak field assumption and keeping only terms to first order in $h$, we obtain from straightforward algebra that $$\Gamma^{\mu}_{00}=-\frac{1}{2}\eta^{\mu\sigma}\partial_\sigma h_{00} $$ which leaves us with the simplified geodesic equation $$\frac{\mathrm{d}^2 x^\mu}{\mathrm{d}\tau^2}=\frac{1}{2}\eta^{\mu\sigma}\partial_\sigma h_{00}\bigg(\frac{\mathrm{d}x^0}{\mathrm{d}\tau}\bigg)^2 $$ Once again using equation $(2)$ shows that the $0$-component of this equation just reads $\ddot{x}^0=0$ (where the dot denotes differentiation with respect to $\tau$), so $\dot{x}^0$ is constant, and equal to $1$ to leading order for a slow particle. We're thus left with the non-trivial, spatial components only: $$\ddot{x}^i=\frac{1}{2}\partial_i h_{00} $$ which looks suspiciously like the Newtonian equation of motion $$\ddot{x}^i=-\partial_i\phi$$ After the natural identification $h_{00}=-2\phi$, we see that they are exactly the same. Thus, we obtain $g_{00}=-1-2\phi$, and have expressed the Newtonian potential in terms of the metric. $^1$ For a quick 'n' dirty derivation, we assume an expansion of the form $g^{\mu\nu}=\eta^{\mu\nu}+\alpha \eta^{\mu\rho}\eta^{\nu\sigma}h_{\rho\sigma}+\mathcal{O}(h^2)$ (note that the multiplication by $\eta$'s is the only possible thing we can do without getting a second order term), and simply plugging it into the relationship given in the post: $$(\eta_{\mu\nu}+h_{\mu\nu})\big(\eta^{\mu\nu}+\alpha \eta^{\mu\rho}\eta^{\nu\sigma}h_{\rho\sigma}+\mathcal{O}(h^2)\big)=\delta^\mu_\rho \Leftrightarrow \alpha=-1$$
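The "plug-and-chug" check of the first-order inverse metric can also be done numerically. Here is a small sketch (my own illustration, not part of the original answer): build a random small symmetric perturbation $h_{\mu\nu}$, form $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$ and the candidate inverse $\eta^{\mu\nu}-h^{\mu\nu}$, and confirm their product deviates from the identity only at second order in $h$.

```python
# Numeric check that g^{mu nu} = eta^{mu nu} - h^{mu nu} inverts
# g_{mu nu} = eta_{mu nu} + h_{mu nu} up to terms of order h^2.
import random

# Minkowski metric diag(-1, 1, 1, 1); note eta is its own inverse
eta = [[-1 if i == j == 0 else (1 if i == j else 0) for j in range(4)] for i in range(4)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

random.seed(0)
eps = 1e-4
# small symmetric perturbation h_{mu nu}
h = [[0.0] * 4 for _ in range(4)]
for i in range(4):
    for j in range(i, 4):
        h[i][j] = h[j][i] = eps * random.uniform(-1, 1)

g_lower = [[eta[i][j] + h[i][j] for j in range(4)] for i in range(4)]
# h^{mu nu} = eta^{mu rho} eta^{nu sigma} h_{rho sigma} = (eta h eta) here
h_upper = matmul(matmul(eta, h), eta)
g_upper = [[eta[i][j] - h_upper[i][j] for j in range(4)] for i in range(4)]

prod = matmul(g_lower, g_upper)
err = max(abs(prod[i][j] - (1 if i == j else 0)) for i in range(4) for j in range(4))
print(err)  # of order eps**2 (second order), far below eps itself
```

The residual is $\mathcal{O}(h^2)$, exactly as the footnote's expansion requires.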
{ "source": [ "https://physics.stackexchange.com/questions/211930", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/79444/" ] }
212,142
The word mode pops up in many fields of physics, yet I can't remember ever encountering a simple but precise definition. After having searched fruitlessly on this site as well, an easy to find place with (one or more) good answers seems in order. Objective Ideally, answers should give an intuitive and easy-to-remember definition of what a mode is, preferably in a general context. If limitation is necessary for a detailed answer, assume a context of theoretical physics, e.g. mode expansions in quantum field theory.
In a very mathematical sense, more often than not a mode refers to an eigenvector of a linear equation. Consider the coupled springs problem (two equal masses connected to each other and to fixed walls by identical springs): $$\frac{d^2}{dt^2} \left[ \begin{array}{cc} x_1 \\ x_2 \end{array} \right] =\left[ \begin{array}{cc} - 2 \omega_0^2 & \omega_0^2 \\ \omega_0^2 & - 2 \omega_0^2 \end{array} \right] \left[ \begin{array}{cc} x_1 \\ x_2 \end{array} \right]$$ or in basis independent form $$ \frac{d^2}{dt^2}\lvert x(t) \rangle = T \lvert x(t) \rangle \, .$$ This problem is hard because the equations of motion for $x_1$ and $x_2$ are coupled. The normal modes are (up to scale factor) $$\left[ \begin{array}{cc} 1 \\ 1 \end{array} \right] \quad \text{and} \quad \left[ \begin{array}{cc} 1 \\ -1 \end{array} \right] \, .$$ These vectors are eigenvectors of $T$. Being eigenvectors, if we expand $\lvert x(t) \rangle$ and $T$ in terms of these vectors, the equations of motion uncouple. In other words: The set of normal modes is the vector basis which diagonalizes the equations of motion (i.e. diagonalizes $T$). That definition will get you pretty far. The situation is the same in quantum mechanics. The normal modes of a system come from Schrödinger's equation $$i \hbar \frac{d}{dt}\lvert \Psi(t) \rangle = \hat{H} \lvert \Psi(t) \rangle \, .$$ An eigenvector of $\hat{H}$ is a normal mode of the system, also called a stationary state or eigenstate. These normal modes have another important property: under time evolution they maintain their shape, picking up only complex prefactors $\exp[-i E t / \hbar]$ where $E$ is the mode's eigenvalue under the $\hat{H}$ operator (i.e. the mode's energy). This was the case in the classical system too. If the coupled springs system is initiated in an eigenstate of $T$ (i.e. in a normal mode), then it remains in a scaled version of that normal mode forever. In the springs case, the scale factor is $\cos(\sqrt{-\lambda}\, t)$ where $\lambda$ (which is negative here) is the eigenvalue of the mode under the $T$ operator.
From the above discussion we can form a very physical definition of "mode": A mode is a trajectory of a physical system which does not change shape as the system evolves. In other words, when a system is moving in a single mode, the positions of its parts all move with same general time dependence (e.g. sinusoidal motion with a single frequency) but may have different relative amplitudes.
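The eigenvector claim is easy to verify by hand; here is a quick numeric sketch (my own illustration, assuming the symmetric two-mass configuration with $T=\omega_0^2\bigl(\begin{smallmatrix}-2&1\\1&-2\end{smallmatrix}\bigr)$): apply $T$ to each candidate mode and confirm the result is a scalar multiple of the input.

```python
# Check that (1, 1) and (1, -1) are eigenvectors of the coupled-springs
# matrix T = omega_0^2 * [[-2, 1], [1, -2]].
w0sq = 1.0  # omega_0^2 in arbitrary units
T = [[-2 * w0sq, 1 * w0sq], [1 * w0sq, -2 * w0sq]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1]]

for v in ([1, 1], [1, -1]):
    Tv = apply(T, v)
    lam = Tv[0] / v[0]  # candidate eigenvalue
    # T v must equal lam * v componentwise for v to be an eigenvector
    assert all(abs(Tv[i] - lam * v[i]) < 1e-12 for i in range(2))
    print(v, "eigenvalue:", lam)  # -1.0 for (1,1), -3.0 for (1,-1)
```

With $\omega_0^2=1$ the eigenvalues are $-1$ and $-3$, giving oscillation frequencies $\omega_0$ and $\sqrt{3}\,\omega_0$ for the in-phase and out-of-phase modes.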
{ "source": [ "https://physics.stackexchange.com/questions/212142", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62859/" ] }
212,287
Our value for the Planck constant $h$ can be found on experiments on Earth, but how do we know that the Planck constant doesn't change throughout space, for instance it depends weakly upon the curvature of spacetime? As far as I know we have only done experiments to determine the value of $h$ close to the Earth's surface, so I don't think there's any direct evidence to suggest it should be a constant throughout all space, so I proposed the idea to my friends. One called me a nut, and said that the value for Planck constant has to be the same throughout all space otherwise the conservation of energy would be violated. He put forward the following situation: Suppose that in one region of space the Planck constant is found to be $1$ in SI units (call this Space $A$), and you have a photon of frequency $1$ Hertz. This photon then traveled to a different region in space where the value for the Planck constant is now $3$ in SI units(call this Space $B$). Hence the photon has gone from having $1$ Joule of energy to $3$ Joules of energy (by the relation $E = hf$). In his own words, a twitch of a finger here can cause an explosion in a different region in space. To this I replied: So what? As far as I know the conservation of energy argument derives from the fact that space looks the same everywhere we look, and if the Planck constant was really different in different areas of space, then that argument shouldn't hold, and conservation of energy shouldn't hold either. Even if I did want to abuse this system by creating a feedback loop where the energy got larger and larger, the point was if I ever tried to extract energy from Space $B$ back to Space $A$ it would become a small amount of energy again since the value for $h$ decreases. Another friend called me a nut, because he believed that in sending the photon from Space $A$ to Space $B$, I would violate the Second Law of Thermodynamics by decreasing the overall entropy. 
He didn't really expand on it, and (I believe) neither of us have the abilities to calculate the actual entropy change, so this point remains unresolved. Finally, another friend called me a nut, because he believed that if the value for $h$ was different in different regions of space, then all our calculations ever done in physics dealing with the stars and their brightness and so on and so forth were all incorrect, and that would be troubling. To this I replied that firstly, the changes in the value for $h$ could possibly be very small for 'normal' regions of space, and secondly that there was nothing inherently WRONG about all these other calculations we've done being wrong, in fact we know they are wrong already because we can't account for either dark matter or dark energy! So in my mind this wasn't a really good argument as to why the Planck constant couldn't be variable over space. With all that being said, I'm still fairly certain that the Planck constant is a constant over space, because it is called a constant. So is there an error in my previous arguments, or what is the conclusive reason that the Planck constant cannot be variable over space?
Take the example of a hydrogen atom. The energy levels in the hydrogen atom are given (in Gaussian units, with $m_e$ the electron mass and $e$ the electron charge) by $$ E_n = -\frac{2\pi^2 m_e e^4}{n^2 h^2}.$$ The spacing between two energy levels determines the frequency of an emitted photon when a radiative transition is made between them: for a transition from a higher level $n_2$ to a lower level $n_1$, $$ h \nu_{n_2\rightarrow n_1} = \frac{2\pi^2 m_e e^4}{h^2}\left(\frac{1}{n_1^{2}} - \frac{1}{n_2^{2}}\right). $$ Thus the frequency of the transition will be proportional to $h^{-3}$. So if $h$ changes, then the frequency of spectral lines corresponding to atomic transitions would change considerably. Now, when we look at distant galaxies we can identify spectral lines corresponding to the atomic transitions in hydrogen. As John Rennie says, these are redshifted and so this could be interpreted as a systematic spatial change in Planck's constant with distance from the Earth. However, this shift would have to be the same in all directions - since redshift appears to be very isotropic on large scales - and would thus place the Earth at the "centre of the universe", which historically has always turned out to be a very bad idea. Alternatively we could assume Planck's constant was the same everywhere in space but was changing with time - this would produce an isotropic signal. (NB: Cosmic redshift cannot be explained in this way since there are other phenomena, such as the time dilation of supernova light curves that do not depend on Planck's constant in the same way, that would be inconsistent). That is not to say that these issues are not being considered. Most effort has been focused on seeing whether the fine structure constant $\alpha$ varies as we look back in time at distant galaxies. The reason for this is, as Count Iblis correctly (in my view) points out, the quest for a variable $h$ is futile, since any phenomenon we might try to measure is actually a function of the dimensionless $\alpha$, which is proportional to $e^2/hc$.
We might claim that if either varied, it would produce a variation in $\alpha$, but this would amount to a choice of units and only the variation in $\alpha$ is fundamental. So, in the example above, the hydrogen energy levels are actually proportional to $\alpha^2/n^2$. A change in $\alpha$ can be detected because the relativistic fine structure of spectral lines - i.e the separation in energy between lines in a doublet for instance, also depends on $\alpha^2$, but is different in atomic species with different atomic numbers. There are continuing claims and counter claims (e.g. see Webb et al. 2011 ; Kraiselburd et al. 2013 ) that $\alpha$ variations of 1 part in a million or so may be present in high redshift quasars (corresponding to a fractional change of $\Delta \alpha/\alpha \sim 10^{-16}$ per year) but that the variation may have an angular dependence (i.e. a dependence on space as well as time).
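The $h^{-3}$ scaling quoted above can be made explicit with a tiny sketch (my own illustration; it uses the Gaussian-units Bohr formula with $m_e$ and $e$ set to $1$, since only the dependence on $h$ matters here):

```python
import math

def nu(h, m_e=1.0, e=1.0, n1=1, n2=2):
    # Bohr-model transition frequency from level n2 down to n1,
    # nu = (2 pi^2 m_e e^4 / h^3) * (1/n1^2 - 1/n2^2)
    return (2 * math.pi**2 * m_e * e**4 / h**3) * (1 / n1**2 - 1 / n2**2)

# a 1% larger h lowers every transition frequency by the factor 1.01**-3,
# i.e. by roughly 3%
ratio = nu(1.01) / nu(1.0)
print(ratio)
```

So even a percent-level spatial drift in $h$ would shift hydrogen lines by several percent, far larger than the precision of modern spectroscopy.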
{ "source": [ "https://physics.stackexchange.com/questions/212287", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/60080/" ] }
212,421
So, I was chatting with a friend and we noticed something that might be very, very, very stupid, but I found it at least intriguing. Consider Minkowski spacetime. The trace of a matrix $A$ can be written in terms of the Minkowski metric as $\eta^{\mu \nu} A_{\mu \nu} = \eta_{\mu \nu} A^{\mu \nu} = A^\mu_\mu$. What about the trace of the metric? Notice that $\eta^\mu_\mu$ cannot be written as $\eta_{\mu \nu} \eta^{\mu \nu}$, because this is equal to $4$, not $-2$. It seemed to us that there is some kind of divine rule that says "You shall not lower nor raise indexes of the metric", because $\eta^{\mu \nu} \eta_{\nu \alpha} = \delta^\mu_\alpha \neq \eta^\mu_\alpha$. Is the metric immune to index manipulations? Is this a notation flaw or am I being ultra-dumb?
I know this is an old and already answered question, but I thought I'd elaborate a bit on what's going on "behind the scenes." When dealing with rank-two tensors, it's sometimes better not to think of matrices at all, because that blurs the distinction between something like $A_{\mu \nu}$ and something like $A^\mu_{\ \ \nu}$ , which are very qualitatively different beasts. $A_{\mu \nu}$ is a bilinear form - a bilinear map that inputs two vectors and outputs a scalar, or equivalently, a linear map that inputs a vector and outputs a one-form. $A^\mu_{\ \ \nu}$ is a linear operator - a bilinear map that inputs a vector and a one-form and outputs a scalar, or equivalently, a linear map that inputs a vector and outputs another vector. They're very different, because for a bilinear form, the input and output live in two completely different spaces and can't be added or directly compared. (Think of it as being like a non-square matrix.) If you think back to your linear algebra course, you'll probably remember that the matrices were always considered as linear operators. And in fact, many linear algebra tools, like eigenvectors, eigenvalues, trace, determinant, etc., are only defined for linear operators. So in fact, you can't take the trace of a bilinear form $A_{\mu\nu}$ . Nor can you define its eigenvalues or eigenvectors. Sure, you can write it out as a matrix in some particular basis, pretend it was a linear operator, and then find the eigenvalues of that linear operator, but the result will be basis-dependent and therefore of no physical interest. When you say you can "use the metric to take the trace $\eta^{\mu \nu} A_{\mu \nu}$ of the bilinear form $A_{\mu \nu}$ ," what you're really doing is (a) using the metric to convert the bilinear form into a linear operator, and then (b) taking the trace of that linear operator without using the metric. 
In fact, the trace of a linear operator is a metric-independent concept, because the indices are already in the right places and ready to be contracted. (The determinant is actually a bit of a subtle case, because even though the metric $g_{\mu \nu}$ is a bilinear form, for non-Cartesian coordinate systems, the determinant $\det g$ of the metric is a valid and useful concept. It's not a tensor though, it's a "tensor density." That's another story.) Anyway, the metric's "job" is to raise and lower indices, so if it's not actually changing index heights then it should act trivially. So the one-index-up, one-index-down form of the metric is always the identity linear operator $\delta^\mu_{\ \ \nu}$ . That's true in any coordinate system, curved spacetime, whatever. That's why people almost always use the notation $\delta^\mu_{\ \ \nu}$ instead of $\eta^\mu_{\ \ \nu}$ . That's also why the two-upper-index tensor $\eta^{\mu \nu}$ is defined to be the inverse of the metric $\eta_{\mu \nu}$ . (Again, note the subtlety - the metric inputs a vector and outputs a one-form, so the inverse metric must input a one-form and output a vector.) So now the answer to your question is clear: the trace of the metric is always just $\delta^\mu_{\ \ \mu} = d$ , the number of spacetime dimensions. Again, true in any coordinate system, any metric signature, curved spacetime, what have you. The fact that the trace of the matrix representation of $\eta_{\mu \nu}$ is 2 has no physical significance. (Well, okay, fine, in Cartesian coordinates on flat spacetime it's the summed metric signature , which is physically significant. But in general it has no physical significance.)
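The two contractions discussed here can be checked mechanically; a small sketch (mine, not the answer author's) contracting the Minkowski metric with its inverse both ways:

```python
# For eta = diag(-1, 1, 1, 1):
#   eta^{mu nu} eta_{mu nu} = 4      (full double contraction)
#   eta^{mu}_{ nu} = delta^{mu}_{nu}, whose trace is also d = 4
eta = [[-1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
eta_inv = eta  # the Minkowski metric is its own inverse

double_contraction = sum(eta_inv[m][n] * eta[m][n] for m in range(4) for n in range(4))

# mixed tensor eta^{mu}_{nu} = eta^{mu sigma} eta_{sigma nu}
mixed = [[sum(eta_inv[m][s] * eta[s][n] for s in range(4)) for n in range(4)] for m in range(4)]
trace = sum(mixed[m][m] for m in range(4))

print(double_contraction, trace)  # both equal 4, and mixed is the identity
```

Note that the matrix trace of $\eta_{\mu\nu}$ itself would be $2$, which, as argued above, carries no coordinate-independent meaning.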
{ "source": [ "https://physics.stackexchange.com/questions/212421", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/30005/" ] }
212,962
I have these questions related to my question title: Do we, other species and things have our own gravity? How many Gs does a human have? Are we affecting Earth, nearby moons and planets with our gravity?
Are we increasing the gravity of Earth with our population? No, we don't increase the total mass of the Earth, because our bodies are made of things that were already on Earth (food, water, air, minerals etc). Do we, other species and things have our own gravity? Yes, everything with mass has gravity (and things without, but that's complicated). How many Gs does a human have? Not quite the right question. The total mass of humans is probably around 300 million tonnes (7 billion people × 40 kg per person). The Earth is about twenty trillion times heavier than this, so we don't add very much. Are we affecting Earth, nearby moons and planets with our gravity? Yes, you attract the Earth in the same way that the Earth attracts you - but the Earth is a lot bigger than the total mass of all the people, so you don't have much effect.
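Putting rough numbers on this (a sketch using the answer's own assumed inputs, which are estimates rather than measured values):

```python
# Back-of-the-envelope comparison of humanity's mass to the Earth's.
people = 7e9
avg_mass_kg = 40.0  # deliberately low average, allowing for children (assumed)

human_mass = people * avg_mass_kg  # 2.8e11 kg, i.e. ~300 million tonnes
earth_mass = 5.97e24               # kg (approximate)

print(human_mass)
print(earth_mass / human_mass)  # ~2e13: tens of trillions of times heavier
```

Since gravitational pull scales linearly with mass, all of humanity together perturbs Earth's gravity at the level of one part in $\sim 10^{13}$.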
{ "source": [ "https://physics.stackexchange.com/questions/212962", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/83042/" ] }
212,964
In a recent lecture we were told that $\langle x'|\hat{O}|x \rangle = O(x,x') = O(x)\delta(x-x')$ "due to the locality of quantum mechanical observables". I have no idea what this is supposed to mean; any comments, or references to a book from which this (or something similar) originates, would be much appreciated.
{ "source": [ "https://physics.stackexchange.com/questions/212964", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/95753/" ] }
212,975
We know that the potential energy is equal to the kinetic energy for any object thrown from the ground - not in a strict sense of the words. But in the specific case where I use my arms to throw a ball, is the potential energy equal to the kinetic one? I am not using any mechanical device. Does the fact that I am using mechanical energy from my arms influence the throw?
{ "source": [ "https://physics.stackexchange.com/questions/212975", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/9220/" ] }
213,594
We all know that if I punch the wall with $100\,\mathrm{N}$ force, the wall pushes me back with $100\,\mathrm{N}$ and I get hurt. But if I punch the air with $100\,\mathrm{N}$, does the air punch me back with $100\,\mathrm{N}$? I mean, I don't get $100\,\mathrm{N}$ back. I don't get hurt. Does it violate Newton's third law? I'd appreciate it if someone could answer. This question has been bothering me for more than 5 years.
The assumption that you are making here is that with the same motion of a punch, that you are applying $\text{100 N}$ of force to both a wall and to the air. However, you should think about the most fundamental equation of Newton's laws, namely, $F=ma$ The most important part of this in relation to what you are talking about is that the force applied, $F$, is proportional to the acceleration, $a$. When you hit the wall, your hand goes from full speed to a complete stop rather quickly. This is a fast deceleration, or a high value of $a$, so the value of the force is high. However, when you punch through the air, the air molecules hardly slow down your hand at all. This means that your deceleration is low, or a low value of $a$, meaning that the force, $F$ is also low. In the absence of a wall to stop your fist, what is really stopping your punch is your own body, not the air, as your arm socket will have to pull back on your arm to keep your fist from flying away. This also will slow down your arm more slowly than a wall however, since the tendons and ligaments in your arm tend to stretch, reducing the deceleration compared to the wall, and thus the force. So, what I'm trying to say here, is that yes, the forces are always equal and opposite which is in line with Newton's laws. However, your assumption that force from hitting a wall with a punch is the same as a force from swinging your fist through the air is incorrect. I hope this clears things up for you.
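A rough numeric version of this argument (my own sketch; the mass, speed, and stopping distances are illustrative guesses, not measurements): with constant deceleration, $v^2 = 2ad$ gives an average stopping force $F = mv^2/(2d)$, so shrinking the stopping distance from ~50 cm (arm decelerating after a swing) to ~1 cm (fist meeting a wall) multiplies the force by ~50.

```python
# Same kinetic energy either way; only the stopping distance d differs.
m = 0.5  # kg, effective mass of fist + forearm (assumed)
v = 7.0  # m/s, punch speed (assumed)

def stopping_force(d):
    a = v**2 / (2 * d)  # constant-deceleration estimate from v^2 = 2 a d
    return m * a

f_wall = stopping_force(0.01)  # wall: stops over ~1 cm
f_air = stopping_force(0.50)   # air/arm: decelerates over ~50 cm

print(f_wall, f_air)  # ~1225 N vs ~24.5 N
```

The same punch, the same energy - but the wall's tiny stopping distance is what turns it into a large (and equal-and-opposite) force.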
{ "source": [ "https://physics.stackexchange.com/questions/213594", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/96103/" ] }
214,218
I have been told that metals are good reflectors because they are good conductors. Since an electric field in conductors cause the electrons to move until they cancel out the field, there really can't be electric fields in a conductor. Since light is an EM wave, the wave cannot enter the conductor, and it's energy is conserved by being reflected (I'm guessing in a similar fashion to a mechanical wave being reflected when it reaches a medium it cannot travel through, like a wave on a rope tied to a wall for example). I imagine then that more conductive materials are better reflectors of light. Wouldn't a perfect conductor then, such as a superconductor, be a perfect reflector of light? (Or reach some sort of reflective limit?)
Yes and no. Below the superconducting gap a superconductor is a near-perfect reflector, and superconductivity has its say in it. Reflectivity at normal incidence is given by the equation $$ R = \left| \frac{1-\sqrt{\varepsilon}}{1+\sqrt{\varepsilon}} \right|^2 $$ where $\varepsilon$ is the complex-valued frequency-dependent dielectric function of the reflective material. Let's look at the dielectric function of a superconductor above and below the superconducting transition temperature: This is a plot of the real part of the optical conductivity (in arbitrary units) in the normal state (blue) and the superconducting state (orange). The relationship between the real part of the optical conductivity and the imaginary part of the dielectric function is given by $\varepsilon_0 \mathrm{Im}(\varepsilon) \omega = \mathrm{Re}(\sigma)$. The area under the curve must be conserved, therefore the missing part of the area is hidden in a delta-function at zero frequency (we must take it into account to perform a Kramers-Kronig transformation properly). This is important, because the delta function in the conductivity (that's the manifestation of dissipationless dc current!) leads to a $-a/\omega^2$ term in the real part of the dielectric function. Values of the dielectric function that are large in magnitude give a high reflection coefficient. The other part of the dielectric function is $\mathrm{Re}(\varepsilon)$ and is obtained by doing a Kramers-Kronig transformation: Now this can be plugged into the expression for reflectivity: As you can see, and this is due to the vastly negative real part of the dielectric function, reflectivity below the gap is near 100%. EDIT - actually, since the real part of $\varepsilon$ is negative, and within the superconducting gap the imaginary part is exactly zero (with caveats, such as s-wave vs. d-wave superconductors) $R$ would be exactly 100%. Above the gap the reflectivity is actually slightly worse.
Now, because the superconducting gap lies at energies far lower than those of visible light, the reflectivity for visible light is barely affected. As superconductors are often bad conductors in their normal state, their visible-light reflectivity leaves much to be desired. Stick to silver.
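The reflectivity formula from this answer is easy to evaluate directly; a small sketch (the sample $\varepsilon$ values are illustrative, not fitted to any real material):

```python
# R = |(1 - sqrt(eps)) / (1 + sqrt(eps))|^2 for complex dielectric function eps.
# A large negative real eps with zero imaginary part (the in-gap
# superconducting case) gives R = 1; adding absorption pulls R below 1.
import cmath

def reflectivity(eps):
    s = cmath.sqrt(eps)
    return abs((1 - s) / (1 + s)) ** 2

print(reflectivity(-1e4 + 0j))    # ~1.0: purely negative real eps, perfect mirror
print(reflectivity(-1e4 + 100j))  # slightly below 1 once dissipation is allowed
print(reflectivity(2.25 + 0j))    # ~0.04: ordinary transparent dielectric, n = 1.5
```

For negative real $\varepsilon$, $\sqrt{\varepsilon}$ is purely imaginary, so numerator and denominator have equal magnitude and $R=1$ exactly - the in-gap result stated in the EDIT above.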
{ "source": [ "https://physics.stackexchange.com/questions/214218", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/96021/" ] }
214,485
Sound is propagated by waves. Waves can interfere. Suppose there are two tenors standing next to each other and each singing a continuous middle-C. Will it be the case that some people in the audience cannot hear them because of interference? Would it make a difference if they were two sopranos or two basses and singing a number of octaves higher or lower? How does this generalize to an array of n singers? Given a whole choir, to what extent are their voices less than simply additive because of this? Is it possible that, for some unfortunate member of the audience, the choir appears to be completely silent--if only for a moment?
The main issue in the setting of an orchestra or choir is the fact that no two voices or instruments maintain exactly the same pitch for any length of time. If you have two pure sine wave sources that differ by just one Hertz, then the interference pattern between them will shift over time - in fact at any given point you will hear a cycle of constructive and destructive interference which we recognize as beats, but the exact time when each member of the audience will hear the greatest or least intensity will vary with their position. Next let's look at the angular distribution of the signal. If two tenors are singing a D3 of 147 Hz (near the bottom of their range) the wavelength of the sound is 2 m: if they stand closer together than 1 m there will be no opportunity to create a 180 degree phase shift anywhere. If they sing near the top of their range, the pitch is closer to 600 Hz and the wavelength 0.5 m. But whatever interference pattern they generate, a tiny shift in frequency would be sufficient to move the pattern - so no stationary observer would experience a "silent" interference - even of the fundamental frequency. Enter vibrato : most singers and instruments deliberately modulate their frequency slightly - this makes the note sound more appealing and allows them to make micro corrections to the pitch. It also makes the voice stand out more against a background of instruments and tends to allow it to project better (louder for less effort on the part of the singer). This is used by soloists but more rarely by good choirs - because in the choir you want to blend voices, not have them stand out. At any rate, the general concept here is incoherence : the different sources of sound in a choir or orchestra are incoherent, meaning that they do not maintain a fixed phase relationship over time. And this means they do not produce a stationary interference pattern.
A side effect of interference is seen in the volume of a choir: if you add the amplitudes of two sound sources that are perfectly in phase, your amplitude doubles and the energy / intensity quadruples. A 32-man choir would be over 1000 times louder than a solo voice - and this would be achieved in part because the voices could only be heard "right in front" of the choir (perfectly coherent voices would act like a phased array ). But since the voices are incoherent, there is no focusing, no amplification, and they can be heard everywhere. Note that incoherence is a function of phase and frequency - every note is a mix of frequencies, and although a steady note will in principle contain just a fundamental and its harmonics, their exact relationship is very complicated. Even if you took a single singer's voice, and put it into two speakers with a delay line feeding one of the speakers, I believe you would still not find interference because of the fluctuations in pitch over even a short time. Instead, your ear would perceive this as two people singing. And finally - because a voice (or an instrument) is such a complex mix of frequencies, there is in general no geometric arrangement of sources and receiver in which all frequencies would interfere destructively at the same time. And the ear is such a complex instrument that it will actually "synthesize" missing components in a perceived note - leading to the strange phenomenon where for certain instruments, the perceived pitch corresponds to a frequency that is not present - as is the case with a bell, for example.
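The beat argument at the start of this answer can be made concrete with a tiny sketch (my illustration; 440 Hz and 441 Hz are arbitrary example pitches): two unit-amplitude tones one hertz apart have the envelope $2\lvert\cos(\pi\,\Delta f\,t)\rvert$, so any fixed listener cycles between fully constructive and fully destructive interference once per second rather than sitting at a permanent null.

```python
import math

def combined_amplitude(t, f1=440.0, f2=441.0):
    # sin(2 pi f1 t) + sin(2 pi f2 t) has the slow envelope
    # 2 |cos(pi (f2 - f1) t)| riding on the fast average tone
    return 2 * abs(math.cos(math.pi * (f2 - f1) * t))

print(combined_amplitude(0.0))  # 2.0 -> fully constructive
print(combined_amplitude(0.5))  # ~0  -> momentary cancellation half a second later
print(combined_amplitude(1.0))  # 2.0 -> constructive again: beats, not silence
```

A one-hertz detuning - far smaller than any two singers can hold - is enough to sweep the whole pattern past every seat once per second.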
{ "source": [ "https://physics.stackexchange.com/questions/214485", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/85871/" ] }
214,950
I'm a complete layperson. As I understand, dark matter theoretically only interacts with the gravitational force, and doesn't interact with the other three fundamental forces: weak nuclear force, strong nuclear force, and electromagnetism. Those are my understandings going in. If I'm wrong, please correct me. I've done some googling, and I haven't found anything confirming or denying that dark matter is affected by either of the fundamental nuclear forces. So since dark matter only interacts with gravity, what causes any dark matter particle to be repelled from another? If they can pass freely through each other, and they are gravitationally attracted to each other, why don't such particles clump together in a single 'point' in space? It seems to me that particles occupying a single 'space' are philosophically not distinct particles, but I don't know how actual physics would play into this. Edit: This article (author's credentials unknown, but implicitly claiming to be a physicist or astronomer) says "...[P]hysicists generally take all dark matter to be composed of a single type of particle that essentially interacts only through gravity." Edit 2: The author is Lisa Randall, "Professor of Science on the physics faculty of Harvard University."
Great question. Observations show that Dark Matter (DM) only noticeably interacts gravitationally, although it's possible that it may interact in other ways "weakly" (e.g. in the 'WIMP' model --- linked) . Everything following has no dependence on whether DM interacts purely/only gravitationally, or just predominantly gravitationally --- so I'll treat it as the former case for convenience. Observable matter in the universe ' clumps ' together tremendously: in gas clouds, stars, planets, disks, galaxies, etc. It does this because of electromagnetic (EM) interactions which are able to dissipate energy. If you roll a ball along a flat surface it will slow down and eventually stop (effectively 'clumping' to the ground), because dissipative forces (friction) are able to transfer its kinetic energy away. On the other hand, imagine you drill a perfect hole, straight through the center of the Earth, and you drop a ball down it. ( Assuming the hole and the earth are perfectly symmetrical... ) the ball will just continually oscillate back and forth from each side of the earth to the other --- because of conservation of energy. Just like a frictionless pendulum (no rubbing, no air resistance). This is how dark matter interacts, purely gravitationally. Even if there was no hole through the center of the earth, the DM will just pass straight through and continue to oscillate back and forth, always reaching the same initial height. To zeroth order, dark matter can only 'clump' as much as its initial energy ( obtained soon after the big-bang ) allows. One example of such a 'clump' is a 'Dark Matter Halo' in which galaxies are embedded. DM Halos are (effectively) always larger than the normal (baryonic) matter inside them --- because the normal matter is able to dissipate energy and collapse farther.
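The oscillation picture can be put in numbers; a quick sketch (it assumes a uniform-density Earth, which is only an approximation): inside such a body the gravitational pull is linear in radius, $F=-(GMm/R^3)\,r$, so a dark matter particle dropped from rest at the surface executes simple harmonic motion with period $2\pi\sqrt{R^3/GM}$ - it just keeps returning to its starting height, as energy conservation demands.

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2
M = 5.97e24    # kg, Earth mass (approximate)
R = 6.371e6    # m, Earth radius (approximate)

# period of oscillation straight through a uniform-density Earth
T = 2 * math.pi * math.sqrt(R**3 / (G * M))
print(T / 60)  # ~84 minutes per full back-and-forth oscillation
```

With nothing to dissipate its energy, the particle repeats this roughly 84-minute oscillation indefinitely instead of settling at the center - which is exactly why purely gravitating dark matter stays spread out in extended halos.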
{ "source": [ "https://physics.stackexchange.com/questions/214950", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/84975/" ] }
214,953
Ex: A wooden block is lying on a table. I am told that because the block is still, the microscopic surface irregularities form more complicated interlocking structures. Is it because the force of the block on the table deforms the molecular structure of the table and wood to eventually reach an equilibrium state? This state then is more connected and harder to change than a block simply gliding over the table? Also, how fast does this happen?
{ "source": [ "https://physics.stackexchange.com/questions/214953", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/73458/" ] }
215,323
I've just bought this case for my mobile phone. It has three credit card slots, which isn't strange at all, but in combination with a strong magnet closure, it almost seems a bit stupid, because, who would want their credit cards demagnetized? However, the manufacturer states that it has an Extra strong magnetic closure which will not demagnetize your credit cards I find this interesting. From what I've heard, all kinds of magnets (though, not for example the magnetic field caused by the phone itself) demagnetize credit cards, so I wonder what is so special with this magnet, and how it can even be extra strong .
Sounds like a Halbach Array - a specific configuration that maximizes flux on one side and minimizes it on the other.
{ "source": [ "https://physics.stackexchange.com/questions/215323", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/96963/" ] }
215,353
I am trying to figure out how undefined formulas in mathematics relate to physics. Take the following formula for terminal velocity. $$V_\text{terminal} = \sqrt{ mg \over{c \rho A}} $$ Say we have an air density of 0; $ \rho = 0 $ (a vacuum). Logic tells me the particle would continue to accelerate and never reach terminal velocity, but in mathematics this formula would be undefined. Obviously this is one of many examples of what can happen in physics problems, but what does undefined actually mean in terms of physics? I hope I am explaining myself clearly.
Yes, the particle would continue to accelerate and would never reach a terminal velocity. But that is not what this equation tells you. This equation tells you what the terminal velocity is, given the parameters of the function. When in a vacuum, there is no terminal velocity. It is not zero, it is not infinity. A terminal velocity literally does not exist and that is exactly what the equation tells you.
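To make the "not zero, not infinity, simply nonexistent" point concrete, here is a small Python sketch of the formula from the question. The guard clause and the sample numbers are illustrative assumptions, not part of the original:

```python
import math

def terminal_velocity(m, g, c, rho, A):
    """Terminal velocity sqrt(m*g / (c*rho*A)).

    Returns None when the denominator vanishes: in a vacuum no terminal
    velocity exists -- the formula is undefined, not zero or infinite.
    """
    denom = c * rho * A
    if denom <= 0:
        return None
    return math.sqrt(m * g / denom)

print(terminal_velocity(80, 9.81, 0.5, 1.225, 0.7))  # skydiver-ish numbers: ~42.8 m/s
print(terminal_velocity(80, 9.81, 0.5, 0.0, 0.7))    # vacuum: None
```

Returning `None` (rather than 0 or `float('inf')`) mirrors the physics: the quantity simply has no value in a vacuum.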
{ "source": [ "https://physics.stackexchange.com/questions/215353", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/94702/" ] }
215,747
I am a student of class 9. When I was going through magnetism and read that the Earth is a magnet, I got some doubts. My question is: is the Earth really a magnet? Does anyone have any proof that the Earth is a magnet? Is there a magnetic core at the center of the Earth? Has anyone reached the core of the Earth?
Earth has a magnetic field. You can verify this yourself; it is why a compass works. Just take any magnet and hang it carefully from a string. As long as there's nothing else magnetic around and it's well-balanced and free to rotate, it will line up with Earth's magnetic field. We have measured the Earth's magnetic field all over the surface and up into outer space using satellites. The magnetic field is fairly weak; on the surface of Earth it is about a hundred times weaker than a simple refrigerator magnet, which is why we don't notice it often in daily life. No one has reached the core of the Earth; our knowledge about it is inferred using physics, mathematics, and geology. Whether or not Earth "is a magnet" is a semantic issue, but the existence of the magnetic field is not in doubt. This magnetic field is important to life on Earth because it deflects a lot of the harmful radiation that reaches Earth through space. It is also responsible for the auroras that appear near Earth's poles. The magnetic field is not caused by a part of the Earth being magnetized like a refrigerator magnet. Instead, it is caused by the motion of liquid metal inside the Earth, which causes currents that generate a magnetic field. The metal moves because the Earth is different temperatures at different spots, because of gravitational forces, and because of Earth's rotation. This phenomenon is called "convection" and you can see it when you boil a pot of water. The physics behind generation of Earth's magnetic field is called "magnetohydrodynamics". The equations involved are very complicated and difficult to solve, but there is little doubt about the fundamental mechanism. By examining old rocks, we know that Earth's magnetic field periodically switches directions. We can write computer programs that simulate Earth's magnetic field, but there is still uncertainty about details such as when the next reversal will be or how long the field will last.
{ "source": [ "https://physics.stackexchange.com/questions/215747", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/96173/" ] }
215,809
In my sophomore year of high school, my P.E. teachers kept on complaining about how phones didn't have a network connection in our gym, regardless of model, service provider, etc. A couple of feet outside the gym, cellular reception was crystal clear, however, as you moved your phone towards the gym wall, it rapidly diminished in strength. A couple of months after taking AP Physics C, my mind randomly drifted back to this event, and it came to me that this is what one would expect to happen if the gym were a giant Faraday cage. Is this even possible or likely considering usage of fairly standard building materials and structural design? Here's an image of the building (it's the one in the foreground) if that helps.
In general the answer is "yes it is possible" - but in your case the answer is "that is not a Faraday cage". Radio waves are (partially) reflected by any discontinuity in dielectric constant of the medium they propagate through. The ones that propagate (through walls etc) will also experience attenuation. A faraday cage is a continuous conducting structure with no openings that are "large compared to the wavelengths of interest". Your building has windows that are much larger than that. The wavelength of a cell phone signal (typical frequency 1800 or 1900 MHz so around 15 cm) is small compared to windows and signal would penetrate - meaning that it is not a Faraday cage. On the other hand walls do provide significant attenuation depending on the material - and waves that have to diffract through the window would also be much weaker when they got to you. If the gym was sufficiently far from the nearest cell tower it is easy to get a "dead spot" in reception. For reference, according to this link a concrete wall provides 10 to 15 dB of attenuation - which may be enough to drop the signal from "OK" to "not OK", depending on the signal strength outside.
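The two quantitative claims above (a roughly 15 cm wavelength at cell frequencies and a 10-15 dB wall loss) are easy to check with a few lines of Python; this is just a sketch of the arithmetic:

```python
c = 2.998e8  # speed of light, m/s

def wavelength(freq_hz):
    """Free-space wavelength of an electromagnetic wave."""
    return c / freq_hz

def db_to_power_ratio(att_db):
    """Fraction of power that survives an attenuation of att_db decibels."""
    return 10 ** (-att_db / 10)

print(f"1900 MHz wavelength: {wavelength(1.9e9) * 100:.1f} cm")        # ~15.8 cm
print(f"15 dB wall: {db_to_power_ratio(15):.1%} of the power remains")  # ~3.2%
```

A 15 dB loss leaving only about 3% of the power explains why a marginal outdoor signal can drop to "no bars" indoors.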
{ "source": [ "https://physics.stackexchange.com/questions/215809", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/60472/" ] }
215,918
Wikipedia says that the Earth's orbit's semi-major axis is $a=149\ 598\ 000\ \mathrm{km}$ and its eccentricity is $e=0.016\ 7086$, but if we use these values to find the distance at aphelion and perihelion we get $A = 149\ 598\ 000 \times (1+e) = 152\ 097\ 753$, and $P = 149\ 598\ 000 \times (1-e) = 147\ 098\ 426$, whereas wiki says respectively: $A = 151\ 930\ 000$ and $P = 147\ 095\ 000$. The reported figures are different and, also, the ratio of the figures is greater at A: 0.0166 than at P: 0.000023 Can you, please, explain this difference? What are the exact (current) values?
I generally regard NASA as authoritative, and they report the orbital parameters on their Earth Fact Sheet . I note that they disagree with Wikipedia about the aphelion, though they agree on the perihelion, semi-major axis and eccentricity:

              NASA      Wikipedia
Aphelion      152.10    151.93
Perihelion    147.09    147.095
Semi-major    149.60    149.598
Eccentricity  0.0167    0.0167086

Since the NASA figures are consistent I assume there is an error on the Wikipedia page.
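As a sanity check, the apsides implied by Wikipedia's own semi-major axis and eccentricity can be recomputed directly (a Python sketch using the formulas $A = a(1+e)$ and $P = a(1-e)$ from the question):

```python
def apsides(a_km, e):
    """Aphelion and perihelion of an ellipse with semi-major axis a_km and eccentricity e."""
    return a_km * (1 + e), a_km * (1 - e)

a, e = 149_598_000, 0.0167086   # Wikipedia's values, km
aphelion, perihelion = apsides(a, e)
print(f"aphelion   = {aphelion:,.0f} km")    # ~152.10 million km, matching NASA
print(f"perihelion = {perihelion:,.0f} km")  # ~147.10 million km
```

The computed aphelion agrees with NASA's 152.10 value rather than Wikipedia's quoted 151.93, consistent with the answer's conclusion.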
{ "source": [ "https://physics.stackexchange.com/questions/215918", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
215,964
If we cut a magnet, the magnetic field develops. But if we cut it vertically, then will the magnetic field develop?
{ "source": [ "https://physics.stackexchange.com/questions/215964", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/96173/" ] }
215,987
I am wondering how we are able to see so many stars in the sky even though they are light years away from us. Light takes years to reach us from those stars, so how are we seeing them?
{ "source": [ "https://physics.stackexchange.com/questions/215987", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/97268/" ] }
216,276
If a blackbody has a temperature such that its peak frequency was well within our audible range, for example $1\ \mathrm{kHz}$ , what would that sound like if we used Planck's law to plot its spectral curve in the frequency domain and performed a transformation (like an inverse FFT) to obtain a waveform? Planck's law tells us the peak wavelength and electromagnetic spectral emission curve of an ideal radiator at some temperature $T$ . If we know the peak frequency, $f$ , then we can work backwards to figure its peak wavelength, $\lambda$ . For our example, if $f=1\ \mathrm{kHz}$ , then $\lambda\approx170\,471\ \mathrm m$ , so $T\approx17\ \mathrm{nK}$ . If we plot the power spectral density of a 17 nanokelvin blackbody as a function of frequency and performed an inverse FFT on that curve, what would the resulting waveform sound like? According to this Wikipedia article , blackbody radiation is just thermal noise (Johnson–Nyquist noise); if that's what I'm looking for, what does it sound like? Just to clarify, I'm looking for a waveform, maybe a WAV file, rather than a verbal description.
This problem can be solved with noise-shaping. Since the shape of the spectrum is known, it can be used as a base for the power spectral density: $$ P(f,T)=\frac{ 2 h f^3}{c^2} \frac{1}{e^\frac{h f}{k_\mathrm{B}T} - 1} $$ where $k_\mathrm{B}$ is the Boltzmann constant, $h$ is the Planck constant, and $c$ is the speed of light. This outputs the relative power of each band as a continuous function of frequency, $f$, and temperature, $T$. Since the output quantity must be expressed in decibels (dBr) to be meaningful for audio, we simply use a log scale and add an offset (a gain ) to normalize the peak to 0. The equation of the EQ curve is: $ E(f,T) = 10 \log{ P(f,T) } + G_{t}(T) $ where $G_{t}(T)$ is the gain required to normalize the peak to 0 dB. The required gain depends on the inverse cube of the temperature plus a constant, $G$ (187 dB); thus, $ G_{t}(T) = G - 10 \log T^3 $. The leading coefficient $10$ converts bels to decibels. Simplifying gives us: $$ E(f,T) = 10 \log{ \left( \frac{ 2 h f^3}{c^2 T^3} \frac{1}{e^\frac{h f}{k_\mathrm{B}T} - 1} \right)} + G $$ tl;dr: We obtain our waveform by applying an EQ to gaussian white noise from AudioCheck.net . Examples: 17 nanokelvins is the temperature at which black noise has a peak frequency of 1 KHz. Its passband is limited to 1 Hz to 12 KHz. 30 nanokelvins is the lowest temperature at which black noise has a passband that spans the entire hearing range . 55 nanokelvins is the temperature at which black noise has a peak frequency of approximately 3 KHz , the most sensitive frequency of human ears. 340 nanokelvins is the temperature at which black noise has a peak frequency of just under 20 KHz , which is the limit of human hearing. Most of the audible spectrum is a linear upward ramp, which is very similar to violet noise . At higher temperatures, the frequency domain will be almost identical to violet noise. All EQ filter parameters are in the descriptions of the tracks on SoundCloud.
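The EQ curve above is straightforward to evaluate numerically. A Python sketch, using standard physical constants and the 187 dB normalisation gain quoted in the answer (the Wien-law factor 2.821 for the frequency-domain peak is an added assumption, not stated above):

```python
import math

h  = 6.626e-34   # Planck constant, J s
kB = 1.381e-23   # Boltzmann constant, J/K
c  = 2.998e8     # speed of light, m/s
G  = 187.0       # normalisation gain in dB, as quoted in the answer

def eq_curve_db(f, T):
    """EQ curve E(f, T) from the answer: relative level in dB of each band."""
    psd = (2 * h * f**3) / (c**2 * T**3) / math.expm1(h * f / (kB * T))
    return 10 * math.log10(psd) + G

T = 17e-9                          # 17 nanokelvin
f_peak = 2.821 * kB * T / h        # Wien displacement law, frequency form
print(f"peak frequency: {f_peak:.0f} Hz")                  # ~1 kHz, as claimed
print(f"level at the peak: {eq_curve_db(f_peak, T):.1f} dB")  # ~0 dB
```

That the curve evaluates to roughly 0 dB at the peak confirms the answer's stated $G \approx 187$ dB normalisation constant.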
{ "source": [ "https://physics.stackexchange.com/questions/216276", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/21248/" ] }
216,629
If stars are formed by the collapse of dust clouds under gravity, how is the pressure of the dust cloud overcome? As more material gathers together, gravity will increase, but pressure will also increase. If I am not mistaken, both will increase as the volume shrinks, but gravity as a function of the square of the radius of the gas cloud, and pressure as a function of the cube of its radius. By this reasoning, we would not expect gravity to be able to overcome the hydrostatic pressure, and compress material together sufficiently to form a star. What then, is the accepted explanation that allows stars to form under gravity, from dust clouds?
The answer lies in something called the virial theorem and the fact that the contraction of a gas cloud/protostar is not adiabatic - heat is radiated away as it shrinks. You are correct, a cloud that is in equilibrium will have a relationship between the temperature and pressure in its interior and the gravitational "weight" pressing inwards. This relationship is encapsulated in the virial theorem, which says (ignoring complications like rotation and magnetic fields) that twice the summed kinetic energy of particles ( $K$ ) in the gas plus the (negative) gravitational potential energy ( $\Omega$ ) equals zero. $$ 2K + \Omega = 0$$ Now you can write down the total energy of the cloud as $$ E_{tot} = K + \Omega$$ and hence from the virial theorem that $$E_{tot} = \frac{\Omega}{2},$$ which is negative. If we now remove energy from the system, by allowing the gas to radiate away energy, such that $\Delta E_{tot}$ is negative , then we see that $$\Delta E_{tot} = \frac{1}{2} \Delta \Omega$$ So $\Omega$ becomes more negative - which is another way of saying that the star is attaining a more collapsed configuration. Oddly, at the same time, we can use the virial theorem to see that $$ \Delta K = -\frac{1}{2} \Delta \Omega = -\Delta E_{tot}$$ is positive . i.e. the kinetic energies of particles in the gas (and hence their temperatures) actually become hotter. In other words, the gas has a negative heat capacity. Because the temperatures and densities are becoming higher, the interior pressure increases and may be able to support a more condensed configuration. However, if the radiative losses continue, then so does the collapse. This process is ultimately arrested in a star by the onset of nuclear fusion which supplies the energy that is lost radiatively at the surface (or through neutrinos from the interior). So the key point is that collapse inevitably proceeds if energy escapes from the protostar. But warm gas radiates. 
The efficiency with which it does so varies with temperature and composition, and the radiation occurs predominantly in the infrared and sub-mm parts of the spectrum - through molecular vibrational and rotational transitions. The infrared luminosities of protostars suggest this collapse takes place on an initial timescale shorter than a million years, and this timescale is set by how efficiently energy can be removed from the system.
{ "source": [ "https://physics.stackexchange.com/questions/216629", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/78868/" ] }
216,712
Let's say I have a ball attached to a string and I'm spinning it above my head. If it's going fast enough, it doesn't fall. I know there's centripetal acceleration that's causing the ball to stay in a circle but this doesn't have to do with the force of gravity from what I understand. Shouldn't the object still be falling due to the force of gravity?
We have the ball orbiting at a distance $R$ from the centre of rotation and the string inclined at angle $\theta$ with respect to the horizontal. Two main forces act on the ball: gravity $mg$ ($m$ is the mass of the ball, $g$ the Earth's gravitational acceleration) and $F_c$, the centripetal force needed to keep the ball spinning at constant rate. $F_c$ is given by: $$F_c=\frac{mv^2}{R},$$ where $v$ is the orbital velocity, i.e. the speed of the ball on its circular trajectory. Trigonometry also tells us that if $T$ is the tension in the string, then: $$T\cos\theta=F_c.$$ Similarly, as the ball is not moving in the vertical direction, the upward component of the tension must balance gravity: $$T\sin\theta=F_{up}=mg.$$ From this relation we can infer: $$T=\frac{mg}{\sin\theta}.$$ And so: $$\frac{mg}{\tan\theta}=F_c=\frac{mv^2}{R}.$$ Or: $$\tan\theta=\frac{gR}{v^2}.$$ From this it follows that for small $\tan\theta$ and thus small $\theta$ we need large $v$. But at lower $v$, $\theta$ increases. Also note that $\theta$ is invariant to mass $m$.
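The final relation $\tan\theta = gR/v^2$ can be tabulated for a few speeds; a short Python sketch (the sample radius R = 1 m and the chosen speeds are illustrative assumptions):

```python
import math

def string_angle_deg(v, R, g=9.81):
    """Angle of the string below the horizontal for a ball whirled at speed v
    on a horizontal circle of radius R, from tan(theta) = g*R / v**2."""
    return math.degrees(math.atan2(g * R, v * v))

for v in (2, 5, 10):  # speeds in m/s, with R = 1 m
    print(f"v = {v:2d} m/s -> theta = {string_angle_deg(v, 1.0):5.1f} deg")
```

As the answer says, faster spinning flattens the cone: the angle drops steeply as $v$ grows, but never reaches exactly zero.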
{ "source": [ "https://physics.stackexchange.com/questions/216712", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/95688/" ] }
217,545
I read about Joule's experiment proving the transformation of mechanical work into heat. But say I have a bowl with some water, and I start turning a spoon in it very fast, thus doing work — the water won't get hotter! What am I missing? I think maybe the work I put is simply kinetic, and won't turn into heat. But then how do you explain Joule's experiment?
Well first you have the energy in the form of kinetic energy of the spinning water. Once you let that water settle, it DOES get hotter. The only problem is that water has a high specific heat (it takes a LOT of energy to heat up water), so you don't notice the water getting hotter since the amount it's heating up is not very noticeable. Coincidentally, it is this property of water that makes the earth a habitable planet--we have moderate temperatures compared to other planets because our oceans, bays, and lakes can absorb or release large amounts of heat to moderate the atmospheric temperatures. If you want a more observable experiment, try taking a piece of metal (maybe a paper clip?) and bending it back and forth a lot of times. Although it'll eventually break, you should be able to notice it getting hotter
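To see just how small the heating is, here is a rough Python estimate using $\Delta T = W/(mc)$; the 5 W stirring power and 0.5 kg of water are illustrative assumptions:

```python
def temp_rise(work_j, mass_kg, c=4186):
    """Temperature rise of water after work_j joules of stirring are dissipated.

    c = 4186 J/(kg K) is the specific heat of water -- unusually high,
    which is exactly why the heating goes unnoticed.
    """
    return work_j / (mass_kg * c)

# Stirring vigorously at ~5 W for one minute into 0.5 kg of water:
dT = temp_rise(5 * 60, 0.5)
print(f"temperature rise: {dT * 1000:.0f} millikelvin")  # ~143 mK -- imperceptible
```

A tenth of a degree after a full minute of hard stirring is far below what a hand (or most kitchen thermometers) can detect.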
{ "source": [ "https://physics.stackexchange.com/questions/217545", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/67976/" ] }
218,051
Gravity causes anything with energy to accelerate toward the source. Black holes, for example, have such strong gravity that they pull in light and don't let any escape. But can acceleration still apply to light? The speed of light is constant, of course, but why are photons affected by gravity yet aren't accelerated by it? Edit: My main question is why photons aren't affected in the same way as most other particles. I'm perfectly aware that it cannot surpass lightspeed, but I want to know what makes it unaffected by acceleration while other particles are affected.
You don't feel acceleration. When onboard the ISS, you are accelerating towards the earth (down) due to gravity: if you didn't, you would just fly away from the planet. Because you and the ISS are accelerating exactly the same way, you don't feel a thing. You don't feel a force if it's accelerating you: you feel pressure caused by opposing forces. Here on the ground, I feel the floor beneath my feet opposing my normal gravitational acceleration. If you run a thruster on the ISS, then the ISS starts accelerating differently than you do, and eventually, one of the walls is going to collide with you. Then you'll feel that wall interfering with your own gravitation acceleration, and feel something like weight. Light undergoes acceleration due to gravity: look up 'gravitational lensing' for that. To understand how light can accelerate with a constant speed, you have to understand the difference between speed and velocity, and what acceleration really means. Speed is a 'scalar', just a number with no direction. If you're travelling 30 KPH, that's your speed. Velocity is a 'vector', a number with a direction. Driving 30 KPH north is much different than driving 30 KPH south: clearly, you'll end up in different locations regardless of your speed. Acceleration is not a change in speed, it is a change in velocity. Think of a car. There are usually three ways to accelerate a car. To increase your speed (scalar), step on the accelerator, and you'll feel your seat back pushing into you harder as it accelerates you with the car. To decrease the speed (scalar), hit the brake and you'll feel your safety straps accelerating you with the car. But what happens when you turn? Your speed stays roughly the same (exactly the same if you're skilled enough), but you are changing your direction. Your 30 KPH north is becoming 30 KPH west, and the change in direction is an acceleration. 
Depending on whether your car is build to drive on the right or the left, you'll have a tendency to either push against your door or into your passenger's lap. That's still acceleration. If a photon is passing by something heavy, it will be accelerated towards that object, changing its course but not its speed. If a photon is going towards or away from something heavy, it can't properly accelerate by changing speed. I'm not a physicist, but I believe that it increases or decreases energy by changing its frequency. In other words, things that you would expect to increase its speed will instead increase its frequency ('blue-shifting' it if it's visible light), and what you would expect to decrease its speed will instead decrease its frequency ('red-shifting' if it's visible light).
{ "source": [ "https://physics.stackexchange.com/questions/218051", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/93522/" ] }
218,539
Planets orbit around stars, satellites orbit around planets, even stars orbit each other. So the question is: Why don't galaxies orbit each other in general, as it's rarely observed? Is it considered that 'dark energy' is responsible for this phenomenon?
There are plenty of satellite galaxies orbiting larger galaxies. The question is how long are you willing to wait for an orbit? The Milky Way has a mass $M$ of something like $6\times10^{11}$ solar masses, or $10^{42}\ \mathrm{kg}$. The small Magellanic Cloud is at a distance $R$ of $2\times10^5$ light years, or $2\times10^{21}\ \mathrm{m}$. A test mass orbiting a mass $M$ at a separation $R$ will have a period of $$ P = 2\pi \sqrt{\frac{R^3}{GM}} = \text{2 billion years}. $$ Such a system could undergo at most $7$ orbits in the entire history of the universe. The universe isn't old enough for the nearest major galaxy to have completed a single orbit around us at its current separation. Even if you did wait long enough, galaxies aren't particularly good at holding their shape. If you put them in a situation where gravity is strong enough to bend their path into a closed orbit, odds are they will also be tidally torn apart by that same gravity. And we see this all the time, as for example with the Mice Galaxies:
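The 2-billion-year figure can be reproduced directly from the numbers in the answer; a Python sketch:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # solar mass, kg
ly = 9.461e15          # metres in a light year

M = 6e11 * M_sun       # Milky Way mass used in the answer
R = 2e5 * ly           # distance to the Small Magellanic Cloud

P = 2 * math.pi * math.sqrt(R**3 / (G * M))   # orbital period of a test mass
P_gyr = P / (3.156e7 * 1e9)                   # seconds -> billions of years
print(f"orbital period: {P_gyr:.1f} Gyr")     # ~1.8 Gyr, i.e. about 2 billion years
```

With a universe only about 13.8 Gyr old, that indeed allows at most a handful of complete orbits.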
{ "source": [ "https://physics.stackexchange.com/questions/218539", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
218,558
I hate to use this medium in the wrong way, and I understand the fundamental principles behind this question. However, the question states that you must resolve the resultant force (R = 500 N) into its components along the u and v axes, as shown in the following diagram. Sorry for it being sideways. From this I figured that the force must be 500 N along the u axis and 0 N along the v axis. Is this correct, or am I missing something fundamental, meaning there is an actual magnitude of force in the v direction?
{ "source": [ "https://physics.stackexchange.com/questions/218558", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/9259/" ] }
218,668
I know that for an object to be transparent, visible light must go through it undisturbed. In other words, if the light energy is sufficiently high to excite one of the electrons in the material, then it will be absorbed, and thus, the object will not be transparent. On the other hand, if the energy of the light is not sufficient to excite one of the electrons in the material, then it will pass through the material without being absorbed, and thus, the object will appear transparent. My question is: For a non-transparent object like a brick, when the light is absorbed by an electron, it will eventually be re-emitted. When the light is re-emitted won't the object appear transparent since the light will have essentially gone through the object?
For an object to be transparent, the light must be emitted in the same direction with the same wavelength as initially. When light strikes a brick, some is reflected in other directions, and the rest is re-emitted in longer, non-visible wavelengths. That is why a brick is opaque to visible light. Some materials we consider transparent, like glass, are opaque to other wavelengths of light. Most window glass these days, for example, is coated with infrared- and ultraviolet-reflective films to increase insulative capacity. You can see through these fine with your eyes, but an infrared-based night vision system would see them as opaque objects. Another example is that most materials are transparent to radio waves, which is why both radio broadcasts and radio telescopes are so successful.
{ "source": [ "https://physics.stackexchange.com/questions/218668", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/93423/" ] }
218,678
I am a computer science student and I am working on a project which needs to know how the vibration of a phone is damped by an applied force (when a human touches the phone). I have read many articles about "forced mass-spring systems". The following figure is the model I expect to have for my scenario: (the phone is placed on a table, vibrated by a vibration motor, and damped by a hand) However, when I try to solve for the vibration amplitude of this system, the vibration amplitude is not related to the applied force (Fh). I also simulate this system with Simulink and get the same results: Even though the math looks right (the applied force only changes the position of equilibrium), it is very counter-intuitive in the real world (i.e., vibration should decrease when enough force is applied). Moreover, based on my experimental measurements via a laser vibrometer, the vibration amplitude does decrease when force is applied by hand. I have no background in this kind of mechanical system. I have tried my best to read online tutorials and got this system model, but it doesn't work (at least it doesn't fit the real-world scenario). Any help to point out what is wrong in my system is very welcome. Any keyword that I should Google is also very helpful. Please help me :(
{ "source": [ "https://physics.stackexchange.com/questions/218678", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/98577/" ] }
219,306
I have studied some of Einstein's Theory of General Relativity, and I understand that it states that gravity isn't a force but rather the effects of objects curving space-time. If this is true, then why are we instructed in middle school that it is a force?
Because Newtonian gravity , where it indeed is considered a force, is a good enough approximation to the situations you consider in middle school (and beyond). General relativistic effects are very weak at the ordinary scales we humans look at, and it would be overkill to introduce the full-blown machinery of general relativity (which demands a considerably more advanced mathematical treatment than ordinary Newtonian forces) to treat situations where the error incurred by just using the Newtonian version is negligible. Additionally, even in the general relativistic treatment you might still consider the effect on moving particles to be a "force", just like you can consider the centrifugal force to be a fictitious force that appears in rotating coordinate systems, see also the answers to Why do we still need to think of gravity as a force?
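One way to quantify "very weak at ordinary scales" is the dimensionless ratio $GM/(rc^2)$, which sets the rough size of general-relativistic corrections to the Newtonian description; the choice of this particular ratio is my illustration, not part of the answer. A Python sketch:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def gr_correction(M, r):
    """Dimensionless ratio GM/(r c^2): roughly how large general-relativistic
    corrections are relative to the Newtonian description at distance r
    from a mass M."""
    return G * M / (r * c**2)

print(f"Earth surface: {gr_correction(5.972e24, 6.371e6):.1e}")  # ~7e-10
print(f"Sun surface:   {gr_correction(1.989e30, 6.96e8):.1e}")   # ~2e-6
```

Corrections of a part per billion at the Earth's surface are why the Newtonian force picture is entirely adequate for middle-school (and most other) purposes.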
{ "source": [ "https://physics.stackexchange.com/questions/219306", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/96908/" ] }
221,100
Have a look at this clip I made of my candle burning: I can accept that the molten wax gets sucked into the wick (though I'm not quite sure why: capillary action?). But why does it also seem to get thrown outwards when it reaches the wick? I would expect a build-up of residue (particles, dust) near the wick, if anything.
That is simply convection. The wick does suck up molten wax, and it goes up by capillary action to the middle of the flame, but that movement is way too slow to explain the fast particles in your video. Moreover, they are moving in the opposite direction! Convection happens because the wick is hot, and it makes the wax around it hot too, so the wax expands a little and gets lighter. Since hot wax is lighter it will rise (because of buoyancy). And since it rises around the wick, it will spread to the sides when it reaches the surface, then cool in contact with the air and sink. Note how the red arrows curve to the sides when they reach the surface. That is what you see in the moving particles. In your video you can even see the black particles moving towards the wick when they go down, doing full circles, just as in the graphic. Usually, in convection the heat comes from below, as in this graphic from Wikipedia . The effect is similar, but instead of having the heat source at the bottom, it would be just in between two red arrows.
{ "source": [ "https://physics.stackexchange.com/questions/221100", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/32287/" ] }
221,395
While I was making a morning coffee at work, some sugar from the spoon started to fly away, seemingly towards some foam cups. Can this be explained by magnetism?
Electro-magnetism is a good guess, simply because it's the only force you commonly see that's powerful enough. It's not very useful as an explanation, though, because almost everything you see around you is due to electro-magnetism (e.g. the way the spoon holds together in the first place, or the light that allows you to see the sugar, or the way the water "sticks" to having a fluid surface, or the way the individual atoms of the sugar stick together...). The real question is "What kind of EM is responsible?" It's not ferro-magnetism (the kind you see with old-school fridge magnets) - neither the foam nor the sugar are ferro-magnetic. It's not para-magnetism either. It's not due to molecular or atomic bonds (the kind that holds molecules together, or the residual force that causes e.g. hydrogen bonds) - the distance involved is too large. It's not dia-magnetism (remember those frog levitation videos?). That would require massive magnetic fields. I could go on, but there are plenty of other cases that obviously don't apply, so let's skip to the end: The interesting point is that plastics are usually electrical insulators. This is very important, because it means that when they acquire a charge imbalance, it's not equalised very quickly - the current can't flow readily. This means that it's possible for one side to have a slight positive charge, while the other side has a slight negative charge (in a metal, in contrast, the charges would "mix" to maintain an overall neutral charge). This leads to another common sight of electromagnetism in the common household – static electricity. 
And this is most likely what's happening here - the cup is statically charged, which causes attraction between the slightly charged cup and the sugar, which becomes slightly charged in turn (if we assume the cup is slightly positively charged, it will attract negative charges in the sugar, causing a slight charge imbalance in the sugar as well - and now you have slightly positive + slightly negative, resulting in a net attraction). Finally, to answer your title question, Can sugar be affected by a magnetic field? Yes. Sugar is made out of parts which interact through the electro-magnetic force, and thus it can be affected by a magnetic field. In the end, it's that simple. Even if it were only a tiny residual force (1 proton + 1 electron aren't exactly "zero" charge), it would allow interaction. So if you want to be exact, only objects that don't interact via EM at all can be unaffected by a magnetic field - for example, neutral neutrinos, or the (more or less) hypothetical dark energy and dark matter (the thinking is that they're dark precisely because they don't interact with EM - this includes visible light as well as magnetism).
{ "source": [ "https://physics.stackexchange.com/questions/221395", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/100026/" ] }
222,390
They say that gravity is technically not a real force and that it's caused by objects traveling a straight path through curved space, and that space becomes curved by mass, giving the illusion of a force of gravity. That makes perfect sense for planetary orbits, but a lot less sense for the expression of gravity that we are the most familiar with in our day-to-day lives: "what goes up must come down." Imagine that I hold a ball in my hand, several feet off the ground, with my fingers curled around it, and my hand is above the ball. Then I open my fingers, releasing my grip on it, being very careful to not impart any momentum to the ball from my hand as I do so. An object at rest remains at rest unless acted upon by an outside force. If the ball is not moving (relative to my inertial reference frame), it has no path to travel that's any different from the Earth's path through space. It should remain at rest, hanging there in the air. And yet it falls, demonstrating that an outside force (gravity) did indeed act upon it. How does curved space explain this?
If you have a look at my answer to When objects fall along geodesic paths of curved space-time, why is there no force acting on them? this explains how on a curved surface two moving observers will appear to experience a force pulling them together. However, two stationary observers will feel no force. The force only becomes apparent when you move on the curved surface. This is true in general relativity as well, but what is easily forgotten by newcomers to GR is that in GR we consider motion in spacetime, not just space. You are always moving in spacetime because you can't help moving in time. Your speed in spacetime is known as the four-velocity , and in fact the magnitude of the four-velocity (technically the norm ) is always $c$. So you can't help moving through spacetime (at the speed of light!) and when spacetime is curved this means you will experience gravitational forces. You are probably familiar with Newton's first law of motion. This says that the acceleration of a body is zero unless a force acts on it. Newton's second law gives us the equation for the acceleration: $$ \frac{d^2x}{dt^2} = \frac{F}{m} $$ The general relativity equivalent to this is called the geodesic equation: $$ {d^2 x^\mu \over d\tau^2} = - \Gamma^\mu_{\alpha\beta} u^\alpha u^\beta \tag{1} $$ This is a lot more complicated than Newton's equation, but the similarity should be obvious. On the left we have an acceleration, and on the right we have the GR equivalent of a force. The objects $\Gamma^\mu_{\alpha\beta}$ are the Christoffel symbols and these tell us how much spacetime is curved. The quantity $u$ is the four-velocity. Now let's consider the particular example you describe of releasing a ball. You say the ball is initially stationary. If it was stationary in spacetime, i.e. the four velocity $u = 0$, then the right hand side of equation (1) would always be zero and the acceleration would always be zero. So the ball wouldn't fall. But the four velocity isn't zero. 
Suppose we use polar coordinates $(t, r, \theta, \phi)$ and write the four-velocity as $(u^t, u^r, u^\theta, u^\phi)$. If you're holding the ball stationary in space the spatial components of the four velocity are zero: $u^r = u^\theta = u^\phi = 0$. But you're still moving through time at (approximately) one second per second, so $u^t \ne 0$. If we use the geodesic equation (1) to calculate the radial acceleration we get: $$ {d^2 r \over d\tau^2} = - \Gamma^r_{tt} u^t u^t $$ The Christoffel symbol $\Gamma^r_{tt}$ is fiendishly complicated to calculate so I'll do what we all do and just look it up: $$ \Gamma^r_{tt} = \frac{GM}{c^2r^2}\left(1 - \frac{2GM}{c^2r}\right) $$ and our equation for the radial acceleration becomes: $$ {d^2 r \over d\tau^2} = - \frac{GM}{c^2r^2}\left(1 - \frac{2GM}{c^2r}\right) u^t u^t \tag{2} $$ Now, I don't propose to go any further with this because the maths gets very complicated very quickly. However it should be obvious that the radial acceleration is non-zero and negative. That means the ball will accelerate inwards. Which is of course exactly what we observe. What is interesting is to consider what happens in the Newtonian limit, i.e. when GR effects are so small they can be ignored. In this limit we have: $d\tau = dt$ so $d^2r/d\tau^2 = d^2r/dt^2$ $1 \gg \frac{2GM}{c^2r}$ so the term $1 - \frac{2GM}{c^2r} \approx 1$ $u^t \approx c$ If we feed these approximations into equation (2) we get: $$ {d^2 r \over dt^2} = - \frac{GM}{c^2r^2}c^2 = - \frac{GM}{r^2} $$ and this is just Newton's law of gravity!
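The Christoffel symbol quoted in the answer above can be checked mechanically with a computer algebra system. As a sketch (the symbol names and the $x^0 = ct$ convention, which matches $u^t \approx c$ in the Newtonian limit, are my own choices), sympy can differentiate the Schwarzschild metric and reproduce both $\Gamma^r_{tt}$ and the Newtonian limit:

```python
# Sketch: verify the quoted Schwarzschild Christoffel symbol with sympy.
# Convention (my assumption): x^0 = ct, so g_00 = -(1 - 2GM/(c^2 r)).
import sympy as sp

G, M, c, r = sp.symbols('G M c r', positive=True)
f = 1 - 2*G*M/(c**2*r)

g_00 = -f        # time-time metric component with x^0 = ct
g_rr = 1/f       # radial metric component

# For a static diagonal metric, Gamma^r_00 = -(1/2) g^{rr} d(g_00)/dr
Gamma_r_00 = sp.simplify(-sp.Rational(1, 2) * (1/g_rr) * sp.diff(g_00, r))

quoted = G*M/(c**2*r**2) * (1 - 2*G*M/(c**2*r))
assert sp.simplify(Gamma_r_00 - quoted) == 0

# Newtonian limit: radial acceleration -Gamma^r_00 (u^t)^2 with u^t -> c
newtonian = sp.limit(-Gamma_r_00 * c**2, c, sp.oo)
assert sp.simplify(newtonian + G*M/r**2) == 0   # i.e. -GM/r^2
```

The last line is exactly the answer's closing point: the weak-field limit of the geodesic equation reduces to Newton's law of gravity.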
{ "source": [ "https://physics.stackexchange.com/questions/222390", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4039/" ] }
222,736
Can we create an amount of energy at a point in space and destroy an equal amount of energy at another point in space, with both the processes occurring simultaneously? This will not violate energy conservation, as the total energy in the universe is constant. So is it possible?
Not if the laws of physics (particularly the laws of gravity) are as we understand them. In general relativity, there are a set of equations, called the Einstein field equations, that relate the curvature of space (roughly speaking, how much gravity there is) to how energy and momentum are distributed in space and time. To be consistent, these equations require that the energy change in any region of space is entirely due to energy flowing into/out of that region. If you try to set up the mathematical equations that say "energy disappears from this region and reappears over here without passing through the points in between", and ask what the gravity field produced by this energy distribution is, you get nonsense; the equations work out to things like 1 = 0. In technical language, general relativity requires that energy be locally conserved (i.e., conserved in every tiny region of space) rather than globally conserved (i.e., conserved in total, but allowing for the energy in small regions to be non-conserved.) Now, it's possible that our understanding of general relativity is flawed, and maybe there are small-scale violations of this principle. The so-called Steady State theory was a theory that cosmologists came up with in the '40s and '50s as an alternative to the Big Bang model; and it required that matter (and therefore energy) was continually being created in the Universe. However, this model has now been discredited; and almost everyone who works on alternative models of gravity these days still requires that energy be locally conserved rather than globally conserved. EDIT #1: For those who are interested, here are the technical details. The Einstein field equations are $$ G^{\mu \nu} = 8 \pi G T^{\mu \nu} $$ in units where $c = 1$. 
The object on the left-hand side is the Einstein tensor , which describes the curvature of spacetime; the object on the right is the stress-energy tensor , which describes how energy and momentum are distributed in space and moving through space. We can take the "four-divergence" of both sides (which is just like the regular divergence in 3D vector calculus, only with some time derivatives added in as well.) This is denoted as $$ \nabla_\mu G^{\mu \nu} = 8 \pi G \nabla_\mu T^{\mu \nu}. $$ But it is always true that $\nabla_\mu G^{\mu \nu} = 0$; the way this tensor is constructed, its spacetime divergence automatically vanishes (the so-called Bianchi identity for this tensor.) This implies then that $$ \nabla_\mu T^{\mu \nu} = 0. $$ which, as @SebastianRiese pointed out in his answer, means that energy (and momentum) are locally conserved rather than globally conserved. This means that if you have a theory where energy is not locally conserved ($\nabla_\mu T^{\mu \nu} \neq 0$), then you will have to modify Einstein's equations, or you get a contradiction. EDIT #2: As far as virtual particles go, that's a trickier business. We don't yet know how gravity behaves on the quantum scale; and a full description of the situation would include the quantum-mechanical response of the gravitational field to the quantum-mechanical virtual particles. So in some sense, virtual particles are beyond the scope of this answer. That said, it's possible to define the expectation value of the stress-energy tensor for a quantum-mechanical field, and to try to couple that to the Einstein tensor. This yields what is called the "semi-classical Einstein equation": $$ G^{\mu \nu} = 8 \pi G \langle T^{\mu \nu} \rangle $$ This expectation value would include all the energy and momentum of the "virtual particle sea" that one talks about in standard QFT. The question then arises whether $\nabla_\mu \langle T^{\mu \nu} \rangle = 0$, as it must if we're going to write down the above equation. 
Answering this sort of question is tricky—we're now dealing with QFT in curved spacetime, after all—but it can be shown that we can always define our stress-energy operator such that its expectation value satisfies $\nabla_\mu \langle T^{\mu \nu} \rangle = 0$, along with the other nice properties we want it to have. If you're really dedicated to the nitty-gritty of this, see Section 4.6 of Wald's Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics . But be warned that QFT in curved spacetime looks very different than QFT in flat spacetime; in particular, there's no well-defined notion of "virtual particles" at all.
{ "source": [ "https://physics.stackexchange.com/questions/222736", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/100426/" ] }
223,283
Human beings invented the wheel to get rid of the friction between the wheel and the road. But were we able to reduce it to zero? Is there any residual friction? This question is about only the friction between the wheel and the road. I understand that there will be friction in other places, e.g. between the wheel and axle or between the vehicle and the atmosphere.
As many others point out, there is friction present, otherwise the wheel wouldn't grab the surface and pull the car forward. But you are talking about a different kind of friction. There are several different kinds of friction: Kinetic friction , if the wheel ever slides and skids over the asphalt. This is friction between objects that slide over one another. Static friction , which is what the other answers talk about. This is friction that holds something still. It always works in the direction that prevents two objects from sliding. The point on the wheel that touches the ground experiences static friction, because it is standing still in that very negligibly small moment. But rolling friction is what you are referring to. Ideally there is no kinetic friction, and static friction only grabs the asphalt and doesn't reduce the speed (on a horizontal surface and without wheel torque). All other forces that do work against the wheel rotation (except friction around the axle, as you also point out) are collectively called rolling friction . Rolling friction happens for several different reasons. For example, the rubber tires contract and expand and thus dissipate energy. The energy is taken from the rotation, and this factor counts as rolling friction. Also the ground underneath might deform. The deformation costs energy and will also produce a surface with normal forces that no longer act radially (towards the wheel's center). Such forces will cause torques that might counteract the rotation. See the following picture from this source : Without rolling friction (in an ideal world), the car would continue to roll and never stop. I believe this is the actual question that you have. And you are right that in this sense, friction counteracting the motion has been eliminated as you describe.
{ "source": [ "https://physics.stackexchange.com/questions/223283", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/5077/" ] }
223,564
What is a state in physics? While reading physics, I have many times heard that a "___" system is in "____" state, but the definition of a state was never provided (and googling brings me to the totally unrelated topic of solid state physics); I was loosely told that it holds every piece of information about the system you desire to know. On reading further, I have found people talking of thermodynamic states, Lagrangians, Hamiltonians, wave-functions etc., which I think are different from one another. So in general I want to know: what do we mean by a state in physics, and is there a unique way to describe it?
The definition of a state of a system, in physics, strongly depends on the area of physics one is dealing with, and it comes as one of the initial definitions once the underlying theory has to be set up. In particular one has: classical mechanics: a state of a system is a point $m\in TQ$ (or equivalently $T^*Q)$ in the tangent bundle of the configuration space (or the phase space, respectively). Such a state is identified on a local chart with a set of coordinates $(q_i, \dot{q}_j)\in\mathbb{R}^N$ representing positions and velocities of all the particles at a given time $t$. Such a description is equivalent to requiring the uniqueness of the solution of Newton's equations once initial conditions are specified. thermodynamics: a state is a set of extensive variables $(X_1,X_2,\ldots,X_N)$ that uniquely specify the value of the entropy function as $S(X_1,X_2,\ldots,X_N)\in\mathbb{R}$. Such variables represent the macroscopic extensive parameters (such as volume, number of particles, total energy and so on) from which one can derive the corresponding associated intensive variables by taking derivatives of the entropy as, for instance, $p=T(\partial S/\partial V)$ and the like. quantum mechanics: a state is any element $|\psi\rangle\in\mathcal{H}$ of a Hilbert space together with a collection of self-adjoint operators $(A_1,\ldots,A_n, H)$. A special role is played by the Hamiltonian $H$, whose action mirrors classical mechanics by giving the evolution in time of the state $|\psi(t)\rangle$. A collection of states (i.e. an ensemble) is instead described by a density matrix $\rho$ such that the expectation value of any operator on the ensemble can be defined as $\langle O \rangle = \textrm{tr}(\rho O)$. field theories: very subtle, as the definition of a state strongly depends on the theory at hand (quantum gravity, loop quantum gravity, string theory, QFT all have slightly different definitions of states). 
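The quantum-mechanical ensemble rule $\langle O \rangle = \mathrm{tr}(\rho O)$ quoted above is easy to illustrate numerically. In this sketch (the states, mixing weights, and observable are arbitrary choices of mine for a two-level system):

```python
# Sketch: the ensemble rule <O> = tr(rho O) for a toy two-level system.
import numpy as np

up = np.array([1.0, 0.0], dtype=complex)
plus = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

# Mixed state: 30% |up>, 70% |plus>
rho = 0.3 * np.outer(up, up.conj()) + 0.7 * np.outer(plus, plus.conj())

sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]])   # the observable O

expectation = np.trace(rho @ sigma_z).real

# Same number as the weighted average over the pure states in the ensemble
pure_avg = 0.3 * (up.conj() @ sigma_z @ up).real \
         + 0.7 * (plus.conj() @ sigma_z @ plus).real
assert np.isclose(expectation, pure_avg)
```

Here $\langle\sigma_z\rangle = 0.3\cdot 1 + 0.7\cdot 0 = 0.3$, showing that the density matrix encodes exactly the statistics of the underlying collection of pure states.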
EDIT: as per the suggestions in the comments below, more complex states and descriptions may and do arise, therefore the above is supposed to only be taken as general walkthrough.
{ "source": [ "https://physics.stackexchange.com/questions/223564", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/73136/" ] }
223,824
I was thinking about why the poles are colder and I came up with three possible explanations. 1. The atmosphere blocks light beams coming to the poles The Sun is seen only slightly over the horizon at the poles, so the light beams have a longer way to travel to the poles. If only this case were true, then the total difference between the poles and the equator would be the same as the energy absorbed by the atmosphere. 2. The Sun is only slightly over the horizon at the poles. The poles are often in the shadow of the parts of the Earth which are closer to the equator. 3. The poles are farther from the Sun The longer the way to the Sun, the lower the chance of catching light from the Sun. Please correct me wherever I'm wrong. I'd like to know: Which of these three is most correct? How significant are the described effects of the other explanations?
No, the main reason the poles are colder is that the surface is angled with respect to the Sun's rays. That is the same reason why winter is colder than summer. You can think of it this way: at the poles the winters are harder and the summers are softer, while at the equator it is the other way around. That is also the reason why some solar panels are motorized: to keep them perpendicular to the Sun's rays and get the most out of them. About your hypotheses: If the atmosphere were to absorb the energy, there would still be heat at the poles, maybe in the air instead of in the ground, but still there. In polar winter there is constant darkness (24-hour night) for up to 6 months at the geometrical pole. But in polar summer there is constant sun and it is still quite cold. So while the shadow actually makes a difference (polar winter is way colder than polar summer), that is not why the poles are colder than the equator. The Earth is 150,000,000 km from the Sun, and the radius of the Earth is about 6,500 km, way too small to make a difference.
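The relative sizes of the two effects in this answer can be put to numbers. A rough sketch (the solar constant and latitudes are illustrative values I chose, not from the answer):

```python
# Toy comparison: surface tilt vs. extra distance to the Sun.
import math

S0 = 1361.0  # W/m^2, approximate solar constant (assumed value)

def noon_flux(latitude_deg):
    """Flux on a horizontal surface at local noon, equinox geometry."""
    return S0 * math.cos(math.radians(latitude_deg))

flux_equator = noon_flux(0.0)
flux_polar_circle = noon_flux(66.5)

tilt_reduction = 1 - flux_polar_circle / flux_equator   # ~60% less flux

# Distance effect: a pole is at most about one Earth radius farther away
au_km, r_earth_km = 1.5e8, 6.5e3
distance_reduction = 1 - (au_km / (au_km + r_earth_km))**2  # ~0.009%
```

The tilt alone cuts the flux at the polar circle by roughly 60%, while the extra Earth radius of distance changes it by less than a hundredth of a percent, which is the answer's point in numbers.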
{ "source": [ "https://physics.stackexchange.com/questions/223824", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/41078/" ] }
223,831
This '50 years' number is floating around media pretty consistently, even the ITER road map claims that the first commercial reactor will happen post-2050. But why is that? I understand that there is some homework to do before fusion can go commercial, but is there any reason to believe that the R&D time does not simply scale with the amount of resources invested? EDIT: In plain words, what are the assumptions that went into this '50 years' estimate? (invested manhours, computer hours, etc.)
{ "source": [ "https://physics.stackexchange.com/questions/223831", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/82896/" ] }
223,833
The problem would be straightforward if the current $I$ were constant through the toroid. However, this is not the case. The current is time dependent according to $I = I_{o} e^{-\gamma t}$. So, if I'm right, a classic Ampere's Law approach doesn't work here. Because the $B$ field through the center of the toroid would change over time, it would cause an electric field, which must be taken into account in the magnetic field. The toroid in question has a cross sectional radius $r$, internal radius $a$, and outer radius $b$. I feel like I am missing something, or approaching the problem incorrectly, as I do not know how to find the $E$ field induced by the changing $B$ field. From there, I do not actually know what the math would look like. I also have to calculate the self-inductance of the toroid after this, which I would do by finding the flux through the cross section of the toroid and comparing it to the current through the same area. But, I can't do that yet, because I don't have the magnetic field to find the flux!
{ "source": [ "https://physics.stackexchange.com/questions/223833", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/101223/" ] }
223,838
Imagine a binary planet system where one of the planets has the size and mass of Jupiter (planet A) while the other would be like the Earth's moon (planet B). Imagine that this system is extremely stable (lasting billions of years or many more orders of magnitude in time) and is put inside a constant flux of small to medium sized asteroids (not big enough to break the planets apart). I'm wondering whether planet B would benefit from the bigger planet's gravitational field in such a way that in the long run it would tend to reach the same size/mass as planet A. Or whether this is wrong and planet A would always have a huge disparity in mass and size compared to planet B. Here are some thoughts: Compared to being alone, originally at least, planet A is being shielded from the asteroids because of planet B. While for planet B, compared to being alone under the flux of asteroids, there is a huge benefit of getting hit by asteroids because of planet A's gravitational field. And so at first it seems to me that the rate of being hit would favor planet B compared to planet A. By rate I mean "x asteroids per y units of mass". The problem is likely solvable by computational simulation but I'm not skilled enough to do it. I'd like to hear your opinions on the question. Edit: For this problem we'll neglect relativistic effects. To make things simpler we can consider the densities of the planets and asteroids to be the same and that the shape of both planets is always spherical.
{ "source": [ "https://physics.stackexchange.com/questions/223838", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/75916/" ] }
223,935
Suppose I want to transport some logs from the ground to the roof of a tower. Ordinarily I could use a lift, or some cables, or even move the logs upwards manually; the energy expended is then converted into the potential energy of the logs. Now, suppose instead I build a tall water tank from the ground to the roof of the tower, fill it with water, push the logs in at the bottom of the tank through a well shaped door, and then let the logs float up to the roof of the tower. What is the energy source that is converted into the potential energy of the logs?
The work you need to do (to insert the log) against the pressure of the fluid at that depth is equal to the work done by the fluid to get the log up to the height you desire. If you consider a log of volume $V$ and a tank of depth $h$, the pressure at that depth would be $\rho gh$, where $\rho$ is the density of the fluid, and $g$ the acceleration due to gravity. The work you need to do to insert the log into the fluid at that depth is $\rho ghV$. (the pressure times the volume) The buoyant force on the log due to the fluid is $\rho Vg$, so the work done by the buoyant force to lift the log up by a height $h$ is $\rho Vgh$. (the force multiplied by the displacement, because the force is constant here.) Both these quantities are equivalent, so the energy source here is you.
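The bookkeeping in this answer is simple enough to check numerically. This is just a sketch with made-up numbers (the density, volume, and depth are my own illustrative values):

```python
# Check: work to push the log in at depth h equals the work done by
# buoyancy lifting it back up by h (incompressible fluid, log submerged).
rho = 1000.0   # fluid density, kg/m^3 (assumed: water)
g = 9.81       # m/s^2
V = 0.05       # log volume, m^3 (assumed)
h = 20.0       # tank depth, m (assumed)

pressure_at_depth = rho * g * h          # Pa
work_to_insert = pressure_at_depth * V   # J, done by you against the fluid
work_by_buoyancy = (rho * V * g) * h     # J, constant buoyant force times h

assert abs(work_to_insert - work_by_buoyancy) < 1e-9
```

Both products are $\rho g h V$, so the energy you put in at the bottom door is exactly what the fluid can hand back as lifting work, confirming that "the energy source here is you."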
{ "source": [ "https://physics.stackexchange.com/questions/223935", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/90098/" ] }
224,351
I am trying to process an image in good quality so that it appears blurred to a normal person but sharp to a person suffering from myopia, as seen in this source . Is it possible that a picture that is blurry will appear normal to a person suffering from myopia (nearsightedness)?
A quick footnote to Nathaniel's answer: If an image looks blurred to you it's because you are viewing it in a plane that isn't the focal plane. If you put a screen where I've drawn the red dotted line then the image on the screen will look blurred. If you measure the light in the red dotted plane then at every point in that plane the light wave will have an intensity and a relative phase. If you know the intensity and phase then you can reconstruct the in-focus image using the Huygens construction , and indeed the process is known as Huygens deconvolution . The trouble is that when you take a photograph the photographic process only records the light intensity and it loses the phase. So if you're starting from a photograph you've lost half the information originally present, i.e. the relative phase, and that means it's impossible to reconstruct a perfectly focussed image. A blurred photograph won't look normal to anyone - myopic or otherwise. However it is usually possible to improve the blurred picture to some extent, which is why Huygens deconvolution software is so widely available.
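As an illustration of why deconvolution can improve a blurred picture but never fully restore it, here is a toy 1-D sketch (the signal, kernel width, and regularizer are arbitrary choices of mine, not from the answer): dividing by the known blur kernel in Fourier space recovers the low frequencies, but the heavily suppressed high frequencies stay lost.

```python
# Toy Wiener-style deconvolution: blur a sharp 1-D signal with a known
# Gaussian kernel, then invert the blur in Fourier space with regularization.
import numpy as np

n = 256
x = np.arange(n)
signal = np.zeros(n)
signal[100:110] = 1.0                      # a sharp rectangular feature

kernel = np.exp(-0.5 * ((x - n // 2) / 4.0) ** 2)
kernel /= kernel.sum()
kernel = np.roll(kernel, -n // 2)          # center at index 0 for circular conv

K = np.fft.fft(kernel)
blurred = np.fft.ifft(np.fft.fft(signal) * K).real

eps = 1e-3                                 # regularizer: avoid dividing by ~0
restored = np.fft.ifft(np.fft.fft(blurred) * np.conj(K) /
                       (np.abs(K) ** 2 + eps)).real

# Restoration helps, but is not perfect: the lost frequencies stay lost.
assert np.mean((restored - signal) ** 2) < np.mean((blurred - signal) ** 2)
```

The regularizer `eps` plays the role of the missing information: wherever the blur drove the spectrum below the noise floor, no amount of division can bring it back.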
{ "source": [ "https://physics.stackexchange.com/questions/224351", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/101479/" ] }
224,836
Orbitals, both in their atomic and molecular incarnations, are immensely useful tools for analysing and understanding the electronic structure of atoms and molecules, and they provide the basis for a large part of chemistry and in particular of chemical bonds. Every so often, however, one hears about a controversy here or there about whether they are actually physical or not, or about which type of orbital should be used, or about whether claimed measurements of orbitals are true or not. For some examples, see this , this or this page. In particular, there are technical arguments that in a many-body setting the individual orbitals become inaccessible to experiment, but these arguments are not always specified in full, and many atomic physics and quantum chemistry textbooks make only casual mention of that fact. Is there some specific reason to distrust orbitals as 'real' physical quantities in a many-electron setting? If so, what specific arguments apply, and what do and don't they say about the observability of orbitals?
Generally speaking, atomic and molecular orbitals are not physical quantities, and generally they cannot be connected directly to any physical observable. (Indirect connections, however, do exist, and they do permit a window that helps validate much of the geometry we use.) There are several reasons for this. Some of them are relatively fuzzy: they present strong impediments to experimental observation of the orbitals, but there are some ways around them. For example, in general it is only the square of the wavefunction, $|\psi|^2$, that is directly accessible to experiments (but one can think of electron interference experiments that are sensitive to the phase difference of $\psi$ between different locations). Another example is the fact that in many-electron atoms the total wavefunction tends to be a strongly correlated object that's a superposition of many different configurations (but there do exist atoms whose ground state can be modelled pretty well by a single configuration). The strongest reason, however, is that even within a single configuration $-$ that is to say, an electronic configuration that's described by a single Slater determinant , the simplest possible many-electron wavefunction that's compatible with electron indistinguishability $-$ the orbitals are not recoverable from the many-body wavefunction, and there are many different sets of orbitals that lead to the same many-body wavefunction. This means that the orbitals, while remaining crucial tools for our understanding of electronic structure, are generally on the side of mathematical tools and not on the side of physical objects. OK, so let's turn away from fuzzy handwaving and into the hard math that's the actual precise statement that matters. 
Suppose that I'm given $n$ single-electron orbitals $\psi_j(\mathbf r)$, and their corresponding $n$-electron wavefunction built via a Slater determinant, \begin{align} \Psi(\mathbf r_1,\ldots,\mathbf r_n) & = \det \begin{pmatrix} \psi_1(\mathbf r_1) & \ldots & \psi_1(\mathbf r_n)\\ \vdots & \ddots & \vdots \\ \psi_n(\mathbf r_1) & \ldots & \psi_n(\mathbf r_n) \end{pmatrix}. \end{align} Claim If I change the $\psi_j$ for linear combinations of them, $$\psi_i'(\mathbf r)=\sum_{j=1}^{n} a_{ij}\psi_j(\mathbf r),$$ then the $n$-electron Slater determinant $$ \Psi'(\mathbf r_1,\ldots,\mathbf r_n) = \det \begin{pmatrix} \psi_1'(\mathbf r_1) & \ldots & \psi_1'(\mathbf r_n)\\ \vdots & \ddots & \vdots \\ \psi_n'(\mathbf r_1) & \ldots & \psi_n'(\mathbf r_n) \end{pmatrix}, $$ is proportional to the initial determinant, $$\Psi'(\mathbf r_1,\ldots,\mathbf r_n)=\det(a)\Psi(\mathbf r_1,\ldots,\mathbf r_n).$$ This implies that both many-body wavefunctions are equal under the (very lax!) requirement that $\det(a)=1$. The proof of this claim is a straightforward calculation. Putting in the rotated orbitals yields \begin{align} \Psi'(\mathbf r_1,\ldots,\mathbf r_n) &= \det \begin{pmatrix} \psi_1'(\mathbf r_1) & \cdots & \psi_1'(\mathbf r_n)\\ \vdots & \ddots & \vdots \\ \psi_n'(\mathbf r_1) & \cdots & \psi_n'(\mathbf r_n) \end{pmatrix} \\&= \det \begin{pmatrix} \sum_{i}a_{1i}\psi_{i}(\mathbf r_1) & \cdots & \sum_{i}a_{1i}\psi_{i}(\mathbf r_n)\\ \vdots & \ddots & \vdots \\ \sum_{i}a_{ni}\psi_{i}(\mathbf r_1) & \cdots & \sum_{i}a_{ni}\psi_{i}(\mathbf r_n) \end{pmatrix}, \end{align} which can be recognized as the following matrix product: \begin{align} \Psi'(\mathbf r_1,\ldots,\mathbf r_n) &= \det\left( \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \\ \end{pmatrix} \begin{pmatrix} \psi_1(\mathbf r_1) & \cdots & \psi_1(\mathbf r_n)\\ \vdots & \ddots & \vdots \\ \psi_n(\mathbf r_1) & \cdots & \psi_n(\mathbf r_n) \end{pmatrix} \right). 
\end{align} The determinant then factorizes as usual, giving \begin{align} \Psi'(\mathbf r_1,\ldots,\mathbf r_n) &= \det \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \\ \end{pmatrix} \det \begin{pmatrix} \psi_1(\mathbf r_1) & \cdots & \psi_1(\mathbf r_n)\\ \vdots & \ddots & \vdots \\ \psi_n(\mathbf r_1) & \cdots & \psi_n(\mathbf r_n) \end{pmatrix} \\ \\&=\det(a)\Psi(\mathbf r_1,\ldots,\mathbf r_n), \end{align} thereby proving the claim. Disclaimers The calculation above makes a very precise point about the measurability of orbitals in a multi-electron context. Specifically, saying things like the lithium atom has two electrons in $\psi_{1s}$ orbitals and one electron in a $\psi_{2s}$ orbital is exactly as meaningful as saying the lithium atom has one electron in a $\psi_{1s}$ orbital, one in the $\psi_{1s}+\psi_{2s}$ orbital, and one in the $\psi_{1s}-\psi_{2s}$ orbital, since both will produce the same global many-electron wavefunction. This does not detract in any way from the usefulness of the usual $\psi_{n\ell}$ orbitals as a way of understanding the electronic structure of atoms, and they are indeed the best tools for the job, but it does mean that they are at heart tools and that there are always alternatives which are equally valid from an ontology and measurability standpoint. However, there are indeed situations where quantities that are very close to orbitals become accessible to experiments and indeed get measured and reported, so it's worth going over some of those to see what they mean. The most obvious is the work of Stodolna et al. [ Phys. Rev. Lett. 110 , 213001 (2013)] , which measures the nodal structure of hydrogenic orbitals (good APS Physics summary here ; discussed previously in this question and this one ). These are measurements in hydrogen, which has a single electron, so the multi-electron effect discussed here does not apply. 
These experiments show that, once you have a valid, accessible one-electron wavefunction in your system, it is indeed susceptible to measurement. Somewhat more surprisingly, recent work has claimed to measure molecular orbitals in a many-electron setting, such as Nature 432, 867 (2004) or Nature Phys. 7, 822 (2011) . These experiments are surprising at first glance, but if you look carefully it turns out that they measure the Dyson orbitals of the relevant molecules: this is essentially the overlap $$ \psi^\mathrm{D}=\langle\Phi^{(n-1)}|\Psi^{(n)}\rangle $$ between the $n$-electron ground state $\Psi^{(n)}$ of the neutral molecule and the relevant $(n-1)$-electron eigenstate $\Phi^{(n-1)}$ of the cation that gets populated. (For more details see J. Chem. Phys. 126, 114306 (2007) or Phys. Rev. Lett. 97, 123003 (2006) .) This is a legitimate, experimentally accessible one-electron wavefunction, and it is perfectly measurable.
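The central claim above is just the product rule for determinants, so it is easy to check numerically. In this sketch the "orbitals" are random complex vectors, with `psi[i, j]` standing for $\psi_i(\mathbf r_j)$, and the mixing matrix is rescaled so that $\det(a)=1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# psi[i, j] stands for psi_i(r_j): n orbitals sampled at n positions.
psi = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# An arbitrary (well-conditioned) orbital mixing, rescaled to det(a) = 1.
a = np.eye(n) + 0.3 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
a = a / np.linalg.det(a) ** (1.0 / n)

# psi_i'(r_j) = sum_k a_ik psi_k(r_j)  -- the rotated orbitals.
psi_prime = a @ psi

# The two Slater determinants coincide: det(a @ psi) = det(a) det(psi).
print(np.linalg.det(psi_prime))
print(np.linalg.det(psi))
```

The two printed determinants agree to machine precision, for any mixing with unit determinant — which is exactly the statement that the many-body wavefunction cannot single out one preferred set of orbitals.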
{ "source": [ "https://physics.stackexchange.com/questions/224836", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/8563/" ] }
225,522
Why can't a free electron absorb a photon, when one attached to an atom can? Can you explain this to me logically, with simple equations? Thank you.
It is because energy and momentum cannot be simultaneously conserved if a free electron were to absorb a photon. If the electron is bound to an atom then the atom itself is able to act as a third-body repository of energy and momentum. Details below: Conservation of momentum when a photon ( $\nu$ ) interacts with a free electron, assuming that it is absorbed, gives us \begin{equation} p_{1} + p_{\nu} = p_{2}, \tag{1} \end{equation} where $p_1$ and $p_2$ are the momenta of the electron before and after the interaction. Conservation of energy gives us \begin{equation} \sqrt{(p_{1}^{2}c^{2} + m_{e}^{2}c^{4})} + p_{\nu}c = \sqrt{(p_{2}^{2}c^{2}+m_{e}^{2}c^{4})} \tag{2} \end{equation} Squaring equation (2) and substituting for $p_{\nu}$ from equation (1), we have $$ p_{1}^{2}c^{2} + m_{e}^{2}c^{4} + 2(p_{2}-p_{1})c\sqrt{(p_{1}^{2}c^{2}+m_{e}^{2}c^{4})} + (p_{2}-p_{1})^{2}c^{2}=p_{2}^{2}c^{2}+m_{e}^{2}c^{4} $$ $$ (p_{2}-p_{1})^{2}c^{2} - (p_{2}^{2}-p_{1}^{2})c^{2} + 2(p_{2}-p_{1})c\sqrt{(p_{1}^{2}c^{2}+m_{e}^{2}c^{4})} = 0 $$ Clearly $p_{2}-p_{1}=0$ is a solution to this equation, but cannot be possible if the photon has non-zero momentum. Dividing through by this solution we are left with $$ \sqrt{(p_{1}^{2}c^{2}+m_{e}^{2}c^{4})} - p_{1}c =0 $$ This solution is also impossible if the electron has non-zero rest mass (which it does). We conclude therefore that a free electron cannot absorb a photon because energy and momentum cannot simultaneously be conserved. NB: The above demonstration assumes a linear interaction. In general $\vec{p_{\nu}}$, $\vec{p_1}$ and $\vec{p_2}$ would not be aligned. However you can always transform to a frame of reference where the electron is initially at rest so that $\vec{p_1}=0$ and then the momentum of the photon and the momentum of the electron after the interaction would have to be equal. This then leads to the nonsensical result that either $p_2=0$ or $m_e c^2 = 0$. This is probably a more elegant proof.
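You can also see this numerically without any algebra. Working in the frame where the electron is initially at rest (as in the NB above), momentum conservation forces the final electron momentum to equal the photon momentum, and the resulting energy mismatch is strictly positive for every nonzero photon momentum:

```python
import numpy as np

m_e = 9.109e-31          # electron rest mass, kg
c = 2.998e8              # speed of light, m/s

def energy(p):
    """Relativistic energy of an electron with momentum p."""
    return np.sqrt((p * c)**2 + (m_e * c**2)**2)

# Electron at rest absorbs a photon of momentum p_nu: momentum
# conservation forces the final electron momentum to be p_nu, so
# energy conservation would require
#     m_e c^2 + p_nu c  ==  energy(p_nu)
p_nu = np.logspace(-30, -18, 1000)   # a wide range of photon momenta
mismatch = (m_e * c**2 + p_nu * c) - energy(p_nu)

print(mismatch.min())   # strictly positive: energy is never conserved
```

The left-hand side always exceeds the right, because $(m_ec^2 + p c)^2 = p^2c^2 + m_e^2c^4 + 2m_e p c^3 > p^2c^2 + m_e^2c^4$ for any $p>0$ — the same conclusion as the algebra above.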
{ "source": [ "https://physics.stackexchange.com/questions/225522", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
226,882
When I toss a coin on Mars, is the planet's atmosphere thin enough that I'd rotate with the planet (at its angular velocity) but the coin wouldn't?
It depends on where on Mars you toss the coin, and how high you toss it. In a rotating frame of reference, an object in motion appears to be affected by a pair of fictitious forces - the centrifugal force and the Coriolis force. They are given by $$\vec{F}_\mathrm{centrifugal}=-m\vec\Omega\times(\vec\Omega\times\vec{r})\\ \vec{F}_\mathrm{Coriolis}=-2m\vec\Omega\times\vec{v}$$ The question is - when are these forces sufficient to move the coin "away from your hand" - in other words, for what initial velocity $v$ is the total displacement of the coin greater than 10 cm (as a rough estimate of what "back in your hand" might look like; obviously you can change the numbers)? The centrifugal term depends only on position, not velocity, so during a short toss it acts as a small, nearly constant correction to the effective gravity; the deflection of the coin relative to the thrower is dominated by the Coriolis term, which depends on the coin's velocity. For an object moving perpendicular to the surface of the planet, the horizontal Coriolis deflection is strongest at the equator, becoming zero at the poles; it is a function of the velocity of the coin. We will calculate the expression as a function of latitude - recognizing that it will be a maximum at the equator. As a simplifying assumption, we assume the change in height is sufficiently small that we ignore changes in the force of gravity; we also ignore all atmospheric drag (in particular, the wind; if the opening scene of "The Martian" is to be believed, it can get pretty windy on the Red Planet). Finally we will assume that any horizontal velocity will be small - we ignore it when calculating the Coriolis force, but integrate it to obtain the displacement. The vertical velocity is given by $$v = v_0 - g\cdot t$$ and the total time taken is $t_t=\frac{2v_0}{g}$.
At any moment, the Coriolis acceleration is $$a_C=2\mathbf{\Omega}~v\cos\theta$$ Integrating once, we get $$v_h = \int a\cdot dt \\ = 2\mathbf{\Omega}\cos\theta\int_0^t(v_0-gt)dt\\ = 2\mathbf{\Omega}\cos\theta\left(v_0 t-\frac12 gt^2\right)$$ And for the displacement $$x_h = \int v_h dt \\ = 2\mathbf{\Omega}\cos\theta\int_0^t \left(v_0 t-\frac12 gt^2\right)dt\\ = 2\mathbf{\Omega}\cos\theta \left(\frac12 v_0 t^2-\frac16 gt^3\right)$$ Substituting for $t = \frac{2v_0}{g}$ we get $$x_h = 2\mathbf{\Omega}\cos\theta v_0^3\left(\frac{4}{g^2} - \frac{4}{3 g^2}\right)\\ = \frac{16\mathbf{\Omega}\cos\theta v_0^3}{3g^2}$$ The sidereal day of Mars is 24 hours, 37 minutes and 22 seconds - so $\Omega = 7.088\cdot 10^{-5}/s$ and the acceleration of gravity $g = 3.71 m/s^2$. Plugging these values into the above equation, we find $x_h = 2.75\cdot 10^{-5}v_0^3 m$, where velocity is in m/s. From this it follows that you would have to toss the coin with an initial velocity of about 15 m/s for the Coriolis effect to be sufficient to deflect the coin by 10 cm before it comes back down. On Earth, such a toss would result in a coin that flies for about 3 seconds, reaching a height of about 11 m. It is conceivable that someone could toss a coin that high - but I've never seen it. further reading AFTERTHOUGHT Your definition of "vertical" needs to be carefully thought through. There is a North-South component of the centrifugal "force" that is strongest at 45° latitude, and that will cause a mass on a string to hang in a direction that is not-quite-vertical. If you launch your coin in that direction, you will not observe a significant North-South deflection during flight, but if you were to toss the coin "vertically" (in a straight line away from the center of Mars), there will in fact be a small deviation. 
The relative magnitude of the centrifugal force and gravity can be computed from $$\begin{align}a_c &= \mathbf{\Omega^2}R\sin\theta\cos\theta \\ &= \frac12 \mathbf{\Omega^2}R\\ &= 8.5~\rm{mm/s^2}\end{align}$$ If you toss the coin at 15 m/s, it will be in the air for approximately 8 seconds. In that time, the above acceleration will give rise to a displacement of about 27 cm. This shows that your definition of "vertical" really does matter (depending on the latitude - it doesn't matter at the poles or the equator, but it is significant at the intermediate latitudes, reaching a maximum at 45° latitude).
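Going back to the Coriolis estimate, plugging the Mars numbers into the final formula $x_h = \frac{16\,\Omega\cos\theta\, v_0^3}{3g^2}$ as a quick check (equator, and the 15 m/s toss from above):

```python
import numpy as np

# Mars: sidereal day 24 h 37 m 22 s, surface gravity 3.71 m/s^2
omega = 2 * np.pi / (24 * 3600 + 37 * 60 + 22)   # ~7.088e-5 rad/s
g = 3.71

def coriolis_drift(v0, lat_deg=0.0):
    """Horizontal Coriolis displacement of a coin tossed up at v0 (m/s)."""
    return 16 * omega * np.cos(np.radians(lat_deg)) * v0**3 / (3 * g**2)

drift_15 = coriolis_drift(15.0)                       # ~9 cm at the equator
v_needed = (0.10 * 3 * g**2 / (16 * omega))**(1 / 3)  # speed for 10 cm drift
print(drift_15, v_needed)
```

This reproduces the numbers in the answer: a 15 m/s toss drifts by roughly 9 cm, and just over 15 m/s is needed to reach the 10 cm threshold.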
{ "source": [ "https://physics.stackexchange.com/questions/226882", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/78810/" ] }
228,077
There is one thing I have wondered about ever since I was a child. Do people who wear eyeglasses see objects at a different size than those who don't? (Technically, "different size" means a different projected image size on the retina.) For example, do myopic people who wear diverging lenses see the world smaller than a person with healthy eyes? Or conversely, do hyperopic people, who wear converging (magnifying) lenses to see sharply, have a zoomed-in view of the world? EDIT: Sorry all, I meant "field of view" instead of line of sight.
Yes. I am myopic and I see a slight double image along the edge of my glasses: This means that the field of view inside the frame is bigger - zoomed out - than what it would be with the same frames but without the lenses. Similarly, objects are slightly but perceptibly smaller than they are without the glasses, as is clear from the size of the mugs at bottom left of the image. Zooming in on them and putting the inside- and outside-the-glasses versions in direct contrast gives a clearer picture: As has been pointed out, the effect is strongly dependent on your exact position with respect to the lens, and it is exaggerated in the picture above because the camera is artificially far from the lens. Visually, and with the glasses correctly worn, the effect is milder and it looks closer to the door handle below: For the handle to look like this I need to be about 2m from the door; I have about 2.5 dioptres in that direction if my memory serves. In practice, though, your field of clear vision is restricted to the frame of the glasses , which do not cover a lot of your peripheral vision. This means that in practice you have a much reduced line of sight compared with someone without glasses, or who uses contact lenses. Whatever amplification was gained through this mechanism - the optics of which have been described in detail in other answers - is completely lost to this, and it's something to be aware of with people wearing glasses - our field of view is rather restricted, regardless of whether one is myopic or hyperopic.
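For what it's worth, the size of the effect can be estimated with the standard "power factor" of spectacle magnification, $M = 1/(1 - dP)$, where $d$ is the vertex distance (lens-to-eye) and $P$ the lens power in diopters; this is a thin-lens estimate that ignores lens shape and thickness, and the prescription and vertex distance below are just plausible guesses:

```python
# Spectacle magnification "power factor" for a thin lens worn at
# vertex distance d in front of the eye:  M = 1 / (1 - d * P).
# The vertex distance and prescriptions here are illustrative guesses.
def spectacle_magnification(power_diopters, vertex_m=0.015):
    return 1.0 / (1.0 - vertex_m * power_diopters)

m_myope = spectacle_magnification(-2.5)     # diverging lens
m_hyperope = spectacle_magnification(+2.5)  # converging lens

print(m_myope)     # < 1: the world looks a few percent smaller
print(m_hyperope)  # > 1: the world looks a few percent larger
```

For a -2.5 D lens at 15 mm this gives about 3-4% minification, which matches the mild shrinkage visible along the edge of the frames in the photos.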
{ "source": [ "https://physics.stackexchange.com/questions/228077", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/7743/" ] }
228,579
I'm interested in whether the scale of processes that occur in the brain is small enough to be affected by quantum mechanics. For instance, we ignore quantum mechanics when we analyze a game of tennis because a tennis ball is much too large to be affected by quantum mechanics. However, signals in the brain are mostly (all?) electrical, carried by electrons, and electrons are definitely 'small' enough to be affected by quantum mechanics. Does that mean the only way we will be able to further understand how the mind works is through an application of quantum mechanics?
Quantum mechanics has almost no bearing on the operation of the brain, except insofar as it explains the existence of matter. You say that signals are carried by electrons, but this is very imprecise. Rather, they are carried by various kinds of chemical signals, including ions. Those signals are released into a warm environment that they interact with over a very short timescale. Quantum mechanical processes like interference and entanglement only continue to show effects that differ from classical physics when the relevant information does not leak into the environment. This issue has been explained the context of the brain by Max Tegmark in The importance of quantum decoherence in brain processes . In the brain, the leaking of information should take place over a time of the order $10^{-13}-10^{-20}$s. The timescale over which neurons fire etc. is $0.001-0.1$s. So your thoughts are not quantum computations or anything like that. The brain is a classical computer.
{ "source": [ "https://physics.stackexchange.com/questions/228579", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/60080/" ] }
228,887
I have just finished watching the new Star Wars movie (The Force Awakens), and during the end credits, text is shown upon a background of stars. Wearing the 3D glasses, I noticed that the text appears in the foreground, and the stars appear in the background. On removing the glasses, I then noticed that the text is crisp, clear and drawn only once on the screen, whereas the stars are all drawn twice, with a constant distance (from left to right) between the two instances of each star. However, from my understanding of how 3D glasses work, objects whose two instances are further apart should appear closer to the eyes, and objects whose two instances are close together (or at the same location, as with the text), should appear further away from the eyes. So why did I experience the opposite effect when I watched the credits?
The others have already provided good explanations, but since it sounded like an interesting question and I already sketched up a diagram, I thought I would show it, too. As already mentioned, if you have an object that is to be shown as the exact same distance as the distance between you and the screen, it's very easy to represent that: It's just a single object on the screen that looks the same to both eyes. If, on the other hand, you want to show an object which is far away, then you need to 'trick' your eyes by showing two separate images on the screen, one for the left eye and another for the right eye. That's indicated by the two hollow green dots on the screen on the diagram below. And if you want to show an object which is closer to you than the actual screen distance, then to trick your eyes two images at the locations of the hollow blue dots need to be presented on screen. Note that for objects that are to appear closer than the screen distance that the placement of the images is reversed: The image for the left eye is to the right of the image for the right eye, and the image for the right eye is to the left of the image for the left eye. It's interesting to note that the apparent distances of the objects are all scaled to the actual distance of the observer to the screen, so audience members will have somewhat different impressions of the distances to the on-screen objects depending on how far from the screen they are seated. Perhaps something to take into account the next time you go to see a 3D movie. (P.S.: On the diagram the actual observer-to-screen distance is assumed to be 30 feet. Hence, an object to be presented at an apparent distance of 30 feet is represented by a single dot on the screen.)
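The geometry in the diagram reduces to one similar-triangles formula for the on-screen separation $s$ of the left/right images: $s = e\,(z-d)/z$, where $e$ is the eye separation, $d$ the viewer-to-screen distance, and $z$ the intended apparent distance. A small sketch with assumed theatre numbers (9 m to the screen, 65 mm eye separation):

```python
# On-screen separation of the left/right images for an object meant to
# appear at distance z, for a viewer at distance d with eye separation e:
#     s = e * (z - d) / z     (similar triangles)
# s > 0: uncrossed images -> object appears behind the screen
# s = 0: single image     -> object at the screen distance
# s < 0: crossed images   -> object appears in front of the screen
def disparity(z, d=9.0, e=0.065):
    return e * (z - d) / z

print(disparity(9.0))    # at screen depth: drawn once
print(disparity(1e9))    # "infinitely" distant stars: separation -> e
print(disparity(4.5))    # closer than the screen: crossed (negative)
```

This matches the credits: the far-away stars are drawn twice, roughly an eye-width apart, while text at (or in front of) screen depth is drawn once (or with crossed images), and the separation saturates at the eye separation no matter how distant the object — which is also why the star pairs all share the same constant offset.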
{ "source": [ "https://physics.stackexchange.com/questions/228887", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/89783/" ] }
229,047
Audio of a shepard tone on youtube. So what is a Shepard tone ? A Shepard tone, named after Roger Shepard, is a sound consisting of a superposition of sine waves separated by octaves. When played with the base pitch of the tone moving upward or downward, it is referred to as the Shepard scale. This creates the auditory illusion of a tone that continually ascends or descends in pitch, yet which ultimately seems to get no higher or lower. ( wikipedia ). A computer-simulated Shepard tone goes on and on and on; it never ends, literally. Our brain perceives that the pitch is increasing gradually, but after some time we feel that the tone is repeating, starting again from the same point. So the perceived pitch of the tone changes periodically, like a sine wave. But why can't the human voice produce that tone? However hard we try, we cannot. This may be due to exhaustion or the capacity of the lungs; our voice seems to saturate at a certain limit, beyond which we cannot produce the sound. Why? If the frequency of the tone changes periodically like a sine wave, we should be able to continue producing the tone from where we started it. But no, this does not happen. Why? PS - my terminology may be wrong, so feel free to edit it.
The human voice box produces a fundamental frequency and its harmonics because the mechanism is like that of a relaxation oscillator . However, we have limited control over the relative amplitude of the harmonics (we do have some - that is how we change the "color" of a tone we sing, and the sound of vowels). In order to produce the Shepard scale, you need to be able to control the relative amplitude of the different harmonics - especially the ratio of the lowest two harmonics. To a limited extent we do this when we change the vowel that we sing - with the "oo" sound having few "really high" harmonics, while the "ah" has lots. For example, from the hyperphysics site we get this image: showing that there is a lot of harmonic content in the voice. But it's not "evenly distributed" - so if you were to drop by an octave, you are creating a sound that is sufficiently different that you don't really get the feeling that you have an "eternal" scale. I suspect the most important problem is that you would want to re-introduce the lowest harmonic with a slowly increasing amplitude, so that the note "returns to the lower range" without ever appearing to jump there. But the mechanism of the vocal cords is too simple to allow it. Incidentally, when sopranos sing very high notes, many people lose the ability to distinguish what vowel they are singing, since the harmonics are further apart, and the ear distinguishes between vowels by estimating the shape of the frequency envelope in the range up to a few kHz; when there are very few harmonics in that range, the shape cannot be determined. The "high C" (C7) has a frequency of 2093 Hz, so there might be just a couple of harmonics available to figure out the sound. That makes vowels in the highest register hard to distinguish.
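By contrast, the trick is easy for a computer precisely because software can set each partial's amplitude independently, which the larynx cannot. A rough numpy sketch of a rising Shepard glide (the envelope width and base frequency are arbitrary choices, and real Shepard stimuli are usually constructed more carefully):

```python
import numpy as np

def shepard_tone(duration=10.0, rate=44100, f_low=20.0, octaves=9):
    """One cycle of a rising Shepard glide: octave-spaced partials whose
    loudness follows a fixed bell curve in log-frequency, so each partial
    fades in at the bottom and out at the top as the stack drifts up by
    exactly one octave."""
    t = np.arange(int(duration * rate)) / rate
    drift = t / duration                     # 0 -> 1 octave over the cycle
    signal = np.zeros_like(t)
    for k in range(octaves):
        pitch = k + drift                    # partial position, in octaves
        freq = f_low * 2.0 ** pitch
        env = np.exp(-0.5 * ((pitch - octaves / 2) / (octaves / 6)) ** 2)
        phase = 2 * np.pi * np.cumsum(freq) / rate   # integrate f -> phase
        signal += env * np.sin(phase)
    return signal / np.max(np.abs(signal))

tone = shepard_tone()   # at t = duration each partial sits exactly where
                        # its upper neighbour started, so the cycle loops
```

The key line is the envelope `env`: it is a function of log-frequency position only, not time, which is exactly the independent per-harmonic amplitude control a human voice lacks.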
{ "source": [ "https://physics.stackexchange.com/questions/229047", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/85785/" ] }
229,168
I have heard several pseudoscientific explanations about the Heisenberg Uncertainty Principle and find them hard to believe. As a mathematician mainly focusing on functional analysis, I have a considerable interest in this matter. Although I still have no idea about it, I aim to truly understand it one day. For now, the question is: Can the Heisenberg Uncertainty Principle be explained so that a non-scientist can at least get a correct idea of what it says and what it does not? One of the most commonly heard explanations is the one that naïvely says that while trying to determine the position of a particle you send energy (light), so you modify its speed, therefore making it impossible to determine the speed. If I understand well, this should only be some sort of metaphor to explain it, right?
The best intuitive analogy I've heard is with classical sound waves. Consider a musical instrument playing a pure sine wave of frequency $\nu$ and amplitude $A$, and no other harmonic frequencies at all. Graphing this in frequency-amplitude space ($x$-axis=frequency, $y$=amplitude) gives you a $\delta$-function-like point function with value $y=A$ at $x=\nu$, and zero everywhere else. That represents your exact knowledge of the note's frequency. But at what time was the note played? A pure sine wave extends from $-\infty<t<\infty$. Any attempt to play a shorter note necessarily introduces additional components/harmonics in its Fourier decomposition. And the shorter the interval $t_0<t<t_1$ you want, the broader your frequency spectrum has to become. Indeed, imagine an instantaneous sound. Neither your ear, nor any apparatus, can say anything about its frequency at all -- you'd have to sense some finite portion of the waveform to analyse its shape/components, but "instantaneous" precludes that. So, you can't simultaneously know both a note's frequency and the time it's played, due to the Fourier conjugate nature of frequency/time. The better you know one, the worse you know the other. And, as @annav mentioned, that's analogous to the nature of conjugate quantum observables. Edit: to address @sanchises remark about some "crude MSPaint drawings"... For simplicity (i.e., my own simplicity generating the following "crude drawings"), I'm illustrating an almost-square wave below, rather than a sine wave. Suppose you wanted to produce a sound wave with a one-cycle duration, looking something like, So the "tails" are zero in both directions, indicating the sound's finite duration. But if we try generating that with just two fourier components, we can't get those zero-tails. Instead, it looks like, As you see, we can't "localize" the sound's duration with just two frequencies. 
To get a better approximation, four components looks like, And that still fails to accomplish much by way of "localization". Next, eight components looks like, And that's beginning to exhibit the behavior we're looking for. Sixteen looks like, And I could go on. The initial illustration above was generated with 99 components, and looks pretty much like the intended square wave. Comment: you guys coincidentally stepped into one of my little programs when mentioning drawings. See http://www.forkosh.com/onedwaveeq.html for a discussion, although not about uncertainty. To get the above illustrations, I used the following parameters in that "Solver Box" at top, nrows=100&ncols=256&ncoefs=99&fgblue=135&f=0,0,0,0,0,0,1,1,1,1,1,-1,-1,-1,-1,-1,0,0,0,0,0,0,0&gtimestep=1&bigf=1 Just change the ncoefs=99 to generate the corresponding drawings above.
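The figures above can be reproduced in a few lines of numpy; a sketch using the same component counts (2, 4, 8, 16, 99). The "leak" printed for each count is the RMS amplitude in the region that should be silent, and it shrinks markedly as components are added — a short note needs a broad spectrum:

```python
import numpy as np

n = 512
x = np.linspace(0.0, 1.0, n, endpoint=False)

# One square-wave cycle confined to the middle of the window, zero
# ("silent") everywhere else -- like the f=0,...,1,...,-1,...,0 list.
target = np.zeros(n)
target[(x >= 0.25) & (x < 0.50)] = 1.0
target[(x >= 0.50) & (x < 0.75)] = -1.0

spectrum = np.fft.rfft(target)

def rebuild(n_terms):
    """Rebuild the signal from its lowest n_terms Fourier components."""
    s = np.zeros_like(spectrum)
    s[:n_terms] = spectrum[:n_terms]
    return np.fft.irfft(s, n)

outside = (x < 0.25) | (x >= 0.75)   # region that should stay silent
for n_terms in (2, 4, 8, 16, 99):
    leak = np.sqrt(np.mean(rebuild(n_terms)[outside] ** 2))
    print(n_terms, round(leak, 4))
```

With only a couple of components the reconstruction rings across the whole window (no localization in time); with 99 it is nearly confined to the intended interval — the Fourier-conjugate trade-off the answer describes.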
{ "source": [ "https://physics.stackexchange.com/questions/229168", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/103729/" ] }
229,480
Considering that $7$ TeV is more or less the same kinetic energy as a flying mosquito, why is it considered to be a great amount of energy at the LHC? I mean, a giant particle accelerator that can only provide 7 TeV of energy? (14 TeV in the centre-of-mass frame, if I understood correctly.) Is it because particles are so small that this amount of energy, in proportion, is really huge?
$7\ \mathrm{TeV}$ is not that much kinetic energy; that has been covered by your question and previous answers. However, in the context of a proton , with a rest mass of $1.672\times10^{−27}~\mathrm {kg}$ (very, very little mass), when a single proton carries $7\ \mathrm{TeV}$ it is traveling at a specific speed: $$E= mc^2$$ \begin{align}E& = E_0 + E_\mathrm k\\ E_\mathrm k&=E- E_0\\ &= mc^2 -m_0c^2\\ &= \gamma m_0c^2 - m_0c^2\\ &= m_0c^2 \left(\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}- 1\right)\\ \implies 1+ \frac{E_\mathrm k}{m_0c^2}&= \frac{1}{\sqrt{1-\frac{v^2}{c^2}}}\\ \implies {\sqrt{1-\frac{v^2}{c^2}}}&=\frac{1}{1 + \frac{E_\mathrm k}{m_0c^2}}\\ \implies 1-\frac{v^2}{c^2} &= \left(\frac{1}{1 + \frac{E_\mathrm k}{m_0c^2}}\right)^2 \\ \implies \frac{v}{c}&=\sqrt{1 -\left(\frac{1}{1 + \frac{E_\mathrm k}{m_0c^2}}\right)^2 } \end{align} For a proton at $7\ \mathrm{TeV}$ , this is $99.9999991\ \%$ of the speed of light Source Now, keep in mind this is for each proton in two beams , each proton having $7\ \mathrm{TeV}$ , guided by superconducting magnets cooled by liquid helium, and then colliding for a sum of $14\ \mathrm{TeV}$ in each proton-proton collision. Each beam contains $2\,808$ 'bunches' of protons, and each 'bunch' contains $1.15 \times 10^{11}$ protons, so each beam carries $362\ \mathrm{MJ}$ (megajoules). This gives a total kinetic energy of $724\ \mathrm{MJ}$ in the beams alone: about 7 times the kinetic energy of landing a 55-tonne aircraft at typical landing speed according to the Joules Orders of Magnitude Wikipedia page. At that $59~\mathrm{ m/s}$ landing speed, a single beam matches the kinetic energy of a roughly 208-tonne aircraft: about a safely loaded Airbus A330-200 ( maximum takeoff weight of 242 tonnes ). Add to that the energy required to keep the ring cold enough to remain superconducting, accelerate the beams in the first place, keep them topped up against losses, and light, heat and power the facility.
"At peak consumption, usually from May to mid-December, CERN uses about 200 megawatts of power, which is about a third of the amount of energy used to feed the nearby city of Geneva in Switzerland. The Large Hadron Collider (LHC) runs during this period of the year, using the power to accelerate protons to nearly the speed of light. CERN's power consumption falls to about 80 megawatts during the winter months." - Powering CERN
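A few lines of Python reproduce both headline numbers above — the proton's speed and the roughly 362 MJ per beam (constants rounded):

```python
import numpy as np

m0c2 = 938.272e6          # proton rest energy, eV
E_k = 7.0e12              # kinetic energy per proton, eV (7 TeV)

gamma = 1.0 + E_k / m0c2                  # Lorentz factor, ~7.5e3
beta = np.sqrt(1.0 - (1.0 / gamma)**2)    # v/c, from the formula above

# Energy stored in one beam: 2808 bunches of 1.15e11 protons each.
eV = 1.602176634e-19                      # joules per eV
beam_energy = 2808 * 1.15e11 * E_k * eV

print(f"{beta:.9f}")          # 0.999999991
print(beam_energy / 1e6)      # ~362 megajoules
```

The Lorentz factor of about 7500 is the real point: the proton's kinetic energy is roughly 7500 times its rest energy, which is why a mosquito-sized energy is enormous for a single particle.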
{ "source": [ "https://physics.stackexchange.com/questions/229480", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/90893/" ] }
230,054
The kinetic energy of a moving object is $E = \frac{1}{2}mv^2\;.$ That is, it increases with velocity squared. I walk at say 3 miles per hour, or let's round that down to 1 meter per second for a slow walk. I weigh less than $100~\mathrm{kg}\;,$ but let's just round that up to $100~\mathrm{ kg}$ for convenience (it is just after Christmas). So when I walk along the pavement, I have $\frac{1}{2}\times 100\times 1^2 = 50$ joules of kinetic energy. Now I get on a passenger jet, which is cruising at around 500 knots, call that 250 meters per second. In my seat I have $\frac{1}{2}\times 100\times 250^2 = 3125000$ joules of kinetic energy. But when I walk down the aisle I have $\frac{1}{2}\times 100\times 251^2 = 3150050$ joules of kinetic energy. The difference between these is 25050 joules. It feels the same to me, walking down the pavement as walking down the aisle of the plane. I didn't have to make some huge effort to get up to speed on the plane, yet I needed 500 times the energy to do it. How is this possible, and where did the energy come from?
Due to momentum being conserved, when you accelerate yourself forwards relative to the plane, the tangential force you're applying to the floor will accelerate the rest of the plane backwards. Since the plane has a lot more mass than you, its velocity will not change by very much. Thus, an inertial observer who was initially at rest with respect to the plane (and you) will see both you and the plane gain kinetic energy (due to your muscle work). The vast majority of the additional kinetic energy goes into you, though. However, an observer on the ground will see the rest-of-the-plane slow down slightly, which means that it loses quite a bit of kinetic energy due to its large mass and velocity. This loss of kinetic energy from the plane cancels out the additional kinetic energy the ground observer thinks you gained, so the ground observer's energy ledger still balances. (Mathematically, to the ground observer $v$ is, to a first approximation, both the ratio between your gained kinetic energy and your gained momentum, and the ratio between the plane's lost kinetic energy and its lost momentum. So conservation of momentum leads to conservation of total energy, to first order. The term that comes from your muscle work is a second-order effect). Both observers agree on the amount of energy your muscles contribute (at least as long as relativistic effects can be ignored).
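Here is the bookkeeping with concrete (made-up) numbers — a 100 kg passenger, a 200-tonne plane, 250 m/s cruise — showing that both frames agree once the plane's recoil is included:

```python
# Illustrative numbers, not from the question: 100 kg passenger,
# 2e5 kg plane, 250 m/s cruise; the passenger accelerates to
# 1 m/s relative to the cabin.
m, M, V = 100.0, 2.0e5, 250.0

# Momentum conservation in the frame initially at rest w.r.t. the plane:
# m*u + M*dV = 0, with relative speed u - dV = 1 m/s.
u = 1.0 * M / (M + m)      # passenger's speed in that frame (~0.9995 m/s)
dV = -m * u / M            # the plane's tiny backward recoil

# Work done by the muscles -- the same in every inertial frame:
work = 0.5 * m * u**2 + 0.5 * M * dV**2

# Ground-frame bookkeeping: the walker gains a lot of kinetic energy,
# but the recoiling plane loses almost exactly as much.
dK_passenger = 0.5 * m * (V + u)**2 - 0.5 * m * V**2
dK_plane = 0.5 * M * (V + dV)**2 - 0.5 * M * V**2

print(dK_passenger)               # ~ +25 kJ gained by the walker
print(dK_plane)                   # ~ -25 kJ lost by the plane
print(dK_passenger + dK_plane)    # ~ 50 J: just the muscle work
```

The ~25 kJ the ground observer sees you gain is almost entirely paid for by the plane's loss; the net is the ~50 J your muscles supplied, exactly as the answer's first-order argument predicts.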
{ "source": [ "https://physics.stackexchange.com/questions/230054", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/78868/" ] }
230,228
I understand that the LHC found the Higgs boson by pumping so much energy into a tiny space (via near light speed proton-proton collisions) that a Higgs boson appeared momentarily, then instantly decayed. They detected the products of the decay, and deduced that a Higgs boson must have been there. That's fine. What I don't get is: 1) Isn't the universe full of Higgs bosons, making up the Higgs field? If so, why do we need to make one ourselves? Why can't we detect the ones that are there already, like we can other bosons such as photons? 2) When we've made our Higgs out of pure energy, why does it instantly decay into other particles? 3) Does it actually directly decay into other particles, or is it rather the case that it just turns back into pure energy, and then that energy produces other, less massive particles?
Isn't the universe full of Higgs bosons, making up the Higgs field? No. In particle physics, it is understood that the underlying (more fundamental) object is the field, not the particles. Particles are excitations of the fields that can be measured, and always carry certain properties like charge, mass, spin etc. The field that you are most familiar with is the electromagnetic field, its excitations being the photons. In another field the excitations are electrons, in another still there are gluons etc. And there is a Higgs field, whose excitations are the Higgs bosons. The Higgs field, in contrast to the electromagnetic field, has a non-zero value even if there are no Higgs bosons there. To have an analogy in mind, think of a room full of air. When I speak, there are sound waves moving around the air. The air is the Higgs field, the sound waves are the Higgs bosons. Why can't we detect the ones that are there already, like we can other bosons such as photons? Higgs bosons are very massive, as particles go, so they require a lot of energy to be created in collisions. Additionally they have a number of decay pathways, so when they are created, they decay rapidly. So, even if Higgs bosons are created all the time in the atmosphere, or in supernovae or other events, they are rare and hard to detect. That is why we set up an experiment that can reproduce millions of collisions a second so as to accumulate enough data. When we've made our Higgs out of pure energy, why does it instantly decay into other particles? This is kind of misguided. There is no clear meaning of "pure" energy. Energy is a quantity that is assigned to various phenomena, yet is common and interchangeable between them all. We speak of kinetic energy, potential energy, mass-energy, etc. but none of these forms is "purer" in any specific sense. 
In the particle collisions, the kinetic and rest mass energy of the protons is concentrated in a small part of spacetime, and can be redistributed into the kinetic, potential and mass energy of other particles. Once a particle is formed, it does not really matter in what way it has been formed. Just like a radioactive nucleus has the same probability of decaying in the next $10$ minutes irrespective of how long it has survived until now, a Higgs boson will decay with a certain probability into the particles it can decay to. Does it actually directly decay into other particles, or is it rather the case that it just turns back into pure energy, and then that energy produces other, less massive particles? Here we end up a little in metaphysics. You will have different answers depending on the interpretation of QM you choose. All we observe is the protons that go into the collision, and the shower of particles that comes out after the collision, together with their energy. That's all. Quantum theory will give you the statistics of these observations, but not what happens between the two observations; that is (for now) metaphysics, because it is unobservable. Strictly speaking, no Higgs boson has been observed, in the sense that no Higgs boson has collided with the detectors. We have calculated how the existence of the Higgs field would affect the measurements, we found that it would affect them in a particular way, we did the experiments and indeed found that signature. The experiments and theory match so well that it is inescapable that there is a Higgs field, even though we have not "seen" (with our eyes) any Higgs bosons. To speak about the exact way in which one particle comes into existence and decays is a bit beyond present physics (also worth exploring in other questions).
{ "source": [ "https://physics.stackexchange.com/questions/230228", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/42047/" ] }
230,346
Diamond and graphite are both made of the same atom, carbon. Diamond has a tetrahedron structure while graphite has a flat hexagonal structure. Why is diamond transparent while graphite is not (at least not with more than a couple of layers)?
The answer lies in the band structure of the two materials. The band structure describes how the electrons in a solid are bound, and what other energy states are available to them. Very simply, the band gap for transparent diamonds is very wide (see this link ): Normally, diamond is not a conductor: all the electrons live in the "valence band", and you need a photon with at least 5.4 eV of energy to push an electron into the conduction band. In the process, that photon would be absorbed. A photon with less energy cannot give its energy to an electron, because that electron "has nowhere to go". And since visible light has energies of between 1.65 and 3.1 eV , only UV photons have enough energy to be absorbed by pure diamond. That same link also describes how impurities give rise to color in diamond: for example, nitrogen atoms produce an "intermediate" energy level, and this gives rise to more energetic electrons that could jump the gap to the conduction band and absorb light. By contrast, graphite is a conductor. As a conductor, it has electrons in the conduction band already. You know this, because even a tiny voltage will give rise to a current - this tells us that the electrons didn't need to be "lifted" into the conduction band first. And since electrons will absorb any amount of energy easily, the material absorbs all wavelengths of light: which makes it black.
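The photon-energy numbers in this answer are easy to check. A small sketch using the standard relation $E = hc/\lambda$ (with $hc \approx 1240$ eV·nm) and the ~5.4 eV gap quoted above:

```python
# Which photons can bridge diamond's ~5.4 eV band gap?
hc_eV_nm = 1239.84   # h*c in eV.nm
gap_eV = 5.4         # band gap of pure diamond, as quoted above

threshold_nm = hc_eV_nm / gap_eV
print(f"absorption threshold ~ {threshold_nm:.0f} nm (ultraviolet)")

# Visible wavelengths all fall short of the gap, so they pass through:
for wavelength_nm in (400, 550, 700):
    e = hc_eV_nm / wavelength_nm
    print(wavelength_nm, "nm:", round(e, 2), "eV, absorbed:", e >= gap_eV)
```

The threshold comes out around 230 nm, well into the UV, which is why pure diamond is transparent to the whole visible range.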
{ "source": [ "https://physics.stackexchange.com/questions/230346", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/103999/" ] }
230,363
Suppose I have a still (granite) cube on a flat surface with friction with gravity evenly pulling down on everything. I know how to calculate how much force something such as electromagnetism would require to move the cube and overcome static friction but how fast would I have to throw a (granite) ball at the cube to make it overcome static friction? How much momentum and how much kinetic energy would the ball require?
{ "source": [ "https://physics.stackexchange.com/questions/230363", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/73720/" ] }
230,495
As far as I understand, one requires that in order for the scalar product between two vectors to be invariant under Lorentz transformations $x^{\mu}\rightarrow x^{\mu^{'}}=\Lambda^{\mu^{'}}_{\,\,\alpha}x^{\alpha}$, we require that the metric $\eta_{\mu\nu}$ transform as $\eta_{\mu\nu}\rightarrow \eta_{\mu^{'}\nu^{'}}=\Lambda^{\alpha}_{\,\,\mu^{'}}\eta_{\alpha\beta}\Lambda^{\beta}_{\,\,\nu^{'}}$. [Since we require that $x^{\mu^{'}}x_{\mu^{'}}=x^{\alpha}x_{\alpha}\Rightarrow x^{\mu^{'}}x_{\mu^{'}}=\eta_{\mu^{'}\nu^{'}}x^{\mu^{'}}x^{\nu^{'}}=\eta_{\mu^{'}\nu^{'}}\Lambda^{\mu^{'}}_{\,\,\alpha}\Lambda^{\nu^{'}}_{\,\,\beta}x^{\alpha}x^{\beta}=x^{\alpha}x_{\alpha}=\eta_{\alpha\beta}x^{\alpha}x^{\beta}$]. What confuses me, is that I've been reading up on the cosmological constant problem and in several sets of notes it is claimed that the contribution of the vacuum energy density to the energy-momentum tensor should be of the form $$T^{vac}_{\mu\nu}=-\rho^{vac}g_{\mu\nu}$$ the argument being that the vacuum must be Lorentz invariant and the only Lorentz invariant tensor is the metric tensor $\eta_{\mu\nu}$ (apart from the Levi-Civita tensor (density)). I don't see how this is the case by looking at $\eta_{\mu^{'}\nu^{'}}=\Lambda^{\alpha}_{\,\,\mu^{'}}\eta_{\alpha\beta}\Lambda^{\beta}_{\,\,\nu^{'}}$, how is it obvious that this is Lorentz invariant? Shouldn't it be something like $\eta_{\mu^{'}\nu^{'}}=\eta_{\mu\nu}$? Apologies if this is a stupid question, but I'm just having a mental block over it.
I believe it can be useful to define the following concepts (I won't be very formal here for pedagogical reasons): Any event can be described through four real numbers, which we take to be: the moment in time it happens, and the position in space where it takes place. We call this four numbers the coordinates of the event. We collect these numbers in a tuple, which we call $x\equiv (t,\boldsymbol r)$. These numbers depend, of course, on which reference frame we are using: we could, for example, use a different origin for $t$ or a different orientation for $\boldsymbol r$. This means: for $x$ to make sense, we must pick a certain reference frame. Call it $S$ for example. Had we chosen a different frame, say $S'$, the components of the same event would be $x'$, i.e., four real numbers, in principle different from those before. We declare that the new reference frame is inertial if and only if $x'$ and $x$ are related through $$ x'=\Lambda x \tag{1} $$ for a certain matrix $\Lambda$, that depends, for example, on the relative orientations of both reference frames. There are certain conditions $\Lambda$ must fulfill, which will be discussed in a moment. We define a vector to be any set of four real numbers such that, if its components in $S$ are $v=(v^0,\boldsymbol v)$, then in $S'$ its components must be $$ v'=\Lambda v \tag{2} $$ For example, the coordinates $x$ of an event are, by definition, a vector, because of $(1)$. There are more examples of vectors in physics, for example, the electromagnetic potential , or the current density , the momentum of a particle, etc. It turns out that it is really useful to define the following operation for vectors: if $u,v$ are two vectors, then we define $$ u\cdot v\equiv u^0 v^0-\boldsymbol u\cdot\boldsymbol v\tag{3} $$ The reason this operation is useful is that it is quite ubiquitous in physics: there are many formulas that use this. 
For example, any conservation law, the wave equation, the Dirac equation, the energy-momentum relation, etc. We define the operation $\cdot$ through the components of the vectors, but we know these components are frame-dependent, so if $\cdot$ is to be a well-defined operation, we must have $$ u\cdot v=u'\cdot v' \tag{4} $$ because otherwise $\cdot$ would be pretty useless. This relation $(4)$ won't be true in general, but only for some matrices $\Lambda$. Thus, we declare that the matrices $\Lambda$ can only be those which make $(4)$ true. This is a restriction on $\Lambda$: only some matrices will represent changes of reference frames. Note that in pure mathematics, any invertible matrix defines a change of basis. In physics only a subset of matrices are acceptable changes of basis. So, what are the possible $\Lambda$'s that satisfy $(4)$? Well, the easiest way to study this is to rewrite $(3)$ using a different notation: define $$ \eta=\begin{pmatrix} 1 &&&\\&-1&&\\&&-1&\\&&&-1\end{pmatrix} \tag{5} $$ This is just a matrix that will simplify our discussion. We should not try to find a deep meaning for $\eta$ (it turns out there is a lot of geometry behind $\eta$, but this is not important right now). Using $\eta$, it's easy to check that $(3)$ can be written as $$ u\cdot v=u^\mathrm{T}\eta v \tag{6} $$ where on the r.h.s. we use the standard matrix product. If we plug $v'=\Lambda v$ and $u'=\Lambda u$ here, and set $u\cdot v=u'\cdot v'$, we find that we must have $$ \Lambda^\mathrm{T} \eta \Lambda=\eta \tag{7} $$ This is a relation that defines $\Lambda$: any possible change of reference frame must be such that $(7)$ is satisfied. If it is not, then $\Lambda$ cannot relate two different frames. This relation is not in fact a statement of how $\eta$ transforms (as you say in the OP), but actually a restriction on $\Lambda$. It is customary to say that $\eta$ transforms as $(7)$, which will be explained in a moment. 
For now, just think of $(7)$ as specifying which matrices $\Lambda$ are possible. At this point, it is useful to introduce index notation. If $v$ is a vector, we call its components $v^\mu$, with $\mu=0,1,2,3$. On the other hand, we write the components of changes of frames $\Lambda^\mu{}_\nu$. With this notation, $(2)$ can be written as $$ v'^\mu=\Lambda^\mu{}_\nu v^\nu \tag{8} $$ Also, using index notation, the product of two vectors can be written as $$ u\cdot v=\eta_{\mu\nu}u^\mu v^\nu \tag{9} $$ where $\eta_{\mu\nu}$ are the components of $\eta$. Index notation is useful because it allows us to define the following concept: a tensor is an object with several indices, e.g. $A^{\mu\nu}$. But not any object with indices is a tensor: the components of a tensor must change in different frames of reference, such that they are related through $$ \begin{align} &A'^{\mu\nu}=\Lambda^\mu{}_\rho \Lambda^\nu{}_\sigma\ A^{\rho\sigma} \\ &B'^\mu{}_\nu=\Lambda^\mu{}_\rho(\Lambda^\mathrm{T})_\nu{}^\sigma\ B^\rho{}_\sigma\\ &C'^{\mu\nu}{}_\pi{}^\tau=\Lambda^\mu{}_\rho \Lambda^\nu{}_\sigma (\Lambda^\mathrm{T})_\pi{}^\psi \Lambda^\tau{}_\omega\ C^{\rho\sigma}{}_\psi{}^\omega \end{align}\tag{10} $$ and the obvious generalisation for more indices: for every upper index, there is a factor of $\Lambda$, and for every lower index, a factor of $\Lambda^\mathrm{T}$. If the components of an object with indices don't satisfy $(10)$ then that object is not a tensor. According to this definition, any vector is a tensor (with just one index). I don't like to use index notation too much: $v'=\Lambda v$ is easier than $v'^\mu=\Lambda^\mu{}_\nu v^\nu$, don't you think? But sometimes we have to use index notation, because matrix notation is not possible: when using tensors with three or more indices, matrices cannot be used. Tensors with one index are just vectors. 
You'll hear sometimes that matrices are tensors with two indices, which is not quite true: if you remember from your course on linear algebra, you know that when you make a change of basis, matrices transform like $M\to C^\mathrm{T} M C$, which is like $(10)$ in the case of one upper/one lower index. Therefore, matrices are like tensors with one upper/one lower index. This is the reason we wrote $\Lambda$ as $\Lambda^\mu{}_\nu$. This is a matrix, but it is also a tensor. Also, $(7)$ pretty much looks like $(10)$, right? This is the reason people say $(7)$ expresses the transformation properties of $\eta$. While not false, I recommend not taking this too seriously: formally, it is right, but in principle $\eta$ is just a set of numbers that simplifies our notation for scalar products. It turns out you can think of it as a tensor, but only a-posteriori. In principle, it is not defined as a tensor, but it turns out it is. Actually, it is a trivial tensor (the only one!) whose components are the same in every frame of reference (by definition). If you were to calculate the components of $\eta$ in another frame of reference using $(10)$, you'd find out that they are the same. This is stated as the metric is invariant. We actually define it to be invariant. We define what a change of reference frame is through the restriction that $\eta$ be invariant. It doesn't make sense to try to prove $\eta$ is invariant, as this is a definition. $(7)$ doesn't really prove $\eta$ is invariant, but actually defines what a change of reference frame is. For completeness I'd like to make the following definitions: We say an object is invariant if it takes the same value in any frame of reference. You can check that if $v$ is a vector, then $v\cdot v$ takes the same value in any frame, i.e., $v^2$ is invariant. 
We say an object is covariant if it doesn't take the same value in every frame of reference, but the different values are related in a well-defined way: the components of a covariant object must satisfy $(10)$. This means tensors are covariant by definition. For example, a vector is not invariant because its components are frame-dependent. But as vectors are tensors, they are covariant. We really like invariant objects because they simplify a lot of problems. We also like covariant objects because, even though these objects are frame-dependent, they transform in a well-defined way, making them easy to work with. You'll understand this better after you solve many problems in SR and GR: in the end you will be thankful for covariant objects. So, what does it mean for $\eta$ to be invariant? It means its components are the same in every (inertial) frame of reference. How do we prove this? We can't, because we define this to be true. How can we prove $\eta$ is the only invariant tensor? We can't, because it is not actually true. The most general invariant tensor is proportional to the metric. Proof: let $N^\mu{}_\nu$ be an invariant tensor by definition. Then, as it is a tensor, we have $$ N'=\Lambda^\mathrm{T}N\Lambda \tag{11} $$ But we also must have $N'=N$ for it to be invariant. This means $\Lambda^\mathrm T N\Lambda=N$. Multiply on the right by $\eta \Lambda^\mathrm{T} \eta$ and use $(7)$ to get $[N,\Lambda^\mathrm{T}]=0$. By Schur's Lemma, $N$ must be proportional to the identity. QED. And what about the Levi-Civita symbol? We are usually told that it is also an invariant tensor, which is not actually true: it is invariant, but it is not a tensor, it is a pseudo-tensor. In SR it doesn't satisfy $(10)$ for any $\Lambda$, but only for a certain subset of matrices $\Lambda$ (check the proper orthochronous Lorentz group), and in GR it is a tensor density (discussed in many posts on SE). 
The proof of the covariance of the LC symbol is usually stated as follows (you'll have to fill in the details): the definition of the determinant of a matrix can be stated as $\text{det}(A)\varepsilon^{\mu\nu\rho\sigma}=\varepsilon^{abcd}A^\mu{}_a A^\nu{}_b A^\rho{}_c A^\sigma{}_d$. The proper orthochronous Lorentz group consists of the subset of matrices with unit determinant, i.e., $\text{det}(\Lambda)=1$. If you use this together with the definition of $\text{det}$, you get $\varepsilon^{\mu\nu\rho\sigma}=\varepsilon^{abcd}\Lambda^\mu{}_a\Lambda^\nu{}_b\Lambda^\rho{}_c\Lambda^\sigma{}_d$, which is the same as $(10)$ for the object $\varepsilon^{\mu\nu\rho\sigma}$. This proves that, when restricted to this subset of the Lorentz group, the Levi-Civita symbol is a tensor. Raising and lowering indices: this is something that is usually made more important than it really is. IMHO, we can fully formulate SR and GR without even mentioning raising and lowering indices. If you define an object with its indices raised, you should keep its indices where they are. In general there is no good reason why someone would want to move an index. That being said, I'll explain what these are, just for completeness. The first step is to define the inverse of the metric. Using matrix notation, the metric is its own inverse: $\eta \eta=1$. But we want to use index notation, so we define another object, call it $\zeta$, with components $\zeta^{\mu\nu}=\eta_{\mu\nu}$. With this, you can check that $\eta\eta=1$ can be written as $\eta_{\mu\nu}\zeta^{\nu\rho}=\delta_\mu^\rho$, where $\delta$ is the Kronecker symbol. For now, $\delta$ is just a symbol that simplifies the notation. Note that $\zeta$ is not standard notation, but we will keep it for the next few paragraphs. (People usually use the same letter for both $\eta$ and $\zeta$, and write $\eta_{\mu\nu}=\eta^{\mu\nu}$; we'll discuss why in a moment. 
For now, note that these are different objects, with different index structure: $\eta$ has lower indices and $\zeta$ has upper indices.) We can use $\eta$ and $\zeta$ to raise and lower indices, which we now define. Let's say you have a certain tensor $A^{\mu\nu}{}_\rho$. We want to define what it means to raise the index $\rho$: it means to define a new object $\bar A$ with components $$ \bar A^{\mu\nu\rho}\equiv \zeta^{\rho\sigma}A^{\mu\nu}{}_\sigma \tag{12} $$ (this is called raising the index $\rho$, for obvious reasons). Using $(10)$ you can prove that this new object is actually a tensor. We usually drop the bar $\bar{\phantom{A}}$ and write $A^{\mu\nu\rho}$. We actually shouldn't do this: these objects are different. We can tell them apart from the index placement, so we relax the notation by not writing the bar. In this post, we'll keep the bar for pedagogical reasons. In an analogous way, we can lower an index, for example the $\mu$ index: we define another object $\tilde A$, with components $$ \tilde A_\mu{}^\nu{}_\rho\equiv \eta_{\mu\sigma} A^{\sigma\nu}{}_\rho \tag{13} $$ (we lowered $\mu$). This new object is also a tensor. The three objects $A,\bar A,\tilde A$ are actually different, but we can tell them apart through the index placement, so we can drop the tildes and bars. For now, we won't. We'll discuss the usefulness of these operations in a moment. For now, note that if you raise both indices of the metric, you get $$ \bar{\bar{\eta}}^{\mu\nu}\equiv\zeta^{\mu\rho}\zeta^{\nu\sigma} \eta_{\rho\sigma}=\zeta^{\mu\rho}\delta^\nu_\rho=\zeta^{\mu\nu} \tag{14} $$ which means that $\bar{\bar{\eta}}=\zeta$. As we usually drop the bars, this means that we can use the same letter $\eta$ for both objects. In principle, they are different: $\eta_{\mu\nu}$ is the metric, and $\zeta^{\mu\nu}$ is its inverse. In practice, we use $\eta_{\mu\nu}$ and $\eta^{\mu\nu}$ for both these objects, and even call them both metric. 
From now on, we will use $\eta$ both for the metric and its inverse, but we keep the bars for other objects. With this in mind, we get the following important result: $$ \eta_{\mu\nu}\eta^{\nu\rho}=\delta_\mu^\rho \tag{15} $$ which is actually a tautology: it is the definition of the inverse of the metric. So, what is the use of these operations? For example, what do we get if we lower the index of a vector $v$? Well, we get a new tensor, but it is not a vector (you can check that $(2)$ is not satisfied), so we call it a covector. This is not really important in SR, but in other branches of physics vectors and covectors are really, really different. So, what is the covector associated to $v$? Call this covector $\bar v$. Its components will be $\bar v_\mu=\eta_{\mu\nu} v^\nu$ by definition. Why is this useful? Well, one reason is that by lowering an index, the scalar product $\cdot$ turns into the standard matrix product: $$ u\cdot v=\bar u v \tag{16} $$ as you can check (compare this to $(3)$ or $(6)$). So in principle, raising and lowering indices is supposed to simplify notation. Actually, in the end, you'll see that people write $uv$ instead of $u\cdot v$ or $u_\mu v^\mu$. So you see that the notation is simplified without the need of raising/lowering any index. The following fact is rather interesting: we know that if we raise both indices of the metric we get the metric again. But what do we get if we raise only one index of the metric? That is, what is $\bar \eta$? Or, put another way, what is $\eta^\mu{}_\nu$? Well, according to the definition, it is $$ \eta^\mu{}_\nu=\eta_{\nu\rho}\eta^{\mu\rho}=\delta^\mu_\nu \tag{17} $$ where I used $(15)$. This means that $\bar \eta=\delta$: the metric is the same object as the Kronecker symbol, which is a cool result. As we know that raising and lowering indices of a tensor results in a new tensor, we find that the Kronecker symbol is actually a tensor! 
We can even prove this from the definition of tensors, i.e., we can check that $(10)$ is satisfied for $\delta$. But we don't need to: we know that it must be true (check it if you want to). As a side note: you (as many people) write prime marks on the indices, while I (as many others) write the primes on the tensors. IMHO the latter convention is the best, because it is the tensor that is changing, not the indices. For example, what you wrote as $\eta_{\mu'\nu'}=\eta_{\mu\nu}$ looks better when written $\eta'_{\mu\nu}=\eta_{\mu\nu}$, because it is the $\mu\nu$ components of both objects that are equal, not the $\mu'$ component and the $\mu$ component (which actually makes no sense and leaves the indices mismatched).
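To make the key equations concrete, here is a small numeric sketch. It uses a boost along $x$ with $\beta=0.6$ as an illustrative $\Lambda$ (any valid change of frame would do) and checks the defining relation $(7)$, the invariance of the product $(4)$, and the inverse-metric relation $(15)$:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def boost_x(beta):
    """Lorentz boost along x -- a valid change of frame per eq. (7)."""
    g = 1.0 / np.sqrt(1.0 - beta ** 2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * beta
    return L

L = boost_x(0.6)

# Eq. (7): Lambda^T eta Lambda = eta -- the defining restriction on Lambda.
assert np.allclose(L.T @ eta @ L, eta)

# Eq. (4): the product u.v = u^T eta v is the same in both frames.
rng = np.random.default_rng(0)
u, v = rng.normal(size=4), rng.normal(size=4)
assert np.isclose(u @ eta @ v, (L @ u) @ eta @ (L @ v))

# Eq. (15): the metric times its inverse is the Kronecker delta.
assert np.allclose(eta @ eta, np.eye(4))
print("all checks passed")
```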
{ "source": [ "https://physics.stackexchange.com/questions/230495", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/72139/" ] }
230,685
Four-legged chairs are by far the most common form of chair. However, only three legs are necessary to maintain stability whilst sitting on the chair. If the chair were to tilt, then with both a four-legged and three-legged chair, there is only one direction in which the chair can tilt whilst retaining two legs on the ground. So why not go for the simpler, cheaper, three-legged chair? Or how about a more robust, five-legged chair? What is so special about the four-legged case? One suggestion is that the load supported by each leg is lower in a four-legged chair, so the legs themselves can be weaker and cheaper. But then why not 5 or 6 legs? Another suggestion is that the force to cause a tilt is more likely to be directed forwards or sideways with respect to the person's body, which would retain two legs on the floor with a four-legged chair, but not a three-legged chair. A third suggestion is that four-legged chairs just look the best aesthetically, due to the symmetry. Finally, perhaps it is just simpler to manufacture a four-legged chair, again due to this symmetry. Or is it just a custom that started years ago and never changed?
Suppose the leg spacing for a square and triangular chair is the same; then the positions of the legs look like: If we call the leg spacing $2d$ then for the square chair the distance from the centre to the edge is $d$, while for the triangular chair it's $d\tan 30^\circ$ or about $0.58d$. That means on the triangular chair you can only lean about half as far before you fall over, so it is much less stable. To get the same stability as the square chair you'd need to increase the leg spacing to $2d/\tan 30^\circ$ or about $3.5d$, which would make the chair too big. A pentagonal chair would be even more stable, and a hexagonal chair more stable still, and so on. However increasing the number of legs gives diminishing increases in stability and costs more. Four-legged chairs have emerged (from several millennia of people falling off chairs) as a good compromise.
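The numbers in this comparison can be checked directly (a quick sketch using the same $d$ notation as above):

```python
import math

d = 1.0  # half the leg spacing of the square chair

# Square chair: you can lean a distance d before tipping over an edge.
square_tip = d

# Triangle with the same leg spacing 2d: tipping distance is d*tan(30 deg).
tri_tip = d * math.tan(math.radians(30))
print(round(tri_tip, 2))  # 0.58

# Leg spacing the triangle would need to match the square's tipping distance:
needed_spacing = 2 * d / math.tan(math.radians(30))
print(round(needed_spacing, 2))  # 3.46, i.e. about 3.5d
```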
{ "source": [ "https://physics.stackexchange.com/questions/230685", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/89783/" ] }
230,703
This question is about why we have a universal speed limit (the speed of light in vacuum). Is there a more fundamental law that tells us why this is? I'm not asking why the speed limit is equal to $c$ and not something else, but why there is a limit at all. EDIT: Answers like "if it was not.." and answers explaining the consequences of having or not having a speed limit are not -in my opinion- giving an answer specifically to whether there is a more fundamental way to derive and explain the existence of the limit.
Imagine that there is a person who prefers to measure the amount of money in his bank account with the value $V$ . The equation is $V = C\tanh N$ , where $N$ is the actual amount of money in dollars. This person will also be confused: Why is there a limit ( $C$ ) on the amount of money that I can have? Is there any law that says the value of my money, $V$ , cannot be more than $C$ ? The answer is that he is just using a "wrong" variable to measure his assets. $V$ is not additive — it is a transform of an additive variable, $N$ , which he has to use in order for everything to make sense. And there is no "law of the universe" that limits the value of $V$ — such a limit is just a product of his own stubbornness. The same thing applies to measure speed — it is the "wrong" variable to describe the rate of motion; speed is not additive. The "correct" variable is called " rapidity " — it is additive, and there is no limit on it.
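The same point in code, in units where $c=1$ so that speed is $v=\tanh\varphi$ with rapidity $\varphi$ (the two sample speeds are arbitrary choices):

```python
import math

v1, v2 = 0.8, 0.9  # two speeds as fractions of c (arbitrary examples)

# Speeds compose through the relativistic velocity-addition formula
# and never reach 1, however many boosts you stack:
v_sum = (v1 + v2) / (1 + v1 * v2)
print(v_sum)  # ~0.988

# Rapidities phi = atanh(v) are the additive variable: they simply add,
# with no upper limit, exactly like N in the bank-account analogy.
phi_sum = math.atanh(v1) + math.atanh(v2)
assert math.isclose(math.tanh(phi_sum), v_sum)
print(phi_sum)  # unbounded
```

The "limit" on $v$ is just the $\tanh$ of an unbounded rapidity, like the limit $C$ on $V$ in the analogy.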
{ "source": [ "https://physics.stackexchange.com/questions/230703", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/75628/" ] }
230,751
(Inspired by Why are four legged chairs so common? ) I've been wondering for a while... Why do most wheeled office chairs have 5 wheels? My guess would be that while stability vs. simplicity results in 4 legs, adding mobility to the equation may result in the need for 5 wheels. Edit: This question is about a mobile chair
Consolidating some of the points made in the answers to the question you linked, and comments: When constructing a chair, 4 legs is easy when you use traditional (wooden) construction - 90 degree angles, and easy to make stackable. A little bit harder than three legs because you have to make sure they are all the same length (or the chair will wobble). Once you have a "office chair" with a hydraulic center post, the construction argument goes away. That leaves us with greater freedom to pick number of legs. The considerations are: more legs = more material = more expensive, heavier (quantified below) more legs = tipping stability is more uniform in all directions more legs = better spread of the load: wheels don't sink so far into the carpet odd number = greater stability against wobble (see below) All engineering design is a question of tradeoffs; in this case, I think that the first point argues for fewer legs, and the second / third point for more legs. The question then becomes: what is the additional value, and the additional cost, of one more leg? Below, I calculate the cost of adding more legs for the same stability and cost - this makes some assumptions but concludes five is indeed optimal. But there is another important factor (tip of the hat to my daughter for this concept): when the floor is uneven, a chair will be not be supported by all its legs - it will "wobble". Now if you have four legs, this wobble will happen along one of the diagonals of the square, and this line will be below (or very close to) the center of gravity. That makes the energy needed to go from one side to the other very small. When you have five legs, the center of gravity is always displaced relative to the line of support. So five legs provide greater stability on an uneven floor. As you add more legs, the "diagonal of support" gets close to the center. 
Even numbered regular polygons always have the potential of having the line of support going through the center, making them the worst choice (incidentally this shows that a trapezoidal arrangement of four legs is slightly better than a square... you will sometimes see that, and now you know why). All of that makes five the optimal number of legs - good stability in all directions. Note that from a construction perspective, it only makes sense to give a chair five legs when you start with a (metal or plastic) center post - the older (square wooden legs) construction makes four a more sensible number as the other answer stated. Once you want the chair to have vertical adjustment, a single center post makes sense - and then you have the flexibility to choose the number of legs. Finally, a reference from a large supplier of office furniture : The National Institutes of Health recommend a five–point chair base for maximum stability and minimal chance of the chair tipping. In fact, Tom Reardon, executive director of the Business and Institutional Furniture Manufacturer's Association, says furniture manufacturers stopped making chairs with four–point bases in the 1980s because they weren't considered as safe as five–point chair bases. UPDATE I thought more about the problem of optimization, and think I can explain that five legs is best. Assume that the chair has to support a constant weight $W$ , and that we want a constant stability. Stability is determined by the shortest “tipping distance” $D$ . For a radial distance $R$ , a chair with $n$ legs has $$D = R \cos\frac{\pi}{n}$$ So we can define a “stability factor” $S=\frac{1}{R\cos\frac{\pi}{n}}$ Thus, for constant $S$ we get $$R\propto \frac{1}{\cos\frac{\pi}{n}} \tag1$$ Next, we look at the stress on each leg. The stress will be greatest when the tipping torque $\Gamma$ is directly in line with just one leg. 
At that point, $$\Gamma = W\cdot R$$ Now we want to calculate the shape (section) of the leg that can support this torque. The maximum bending moment that a rectangular beam of width $w$ and height $h$ can support at a given stress is proportional to $wh^2$ , and the mass of the leg of length $R$ is $whR\rho$ ; if we assume a constant aspect ratio $\frac{w}{h}$ , then mass is proportional to area times length: $$m \propto h^2 R \tag2$$ where the first term is a function of the strength, and the second term a function of the stability. Similarly, for given torque $W\cdot R$ we can write the bending stress as $$\sigma = \frac{My}{I}$$ where $M$ is the bending moment, $y$ is the perpendicular distance to the neutral axis, and $I$ is the second moment of area about the neutral axis. For a rectangular section with constant aspect ratio, $I \propto h^4$ . The maximum stress occurs at the outer edge of the beam, where $y=\frac{h}{2}$ , so $\sigma \propto \frac{W R}{h^3}$ ; holding $\sigma$ constant leads to $$h^3 \propto W\cdot R$$ For given weight $W$ , it follows that $$h\propto R^{1/3} \tag3$$ Substituting $(3)$ into $(2)$ we get $$m \propto R^{5/3}$$ For constant breaking strength, we get the total mass of $n$ legs: $$M = n\cdot m \propto n R^{5/3}$$ For constant stability, we use $(1)$ to obtain $$M \propto \frac{n}{\cos^{\frac53}\frac{\pi}{n}}$$ We can evaluate this for n between 3 and 7, and obtain $M$ as a function of the number of legs: n=3: 9.524 n=4: 7.127 n=5: 7.118 <--- lowest value n=6: 7.625 n=7: 8.329 This shows that indeed the structure with five legs needs the lowest mass to support a certain torque - if we can equate "mass" with "cost", and stability is indeed the main driver, this proves that a chair with five legs is optimal.
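The little table above is easy to check numerically. Here is a short Python sketch (my own illustration, not part of the original answer) that evaluates the relative mass $M \propto n / \cos^{5/3}(\pi/n)$ for $n = 3 \ldots 7$:

```python
import math

def relative_mass(n):
    # Mass needed for constant stability and breaking strength,
    # up to a common proportionality constant: M ~ n / cos^(5/3)(pi/n)
    return n / math.cos(math.pi / n) ** (5 / 3)

masses = {n: relative_mass(n) for n in range(3, 8)}
for n, m in masses.items():
    print(f"n={n}: {m:.3f}")

best = min(masses, key=masses.get)
print("optimal number of legs:", best)  # 5
```

The minimum at n=5 is fairly shallow (n=4 is only about 0.1% worse on this metric alone), which is consistent with the answer's point that the wobble argument, not just mass, is what really rules out four legs.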
{ "source": [ "https://physics.stackexchange.com/questions/230751", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/12068/" ] }
230,973
That is basically my question; it arose when I saw an article (here is the scientific paper, which should be free to read) saying two Caltech scientists might have found the 9th planet of the solar system.
The problem with finding a new planet in our solar system is not that it is too faint, but knowing where to look in a big, big sky. This putative planet 9 is likely to be in the range 20-28th magnitude (unless it is a primordial, planet-mass black hole, in which case it will be invisible except for any accretion luminosity). This is faint (especially at the faint end), but certainly not out of reach of today's big telescopes. I understand that various parts of the sky are currently being scoured, looking for a faint object with a (very) large parallax. The problem is that whilst it is comparatively easy to search large areas of the sky quite quickly if you are interested in bright objects; to do deep searches you are normally limited (by time) to small areas. And you have to repeat your observations to find an object moving with respect to the background stars. If planet 9 had been a gas giant, it would have been self-luminous, due to gravitational contraction, and would have been picked up by infrared surveys like 2MASS and WISE. But the suggestion is that it is rocky or icy, is only observable in reflected light from the Sun and is hence a very faint object at visible wavelengths. With exoplanets around other stars that can be hundreds or thousands of light years away, you know where to look - basically close to the star. The solid angle that you have to search is comparatively small. That being said, there are other problems to overcome, mostly the extreme contrast in brightness between planet and star, which means that the only directly imaged exoplanets (or low-mass companions) to other stars are much more massive (by at least an order of magnitude) than the possible new planet 9. Indeed if these objects existed in our solar system we would have easily found them already in infrared all-sky surveys such as 2MASS and WISE. The smaller planets that have been found around other stars are not found by directly imaging them. 
They are found indirectly by transiting their parent star or through the doppler shift caused by their gravitational pull on their parent star. For an object in our solar system that is far away from the Sun then the first of these techniques simply isn't possible - planet 9 will never transit in front of the Sun from our point of view. The second technique is also infeasible because (a) the amplitude of the motion induced in the Sun would be too small to detect and (b) the periodic signal one would be looking for would have a period of about 20,000 years! All of the indirectly detected exoplanets have periods of about 15 years or less (basically similar to the length of time we have been monitoring them). It is also worth emphasising, that if we observed our solar system, even from a nearby star, it is unlikely we would pick up planet 9, but we would find Jupiter, Saturn and possibly one of the inner planets if it happened to transit. In other words, our census of exoplanets around other stars is by no means complete. See If Alpha Centauri A's solar system exactly mirrored our own, what would we be able to detect? for more details.
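To get a feel for why a reflecting body at hundreds of AU is so faint: the sunlight reaching it dilutes as $1/d^2$, and the reflected light reaching us dilutes as roughly $1/d^2$ again, so the received flux falls off roughly as $1/d^4$. A small Python sketch (my own illustration, not from the answer; the distances are purely hypothetical) compares the same object at a Pluto-like 40 AU versus 600 AU:

```python
import math

def magnitude_penalty(d_near_au, d_far_au):
    """Extra apparent magnitude when a reflecting solar-system body is
    moved from d_near to d_far (received flux scales roughly as d^-4)."""
    flux_ratio = (d_far_au / d_near_au) ** 4
    return 2.5 * math.log10(flux_ratio)

dm = magnitude_penalty(40, 600)
print(f"{dm:.1f} magnitudes fainter")  # ~11.8 magnitudes
```

So a body that would be a comfortable ~14th magnitude at Pluto's distance drops by almost 12 magnitudes at 600 AU, which is how one ends up in the 20-28th magnitude range the answer quotes.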
{ "source": [ "https://physics.stackexchange.com/questions/230973", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/102546/" ] }
231,017
I read a book in which one chapter discussed the fundamental constants of the Universe, and I remember it stated this: If the mass of an electron, the Planck constant, the speed of light, or the mass of a proton were even just slightly different (smaller or bigger) than what they actually are, then the whole Universe would not exist as we know it. Maybe we all wouldn't exist. This statement works for all the fundamental known constants of the Universe but one: the Boltzmann constant. Its value is well known but even if it were $10$ times bigger, or if it were exactly $1$ , or $45.90$ or $10^6$ well... the Universe would remain the same as it is now. The Boltzmann constant is not really fundamental to the existence of the Universe. Maybe they weren't the exact words, but the concept is correct. Now I ask: is that true, and why?
We can understand all of this business if we visit the statistical mechanics notion of temperature, and then connect it to experimental realities. Temperature is a Lagrange multiplier (and should have dimensions of energy) First we consider the statistical mechanics way of defining temperature. Given a physical system with some degree of freedom $X$, denote the number of possible different states of that system when $X$ takes the value $x$ by the symbol $\Omega(x)$. From statistical considerations we can show that modestly large systems strongly tend to sit in states such that $\Omega(x)$ is maximized. In other words, to find the equilibrium state $x_\text{eq}$ of the system you would write $$ \left. \left( \frac{d\Omega}{dx} \right) \right|_{x_\text{eq}} = 0$$ and solve for $x_\text{eq}$. It's actually more convenient to work with $\ln \Omega$ so we'll do that from now on. Now suppose we add the constraint that the system has a certain amount of energy $E_0$. Denote the energy of the system when $X$ has value $x$ by $E(x)$. In order to find the equilibrium value $x_\text{eq}$, we now have to maximize $\ln \Omega$ with respect to $x$, but keeping the constraint $E(x)=E_0$. The method of Lagrange multipliers is the famous mathematical tool used to solve such problems. One constructs the function $$\mathcal{L}(x) \equiv \ln \Omega(x) + t (E_0 - E(x))$$ and extremizes $\mathcal{L}$ with respect to $x$ and $t$. The parameter $t$ is the Lagrange multiplier; note that it has dimensions of inverse energy. 
The condition $\partial \mathcal{L} / \partial x = 0$ leads to $$t = \frac{\partial \ln \Omega}{\partial x} \frac{\partial x}{\partial E} \implies t = \frac{\partial \ln \Omega}{\partial E} \, .$$ Now remember the thermodynamic relation $$\frac{1}{T} = \frac{\partial S}{\partial E} \, .$$ Since the entropy $S$ is defined as $S \equiv k_b \ln \Omega$ we see that the temperature is actually $$T = \frac{1}{k_b t} \, .$$ In other words, the thing we call temperature is just the (reciprocal of the) Lagrange multiplier which comes from having fixed energy when you try to maximize the entropy of a system, but multiplied by a constant $k_b$. Logically, $k_b$ doesn't need to exist If not for the $k_b$ then temperature would have dimensions of energy! You can see from the discussion above that $k_b$ is very much just an extra random constant that doesn't need to be there. Entropy could have been defined as a dimensionless quantity, i.e. $S \equiv \ln \Omega$ without the $k_b$ and everything would be fine. You'll notice in calculations that $k_b$ and $T$ almost always show up together; it's no accident and it's basically because, as we said, $k_b$ is just a dummy factor which converts energy to temperature. But then there's history :( Folks figured out thermodynamics before statistical mechanics. In particular, we had thermometers. People measured the "hotness" of stuff by looking at the height of a liquid in a thermometer. The height of a thermometer reading was the definition of temperature; no relation to energy. Entropy was defined as heat transfer divided by temperature. 
Therefore, entropy has dimensions of $[\text{energy}] / [\text{temperature}]$.$^{[a]}$ We measured the temperatures $T$, pressures $P$, volumes $V$, and number of particles $N$ of some gases and found that they always obeyed the ideal gas law $^{[b,c]}$ $$P V = N k_b T \, .$$ This law was known from experiment for a long time before Boltzmann realized that entropy is actually proportional to the logarithm of the number of available microstates, a dimensionless quantity. However, since entropy was already defined and had these funny temperature dimensions, he had to inject a dimensioned quantity for "backwards compatibility". He was the first to write $$ S = k_b \ln \Omega$$ and this equation is so important that it's on his tomb. Connecting temperature and energy In practice, it is actually rather difficult to measure temperature and energy in the same system over many orders of magnitude. I think that it's for this reason that we still have independent temperature and energy standards and units. Summary Boltzmann's constant is just a conversion between energy and a made-up dimension we call "temperature". Logically, temperature should have dimensions of energy and Boltzmann's constant is just a dummy that converts between the two for historical reasons. Boltzmann's constant contains no physical meaning whatsoever. Note that the value of $k_b$ isn't the real issue; values of constants depend on the units system you use. The important point is that, unlike the speed of light or the mass of the proton, $k_b$ doesn't refer to any unit-independent physical thing in Nature. Temperature is the Lagrange multiplier that comes from imposing fixed energy on the problem of maximizing entropy. As such, it logically has dimensions of energy. Boltzmann's constant $k_b$ only exists because people defined temperature and entropy before they understood statistical mechanics. 
You will always see $k_b$ and $T$ together because the only logically relevant parameter is $k_b T$, which has dimensions of energy. Notes $[a]$: Note that if temperature had dimensions of energy then under this definition entropy would have been dimensionless (as it "should" be). $[b]$: Actually, this law was originally written as $PV = n R T$ where $n$ is the number of moles of a substance and $R$ is the ideal gas constant. That's not really important though because you can group Avogadro's number in with $R$ to get $k_b$. $R$ and $k_b$ have equivalent "status". $[c]$: Note again how $k_b$ and $T$ show up together.
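The claim that equilibrium is governed by $\partial \ln \Omega / \partial E$ can be checked numerically with a toy model. The sketch below is my own illustration (not from the answer): it uses the standard textbook pair of Einstein solids sharing $q_\text{tot}$ energy quanta, for which the multiplicity of a solid with $N$ oscillators and $q$ quanta is $\Omega = \binom{q+N-1}{q}$. The most probable split of the energy is the one where the two $\partial \ln \Omega / \partial q$ values - the two "temperatures" - agree, which means the energy per oscillator equalizes:

```python
import math

def ln_omega(N, q):
    # Log-multiplicity of an Einstein solid: Omega = C(q + N - 1, q).
    # lgamma keeps this numerically safe for large q and N.
    return math.lgamma(q + N) - math.lgamma(q + 1) - math.lgamma(N)

N_A, N_B, q_tot = 30, 70, 1000

# Total log-multiplicity for every possible split of the quanta
splits = {q: ln_omega(N_A, q) + ln_omega(N_B, q_tot - q)
          for q in range(q_tot + 1)}
q_star = max(splits, key=splits.get)

# Equal "temperature" (equal d lnOmega / dq) predicts energy per
# oscillator equalizes: q_A ~ q_tot * N_A / (N_A + N_B) = 300
print(q_star)  # close to 300, up to small finite-size corrections
```

The maximizer sits within a couple of percent of the equal-temperature prediction; nowhere in this calculation does $k_b$ appear, which is exactly the answer's point that the physics lives in $\partial \ln \Omega / \partial E$ alone.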
{ "source": [ "https://physics.stackexchange.com/questions/231017", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/90893/" ] }
231,441
The following is written in my textbook as the reason for weightlessness felt in satellites: The gravitational pull is counterbalanced by the centripetal force. This introduces two problems: 1. For satellites orbiting around Earth, gravity is, in fact, the centripetal force. 2. If the gravitational pull and centripetal force on a satellite are both in the same direction, then how do they cancel each other out?
The statement: The Gravitational Pull is counterbalanced by the Centripetal Force is rubbish. The satellite undergoes a centripetal acceleration because it is acted on by the gravitational force. Some people call the force which causes a centripetal acceleration the centripetal force. So in such a case the gravitational force and the centripetal force are one and the same thing. This is your statement 1. The reason for thinking that you are weightless when in an orbiting satellite is that the satellite and your good self would be accelerating towards the Earth at exactly the same rate. So there is no normal reaction between you and the satellite. You must have a force acting on you because otherwise you would not accelerate and hence not be in orbit.
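A quick numerical illustration of this point (my own sketch, not part of the original answer): at the altitude of a low Earth orbit, gravity is still almost as strong as at the surface. That one gravitational acceleration is the centripetal acceleration, and it is shared by the satellite and the astronaut alike, which is why there is no normal reaction between them:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m, mean radius
altitude = 400e3     # m, roughly ISS altitude

r = R_earth + altitude
g_surface = G * M_earth / R_earth**2  # acceleration at the surface
g_orbit = G * M_earth / r**2          # acceleration in orbit - far from zero

# The same g_orbit plays the role of centripetal acceleration v^2 / r,
# which fixes the orbital speed:
v = (G * M_earth / r) ** 0.5

print(f"g at surface:  {g_surface:.2f} m/s^2")  # ~9.8
print(f"g in orbit:    {g_orbit:.2f} m/s^2")    # ~8.7
print(f"orbital speed: {v/1000:.1f} km/s")      # ~7.7
```

Gravity at 400 km is about 89% of its surface value; "weightlessness" is free fall, not absence of gravity.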
{ "source": [ "https://physics.stackexchange.com/questions/231441", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
231,447
I've noticed an odd phenomenon with an adjustable-focus LED flashlight that I've bought. If I shine it at the ceiling of my room at the widest possible angle, the walls get noticeably brighter than if I shine it at the ceiling of my room at the narrowest possible angle. Of course, in both cases, the total light emitted is the same, and so is the center of the lit area. Shouldn't the walls be lit by the same amount in both case? Why does the surface area of reflection affect the outcome when the amount of light is the same?
{ "source": [ "https://physics.stackexchange.com/questions/231447", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/853/" ] }
231,732
I was reading the following answer from this question : In physics, you cannot ask / answer why without ambiguity. Now, we observe that the speed of light is finite and that it seems to be the highest speed for the energy. Effective theories have been built around this limitation and they are consistent since they depend of measuring devices which are based on technology / sciences that all have c built in. In modern sciences, one doesn't care of what is happenning, but of what the devices measure. I think this raises a good point, can we get a false positive of a theory being right just because the instruments doing the measuring have that theory built in? For example, if it were the case that there is a particle which travels faster than light, could we even tell that's the case since some of our methods of measuring distance involve relativity (which assumes the speed of light as an upper bound)?
can we get a false positive of a theory being right just because the instruments doing the measuring have that theory built in? This sounds dangerously close to a contradiction-in-terms, so let me carefully read you as saying that the instruments doing the measuring "are interpreted according to that theory," possibly by calculations that sit between their transducers and the screen which we read numbers off of. And the answer then is, "sort of: yes that's a real thing that can happen, yes it can throw you for a loop (especially if it means that you're not measuring what you think you are measuring), but no, the theory is not really in trouble in those cases." For example, when the OPERA neutrino debacle was unfolding, there were lots of guesses about what was going wrong, and many of them were guesses that "you're expecting that the GPS satellite system works according to this theory; maybe it works according to that one instead." It turned out that the theory was correct but the timing-signal cables were faulty; but that was a very real possibility that a lot of people took very seriously. The three big perspectives on theory choice in science Why, however, would the theory not be in trouble? This is due to a pervasive fiction that we teach kids in high school, because we've got a bad textbook selection process that creates echo-chambers and no real strong impetus for good scientists to take "shifts" teaching high-school science or reviewing textbooks -- and even then, not much communication between the philosophers of science and the scientists. So I forgive you if you've never heard any of this! The fairy tale is that scientists first observe the facts of the world, look for patterns in those facts, propose those patterns as a "hypothesis", collect more facts, and see if those new facts confirm or deny that hypothesis. If a hypothesis has no denying-facts, then we call it a "scientific theory". 
One reason that you should find this suspect is that it's really hard to track down who we should ascribe this to: it comes from some time after Francis Bacon and Isaac Newton, but I haven't been able to pin down whose model it really was. In any case I regard this as a "fairy tale" because in my professional career in science I have never seen someone say "okay, we have to stop what we're doing and start observing facts and see what some patterns are, so that we can form a hypothesis." To the extent that this stuff happens it's totally subconscious; it's not a "method" by any normal sense of that term. One of the first big advances was due to Karl Popper, who wanted to discriminate science from pseudoscience by suggesting that science was very different from other fields of endeavor in a particular way: good science "sticks its neck out," or perhaps "has a stake", in what it says about the world. He observed that pseudoscience usually could be compatible with any statement of fact, whereas proper science has some sort of experiments which you can run which would hypothetically prove that idea wrong. This isn't really embodied in the above "scientific method" but it's not too hard to add it. A lot of people take it a step further than maybe Popper would have, using it to say that scientific beliefs can be split into only two essential categories: "falsified" and "not yet falsified", rather than "false" and "true". Everything gets reconceptualized as a ruthless survival-of-the-fittest system. Unfortunately, then the boat was rocked even more by Thomas Kuhn, who pointed out something which you also see in the OPERA experiment above: theories can be "established". That's very different from Popper's model, which says that we're eagerly looking to cut down all of our theories. 
In fact in the OPERA statement most physicists pushed back against the report of superluminal neutrinos: "no, you must be wrong, there is something wrong with your timing circuitry or your GPS models or something ." In fact Kuhn went a step further: he said that true theories are unfalsifiable , which is like knifing Popper in the back! Kuhn pointed out that there are two levels: "theory" and "model". A scientific theory is a paradigm -- a way of structuring how you think about the world, how you ask questions, and how you decide what questions are worth asking. You need to already have some theory in order to offer explanations in the first place, because theories define the "ground rules" of how their models work, so that you can model a phenomenon. It's like the theory is the paper, paint, and brush whereas the model is the painting you make with them. When something is different between the picture and what you see in the world, you can shelve your model as a limited approximation, then tear off a new sheet of theory-paper and mix new theory-paint, and try a different painting -- maybe one which is similar in the broad strokes but changes when you start to paint it with a finer brush. Kuhn regarded this model-selection process as Popperian. But every once in a while, Kuhn said, we see these scientific revolutions where the art supplies themselves get redesigned. He didn't have a great model for how exactly this happens, but definitely thought that it was crucial to scientific progress. His idea was basically, "everyone gets together, agrees that they're painting with too broad a brush, comes up with a better brush-shape and better pigments and paper and what have you, lots of ideas are floated, people keep whichever is aesthetically pleasing while serving the purpose, and then we settle back down into our normal scientific work again." 
Of course, people didn't like this because it seemed to open the door back to pseudoscience: might astrology now claim that it is a "scientific theory" but that many "astrological models" could be proposed and scrapped underneath? When Kuhn tried to defend himself, he didn't have much of an answer for this. He basically said that scientists have a lot of aesthetic values that they want out of their theories, and those aesthetic values banned astrology due to it not being very simple or parsimonious with how we think about the rest of the cosmos -- a principle sometimes called Ockham's Razor. Kuhn's work was really important, but it was clear that there was no good answer to this question. I will go one step farther than I think he did, and say that theories tend to be Turing-complete for some domain. We now know that if you include the centrifugal and Coriolis forces, you can put the Earth at the center of the solar system. Geocentrism is 100% as valid as heliocentrism. "Was this not the subject of a huge scientific revolution? How awful, that in Newton's laws we can prove that it makes no difference!" Well, yes -- but what was at stake was not "which theory is right ?" but rather "which theory is better ?" To my mind this tension was best resolved by Imre Lakatos , whose works take a little effort to understand. You have to understand that theory choice is actually driven by lazy grad students . You can regard a scientific theory as the "genes" for its scientific models, and those genes make it easy or hard for that model to predict surprising new observations which can be confirmed by experiment -- which spurs an interesting publication which can excite other researchers into making more publications with that theory. The theories that simplify the models will thus "reproduce;" the ones that don't will get ignored by these grad students who don't want to do all of that work if they can easily avoid it. 
So, heliocentrism won because it did not need the "epicycles" that the geocentrists used. Those epicycles were totally valid -- it's the Fourier series of some periodic motions, after all -- but they were complicated and hard to use. Putting the Sun at a fixed point in your coordinates meant that you didn't need to calculate them. Lazy grad students therefore chose heliocentric coordinates. Theory choice is therefore by natural selection. A more recent example: we now have a deterministic quantum mechanics via "pilot waves"; why is nobody using it? Because it's complicated! Nobody really likes the Copenhagen interpretation philosophically, but it's dead simple to use and predicts all of the same outcomes. So that's why the theory is not at stake in such circumstances: only the model is at stake, until you get to larger concerns about "it's too hard to model this in that way." Such concerns will seldom be because the instruments assume some other theory -- more likely, it's because some new instrument comes out that forces new measurements which complicate our entire understanding of an existing theory. Then someone comes along with something like quantum mechanics, "hey, we can do this all a lot better if we just predict averages rather than exact values, and if we use complex numbers to form our additive probability space." This changes the questions you ask; you gloss over "how did this photon get here rather than somewhere else?" for questions of "what are the HOMO and LUMO for this molecule?"
{ "source": [ "https://physics.stackexchange.com/questions/231732", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/21904/" ] }
231,891
Is the butterfly effect real? It is a well-known statement that a butterfly, by flapping her wings in a slightly different way, can cause a hurricane somewhere else in the world that wouldn't occur if the butterfly had moved her wings in a slightly different way. Now, this can be interpreted as a figure of speech, but I think it's actually meant to be true. I can't imagine though that this is true. I mean, the difference in energy (between the two slightly different wing flaps), which actually can be zero (the only difference being the motion of the air surrounding the close neighborhood of the two slightly different flapping pairs of wings), is simply too small to cause the hurricane 10 000 miles away. So how can this in heavens name be true? By asking I'm making the implicit and realistic assumption that in the atmosphere no potential energies can be released when a small difference in the air conditions occurs (like, for example, the release of energy contained in the water of a dam when the water has reached a critical level and a small perturbation can cause the dam to break, with catastrophic consequences).
Does the flap of a butterfly's wing in Brazil set off a tornado in Texas? This was the whimsical question Edward Lorenz posed in his 1972 address to the 139th meeting of the American Association for the Advancement of Science. Some mistakenly think the answer to that question is "yes." (Otherwise, why would he have posed the question?) In doing so, they miss the point of the talk. The opening sentence of the talk immediately after the title (wherein the question was raised) starts with Lest I appear frivolous in even posing the title question, let alone suggesting it might have an affirmative answer ... Shortly later in the talk, Lorenz asks the question posed in the title in more technical terms: More generally, I am proposing that over the years minuscule disturbances neither increase nor decrease the frequency of occurrences of various weather events such as tornados; the most they may do is to modify the sequences in which they occur. The question which really interests us is whether they can do even this—whether, for example, two particular weather situations differing by as little as the immediate influence of a single butterfly will generally after sufficient time evolve into two situations differing by as much as the presence of a tornado. In more technical language, is the behavior of the atmosphere unstable with respect to perturbations of small amplitude? The answer to this question is probably, and in some cases, almost certainly. The atmosphere operates at many different scales, from the very fine (e.g., the flap of a butterfly wing) to the very coarse (e.g., global winds such as the trade winds). Given the right circumstances, the atmosphere can magnify perturbations at some scale level into changes at a larger scale. Feynman described turbulence as the hardest unsolved problem in classical mechanics and it remains unsolved to this day. 
Even the problem of non-turbulent conditions is an unsolved problem (in three dimensions), and hence the million dollar prize for making some kind of theoretical progress with regard to the Navier-Stokes equation. Update: So is the butterfly effect real? The answer is perhaps. But even more importantly, the question in a sense doesn't make sense. Asking this question misses the point of Lorenz's talk. The key point of Lorenz's talk, and of the ten years of work that led up to this talk, is that over a sufficiently long span of time, the weather is essentially a non-deterministic system. In a sense, asking which tiny little perturbation ultimately caused a tornado in Texas to occur doesn't make sense. If the flap of one butterfly's wing in Brazil could indeed set off a tornado in Texas, this means the flap of the wing of another butterfly in Brazil could prevent that tornado from occurring. (Lorenz himself raised this point in his 1972 talk.) Asking which tiny little perturbation in a system in which any little bit of ambient noise can be magnified by multiple orders of magnitude doesn't quite make sense. Atmospheric scientists use some variant of the Navier-Stokes equation to model the weather. There's a minor (tongue in cheek) problem with doing that: The Navier-Stokes equation has known non-smooth solutions. Another name for such solutions is "turbulence." Given enough time, a system governed by the Navier-Stokes equation is non-deterministic. This shouldn't be that surprising. There are other non-deterministic systems in Newtonian mechanics such as Norton's dome. Think of the weather as a system chock full of Norton's domes. (Whether smooth solutions exist to the 3D Navier-Stokes under non-turbulent conditions is an open question, worth $1000000.) Lorenz raised the issue of the non-predictability of the weather in his 1969 paper, "The predictability of a flow which possesses many scales of motion." 
Even if the Navier-Stokes equations are ultimately wrong and even if the weather truly is a deterministic system, it is non-deterministic for all practical purposes. In Lorenz's time, weather forecasters didn't have adequate knowledge of mesoscale activities in the atmosphere (activities on the order of a hundred kilometer or so). In our time, we still don't quite have adequate knowledge of microscale activities in the atmosphere (activities on the order of a kilometer or so). The flap of a butterfly's wing: That's multiple orders of magnitude below what meteorologists call "microscale." That represents a big problem with regard to turbulence because the magnification of ambient noise is inversely proportional to scale (raised to some positive power) in turbulent conditions. Regarding a simulation of $1.57\times10^{24}$ particles My answer has engendered a chaotically large number of comments. One key comment asked about a simulation of $1.57\times10^{24}$ particles. First off, good luck making a physically realistic simulation of a system comprising that many particles that can be resolved in a realistic amount of time. Secondly, that value represents a mere 0.06 cubic meters of air at standard temperature and pressure. A system of on the order of 10 24 particles cannot represent the complexities that arise in a system that is many, many orders of magnitude larger than that. The Earth's atmosphere comprises on the order of 10 44 molecules. A factor of 10 20 is beyond "many" orders of magnitude. It truly is many, many orders of magnitude larger than a system of only 10 24 particles.
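Lorenz's point about sensitivity to tiny perturbations is easy to reproduce with his own famous three-variable model (the Lorenz 1963 system, a drastically simplified convection model - this is an illustration of the principle, not a weather simulation). Two trajectories that start $10^{-9}$ apart end up macroscopically different:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One 4th-order Runge-Kutta step of the Lorenz '63 system."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = f(state)
    k2 = f(add(state, k1, dt / 2))
    k3 = f(add(state, k2, dt / 2))
    k4 = f(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # the "butterfly": one part in a billion

max_sep = 0.0
for _ in range(4000):         # integrate both trajectories to t = 40
    a, b = lorenz_step(a), lorenz_step(b)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

print(f"largest separation reached: {max_sep:.3g}")
```

The separation grows roughly exponentially until it saturates at the size of the whole attractor, so the two runs become effectively uncorrelated - which is exactly the sense in which "which perturbation caused the tornado" stops being a meaningful question.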
{ "source": [ "https://physics.stackexchange.com/questions/231891", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/98822/" ] }
232,199
In other words, which stars near the Sun could have an impact on the Solar System's equilibrium, or on life on Earth, if they go supernova? Is SN 1987A too far away?
I worked this out a little while back in order to check something said on one of these Nova or other science show specials. I wanted to know how much energy would be required to remove the entire atmosphere of the Earth and whether a supernova (or other astronomical event) could possibly do this. Earth's Atmosphere Let's assume the following quantities: $M_{E}$ = mass of Earth $\sim 5.9742 \times 10^{24} \ kg$ $R_{E}$ = mean equatorial radius of Earth $\sim 6.378140 \times 10^{6} \ m$ $h_{E}$ = mean scale height of Earth's atmosphere $\sim 10 \ km$ $AU$ = astronomical unit $\sim 1.49598 \times 10^{11} \ m$ (note that $1 \ parsec \sim 2.0626 \times 10^{5} \ AU$) Let's assume Earth's atmosphere has the following concentrations by volume: $N_{2} \sim 78.08$% $O_{2} \sim 20.95$% $Ar \sim 0.93$% $C O_{2} \sim 0.039$% To start, we find the total volume of Earth's atmosphere given by: $$ V_{atm} = 4 \pi \int_{a}^{b} \ dr r^{2} = \frac{4 \pi}{3} r^{3} \vert_{a}^{b} $$ where we assume $a = R_{E}$ and $b = R_{E} + h_{E}$, which gives us a volume of $V_{atm} \sim 5.120 \times 10^{18} \ m^{3}$. Thus, we can estimate fractional volumes of each constituent gas to be: $N_{2} \sim 3.998 \times 10^{18} \ m^{3}$ $O_{2} \sim 1.073 \times 10^{18} \ m^{3}$ $Ar \sim 4.762 \times 10^{16} \ m^{3}$ $C O_{2} \sim 1.997 \times 10^{15} \ m^{3}$ This allows us to estimate the total number of particles for each constituent gas using: $$ N_{j} = \frac{ V_{j} }{ V_{STP} } \times N_{A} $$ where $N_{A}$ is the Avogadro constant, $V_{j}$ is the fractional volume of species $j$, and $V_{STP} \sim 22.414 \times 10^{-3} \ m^{3} \ mole^{-1}$ is the molar volume of an ideal gas at STP. 
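As a cross-check, the shell volume and the per-gas particle counts can be reproduced in a few lines (a sketch using the same input values; the STP molar volume is taken as $22.414 \times 10^{-3} \ m^{3} \ mole^{-1}$):

```python
import math

N_A = 6.022e23            # Avogadro constant, 1/mol
V_molar = 22.414e-3       # ideal-gas molar volume at STP, m^3/mol

R_E = 6.378140e6          # Earth's mean equatorial radius, m
h_E = 10e3                # atmospheric scale height, m

# Volume of the spherical shell between R_E and R_E + h_E
V_atm = 4 * math.pi / 3 * ((R_E + h_E)**3 - R_E**3)   # ~5.12e18 m^3

# Fractional volumes, then moles and molecules for each gas
fractions = {"N2": 0.7808, "O2": 0.2095, "Ar": 0.0093, "CO2": 0.00039}
for gas, f in fractions.items():
    V_j = f * V_atm
    moles = V_j / V_molar
    print(f"{gas}: {V_j:.3e} m^3, {moles:.3e} mol, {moles * N_A:.3e} molecules")
```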
This gives us the following values for $N_{j}$: $N_{2} \sim 1.074 \times 10^{44} \ molecules$ $O_{2} \sim 2.882 \times 10^{43} \ molecules$ $Ar \sim 1.279 \times 10^{42} \ molecules$ $C O_{2} \sim 5.365 \times 10^{40} \ molecules$ Now we estimate the total number of moles of each constituent gas using: $$ M_{j} = \frac{ N_{j} }{ N_{A} } $$ which gives us: $N_{2} \sim 1.784 \times 10^{20} \ moles$ $O_{2} \sim 4.786 \times 10^{19} \ moles$ $Ar \sim 2.124 \times 10^{18} \ moles$ $C O_{2} \sim 8.910 \times 10^{16} \ moles$ Ionizing Earth's Atmosphere As a first approximation, we can assume that if the atmosphere were ionized, it may be easier to lose it (e.g., see answer that discusses this). Thus, let's see how much energy is needed to ionize the atmosphere. We can look up the ionization energy for argon and the dissociation energy for each of the other molecules, given to be: $E_{i,Ar} \sim 1520.6 \ kJ \ mole^{-1}$ $E_{d,N2} \sim 945 \ kJ \ mole^{-1}$ $E_{d,O2} \sim 497 \ kJ \ mole^{-1}$ $E_{d,CO} \sim 360 \ kJ \ mole^{-1}$ $\rightarrow E_{d,CO2} \sim 720 \ kJ \ mole^{-1}$ Using these values and the number of moles of each species, we can estimate the total energy needed to ionize all the argon and dissociate all the other constituent gases, which gives us: $N_{2} \sim 1.686 \times 10^{26} \ J$ $O_{2} \sim 2.378 \times 10^{25} \ J$ $C O_{2} \sim 6.414 \times 10^{22} \ J$ $Ar \sim 3.230 \times 10^{24} \ J$ A typical supernova (i.e., Type Ia) releases something like $\sim 10^{44} \ J$ of total energy (Note that hypernovae can release more and other stellar events can produce more energy, but more on that later.). If we assume all of that energy is directly injected to ionize the atmosphere and that it radiates from the source in a spherically symmetric manner, then the intensity will decrease as $\sim r^{-2}$, where $r$ is the distance from the source emitter (i.e., supernova) to the absorber (i.e., Earth's atmosphere). 
Ignoring angle of incidence issues, the absorbing area of the Earth is just $4 \ \pi R_{E}^{2} \sim 5.099 \times 10^{8} \ km^{2}$ or $\sim 5.099 \times 10^{14} \ m^{2}$. We can estimate the minimum safe distance by comparing the energies and ignore any losses by the absorber, which gives us a zeroth approximation: $$ A_{source} \ E_{abs} = A_{abs} \ E_{source} \\ r_{source}^2 = r_{abs}^2 \frac{ E_{source} }{ E_{abs} } $$ where $source$ is the energy source (i.e., supernova here) and $abs$ is the absorber (i.e., Earth's atmosphere). If we solve for $r_{source}$ as our minimum safe distance for each constituent gas individually, we have: $r_{source}$ for $N_{2} \sim 4.906 \times 10^{15} \ m$ or $\sim 33,000 \ AU$ or $\sim 0.16 \ parsecs$ $r_{source}$ for $O_{2} \sim 1.307 \times 10^{16} \ m$ or $\sim 87,000 \ AU$ or $\sim 0.42 \ parsecs$ $r_{source}$ for $C O_{2} \sim 2.515 \times 10^{17} \ m$ or $\sim 1,680,000 \ AU$ or $\sim 8.15 \ parsecs$ $r_{source}$ for $Ar \sim 3.544 \times 10^{16} \ m$ or $\sim 237,000 \ AU$ or $\sim 1.15 \ parsecs$ So in the grand scheme of things, these distances are small enough to suggest that most stars are far enough away that they will not completely ionize our atmosphere. Energizing Earth's Atmosphere What if we tried to determine how much energy would be necessary to increase the particles mean kinetic energy such that their most probable speeds exceeded the escape speed of Earth's gravity? 
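The safe-distance estimate above reduces to $r_{source} = R_{E} \sqrt{E_{source}/E_{abs}}$. A quick numerical sketch reproducing the quoted distances:

```python
import math

R_E = 6.378140e6        # Earth radius, m (taken as r_abs)
E_SN = 1e44             # total supernova energy, J
PARSEC = 3.0857e16      # m

# Energy needed to dissociate/ionize each constituent (from above), J
E_abs = {"N2": 1.686e26, "O2": 2.378e25, "CO2": 6.414e22, "Ar": 3.230e24}

for gas, E in E_abs.items():
    r = R_E * math.sqrt(E_SN / E)   # minimum safe distance
    print(f"{gas}: r_source = {r:.3e} m = {r / PARSEC:.2f} pc")
```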
At STP the constituent gases considered have thermal speeds (i.e., most probable speeds) of: $N_{2} \sim 417.15 \ m/s$ $O_{2} \sim 390.31 \ m/s$ $C O_{2} \sim 332.82 \ m/s$ $Ar \sim 349.33 \ m/s$ The difference in kinetic energy between their STP energy and escape speed energy is given by: $$ \Delta K_{j} = \frac{ 1 }{ 2 } m_{j} \left( V_{esc}^{2} - V_{Tj}^{2} \right) $$ which is, for each constituent gas, given by: $N_{2} \sim 2.904 \times 10^{-18} \ J$ $O_{2} \sim 3.318 \times 10^{-18} \ J$ $C O_{2} \sim 4.565 \times 10^{-18} \ J$ $Ar \sim 4.143 \times 10^{-18} \ J$ If we multiply these values by the total number of particles we estimated previously, $N_{j}$, then we can estimate the total energy needed to effectively evaporate the atmosphere of each constituent gas. The energies needed are: $N_{2} \sim 3.120 \times 10^{26} \ J$ $O_{2} \sim 9.562 \times 10^{25} \ J$ $C O_{2} \sim 2.449 \times 10^{23} \ J$ $Ar \sim 5.300 \times 10^{24} \ J$ which corresponds to a total energy of $\sim 4.131 \times 10^{26} \ J$. Using a similar approach to the one used for the ionization above, we get minimum safe distances of: $r_{source}$ for $N_{2} \sim 3.606 \times 10^{15} \ m$ or $\sim 24,000 \ AU$ or $\sim 0.12 \ parsecs$ $r_{source}$ for $O_{2} \sim 6.514 \times 10^{15} \ m$ or $\sim 44,000 \ AU$ or $\sim 0.21 \ parsecs$ $r_{source}$ for $C O_{2} \sim 1.287 \times 10^{17} \ m$ or $\sim 860,000 \ AU$ or $\sim 4.17 \ parsecs$ $r_{source}$ for $Ar \sim 2.767 \times 10^{16} \ m$ or $\sim 185,000 \ AU$ or $\sim 0.90 \ parsecs$ So again, these distances are small enough to suggest that most stars are far enough away that they will not completely evaporate our atmosphere. Answer The above estimates are for absolute devastation and are only valid given the assumptions. Note that an extinction-level event probably would not require the total ionization or evaporation of Earth's atmosphere. 
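As a cross-check of the evaporation numbers above, here is a sketch with the same inputs (escape speed taken as $11186 \ m/s$):

```python
AMU = 1.6605e-27        # atomic mass unit, kg
V_ESC = 11186.0         # Earth escape speed, m/s

# molecular mass (amu), thermal speed at STP (m/s), particle count from above
gases = {
    "N2":  (28.013, 417.15, 1.074e44),
    "O2":  (31.999, 390.31, 2.882e43),
    "CO2": (44.010, 332.82, 5.365e40),
    "Ar":  (39.948, 349.33, 1.279e42),
}

total = 0.0
for gas, (m_amu, v_th, N) in gases.items():
    dK = 0.5 * m_amu * AMU * (V_ESC**2 - v_th**2)   # J per molecule
    total += dK * N
print(f"total ~ {total:.3e} J")                      # ~4e26 J, as quoted
```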
Rather, only a fraction of the atmosphere would need to be ionized or evaporated to cause problems, as the two links provided by @BowlOfRed suggest. Update In my original post I alluded to more energetic events like hypernovae but forgot to discuss them. Typically, hypernovae are not much more than ~50 times as energetic as supernovae, which would not alter the above distances much (a factor of ~50 in energy is only a factor of ~7 in distance). Gamma-ray bursts again have comparable total energy releases, but here the energy is focused into a relatively narrow beam rather than emitted spherically. Even so, the beam would need to be focused directly on Earth and the source relatively close to evaporate and/or ionize the Earth's atmosphere. I should also point out that a significant fraction (in some cases, nearly all the energy) of the energy in a supernova goes to neutrinos, which do effectively nothing to our atmosphere. Thus, the above minimum safe distances are grossly overestimated. Meaning, a supernova (or other huge energy release) would need to be significantly closer to cause the same effects. What I did not mention is that the entire atmosphere need not be ionized or evaporated for there to be significant problems. Simply ionizing a significant fraction of the $N_{2}$ could produce damaging levels of $NO_{x}$'s that lead to acid rain and other polluting effects. Further, a significant enhancement in the level of ionizing radiation could damage enough of the ozone layer to lead to large-scale crop failures. Though an atmospheric chemist/physicist would be better suited to estimate the minimum safe distance for these effects.
{ "source": [ "https://physics.stackexchange.com/questions/232199", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
232,298
I don't know a lot about physics, but this seemed like the correct place to ask my question. I apologize in advance if it is a ridiculous one. It's actually a two-part question: If the population of humans and other life forms increases, does that add to the overall mass of Earth? If that is the case, could that extra mass ever be enough to change the orbit around the Sun?
Life forms are made up of materials already present on Earth. Thus, an increasing population does not alter the overall mass of the planet, and cannot affect its orbit.
{ "source": [ "https://physics.stackexchange.com/questions/232298", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/105352/" ] }
232,875
Iron has the highest binding energy per nucleon of all the known elements. But why iron specifically? What makes it have the highest binding energy per nucleon?
The existence of nuclei is dependent on a number of quantum mechanical boundary conditions. They appear as solutions to a problem where there is a balance of: a) the attractive spill-over color force that binds the quarks into a proton or a neutron, b) the repulsive electromagnetic force between protons, c) the Pauli exclusion principle, d) the instability of insufficiently bound neutrons to weak (beta) decay. There are additional factors entering once electrons get trapped around a nucleus, but that is another story. To answer "why" the element with 26 protons and 30 neutrons is stable (or the one with 26 protons and 32 neutrons) and has close to the maximum binding energy, one needs a specific quantum mechanical model for the collective potential of the above factors. Shell models are fairly successful in classifying the periodic table. The real answer about iron, though, is phenomenological: it is what we observe and fit with the Weizsaecker formula, which is based on a liquid drop model. The way the effective potential works, the inclusion of more and more nucleons in the potential well after iron stops creating a deeper effective potential well, due to the increase of the effect of the repulsive forces described above. Please note that it is Ni-62 that is the most tightly bound nucleus on the binding energy curve.
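To make the liquid-drop picture concrete, here is a minimal sketch of a semi-empirical (Weizsaecker-type) mass formula. The coefficients below are one common fit, in MeV; the exact values vary between parameterizations:

```python
def binding_energy(A, Z):
    """Semi-empirical mass formula (liquid drop model), result in MeV."""
    aV, aS, aC, aA, aP = 15.75, 17.8, 0.711, 23.7, 11.18  # one common fit
    N = A - Z
    B = (aV * A                        # volume term
         - aS * A**(2/3)               # surface term
         - aC * Z * (Z - 1) / A**(1/3) # Coulomb repulsion
         - aA * (A - 2*Z)**2 / A)      # symmetry term
    if Z % 2 == 0 and N % 2 == 0:      # even-even: extra binding
        B += aP / A**0.5
    elif Z % 2 == 1 and N % 2 == 1:    # odd-odd: less binding
        B -= aP / A**0.5
    return B

for name, A, Z in [("Fe-56", 56, 26), ("Ni-62", 62, 28)]:
    # Both come out close to the ~8.8 MeV peak of the binding energy curve
    print(f"{name}: B/A = {binding_energy(A, Z) / A:.3f} MeV")
```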
{ "source": [ "https://physics.stackexchange.com/questions/232875", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/83425/" ] }
233,125
I know that in order to travel in a circle I have to have a net centripetal force $F=mv^2/r$. I also know that my normal force and gravitational force cancel. How, then, am I traveling in a circle around the Earth as it spins?
The answer is that the normal force and the force of gravity do not cancel each other out; the force of gravity is slightly stronger, and it's stronger by an amount equal to $m v^2 / r$. In elementary physics problems we often ignore the rotation of the Earth to simplify the problem, since it is generally a very small contribution; we therefore conclude that $F_g = F_N$ for an object on a horizontal surface. Here's the formal answer to your question. We can set up Newton's 2nd law like so: $$F_\textrm{net} = m a$$ $$F_g + F_N = m a$$ We know from geometry that centripetal acceleration is given by $a = v^2 / r$, and it's pointing inward, which we'll call the "negative" direction, so $$F_g + F_N = - m (v^2 / r)$$ $$- F_g = F_N + m (v^2 / r)$$ I've formatted the last equation to show that the force of gravity is opposite in direction and stronger than the normal force by an amount $m v^2 / r$. Note: the actual centripetal force on the surface of the Earth actually varies in direction relative to the force of gravity as you move away from the equator. The angle of the ground with respect to gravity also changes, since the Earth is closer to a geoid than a sphere. The general situation at any latitude requires friction to resolve, as well as treating the forces as vectors, since they aren't collinear, but the general principle that the force of gravity is stronger than the normal force, and that extra force provides the centripetal acceleration that maintains our circular motion, is still valid.
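To see how small the correction is, one can plug in equatorial numbers (a sketch assuming one rotation per sidereal day):

```python
import math

g = 9.81                 # m/s^2
R = 6.378e6              # Earth's equatorial radius, m
T = 86164.0              # sidereal day, s

v = 2 * math.pi * R / T  # equatorial speed, ~465 m/s
a_c = v**2 / R           # required centripetal acceleration
print(f"a_c = {a_c:.4f} m/s^2, fraction of g = {a_c / g:.2%}")
```

The required centripetal acceleration is only about a third of a percent of g, which is why it is usually neglected.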
{ "source": [ "https://physics.stackexchange.com/questions/233125", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/35268/" ] }
233,266
Let a (free) particle move in $[0,a]$ with cyclic boundary condition $\psi(0)=\psi(a)$. The solutions of the Schrödinger equation can be put in the form of plane waves. In such a state the standard deviation of momentum is $0$, but $\sigma_x$ must be finite. So we find that $\sigma_x\sigma_p=0$. Is something wrong with the uncertainty principle?
This is what happens if one cares not for the subtlety that quantum mechanical operators are typically only defined on subspaces of the full Hilbert space. Let's set $a=1$ for convenience. The operator $p =-\mathrm{i}\hbar\partial_x$ acting on wavefunctions with periodic boundary conditions defined on $D(p) = \{\psi\in L^2([0,1])\mid \psi(0)=\psi(1)\land \psi'\in L^2([0,1])\}$ is self-adjoint, that is, on the domain of definition of $p$, we have $p=p^\dagger$, and $p^\dagger$ admits the same domain of definition. The self-adjointness of $p$ follows from the periodic boundary conditions killing the surface terms that appear in the $L^2$ inner product $$\langle \phi,p\psi\rangle - \langle p^\dagger \phi,\psi\rangle = \int\overline{\phi(x)}\left(-\mathrm{i}\hbar\partial_x\psi(x)\right) - \overline{\left(-\mathrm{i}\hbar\partial_x\phi(x)\right)}\psi(x) = 0$$ for every $\psi\in D(p)$ and every $\phi\in D(p^\dagger) = D(p)$, but not for $\phi$ with $\phi(0)\neq\phi(1)$. Now, for the question of the commutator: the multiplication operator $x$ is defined on the entire Hilbert space, since for $\psi\in L^2([0,1])$ $x\psi$ is also square-integrable. For the product of two operators $A,B$, we have the rule $$ D(AB) = \{\psi\in D(B)\mid B\psi\in D(A)\}$$ and $$ D(A+B) = D(A)\cap D(B)$$ so we obtain \begin{align} D(px) & = \{\psi\in L^2([0,1])\mid x\psi\in D(p)\} \\ D(xp) & = D(p) \end{align} and $x\psi\in D(p)$ means $0\cdot \psi(0) = 1\cdot\psi(1)$, that is, $\psi(1) = 0$. Hence we have $$ D(px) = \{\psi\in L^2([0,1])\mid \psi'\in L^2([0,1]) \land \psi(1) = 0\}$$ and finally $$ D([x,p]) = D(xp)\cap D(px) = \{\psi\in L^2([0,1])\mid \psi'\in L^2([0,1])\land \psi(0)=\psi(1) = 0\}$$ meaning the plane waves $\psi_{p_0}$ do not belong to the domain of definition of the commutator $[x,p]$ and you cannot apply the naive uncertainty principle to them. 
However, for self-adjoint operators $A,B$, you may rewrite the uncertainty principle as $$ \sigma_\psi(A)\sigma_\psi(B)\geq \frac{1}{2} \lvert \langle \psi,\mathrm{i}[A,B]\psi\rangle\rvert = \frac{1}{2}\lvert\mathrm{i}\left(\langle A\psi,B\psi\rangle - \langle B\psi,A\psi\rangle\right)\rvert$$ where the r.h.s. and l.h.s. are now both defined on $D(A)\cap D(B)$. Applying this version to the plane waves yields no contradiction.
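This can be checked numerically for a plane wave on $[0,1]$: $\sigma_p$ vanishes, but so does the right-hand side above, so the inequality reads $0 \geq 0$ and nothing is violated. A sketch with $\hbar = 1$ and a simple Riemann-sum inner product:

```python
import numpy as np

hbar = 1.0
n = 3
k = 2 * np.pi * n                     # plane-wave momentum / hbar

N = 20000
x = np.arange(N) / N                  # grid on [0, 1)
dx = 1.0 / N

psi = np.exp(1j * k * x)              # periodic plane wave, |psi|^2 = 1
p_psi = hbar * k * psi                # p psi = hbar*k*psi exactly

def inner(f, g):                      # L^2 inner product <f, g>
    return np.sum(np.conj(f) * g) * dx

sigma_x = np.sqrt(inner(x * psi, x * psi).real - abs(inner(psi, x * psi))**2)
var_p = inner(p_psi, p_psi).real - abs(inner(psi, p_psi))**2
sigma_p = np.sqrt(max(var_p, 0.0))    # clamp tiny negative float error

# r.h.s. of the reformulated uncertainty relation with A = x, B = p
rhs = 0.5 * abs(1j * (inner(x * psi, p_psi) - inner(p_psi, x * psi)))

print(sigma_x, sigma_p, rhs)          # sigma_x ~ 0.289, the others ~ 0
```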
{ "source": [ "https://physics.stackexchange.com/questions/233266", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/32109/" ] }
233,297
I know that we obviously get light (or we wouldn't be able to see them), but are there any other ways that they affect Earth and maybe just our solar system in general?
A lot (to put it mildly) of elements are created in stars and supernovae. These elements then travel through space until they fall to Earth (or, to be exact, some microscopic portion of them reaches us). Earth itself wouldn't exist if stars hadn't generated elements which then clumped into dust, into minerals, and so on until a big ball of matter started to orbit the Sun. Here's a short quote from the Wikipedia article on Cosmic ray : Data from the Fermi space telescope (2013) has been interpreted as evidence that a significant fraction of primary cosmic rays originate from the supernovae of massive stars. However, this is not thought to be their only source. Active galactic nuclei probably also produce cosmic rays. So I'll stand by my claim that stars are giving us mass (i.e. non-photons) as well as photons in real time, not just as 5-billion-year-old space dust.
{ "source": [ "https://physics.stackexchange.com/questions/233297", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/105895/" ] }
233,409
I was reading this and it says that Microsoft put a server farm at the bottom of the ocean because it's cooler there. Particularly it seems to imply that it gets colder as you go deeper, "Since ocean water gets pretty cold toward the sea floor..." But I know that pressure causes heat, for example it is responsible for igniting fusion at the center of the Sun. What gives?
There's two main misconceptions in your question that cause your confusion. First, pressure doesn't cause higher temperature. This misconception is probably a result of a massive oversimplification with relation to the ideal gas equation. The actual relation is "increasing the pressure of an ideal gas while volume remains constant increases the temperature of the gas". Two notable things here: Water and other liquids are barely compressible, so they behave nothing like an ideal gas (which is perfectly compressible). Ideal liquid doesn't compress at all. Temperature only increases as you put more stuff in the same volume. That is, it isn't pressure that increases temperature, it's compression . If you compress a volume of air, the temperature will rise, and if you release it again, the temperature will drop again. Second, any closed system evolves toward thermal equillibrium. In simple terms, if you leave a hot coffee on your table, it will eventually cool down to room temperature. Even though compression increases temperature, this doesn't mean that constant pressure keeps producing more and more heat. When you compress a lot of air into a soccer ball, it will feel hot to the touch. But as it exchanges heat with the environment, it will cool down. This is very useful, of course, because it allows you to expend energy to cool things down, like in your A/C :) What effect this has on pressure in turn again depends on the properties of the material you're working with. If you have a volume of air in a bottle, as you cool it down, the gas pressure decreases. If you heat it up, the pressure increases. This is the reason why you need to tweak the pressure in your car's tires even if they aren't leaking - you need to adjust for current temperature. However, with a liquid, this isn't anywhere as simple. While there is a relation between temperature and density, it's nowhere near as big as in an ideal gas. 
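The tire adjustment mentioned above is just the ideal gas law at fixed volume, $P_2 = P_1 T_2/T_1$ (with absolute pressure and temperature). A sketch with illustrative, assumed numbers:

```python
# Absolute tire pressure at fixed volume: P2 = P1 * T2 / T1
P1 = 320e3            # Pa absolute (~220 kPa gauge + 1 atm), set on a 20 C day
T1 = 293.15           # K (20 C)
T2 = 263.15           # K (-10 C winter morning)

P2 = P1 * T2 / T1
print(f"pressure drops by {(P1 - P2) / 1e3:.1f} kPa")   # ~33 kPa
```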
The same goes with pressure and density - if it didn't, you wouldn't be able to walk (imagine that your legs would shorten by half every time you raised one leg - that just wouldn't work). So, let's put this to use in our ocean example. Undisturbed, water will tend to be "vertically ordered" by density. Usually, this means that warmer water will tend to rise up, while colder water will tend to go down. So the weird thing is actually how relatively warm it stays in the depths. The ocean floor tends to be around the same temperature, regardless of how warm or cold the upper layers are. There's two main reasons for that, specific to water: The water anomaly - the peak of density occurs around 4 °C in water; both increasing and decreasing temperature from this point results in lower density. The effect is very important, because it means that even during winter, the bottom layers of lakes will have temperature around 4 °C even when the surface is frozen. And ice is actually a pretty good insulator too :) EDIT: As noted by David, this doesn't occur in ocean water, due to the high salinity which pushes the peak below freezing (around -4 °C). So in an ocean, the deepest layers are formed of water between about 0 °C and 3 °C. Ice - when water freezes, it forms ice, which has lower density than water. This is somewhat unusual (solids are usually higher-density than liquids), and it means that as water bodies start to freeze, the freezing water rises again. With supercooled water, this effect is even more pronounced - water at -30 °C has about the same density as water at 60 °C. Oceans cool mostly by evaporation - the surface layers of water "spontaneously" changing state from liquid to gaseous. You get a balancing act between energy lost to evaporation, and incoming sunlight. However, there's a huge gap between the surface and the deeps, a lot of water mass - the incoming sunlight is nowhere near enough to warm ocean waters throughout. 
So you get warm surface waters, then a gradient of cooler and cooler water, and finally about 0-3 °C in the deep. To illustrate how big this gap is, about 90% of the worldwide ocean water is in the 0-3 °C range (hence the "nowhere near enough sunlight to heat the whole thing through"). Of course, a 4 °C body of water is great for cooling systems running at 40 °C and more. Air is actually a pretty good insulator, so air cooling gets tricky with large systems. Water, on the other hand, is pretty thermally conductive, and it easily convects, so cooling a huge data centre becomes almost trivial. EDIT: Let me address the Sun part, since there seems to be some confusion there as well. Nuclear fusion is something that happens very infrequently. Two nuclei must come very close together to fuse, and they need enough kinetic energy to overcome the repulsion between each other (since both have the same electric charge). The first problem is solved by increasing density. The more nuclei you have in the same volume, the higher the likelihood of close contact. This is where pressure comes in - that's how you get a higher density. Stars are made of plasma, and plasma is easily compressible, similar to a gas, so as pressure increases, so does density. How compressed is it? Well, the Sun's core, where the fusion reactions are actually happening, contains 34% of the Sun's mass, in only 0.8% of the Sun's volume. In the centre, the density is around 150 times the density of liquid water. The pressure is about 100 000 times the pressure in the Earth's core, and about 100 000 000 times the pressure of the water on the bottom of the Mariana trench. The second problem is solved by increasing the kinetic energy of the individual nuclei. In other words, increasing the temperature. 
Just as with the compressed air example, compression is only a one-off deal in increasing temperature; the fusion reaction in the Sun was started using the residual heat of the collapse of matter forming the star (the gravitational potential energy) - I'm not sure how much of a factor was compression in particular. But again, this was only responsible for the initial ignition - today, the reaction is running entirely on the heat produced by fusion and the pressure supplied by gravity (which is actually lowered by the outward pressure of the energy released in the core - the two pressures form a stable equilibrium). As a side note, despite the high temperatures and pressures, the fusion reaction powering the Sun is incredibly weak. If we could magically reproduce the same conditions on the Earth, it wouldn't really be usable for power generation at all - the energy produced is about 300 Watts per cubic metre at the very centre. For comparison, this is comparable to the power density of a compost heap, and less than the power density of human metabolism. Yes, your own body is producing more power than the same volume of the centre of the Sun. I unsuccessfully tried to find data on power density of fission reactors, but a single CANDU reactor produces about 900 MW (that's "million watts"), and it sure isn't three million times as big.
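The closing comparison is easy to verify: at the quoted $300 \ W/m^{3}$, matching a single 900 MW reactor takes an enormous volume of solar core:

```python
p_core = 300.0        # W/m^3, power density at the Sun's core (quoted above)
P_candu = 900e6       # W, output of a single CANDU reactor (quoted above)

volume_needed = P_candu / p_core
print(f"{volume_needed:.1e} m^3")   # 3 million cubic metres of solar core
```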
{ "source": [ "https://physics.stackexchange.com/questions/233409", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/99567/" ] }
233,798
In Richard Rhodes' book, The Making of the Atomic Bomb, I was reading about the Trinity nuclear test. High-speed photos were taken, and this one is from <1 ms after the detonation. The book mentions the irregular spikes at the bottom of the image, but does not explain them. Is there a specific reason or explanation for these odd spikes in the relatively spherical explosion? Nuclear explosion photographed less than one millisecond after detonation. From the Tumbler-Snapper test series in Nevada, 1952, showing fireball and "rope trick" effects. The fireball is about 20 meters in diameter in this shot.
The answer is in wikipedia The photograph on the right shows two unusual phenomena: bright spikes projecting from the bottom of the fireball, and the peculiar mottling of the expanding fireball surface. The surface of the fireball, with a temperature over 20,000 kelvin, emits huge amounts of visible light radiation (more than 100 times the intensity at the sun's surface). Anything solid in the area absorbs the light and rapidly heats. The "rope tricks" which protrude from the bottom of the fireball are caused by the heating, rapid vaporization and then expansion of mooring cables (or specialized rope trick test cables) which extend from the shot cab (the housing at the top of the tower that contains the explosive device) to the ground. Malik observed that when the rope was painted black, spike formation was enhanced, and if it were painted with reflective paint or wrapped in aluminium foil, no spikes were observed – thus confirming the hypothesis that it is heating and vaporization of the rope, induced by exposure to high-intensity visible light radiation, which causes the effect. Because of the lack of mooring ropes, no "rope trick" effects were observed in surface-detonation tests, free-flying weapons tests, or underground tests.
{ "source": [ "https://physics.stackexchange.com/questions/233798", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/90248/" ] }
234,495
It is possible to drive a nail into a piece of wood with a hammer, but it is not possible to push a nail in by hand. Why is this so?
There's many different things at work here. First, there's the issue of acceleration. Hammers are very hard and solid, so when you hit the nail head with the hammer, the energy and force of the blow are delivered almost instantaneously. Hands, on the other hand, are rather soft, and will spread out the same amount of energy and acceleration over a longer time period, resulting in a lower force applied to the piece of wood. Different woods have different resistance to pressure, so it's still rather easy to push a nail through a sheet of balsa wood, for example, while it's much harder to push it through a sheet of oak. Second, the hammerhead actually accumulates a lot of energy over the duration of the swing, stored as kinetic energy in the head. That's why hammerheads are heavy (and the higher force you need, the heavier the head) - it allows you to store more kinetic energy with the same velocity of the head. The maximum velocity your muscles are capable of is a lot more limited than the amount of energy they can deliver, when considering something as tiny as a nail. Third, hammers work as an additional lever, allowing you to deliver more force as a trade-off with time. This works in tandem with the second point - a longer swing can give you more impact force. This also helps the hammerhead reach higher speeds than you would have while holding the head directly, as opposed to holding the shaft. Fourth, you're simply not going to hit as hard with your bare fist. Your body has built-in safety mechanisms that try rather hard to prevent injury, and you can hurt yourself quite a bit by hitting a nail head-on. Note that it's quite easy to drive nails just by using a wooden board pressed straight against your hand and hitting the nail - this spreads out the force of the blow over your hand, preventing pain and injury and allowing you to hit harder. Finally, raw force is probably the dominant factor here. 
Pushing allows you to use the full strength of your muscle, which is probably somewhere around your weight (with a rather large spread). Hitting, on the other hand, lets you accumulate the work of your muscles over the duration of the swing, imparting much bigger forces than would be possible with just pushing. Try driving nails just by pushing with the hammer, and you'll see the difference rather easily - the only benefit you'll get from using a hammer is that you're not going to feel as much pain as when pushing against the much smaller nail head.
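"Accumulate energy in the swing, deliver it over a short stopping distance" can be put into rough numbers. All inputs below are illustrative assumptions, not measured values:

```python
# Average force from a hammer blow: the head's kinetic energy is
# delivered over the short distance the nail sinks per hit.
m_head = 0.5          # kg, hammer head (assumed)
v_head = 8.0          # m/s at impact (assumed)
d_stop = 0.003        # m, nail travel per blow (assumed)

kinetic = 0.5 * m_head * v_head**2        # J stored in the swing
f_impact = kinetic / d_stop               # average force during the blow

f_push = 500.0        # N, a strong static push (~50 kg-force, assumed)

print(f"impact ~ {f_impact:.0f} N vs push ~ {f_push:.0f} N")
```

Even with these modest assumptions the blow delivers roughly ten times the force of a hard static push.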
{ "source": [ "https://physics.stackexchange.com/questions/234495", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/106487/" ] }
234,734
Perhaps speaking of the direction of an electron isn't quite correct. But does QM indicate in some way whether all electrons are going, e.g., 'clockwise' or not? Of course QM just gives a probability for where the electrons are, but can you determine whether they are going, in some way, in opposite directions from each other?
As you intuit, it is indeed pretty hard to define a sense of "direction" for an atomic electron within quantum mechanics when the electron doesn't have an orbit but it is instead some fuzzy ball of probability, but it is doable and in fact it is one of the central constructs in atomic physics. What ends up mattering is angular momentum , i.e. how much the electron's motion "turns" around the nucleus. As it happens, it is perfectly possible to define a fuzzy ball of probability for the electron which does not have a definite position and does not have a definite (linear) momentum, but which does have definite angular momentum. A bit weird, but that's what it is. Most atomic electrons are in states like this. As to your broader question - whether all the electrons are going around in the same direction or not - the answer is simply "it depends". Some atoms, like the noble gases, the alkaline earth metals, and the zinc-cadmium-mercury right-hand edge of the transition metals, have "full shells" which roughly means that for every electron going clockwise about a given axis there is another electron going counterclockwise. Some atoms, like vanadium, cobalt or nickel, have many unpaired electrons, and have a fairly large overall angular momentum of the electronic motion. In general, the angular momentum of any closed electron shell is zero (i.e. the electrons in the inner shells have one counterclockwise electron for every clockwise one) and it is the outermost, 'valence' shell that determines the angular momentum properties of the atom.
{ "source": [ "https://physics.stackexchange.com/questions/234734", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/103999/" ] }
235,079
Ok, this looks like a dumb question or even near trolling, but I really don't understand it. When air is heated over an oven plate, it rises. Obviously, I can check by blowing some smoke in. The common explanation is that hot air has less density than cold air, and consequently, it rises. Fair enough, the hot air will end above the cold air, but why is it rising in a column? With the same argument, I could deduce (and I know that it's wrong) that the cold air above is denser, so it will go down, pressing the hot air away sideways. What additional fact am I (and the common explanation) missing? (I'm pretty sure that the tags I found are not optimal.) Edit: in my mind I envision a picture of (red) hot air molecules separated more than the (blue) cold molecules which slip down between the red ones. I'm aware that this is a very crude model, and moreover ends in a wrong prediction. Edit (about the duplicate): I'm not sure if the other question is about the way in which the hot air rises. At least, the answers over there do not (or not clearly) address this aspect. The accepted answer here explains what is going on by stating formulas for pressure above the heat plate as well as next to it.
With the same argument, I could deduce (and I know that it's wrong) that the cold air above is denser, so it will go down, pressing the hot air away sideways. Replace your hot air with a helium balloon. You can see there's no force on the balloon to push it sideways. The buoyancy forces it to accelerate upward (and some cool air around it to accelerate downward). If you don't stop at one, but keep creating balloons (similar to you continuing to heat the air from the pan), then you'll get a trail that forms a column. The asymmetry in the situation is that you're creating a small amount of heated air in a large amount of cooler air. If you reversed the situation by placing a block of ice near the ceiling, then you would get a column of cooler air falling through the relatively warmer air. in my mind I envision a picture of (red) hot air molecules separated more than the (blue) cold molecules which slip down between the red ones. Molecules in a gas have a distribution of speeds. So the cooler gas has almost as many fast molecules as the warmer one does. But the problem here is that at such a scale, the size of your heated parcel is huge. A few molecules will do that at the edge (diffusion), but not quickly. The mean free path of an air molecule in your room is less than 100 nanometers, while the size of your heated parcel is probably several centimeters. Most will hit and remain close to their neighbors. It's much faster for the entire parcel to lift, so that process dominates.
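The "less than 100 nanometers" figure for the mean free path can be checked with the standard hard-sphere estimate from kinetic theory, $\lambda = k_B T / (\sqrt{2}\,\pi d^2 p)$. A minimal sketch; the effective molecular diameter $d \approx 0.37$ nm is an assumed textbook value for air, not something stated in the answer above:

```python
import math

def mean_free_path(T=293.0, p=101325.0, d=3.7e-10):
    """Kinetic-theory mean free path (in metres) for a hard-sphere gas.

    T : temperature in kelvin
    p : pressure in pascals
    d : effective molecular diameter in metres (assumed ~0.37 nm for air)
    """
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T / (math.sqrt(2) * math.pi * d**2 * p)

lam = mean_free_path()
print(f"mean free path ~ {lam * 1e9:.0f} nm")  # roughly 66 nm at room conditions
```

This comes out well under 100 nm, consistent with the claim that a centimetre-scale heated parcel is enormous compared with the scale on which individual molecules "slip" past each other.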
{ "source": [ "https://physics.stackexchange.com/questions/235079", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/102232/" ] }
235,103
Because of the expansion of the universe, a wave from a far galaxy or star can be stretched towards the red. But how is space doing that? There must be a kind of 'entanglement' between the wave and space. But what is that? And if that is true, can you say that the speed of light is bounded by space itself?
{ "source": [ "https://physics.stackexchange.com/questions/235103", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/103999/" ] }
235,248
LIGO has announced the detection of gravitational waves on 11 Feb, 2016. I was wondering why the detection of gravitational waves was so significant? I know it is another confirmation of general relativity (GR), but I thought we had already confirmed GR beyond much doubt. What extra stuff would finding gravitational waves teach us? Is the detection of gravitational waves significant in and of itself, or is there data which can be extracted from the waves which will be more useful?
Gravitational waves are qualitatively different from other detections. As much as we have tested GR before, it's still reassuring to find a completely different test that works just as well. The most notable tests so far have been the shifting of Mercury's orbit, the correct deflection of light by massive objects, and the redshifting of light moving against gravity. In these cases, spacetime is taken to be static (unchanging in time, with no time-space cross terms in the metric). Gravitational waves, on the other hand, involve a time-varying spacetime. Gravitational waves provide a probe of strong-field gravity. The tests so far have all been done in weak situations, where you have to measure things pretty closely to see the difference between GR and Newtonian gravity. While gravitational waves themselves are a prediction of linearized gravity and are the very essence of small perturbations, their sources are going to be very extreme environments -- merging black holes, exploding stars, etc. Now a lot of things can go wrong between our models of these extreme phenomena and our recording of a gravitational wave signal, but if the signal agrees with our predictions, that's a sign that not only are we right about the waves themselves, but also about the sources. Gravitational waves are a new frontier in astrophysics. This point is often forgotten when we get so distracted with just finding any signal. Finding the first gravitational waves is only the beginning for astronomical observations. With just two detectors, LIGO for instance cannot pinpoint sources on the sky any better than "somewhere out there, roughly." Eventually, as more detectors come online, the hope is to be able to localize signals better, so we can simultaneously observe electromagnetic counterparts. That is, if the event causing the waves is the merger of two neutron stars, one might expect there to be plenty of light released as well. 
By combining both types of information, we can gain quite a bit more knowledge about the system. Gravitational waves are also good at probing the physics at the innermost, most-obscured regions in cataclysmic events. For most explosions in space, all we see now is the afterglow -- the hot, radioactive shell of material left behind -- and we can only infer indirectly what processes were happening at the core. Gravitational waves provide a new way to gain insight in this respect.
{ "source": [ "https://physics.stackexchange.com/questions/235248", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/35268/" ] }
235,307
I've read a lot of conflicting answers in these forums. However, today saw the awesome announcement of gravitational waves. Two black holes merged: http://www.slate.com/blogs/bad_astronomy/2016/02/11/gravitational_waves_finally_detected_at_ligo.html Not only that, they merged FAST: in 1/5th of a second, revolving around each other 250 times a second. The entire event was quicker than a heartbeat. Moreover, we observed this happening as distant outsiders. So now we can say for sure: objects approaching the event horizon DO NOT appear to slow down; black holes CAN merge in a finite (and quick) amount of time; and all this is wrt a frame of reference far, far away. To quote the NYTimes article: One of them was 36 times as massive as the sun, the other 29. As they approached the end, at half the speed of light, they were circling each other 250 times a second. And then the ringing stopped as the two holes coalesced into a single black hole, a trapdoor in space with the equivalent mass of 62 suns. All in a fifth of a second, Earth time. However, everything I've read so far has led me to believe that an outside observer should never be able to measure the collision happening in a finite time. So what exactly is happening here? I must have read at least 5 different versions of this so far everywhere in these forums over the past several years.
This presumably stems from the fact that in the coordinate system of an external observer nothing can ever cross the event horizon of a black hole. This is perfectly true, but if you were watching an object fall onto a stellar mass black hole it would red shift to invisibility in a few microseconds and it would look to you just like it crossed the horizon. More precisely, no matter how sensitive your measuring equipment there would be a time after which you could no longer detect that the object had not crossed the horizon, and for any physically reasonable equipment this time is extremely short. The same principle applies to the merging black holes. We have two objects that can't actually be real black holes because in any finite universe we know real black holes cannot exist. However they are experimentally indistinguishable from real black holes. As these two objects approach each other the spacetime geometry changes and approaches that of a single rotating black hole - the Kerr metric. We know the geometry can never actually become Kerr because that would take an infinite time. However the geometry approaches the Kerr geometry so quickly that after a fifth of a second it is experimentally indistinguishable from the Kerr geometry. Whether the black holes have merged or not depends on exactly what you mean by merged. They are certainly no longer two separate objects, and that happens in a short time and is observable. In this sense it seems reasonable to me to describe them as having merged. If you insist the merger is complete only when the transition to the Kerr geometry is complete then this will take an infinite time so they will never merge. tl;dr - in any sensible meaning of the term merge, the two black holes do indeed merge in a finite, and very short, time.
{ "source": [ "https://physics.stackexchange.com/questions/235307", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/106893/" ] }
235,450
In Feb 12, 2016 edition of Times of India, an article read [with the discovery of gravitational waves, we will be able to] Track Supernovas hours before they're visible to any telescope because the waves arrive Earth long before any light does, giving astronomers time to point telescopes like Hubble in that direction See also page 13 of the paper . Does this mean that gravitational waves reach us before light from a source? Can this be some printing mistake or am I interpreting it wrongly? Edit: Can there be special cases (as explained in some answers) where gravitational waves seem to reach before light waves from a source (though not violating the speed limit)?
It's an incredibly misleading statement, so it's not you. Gravitational waves propagate at the speed of light, so their detection by Earth-bound detectors is expected to correlate with the arrival of light from distant events assuming the source of light generation is identical (not spatially or temporally separated) to the source of the gravitational disturbance. In the case of a supernova, it's actually a dynamic process instead of a flip of a switch, and so the change in the magnitude of light emission can indeed lag behind by several hours from the start of collapse of the star's core - the detection of gravitational waves could allow us to "buy back" that several hour window by detecting the gravitational waves produced by core collapse instead of having to wait for the light magnitude increase. There's no disconnect here, just sloppy reporting. In many cases however, we infer gravitational events or influences have occurred or exist by witnessing a change in motion of light emitting (or reflecting) objects that are directly affected by the event/influence - think of a supermassive black hole at a galactic center that we can't observe directly, but infer its existence by the motion of stars in its vicinity. Or the orbital behavior of Neptune that suggested other massive objects yet to be found in our solar system. Depending on the nature of the event, we may have to infer that a black hole merger, for example, has happened by observing the changes in motion of objects we can see with traditional telescopes. This introduces a time-lag in addition to the normal speed-of-light timelag we're bound by whenever we look up at the night sky: Gravitational influence must travel at the speed of light from the site of the event to the light-emitting object that we can observe, and then the light from that object must travel to our telescopes, again at the speed of light. 
At the moment that the event happened, the light from the object we're observing with our telescopes had not yet felt the disturbance, so there's an additional lag in detection time that must be accounted for - we're not really observing the black hole in this example, we're observing a surrogate object. The ability to detect gravitational waves may allow us to "buy back" this additional lag by now 'directly' observing the inciting events... bound by the speed of light, of course.
{ "source": [ "https://physics.stackexchange.com/questions/235450", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/104481/" ] }
235,511
This is an attempt to gather together the various questions about time that have been asked on this site and provide a single set of hopefully authoritative answers. Specifically we attempt to address issues such as: What do physicists mean by time? How does time flow? Why is there an arrow of time?
What do physicists mean by time? We’ll start with the easy question: what do physicists mean by time? Note that it’s easy to get mixed up between the concepts of time and the flow of time. When non-physicists talk about time they usually mean the flow of time, i.e. the fact that in the human experience time flows inexorably onwards (at one second per second). We’ll get on to this, but for now we’ll ignore the question of why time flows and just address what time means to a physicist. If you want to locate some position in space one method is to set up some axes, e.g. $x$ , $y$ and $z$ axes, and you can then uniquely identify any point in space by its coordinates $(x, y, z)$ . To distinguish between events happening at the same point in space but at different times we need to specify when an event happened as well as where it happened, so we add a time coordinate $t$ . Events can then be uniquely located by their spacetime coordinates $(t, x, y, z)$ . To a physicist time is just a coordinate used to specify events in spacetime. In figure 1 above we have an $x$ axis stretching from $-\infty$ to $\infty$ , a $y$ axis stretching from $-\infty$ to $\infty$ and a $z$ axis stretching from $-\infty$ to $\infty$ . To these a physicist adds a $t$ axis stretching from $-\infty$ to $\infty$ , and that’s what time is - just a coordinate. But everyday experience tells us that time is special - certainly different from space - so what justifies the physicist’s view that time is just a coordinate? To understand this, start with time in the everyday world as described by Newtonian mechanics. Suppose I set up a coordinate system with myself at the origin, $x$ to the East, $y$ to the North and $z$ straight up. For time I’ll use my wristwatch. And suppose you do the same, but let’s say you’re in a different country from me. Our two sets of coordinates won’t match, because our East, North and up axes point in different directions. Or suppose you are moving relative to me.
Even ignoring the curvature of the Earth’s surface, our coordinates won’t match because your origin is constantly moving relative to my origin - what appears to be stationary to me is moving in your coordinates and vice versa. So spatial coordinates are observer dependent. However time is absolute. Assuming we both use Greenwich Mean Time (or some other similar standard) we will always both agree on the time no matter where we are on Earth or however we are moving relative to each other. In Newtonian mechanics time is special for this reason, so it makes sense to consider it separately from space. However since 1905 we have known that to properly describe the world around us we have to use special relativity, and in relativity time is not the same for all observers. Let’s go back to ordinary Newtonian mechanics for a moment, and suppose you’re moving relative to me along the $x$ axis at some speed $v$ . If we draw my time $t$ and position $x$ axes and your $t’$ and $x’$ axes they’d look like: Our two time axes point in the same direction, so we both agree on what it means to define a time axis. But now suppose you’re moving at relativistic speed $v$ and draw the same diagram. When we include special relativity our axes no longer point in the same direction. If I draw my time axis straight up then relative to me your time axis is rotated by an angle $\theta$ given by: $$ \tan(\theta) = \frac{v}{c} $$ So your time direction is a mixture of my time and space directions. You would see exactly the same - if you draw your time axis straight up then you’d see my time axis rotated by $-\theta$ . In effect we have different definitions of time, and indeed this is why we get time dilation in relativity. The point of all this is that in relativity time is not uniquely defined. When we consider the coordinates used by different observers we find that time and space get mixed up with each other.
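As a numerical aside, the $\tan(\theta) = v/c$ relation above is easy to evaluate. A quick sketch; the velocities chosen are purely illustrative:

```python
import math

def time_axis_angle(v_over_c):
    """Angle (in degrees) by which a moving observer's time axis appears
    rotated in my spacetime diagram, from tan(theta) = v/c."""
    return math.degrees(math.atan(v_over_c))

# The tilt grows with speed but saturates at 45 degrees as v -> c
for beta in (0.1, 0.5, 0.9):
    print(f"v = {beta}c  ->  theta = {time_axis_angle(beta):.1f} deg")
```

At everyday speeds ($v/c \sim 10^{-8}$) the tilt is immeasurably small, which is why Newtonian mechanics gets away with treating everyone's time axis as the same.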
Time is no longer distinct from space, and that’s why physicists treat it as just one of the four coordinates that together make up four dimensional spacetime. How does time flow? The previous section explained what physicists mean by time, but made no mention of time flowing. This is because in relativity time doesn’t flow - more precisely the flow of time doesn’t exist as a concept. This is going to take some explaining, so let me attempt it using a simple example. Suppose I throw you a ball and you catch it. Everyday experience tells us that time flows forwards and as it does so the ball rises up from my hand then falls down to your hand. If we graph the height of the ball, $h$ , against time, $t$ , we’ll get something like: In Newtonian physics this has a nice simple interpretation: time flows forwards and the height is a function of time. We can write the height as $h(t)$ . But now let me draw a different graph. I’ll graph the height of the ball, $h$ , against the distance, $d$ , the ball travels horizontally as it moves from me to you: This looks awfully like the previous graph, and indeed I can write the height of the ball as a function of the horizontal distance travelled, $h(d)$ . But we wouldn’t say that distance $d$ flows forward and the height changes as it does so, because, well, time is different from distance. The two graphs are just different views of a four dimensional graph showing the trajectory of the ball in spacetime (I’m only going to draw three dimensions because I can’t do 4D graphs): In the previous section I went to some lengths to explain that time is just a coordinate, like the spatial coordinates, so this graph doesn’t show time flowing any more than distance or height are flowing. The trajectory of the ball is just a line in 4D spacetime. In relativity we call graphs like the above world lines, where the world line is just the set of all spacetime points $(t, x, y, z)$ that the ball occupies during its trajectory.
This world line is a fixed object in four dimensional spacetime - it doesn’t change with time. All that changes is the ball’s position on the world line. This is why we say that time doesn’t flow. Time is just one of the four dimensions that the world line occupies. In fact any physical property, pressure of a gas, strength of a gravitational field, or whatever, can be written as a function in the four spacetime dimensions, $F(t, x, y, z)$ . Written this way the geometrical object $F$ exists in all of space and all of time - it’s not something that evolves in time any more than it’s something that evolves in space. In principle we could have some function that represented the whole universe, $\mathcal{F}(t, x, y, z)$ , and this would exist for all values of $t$ , $x$ , $y$ and $z$ . This idea (or a range of ideas like it) is called the block universe - the idea that the whole universe exists simultaneously and time doesn’t flow. At this point I should note that many physicists, and I would guess the vast majority of non-physicists, would say this is just mathematical skulduggery and it’s nonsense to say time doesn’t flow. I’m not going to make any comment, except to say that this nicely brings us onto the last of our questions. Why is there an arrow of time? However mathematically convincing the idea of a block universe may be, the fact remains that our everyday experience tells us that: time flows it flows in one direction — forwards, and never backwards So, how do we reconcile this with the idea of a block universe? Many physicists have expended much thought on this, and there are lots of differing views. However there is a something of a consensus that it is related to entropy . Indeed this is encapsulated in the second law of thermodynamics, which roughly speaking states that for any isolated system entropy only ever increases. Consider some mechanism. 
We won’t worry exactly what it is, for example it could be something mechanical, an interstellar gas cloud or a human brain. When we talk about time flowing forward we mean that the state of the machine changes in a specific direction, e.g. a clock ticks forwards, and the second law of thermodynamics tells us that it changes in the direction of increasing entropy. Assuming the human brain is just a mechanism, it changes in the direction of increasing entropy just like every other mechanism. But if consciousness is the result of the brain changing then it follows that any conscious being will observe mechanisms changing in the direction of increasing entropy. This isn’t so much a physical law as a correlation. Since our brains change in the same direction (of increasing entropy) as everything else that means they will necessarily observe everything to be changing in this same direction. We call this direction increasing time. If I’m allowed a personal opinion I would say this all seems a little trite — too good to be true — and it seems a suspiciously simple explanation for something as complicated as the universe. However I have no better suggestion to make. Indeed, I don’t think anyone has a better suggestion, or at least not one better enough to convince large swathes of the physics community.
{ "source": [ "https://physics.stackexchange.com/questions/235511", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1325/" ] }
235,865
The arms of the LIGO interferometer are 4 km long. Now, LIGO functions by measuring the phase difference between two beams of light coming (as in a Michelson interferometer) to a sensitivity of $10^{-18}\: \mathrm m$ . Now, we know that if the path length $2d_2-2d_1$ ( $d_1,d_2$ being the length of the paths from the mirrors to the partially silvered diverging mirror) is anything other than $0$ then there is a phase difference between the two waves. So, how are the mirrors placed at exactly $4$ km apart and not even an error of $0.000\ldots 1\:\mathrm{mm} $ might have crept in which could have caused a phase difference, and false results creep in? How are the measurements so accurate?
It is a misconception that LIGO is a very accurate instrument, it has an uncertainty in calibration which is on the order of 10%. This means that the measured strain amplitude of GW150914 of $1.0 \cdot 10^{-21}$ could easily have been $1.1 \cdot 10^{-21}$ . Note that this is just a scaling error. LIGO is however extremely sensitive , it can measure relative length variations on the order of $10^{-22}$ , but only in a bandwidth between 10 and 2000 Hz. At lower frequencies, the measurement fluctuates by several orders of magnitude more. You need to do band-pass filtering to reveal the actual signal. As already mentioned in Chris's answer , a Michelson interferometer can only measure incremental changes in the path length difference . It does not say anything about the absolute length of the arms, and not even about the absolute difference in arm lengths. For a perfect Michelson interferometer, the resulting power on the photodiode is $$P = \frac{P_0}{2} \left(1 + \sin\left(4 \pi \frac{L_1 - L_2}{\lambda}\right)\right)\,,$$ which will only tell you how much the difference in the arm lengths changes over time. Still, there are reasons why you want to have the long arms as equal as possible. For a simple bench-top interferometer with a crappy laser diode, the path lengths need to be reasonably similar, otherwise you run into problems with the coherence length. This is not an issue for LIGO, they use Nd:YAG lasers which already have a coherence length measured in kilometers when running alone. These lasers are further pre-stabilized by locking them on ~16 meter suspended cavities, and finally the laser frequency is locked on the average length of the two 4 km long arms. The resulting line-width of the laser is on the order of 10 mHz , so a coherence length larger than $10^{10}$ meters ... You still want to make the length of the 4 km arms pretty equal, since any imbalance would couple residual noise of the laser frequency to the differential length measurement. 
With standard GPS-based surveying methods, the mirrors are positioned with an accuracy on the order of millimeters. There is no need to do this much more accurately, since there are other sources of asymmetry that can couple frequency noise to the differential measurement, such as the differences in absorption and reflection of the mirrors used in the two arms.
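The ideal-Michelson output formula quoted above is easy to play with numerically. A minimal sketch; 1064 nm is the standard Nd:YAG wavelength, and the displacement values below are purely illustrative:

```python
import math

def michelson_power(dL, wavelength=1064e-9, P0=1.0):
    """Photodiode power of an ideal Michelson interferometer,
    P = P0/2 * (1 + sin(4*pi*(L1 - L2)/lambda)), with dL = L1 - L2."""
    return 0.5 * P0 * (1 + math.sin(4 * math.pi * dL / wavelength))

# Even a picometre-scale change in the arm-length difference shifts the output
for dL in (0.0, 1e-12, 1e-10):
    print(f"dL = {dL:.0e} m  ->  P/P0 = {michelson_power(dL):.6f}")
```

Note that the output depends only on the difference $L_1 - L_2$, which is exactly the answer's point: the absolute arm lengths drop out, so the instrument measures incremental changes, not where the mirrors sit in absolute terms.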
{ "source": [ "https://physics.stackexchange.com/questions/235865", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/82564/" ] }
235,878
I'm in 10th grade and this question came in my physics test. Nobody was able to answer this question correctly except my physics teacher who says that the answer is 2m. My answer is that there should be no limit on how small the focal length needs to be in this case. For example, if a convex lens of focal length 2cm is used to form an image on the opposite wall, the wall clock that is 2m away from the lens can be treated to be at infinity with respect to the lens and if the focus of the lens is kept at the opposite wall, an image of the object at infinity should form on the wall. My argument is that no matter how small the focal length becomes, as long as it's above zero, an image should form on the opposite wall. Please try to solve the problem and post the explanation. edit: I asked my teacher and he told me that while solving the equation through the lens formula, he had taken the object position to be infinite and the image distance at 2 meter. If there were a minimum limit on how small the focal length could be, what would it be?
{ "source": [ "https://physics.stackexchange.com/questions/235878", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/107188/" ] }
236,063
We know that speed of sound is higher in a denser medium. So when a sound wave strikes a wall, why does it echo instead of passing through it with a speed faster than the speed of sound in air?
Whenever a wave reaches a boundary between two mediums some of the energy is reflected and some is transmitted. The important parameter is not just the speed of the wave on either side of the boundary but what is called the acoustic impedance (= density $\times$ speed). If there is a large difference between the acoustic impedances then you will get most of the wave reflected. So if you have an air (speed of sound 330 m/s, density 1.2 kg/m$^3$) brick wall (4200 m/s, 1850 kg/m$^3$) boundary a lot of the sound will be reflected. So when you walk down a tunnel the sound that you emit gets reflected off the walls and even off the open end of the tunnel and comes back to you as an echo.
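The fraction of sound energy reflected at the boundary follows from the impedance mismatch via the standard normal-incidence intensity reflection coefficient $R = \left(\frac{Z_2 - Z_1}{Z_2 + Z_1}\right)^2$, a textbook result not spelled out in the answer above. A sketch using the densities and speeds quoted there:

```python
def intensity_reflection(rho1, c1, rho2, c2):
    """Fraction of incident sound intensity reflected at a boundary
    between two media, at normal incidence."""
    Z1 = rho1 * c1  # acoustic impedance of medium 1
    Z2 = rho2 * c2  # acoustic impedance of medium 2
    return ((Z2 - Z1) / (Z2 + Z1)) ** 2

# air (1.2 kg/m^3, 330 m/s) against a brick wall (1850 kg/m^3, 4200 m/s)
R = intensity_reflection(1.2, 330.0, 1850.0, 4200.0)
print(f"~{100 * R:.2f}% of the sound energy is reflected")
```

The huge impedance mismatch means essentially all of the incident energy bounces back, which is why you hear a clear echo rather than the sound vanishing into the wall.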
{ "source": [ "https://physics.stackexchange.com/questions/236063", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/106702/" ] }
236,145
The merging black hole binary system GW150914 was detected in only 16 days of aLIGO data at a signal level that appears to be well above the detection threshold at around 5 sigma. There are no further events between 4 and 5 sigma in the same data. Could this event have been detected by previous LIGO/VIRGO incarnations which observed for much longer, albeit with lower sensitivity? If so, does this indicate that the aLIGO team have struck lucky and that this is a comparatively rare event that may not be repeated for many years? EDIT: The answers I have agree that LIGO couldn't have seen it, but don't yet completely explain why. GW150914 had a strain that rose from a few $10^{-22}$ to $10^{-21}$ over about 0.2 seconds. This appears to make the characteristic strain maybe a few $10^{-22}$ Hz$^{-1/2}$ and thus would appear from the published sensitivity curves to lie above original LIGO's detection sensitivity at frequencies of $\sim 100$ Hz. Is my estimate of the characteristic strain way off?
To expand on HDE's answer, initial LIGO indeed wouldn't have detected GW150914, but it's not quite as simple as the peak strain being below the curve in the sensitivity plot: the integration time also matters. These plots can be misleading; the curves they show don't represent a minimum detectable strain. Indeed, the units on the y-axis of these plots are $\mathrm{Hz}^{-1/2}$ , while the GW strain is dimensionless, so you can't actually compare them! It's entirely possible to detect a signal that peaks well below the noise curve, as long as it's in-band for sufficiently long. The curves that you see describing LIGO detector sensitivities conventionally show the amplitude spectral density of the detector noise. Meanwhile, the threshold for a detection is determined by the signal-to-noise ratio (SNR) from matched (Wiener) filtering . Assuming we know the form of the signal $h$ in advance (see caveats below), this is defined in terms of the noise-weighted inner product of $h$ with itself: $$ \mathrm{SNR}^2 = \left<h,h\right> \equiv \int_0^\infty \frac{4|\tilde{h}(f)|^2}{S_n(f)}\,\mathrm{d}f $$ where $S_n(f)$ is the noise power spectral density (i.e., the square of what's shown in the sensitivity plots). The SNR therefore depends on the spectral composition of the signal and its overlap with the detector bandwidth. If you imagine this in the time domain (Parseval's theorem), the (squared) SNR actually accumulates in proportion to the number of cycles the waveform spends in-band. For a monochromatic source, this is proportional to the integration time. 
For example, if $\tilde{h}(f) = \delta(f-f_0)h_0$ and, without loss, the noise PSD is a constant $S_n(f_0)$, then the SNR is given by: $$ \mathrm{SNR}^2 = \frac{2}{S_n}\int_{-\infty}^\infty|\tilde{h}(f)|^2\,\mathrm{d}f = \frac{2}{S_n}\int_{-\infty}^\infty|h(t)|^2\,\mathrm{d}t $$ Therefore, since $|h(t)| = h_0$, for a finite observation window $T$, the SNR scales with $\sqrt{T}$: $$ \mathrm{SNR} = \sqrt{\frac{2T}{S_n}}h_0 $$ So, let's approximate GW150914 as a monochromatic source. Reading off the plots in the detection paper, let's say it has an average frequency of $f_0 \approx 60 \ \mathrm{Hz}$, an amplitude of $h_0\approx 5\times10^{-22}$, and a duration of $T \approx 0.2\ \mathrm{s}$. Then, reading off a strain ASD of $\sqrt{S_n(f_0)} \approx 10^{-22}$ for initial LIGO, we'd get an SNR of around 3, which doesn't meet the standard detection threshold of 8 (also, see the caveats below). There's a much more complete discussion of detector sensitivity curves in this paper; it's worth a read! A more useful quantity, described in this paper, is the characteristic strain, which attempts to account for the frequency evolution of an inspiral signal such as GW150914, to ease comparison between detector sensitivity and strain amplitude. Caveats: in practice, it's more complicated than the matched filter model, since the detector noise is annoyingly non-stationary and non-Gaussian. There are more sophisticated search algorithms that use things like signal quality vetoes and $\chi^2$ discriminants that reject spurious responses of the matched filter. There are also search algorithms that don't require a priori knowledge of the signal waveform and can detect unmodelled bursts. It was actually this sort of generic search that detected GW150914; references are available in the detection paper.
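That back-of-the-envelope number is easy to reproduce; here is a minimal Python sketch using the rough values read off the plots (these are my illustrative inputs, not official LIGO figures):

```python
import math

# Rough values read off the detection paper's plots (illustrative, not official):
h0 = 5e-22     # strain amplitude of the signal
T = 0.2        # seconds spent in band
asd = 1e-22    # initial-LIGO amplitude spectral density near 60 Hz, in Hz^(-1/2)
Sn = asd ** 2  # noise power spectral density

# Monochromatic-source SNR in stationary noise: SNR = sqrt(2T/S_n) * h0
snr = math.sqrt(2 * T / Sn) * h0
print(f"optimal SNR ~ {snr:.1f}")  # ~3, well below the usual threshold of 8
```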
Also note that the SNR defined above is the optimal SNR that you get if:
1. you filter the data stream with the exact signal that you're looking for, and
2. the noise realisation is zero.
Since the mean of the noise is zero, number 2 above is equivalent to taking the expectation of the SNR over all noise realisations. In practice, we don't know the precise signal a priori, and some SNR is lost in the approximation. For a candidate waveform $u$, the expected SNR (over all noise realisations) is then given by $$ \mathrm{SNR} = \frac{\left<u,h\right>}{\|u\|} $$
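To make the mismatched-template point concrete, here is a toy discretised version of $\left<u,h\right>/\|u\|$ (my own sketch with made-up numbers; white noise is assumed, so the noise-weighted inner product reduces to a plain dot product): a template with a small phase error recovers less than the optimal SNR.

```python
import numpy as np

# Discrete-time sketch of the matched-filter SNR for white noise
# (constant S_n). All numbers here are illustrative, not a real LIGO analysis.
fs = 1024.0                                      # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
h = 1e-21 * np.sin(2 * np.pi * 60.0 * t)         # "true" signal
u = 1e-21 * np.sin(2 * np.pi * 60.0 * t + 0.5)   # template with a phase error

def inner(a, b, Sn=1e-44):
    """Crude discretisation of <a,b> for a flat (white) noise PSD."""
    return np.sum(a * b) / (Sn * fs / 2)

snr_opt = np.sqrt(inner(h, h))                     # filtering with the exact signal
snr_mismatch = inner(u, h) / np.sqrt(inner(u, u))  # <u,h>/||u||

print(snr_opt, snr_mismatch)  # the mismatched template can only lose SNR
```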
{ "source": [ "https://physics.stackexchange.com/questions/236145", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/43351/" ] }
237,306
If there were a source of a continuous gravitational wave at (say) 50 Hz, with an amplitude of, say, a micrometer (a typical sound wave displacement, I think), and you were nearby (standing happily on a planet in an atmosphere), with your ear pointing to the source, would you hear it? It seems to me that since the gravitational wave is reducing and increasing the distance between points in the atmosphere right at your eardrum, surely the density and pressure of the air there is likewise increasing and decreasing, so you might expect to hear it. What I can't "intuit" is whether you would actually hear it, given that you yourself are also being distorted. My tentative conclusion is that you would hear it. At any given time, there appears to be a pressure differential across your eardrum due to this distortion in space pressurising the materials - so ... deflection? (Note: I know that in the recent LIGO announcement they talked about "hearing" the waves, but that is something completely different: an electro-acoustic rendition of the waveform. I'm asking about direct physical sensing.)
The frequency of the recent event was in the audible range. The amplitude was off by unspeakable orders of magnitude. But yes, you would hear it (even in vacuum, if you were to survive). Yes, GWs are transverse (quadrupolar). But they do move things (they cause changes in distances; that's actually how they were detected: the length of the 4 km tube at LIGO changed; earlier experiments actually planned to detect the "sound" of a vibrating metal cylinder, but they weren't sensitive enough). The eardrum and the bones around it form a complex instrument, and whatever direction the strain is applied in, it would surely induce a vibration of the eardrum, even if not in the way you imagine (compare to sound, where there is direct pressure on the eardrum -- GWs are more profound and make the eardrum itself directly deform and vibrate). If you were close enough to a cataclysmic cosmic event, you would hear it across the emptiness of space, both directly (as induced vibrations in your bones) and through the creaking of the structures around you. It's interesting to note that essentially the same instrument that was used more than a century ago to prove that the velocity of the "aether" is impossible to detect (disproving the notion of an elastic medium permeating the universe) was now used to prove that acceleration of the "aether" (so to speak) can be and was measured.
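For a rough sense of scale (my own back-of-the-envelope numbers, not figures from the answer above): a micrometre of displacement across an ear-sized distance corresponds to a strain enormously larger than what aLIGO measured.

```python
import math

# Illustrative orders of magnitude only (the sizes here are assumptions)
ear_size = 0.01      # m, rough size of the ear/eardrum region
displacement = 1e-6  # m, the micrometre amplitude from the question
h_needed = displacement / ear_size  # dimensionless strain, ~1e-4

h_observed = 1e-21   # approximate peak strain of GW150914 at Earth

orders = math.log10(h_needed / h_observed)
print(f"off by ~{orders:.0f} orders of magnitude")  # ~17
```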
{ "source": [ "https://physics.stackexchange.com/questions/237306", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/37626/" ] }
237,588
Imagine that you are falling into an object with huge gravity (e.g. a black hole) that does not have any atmosphere. The question is - before you hit the ground, can the gravity itself (which would be extremely strong) kill you? And if so, how exactly would that happen? All parts of your body are affected equally and in the same direction, therefore it can't tear you apart. "Zero gravity" is simulated on Earth with a falling plane whose speed increases at the gravitational acceleration, but is that "real" zero gravity, or is it a little different?
Yes, gravity can kill you, because as you approach something super dense like a black hole, the gravity varies as the inverse square of the distance, which means that eventually the gravity at your feet would become significantly larger than at your head. This gravitational gradient gives rise to what are referred to as tidal forces, the same effect that keeps the same side of the Moon facing the Earth. This would tear you up and eventually disassemble the matter that makes you. Although it sounds dumb, scientists actually call this spaghettification, as CuriousOne has already mentioned. It is debatable whether other effects, like gravitational blueshift, might kill you even before your body is spaghettified. A uniform gravitational field (which is actually only hypothetical) would have no effect comparable to spaghettification, as it merely results in constant acceleration of your body.
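To put a number on that gradient: the tidal acceleration across a body of length $L$ at distance $r$ from a mass $M$ is roughly $2GML/r^3$. A minimal sketch with illustrative values (the black-hole mass and the distance are my assumptions):

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30  # kg
M = 10 * M_sun    # a 10-solar-mass black hole (assumed)
L = 2.0           # m, head-to-foot length
r = 1.0e6         # m, i.e. 1000 km from the centre (assumed)

# Head-to-foot difference in acceleration: d(GM/r^2)/dr * L = 2GML/r^3
tidal = 2 * G * M * L / r ** 3
print(f"tidal acceleration ~ {tidal:.0f} m/s^2")  # thousands of m/s^2: lethal
```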
{ "source": [ "https://physics.stackexchange.com/questions/237588", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/108473/" ] }
238,389
I am currently trying to follow Leonard Susskind's "Theoretical Minimum" lecture series on quantum mechanics. (I know a bit of linear algebra and calculus; so far it seems definitely enough to follow this course, though I have no university physics education.) In general, I find these lectures focus a bit too much on the math and not really on the physical motivation behind it, but so be it (if there are other courses aimed at those with reasonable math skills that focus more on physical meaning, let me know!). However, that's only indirectly related to what my question is about. In Lecture 4, just after the 40-minute mark, Susskind sets out to derive an expression for what he earlier labelled the time-development operator $U$: $$|\psi(t)\rangle = U(t)|\psi(0)\rangle$$ He starts out as such: $$U(\epsilon) = I + \epsilon H$$ which makes sense because the change in time will have to be small, i.e. on the order of a small $\epsilon$. However, he then goes ahead and changes this into: $$U(\epsilon) = I - i\epsilon H$$ which of course is still fine, because we still don't know what $H$ is supposed to be. Now my problem lies with the fact that Susskind then proceeds to derive an expression for $H$ and, subsequently, the Schrödinger equation in which it figures, from the above equation. The $i$ never gets lost and ends up in that equation. Could we not just as easily have left the $i$ out, or put a 6 or whatever there? Why put $i$ there? I finished the entire lecture hoping Susskind would get back to this, but he never does, unfortunately. (Which is, I guess, another symptom of this course, with which I'm otherwise quite happy, occasionally lacking in physical motivation.) For those of you who know this lecture, or similar styles of teaching: am I missing something here? Alternatively, a general answer as to why there is an $i$ in the Hamiltonian and the Schrödinger equation?
Let $U$ be a unitary operator. Write $$ U=\mathbb I+\epsilon A $$ for some $\epsilon\in\mathbb C$, and some operator $A$. Unitarity means $U^\dagger U=\mathbb I$, i.e., $$ U^\dagger U=(\mathbb I+\epsilon^* A^\dagger)(\mathbb I+\epsilon A)=\mathbb I+\epsilon^*A^\dagger+\epsilon A+\mathcal O(\epsilon^2) $$ Therefore, if $U^\dagger U=\mathbb I$, we must have $$ \epsilon^*A^\dagger+\epsilon A=0 $$ How can we achieve this? We can always redefine both $\epsilon$ and $A$ so that $\epsilon$ is real. If you do this, we get $A^\dagger=-A$, i.e., $A$ is anti-hermitian. In principle, this is perfectly valid, but we can do better. If we choose $\epsilon$ imaginary, we get $A^\dagger=A$. We like this option better, because we like hermitian operators. If $A$ is to be identified with the Hamiltonian, we'd better have $\epsilon$ imaginary, because otherwise $H$ cannot be hermitian (i.e., observable). Note that Susskind writes $U=\mathbb I-i\epsilon H$ instead of $U=\mathbb I+i\epsilon H$. This negative sign is just convention; it is what everybody does, but in principle it could be a $+$ sign. The sign doesn't affect the physics (but we must be consistent with our choice). This is similar to certain ODEs in classical mechanics (driven harmonic oscillator, RLC circuits, etc.), where we use the ansatz $x(t)=\mathrm e^{-i\omega t}$, with the minus sign chosen for historical reasons. So, we include the factor $i$ in $U$, and we end up with the Schrödinger equation. Had we not included the $i$, we would have got $$ \frac{\partial\psi}{\partial t}=\nabla^2\psi $$ where I take $\hbar=2m=1$ and $V=0$ to simplify the analysis (this doesn't change the conclusions). Note that this is the heat equation. The general solution of the heat equation is $$ \psi(x,t)=\int\mathrm dy\ \frac{\psi(y,0)}{\sqrt{4\pi t}}\mathrm e^{-(x-y)^2/4t} $$ No matter what $\psi(y,0)$ is, this solution is non-propagating, non-oscillatory, and decays in time.
Therefore, the "particles" described by the heat equation don't move, and they slowly disappear! (for example, "stationary" solutions are of the form $\psi(x,t)=\mathrm e^{-Et}\phi(x)$, which goes to zero as $t\to \infty$). Also, if it were not for the $i$ in Schrödinger equation, $\int\mathrm dx\ |\psi(x,t)|^2$ wouldn't be time independent, so the total probability would change in time, and this makes no sense. Therefore, the $i$ in Schrödinger equation makes the Born interpretation of the wave-function possible! Some things you might want to check out: Continuous Groups, Lie Groups, and Lie Algebras, for example in http://www.cmth.ph.ic.ac.uk/people/d.vvedensky/groups/Chapter7.pdf Wigner's theorem : symmetries are realised through linear/unitary operators, or through antilinear/antiunitary operators. Translation operator : in quantum mechanics (and in classical mechanics as well, in a sense), space/time translations are represented through unitary operators, where the generators of such operations are the energy/momentum operators. Spectral theorem : Hermitian operators have real eigenvalues. The Maximum Principle of the heat equation: if $\psi(x,t)$ solves the heat equation and $t_2>t_1$, then $\psi (t_2,x)\le \psi(t_1,y)\ \forall x,y$, which means $\psi$ "decreases" in time (therefore, probability "is destroyed", or "particles disappear"). Schrödinger versus heat equations
{ "source": [ "https://physics.stackexchange.com/questions/238389", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/108836/" ] }