source_id | question | response | metadata
---|---|---|---|
185,116 | In this question the accepted answer says: For objects moving at low speeds, your intuition is correct: say the bus moves at speed $v$ relative to earth, and you run at speed $u$ on the bus; then the combined speed is simply $u+v$. But when objects start to move fast, this is not quite the way things work. The reason is that time measurements start depending on the observer as well, so the way you measure time is just a bit different from the way it is measured on the bus, or on earth. Taking this into account, your speed compared to the earth will be $\frac{u+v}{1+uv/c^2}$, where $c$ is the speed of light. This formula is derived from special relativity. What is "fast" in this answer? Is there a certain cutoff for when it stops being $u+v$ and becomes $\frac{u+v}{1+uv/c^2}$? | For simplicity, consider the case $u=v$. The "slow" formula is then $2u$ and the "fast" formula is $\frac{2u}{1+(u/c)^2}$. In a plot of these two formulas in units of $c$ you can see the comparison. The "slow" formula (red/dashed) is always wrong for $u\ne0$, but it is good enough [close enough to the "fast" formula (blue/solid)] for small $u/c$. The cutoff you choose depends on the accuracy required. When $u<c/10$, the difference is only likely to be important for high-precision work. A series expansion about $u=v=0$ shows the "slow" formula as the first term and that the corrections are small for $uv \ll c^2$: $$
\frac{u + v}{1+uv/c^2} = (u + v)\left[1-\frac{uv}{c^2} + \left(\frac{uv}{c^2}\right)^2 + O\left(\left(\frac{uv}{c^2}\right)^3\right)\right]
$$ | {
"source": [
"https://physics.stackexchange.com/questions/185116",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
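A quick numerical check of the answer above (a minimal sketch of my own, not part of the original post): comparing the Galilean and relativistic formulas for $u=v$ shows the relative error of $u+v$ growing like $(u/c)^2$, which is why "fast" just means "fast enough for the accuracy you need".

```python
c = 1.0  # work in units where c = 1

def add_slow(u, v):
    """Galilean velocity addition."""
    return u + v

def add_fast(u, v):
    """Relativistic velocity addition."""
    return (u + v) / (1 + u * v / c**2)

for u in [0.001, 0.01, 0.1, 0.5]:  # speeds in units of c
    slow, fast = add_slow(u, u), add_fast(u, u)
    print(f"u = v = {u} c: slow = {slow:.6f}, fast = {fast:.6f}, "
          f"relative error = {(slow - fast) / fast:.2e}")
```

At $u=v=0.1c$ the "slow" formula is already off by one percent; at $0.5c$ it is off by 25%.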
185,231 | We have seen that rainbows look so colorful because we are only able to see the visible light. But do they also have ultraviolet and infrared bands that we are unable to see?
I know someone has already asked the same question, but I am concerned about the specific ultraviolet and infrared bands only, rather than any other wavelength. | Refraction of light in water droplets, leading to the formation of rainbows, is not limited to the visible range. Experimental evidence, compelling due to its simplicity, is shown in the following images taken by University College London Earth Sciences professor Dominic Fortes. Check the alignment of the rainbow with respect to the trees in each of the pictures. The UV band lies to the left of the visible band, while IR is found to be shifted to the right. The spectral limits in a rainbow can be explained more technically by looking at the refractive index dispersion of water, which can e.g. be found at refractiveindex.info. The UV, visible and near-IR ranges lie in the wavelength region between 0.2 and 2.85 µm. The change in refractive index with respect to the wavelength leads to differing refraction angles and therefore a separation of the colors, as we know it from experience. Basically, this concept could also be extended to further wavelength ranges, although the resonance around 2.9 µm leads to higher refractive indices for longer wavelengths again. Therefore light with a wavelength of e.g. 4.3 µm would overlay with light at 0.4 µm (both with a refractive index of 1.34). Yet, this is again only half the truth. If you look at the transmittance curve (further down on the same page), you can see that wavelengths longer than 1.8 µm are absorbed by water. Therefore this is the realistic long-wavelength end for rainbows. I assume similar arguments could be found for the short-wavelength end, but I can't find experimental data. | {
"source": [
"https://physics.stackexchange.com/questions/185231",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/78776/"
]
} |
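To connect the dispersion argument above to actual bow positions, here is a small Python sketch (my own illustration; the refractive-index values are approximate numbers for liquid water, not taken from the answer). It evaluates Descartes' minimum-deviation condition for the primary bow at several wavelengths, showing the UV band sitting a few degrees inside the red:

```python
import numpy as np

def rainbow_angle(n):
    """Primary-rainbow angle (degrees from the antisolar point) for a
    spherical drop of refractive index n, at minimum deviation."""
    # Minimum-deviation condition for one internal reflection: cos^2(i) = (n^2 - 1)/3
    i = np.arccos(np.sqrt((n**2 - 1) / 3))   # angle of incidence
    r = np.arcsin(np.sin(i) / n)             # angle of refraction (Snell's law)
    return np.degrees(4 * r - 2 * i)

# Illustrative refractive indices of liquid water (approximate values)
for label, n in [("near UV (~0.3 µm)", 1.349), ("blue (~0.45 µm)", 1.340),
                 ("red (~0.65 µm)", 1.331), ("near IR (~1.0 µm)", 1.327)]:
    print(f"{label:>18}: bow at about {rainbow_angle(n):.1f} degrees")
```

The larger $n$ for UV moves the bow inward (to about 40°) while the near IR sits outside the red at about 43°, consistent with the shifted bands in the photographs.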
185,298 | Let's say the Earth is hollow and you are at the center of it (same mass, except all of it is on the outside, like a beach ball). If you move slightly to one side, you are now closer to that side and therefore feel a stronger gravitational force from it; however, at the same time there is now more mass on the other side. At what rate would you fall? In which direction? Also, is there a scenario where, depending on the radius of the sphere, you would fall the other direction or towards the empty center? | If the mass/charge is symmetrically distributed on your sphere, there is no force acting on you anywhere within the sphere. This is because every force originating from some part of the sphere will be canceled by another part. Like you said, if you move towards one side, the gravitational pull of that side will become stronger, but then there will also be "more" mass pulling you in the other direction.
These two components cancel each other exactly. | {
"source": [
"https://physics.stackexchange.com/questions/185298",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/64376/"
]
} |
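The exact cancellation described above (Newton's shell theorem) is easy to verify numerically. Here is a Monte Carlo sketch of my own (all parameter values are arbitrary illustrations): it spreads point masses uniformly over a shell and sums the inverse-square pulls on a test point displaced from the center; the net force is zero up to sampling noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def net_force_inside_shell(n_points, offset, radius=1.0):
    """Sum Newtonian pulls (total shell mass and G normalized to 1) from
    n_points masses spread uniformly on a shell, on a test mass at (offset, 0, 0)."""
    # Uniform points on a sphere: normalize isotropic Gaussian vectors
    pts = rng.normal(size=(n_points, 3))
    pts *= radius / np.linalg.norm(pts, axis=1, keepdims=True)
    d = pts - np.array([offset, 0.0, 0.0])        # vectors from test mass to each point
    r = np.linalg.norm(d, axis=1, keepdims=True)
    return (d / r**3).sum(axis=0) / n_points      # mean inverse-square pull

for n in [1_000, 100_000, 1_000_000]:
    f = net_force_inside_shell(n, offset=0.5)
    print(f"N = {n:>8}: |net force| = {np.linalg.norm(f):.2e}")
```

The residual shrinks like $1/\sqrt{N}$, which is exactly what pure sampling noise around an exact cancellation looks like.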
185,855 | How important is programming in physics? I am studying physics at university, and in these first years there is, understandably, no real exposure to what working in physics is like. Now, I know programming is actually important (I've read several posts in forums - even the one present here on Stack Exchange), but I was wondering: what 'hardware' is used in physics? Is it worth learning to manage something like a Raspberry Pi-type board, or would it just not be worth it? I don't know if this is of use in answering the question, but I would be interested in condensed matter physics for the future. | As a computational physicist working in materials/condensed matter, I'm either highly biased or well-placed to comment on this. Physics, in practice, is divided into three overlapping approaches: experimental, theoretical, and computational. (The highest impact research papers usually include a combined effort from all three.) If you plan to go into computational research then you will have to do a fair amount of programming. However, I don't know anyone who has made use of Raspberry Pis for physics research (that's not to say that no one has, but it's a novelty rather than something that is commonly done). In computational physics, your code will almost exclusively be executed either on standard desktop machines or supercomputers (where you use message-passing systems like MPI to exploit huge parallelism). Virtually all universities have their own supercomputers, but you may also be granted access to some larger national or even international supercomputers (such as ARCHER, Jaguar, and so on). Graphics cards have also become quite popular for physics research in recent years due to the rise of CUDA, and most supercomputers now include several nodes packed with high-end graphics cards. So GPGPU programming is a nice skill to have but by no means a necessity. It's also worth mentioning programming languages. Mainly for historical reasons, most academic code is actually written procedurally in Fortran (which is so archaic it still has functionality left over from the punch-card era). C/C++, Java, and Python are also widely used, along with the Unix shell (most academic machines run Linux). Those who do a lot of statistical modelling mostly use R or IDL. And those who are too lazy to do real programming - mostly mathematicians and engineers - use MATLAB or Mathematica (okay, I'm being a bit harsh on that one). Let me finish by discussing theoretical and experimental physics. Virtually every theorist I know does much of their work on computers - programming code to numerically solve, or test something, for instance. And many of their 'theories' are aimed at advancing computational methodologies. A classic example of this is the Hohenberg-Kohn theorems, which laid the foundation for density functional theory, and there are now many theorists trying to extend this by developing linear-scaling and real-space DFT. It has also become common for experimentalists to program. Whether that be programming microcontrollers like Arduinos (as pointed out by Emilio Pisanty below), writing scripts to analyse data, or even employing standard simulation techniques to better understand their experimental observations. | {
"source": [
"https://physics.stackexchange.com/questions/185855",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/82014/"
]
} |
185,939 | I have heard both that the Planck length is the smallest length that there is in the universe (whatever this means) and that it is the smallest thing that can be observed, because if we wanted to observe something smaller, it would require so much energy that it would create a black hole (or our physics breaks down). So which is it, if there is a difference at all? | Short answer: nobody knows, but the Planck length is more numerology than physics at this point. Long answer: Suppose you are a theoretical physicist. Your work doesn't involve units, just math--you never use the fact that $c = 3 \times 10^8\ \mathrm{m/s}$, but you probably have $c$ pop up in a few different places. Since you never work with actual physical measurements, you decide to work in units with $c = 1$, and then you figure when you get to the end of the equations you'll multiply by/divide by $c$ until you get the right units. So you're doing relativity, you write $E = m$, and when you find that the speed of an object is .5 you realize it must be $.5 c$, etc. You realize that $c$ is in some sense a "natural scale" for lengths, times, speeds, etc. Fast forward, and you start noticing there are a few constants like this that give natural scales for the universe. For instance, $\hbar$ tends to characterize when quantum effects start mattering--often people say that the classical limit is the limit where $\hbar \to 0$, although it can be more subtle than that. So, anyway, you start figuring out how to construct fundamental units this way. The speed of light gives a speed scale, but how can you get a length scale? Turns out you need to squash it together with a few other fundamental constants, and you get:
$$
\ell_p = \sqrt{ \frac{\hbar G}{c^3}}
$$
I encourage you to work it out; it has units of length. So that's cool! Maybe it means something important? It's REALLY small, after all--$\approx 10^{-35}\ \mathrm{m}$. Maybe it's the smallest thing there is! But let's calm down a second. What if I did this for mass, to find the "Planck mass"? I get:
$$
m_p = \sqrt{\frac{\hbar c}{G}} \approx 21 \mu g
$$ Ok, well, micrograms ain't huge, but to a particle physicist they're enormous. But this is hardly any sort of fundamental limit to anything. It isn't the world's smallest mass. Wikipedia claims that if a charged object had a mass this large, it would collapse--but charged point particles don't have even close to this mass, so that's kind of irrelevant. It's not that these things are pointless--they do make math easier in a lot of cases, and they tell you how to work in these arbitrary theorists' units. But right now, there isn't a good reason in experiment or in most modern theory to believe that it means very much besides providing a scale. | {
"source": [
"https://physics.stackexchange.com/questions/185939",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/52261/"
]
} |
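If you'd rather let the computer "work it out", here is a two-formula check using scipy's built-in physical constants (my own sketch, not from the original answer):

```python
from math import sqrt

from scipy.constants import G, c, hbar

planck_length = sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_mass = sqrt(hbar * c / G)        # ~2.2e-8 kg, i.e. about 22 micrograms
print(f"Planck length: {planck_length:.3e} m")
print(f"Planck mass:   {planck_mass:.3e} kg ({planck_mass * 1e9:.1f} micrograms)")
```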
186,199 | Electric sparks tend to appear blue or purple or white in color. Why? | Air is normally a bad conductor of electricity, but with enough voltage it can be converted to a plasma, which is a good conductor. In a plasma, the electrons constantly bind to and leave atoms. Each time an electron binds to an atom, it emits the binding energy as light. As a result, the plasma glows with the colors of photons of those energies. There are a few different energy levels involved, so the spectrum has a few different peaks. The final color depends on the gas you use. For example, neon looks red or red-orange. Air ends up looking blue, so electricity passing through air makes it glow blue. | {
"source": [
"https://physics.stackexchange.com/questions/186199",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/63691/"
]
} |
187,015 | I know it sounds like a weird question to ask but I find it unusual nobody has actually told me. I know what it does but not what it is... I have some kind of idea that it's actually matter and matter is energy and everything in the universe is just kind of intermingled, if that makes sense..? I don't know! But if somebody could help me understand, that would be great! Thanks | Due to the nature of the other answers I feel compelled to expand my comment to an answer. While the other answers tell you that energy is not a thing, they fail to tell you why the concept of energy (or electrons, or quarks) being a thing is utterly irrelevant for physics, undecidable by means of physics, and unscientific. Ultimately, physics can never answer what something is made of. Physics cannot provide an ontology. Physics can only describe how things relate. We can say a proton is composed of quarks, but really, what are quarks made of? But the question is moot and irrelevant for physics (because we can describe what quarks do). This shows that not only can physics not tell what abstract concepts like energy are (or are made of), but that this assertion also holds for things, like elementary particles, that we usually think of much more as being "a thing". The question of what something is, is a question of ontology, that is, a question of philosophy. (And even some modern proponents of analytical philosophy begin to consider ontology as more or less irrelevant, and choose to put epistemology at the root instead.) Physics can only reduce a concept to more fundamental concepts ("reductionism"), without being able to explain what the more fundamental concepts are. Physics is based on observational evidence. Experiments tell you how objects relate, but can never tell you what objects are. Observational evidence tells you that masses attract each other – so what are masses? You can only explain tautologically that masses are things that attract each other. But the system of relations uncovered by observation allows you to predict the behaviour of the solar system. If you say water is $\mathrm{H_2O}$ you are in no way explaining what water is. You only reduce water to simpler entities whose properties are simpler, thereby reducing the complexity of the fundamental description. In a way, all of physics is just models for things. The fundamental objects in physical models have no ontological value. They are not, they are just entities put forward to make the description of their relations neat and to allow prediction of nature. So the most ontology you can get out of physics is the pragmatical stance that things are how they relate to one another (the relative pronoun already tells you something is wrong here). But even that is a philosophical statement, in no way justifiable with physics, though motivated by physics. This all of course does not mean that a physicist is not allowed to have a private ontology. He or she must just realize that their ontology is independent of physics and rather a matter of philosophical position and taste. | {
"source": [
"https://physics.stackexchange.com/questions/187015",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/82526/"
]
} |
187,098 | One way that second quantization is motivated in an introductory text (QFT, Schwartz) is: The general solution to a Lorentz-invariant field equation is an integral over plane waves (Fourier decomposition of the field). Each term of the plane wave satisfies the harmonic oscillator equation. Therefore, each Fourier component is interpreted as a harmonic oscillator in ordinary QM. The $n$'th energy level of each Fourier component is now interpreted as $n$ particles. Everything in 1-3 looks like a sensible application of ordinary QM to a field. But how did 4 come about? What is the justification? | $\newcommand{\ket}[1]{|#1\rangle}$
Item #4 in your list is best thought of as the definition of the word "particle". Consider a classical vibrating string.
Suppose it has a set of normal modes denoted $\{A, B, C, \ldots\}$.
To specify the state of the string, you write it as a Fourier series $$f(x) = \sum_{\text{mode } n \in \{A,B,C,\ldots\}} c_n [\text{shape of mode }n](x) \, .$$ In the typical case, $[\text{shape of mode }n](x)$ is something like $\sin(n\pi x / L)$ where $L$ is the length of the string.
Anyway, the point is that you describe the string by enumerating its possible modes and specifying the amount by which each mode is excited by giving the $c_n$ values. Suppose mode $A$ has one unit of energy, mode $C$ has two units of energy, and all the other modes have zero units of energy.
There are two ways you could describe this situation. Enumerate the modes (good) The first option is like the Fourier series: you enumerate the modes and give each one's excitation level:
$$|1\rangle_A, |2\rangle_C \, .$$
This is like second quantization; we describe the system by saying how many units of excitation are in each mode.
In quantum mechanics, we use the word "particle" instead of the phrase "unit of excitation".
This is mostly because historically we first understood "units of excitations" as things we could detect with a cloud chamber or Geiger counter.
To be honest, I think "particle" is a pretty awful word given how we now understand things. Label the units of excitation (bad) The second way is to give each unit of excitation a label, and then say which mode each excitation is in.
Let's call the excitations $x$, $y$, and $z$.
Then in this notation the state of the system would be
$$\ket{A}_x, \ket{C}_y, \ket{C}_z \, .$$
This is like first quantization.
We've now labelled the "particles" and described the system by saying which state each particle is in.
This is a terrible notation though, because the state we wrote is equivalent to this one
$$\ket{A}_y, \ket{C}_x, \ket{C}_z \, .$$
In fact, any permutation of $x,y,z$ gives the same state of the string.
This is why first quantization is terrible: particles are units of excitation, so it is completely meaningless to give them labels. Traditionally, this terribleness of notation was fixed by symmetrizing or anti-symmetrizing the first-quantized wave functions.
This has the effect of removing the information we injected by labeling the particles, but you're way better off just not labeling them at all and using second quantization. Meaning of 2$^{\text{nd}}$ quantization Going back to the second quantization notation, our string was written
$$\ket{1}_A, \ket{2}_C$$
meaning one excitation (particle) in $A$ and two excitations (particles) in $C$.
Another way to write this could be to write a single ket and just list all the excitation numbers for each mode:
$$\ket{\underbrace{1}_A \underbrace{0}_B \underbrace{2}_C \ldots}$$
which is how second quantization is actually written (without the underbraces).
Then you can realize that
$$\ket{000\ldots \underbrace{N}_{\text{mode }n} \ldots000} = \frac{(a_n^\dagger)^N}{\sqrt{N!}} \ket{0}$$
and just write all states as strings of creation operators acting on the vacuum state. Anyway, the interpretation of second quantization is just that it's telling you how many excitation units ("quanta" or "particles") are in each mode in exactly the same way you would do it in classical physics. See this post . Comments on #4 from OP In introductory quantum we learn about systems with a single particle, say, in a 1D box.
That particle can be excited to a variety of different energy levels denoted $\ket{0}, \ket{1},\ldots$.
We refer to this system as having "a single particle" regardless of which state the system is in.
This may seem to run contrary to the statements made above in this answer in which we said that the various levels of excitation are referred to as zero, one, two particles.
However, it's actually perfectly consistent, as we now discuss. Let's write the equivalent first and second quantized notations for the single particle being in each state:
$$\begin{array}{lllll}
\text{second quantization:} & \ket{1,0,0,\ldots}, & \ket{0,1,0,\ldots}, & \ket{0,0,1,\ldots} & \ldots \\
\text{first quantization:} &\ket{0}, &\ket{1}, &\ket{2}, & \ldots
\end{array}
$$
Although it's not at all obvious in the first quantized notation, the second quantized notation makes clear that the various first quantized states involve the particle occupying different modes of the system.
This is actually pretty obvious if we think about the wave functions associated to the various states, e.g. using first quantized notation for a box of length $L$
\begin{align}
\langle x | 0 \rangle & \propto \sin(\pi x / L) \\
\langle x | 1 \rangle & \propto \sin(2\pi x / L) \, .
\end{align}
These are just like the various modes of the vibrating string.
Anyway, calling the first quantized states $\ket{0}$, $\ket{1}$ etc. "single particle states" is consistent with the idea that a particle is a unit of excitation of a mode because each of these states has one total excitation when you sum over all the modes.
This is really obvious in second quantized notation. | {
"source": [
"https://physics.stackexchange.com/questions/187098",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/57422/"
]
} |
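For readers who like to compute, here is a small numpy sketch of my own (not from the original answer) that makes the occupation-number language concrete: it builds the truncated creation operator for one mode and generates the states $|N\rangle$ from the vacuum, exactly the $(a_n^\dagger)^N/\sqrt{N!}\,|0\rangle$ construction above.

```python
from math import factorial

import numpy as np

def a_dagger(dim):
    """Creation operator for a single mode, truncated to `dim` Fock states.
    Matrix elements: <n+1| a_dag |n> = sqrt(n+1)."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=-1)

dim = 6
ad = a_dagger(dim)
a = ad.conj().T                      # annihilation operator
vac = np.zeros(dim)
vac[0] = 1.0                         # vacuum state |0>

# |N> = (a_dag)^N / sqrt(N!) |0>  -- one mode holding N quanta ("particles")
for N in range(3):
    state = np.linalg.matrix_power(ad, N) @ vac / np.sqrt(factorial(N))
    print(f"|{N}> =", np.round(state, 3))

# Check the canonical commutator [a, a_dag] = 1 (exact away from the truncation edge)
comm = a @ ad - ad @ a
print("[a, a_dag] diagonal:", np.round(np.diag(comm), 3))
```

The commutator check fails only in the last diagonal entry, an artifact of truncating the infinite-dimensional Fock space.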
187,254 | If the lenses in a camera are not set correctly, the intended subject will be "out of focus" in the resulting image. There seems to be no loss of information here. The photons are just steered to the wrong CCD element. If we knew the exact specification and configuration of the lenses, would it be possible to apply a transform to the image to see the subject in focus? Sharpening filters are often used to improve "out of focus" images, but I suspect a much better result could be possible if the physics of the optics were considered. | Unfortunately, there is a loss of information for physical images, i.e. images with a finite signal-to-noise ratio per pixel. An out-of-focus lens acts like a linear transformation, i.e. a matrix between the focused ideal image and the actual image. To reverse that transformation we have to calculate the inverse of the matrix. Depending on the severity of the blurring, that inverse may not exist, or it may, if it exists, amplify noise and sampling errors in the resulting image very strongly. Imagine the worst-case blurring matrix for a two-pixel image: $\begin{bmatrix}0.5&0.5\\0.5&0.5\end{bmatrix}$ This matrix is singular and cannot be inverted at all. Take a less severe case (20% blurring); now the matrix is $\begin{bmatrix}0.8&0.2\\0.2&0.8\end{bmatrix}$ and the inverse of that is $\begin{bmatrix}4/3&-1/3\\-1/3&4/3\end{bmatrix}$ There are two problems with this one: first, because of negative coefficients in the inverse you may end up with negative pixel values in the reconstructed image, which is unphysical. Secondly, the diagonal elements are larger than one, which amplifies noise. Having said that, one can achieve remarkable results if the resulting image has a very high signal-to-noise ratio and if the inverse transformation can be reconstructed with high precision. If you are interested in this area I would urge you to do your own experiments with a few matrices to get a feel for what's going on. Ideally, image blurring is a local phenomenon, i.e. we can restrict ourselves to areas of an image that are only a few (maybe 2-5) pixels wide. This reduces the problem to small matrices. Wolfram Alpha can do the matrix inversion for you, so you don't have to set up any math package (although numpy is easy to use, if you know Python). As for the experimental side of it, the proper way to calibrate a lens requires producing a series of high-contrast test images of either pinholes (delta functions), to retrieve the blurring matrix directly, or, even better, of high-frequency stripe patterns, to measure the blurring in the Fourier domain. | {
"source": [
"https://physics.stackexchange.com/questions/187254",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/82646/"
]
} |
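Taking up the answer's suggestion to experiment, here is a short numpy sketch (mine, not the answerer's) that reproduces the two $2\times2$ examples and shows the noise amplification directly:

```python
import numpy as np

B50 = np.array([[0.5, 0.5], [0.5, 0.5]])   # worst-case blur: singular
B20 = np.array([[0.8, 0.2], [0.2, 0.8]])   # 20% blur: invertible

print("det(B50) =", np.linalg.det(B50))         # 0.0 -> no inverse exists
print("inverse of B20:\n", np.linalg.inv(B20))  # [[ 4/3, -1/3], [-1/3, 4/3]]

# Noise amplification: blur a 'true' two-pixel image, add sensor noise,
# then deblur with the exact inverse.
rng = np.random.default_rng(1)
true_img = np.array([1.0, 0.0])
recorded = B20 @ true_img + rng.normal(scale=0.05, size=2)  # 5% noise
restored = np.linalg.inv(B20) @ recorded
print("true:", true_img, "recorded:", recorded.round(3),
      "restored:", restored.round(3))
```

With larger, nearly singular blur matrices the inverse's entries grow without bound, which is the noise amplification described above.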
187,403 | How do grandfather clocks keep going? The pendulum is what makes the clock go. However, the pendulum will slow down due to friction. What energy source keeps the pendulum from eventually stopping? | The premise of your question is incorrect: in fact, the pendulum is what keeps the clock from running! And the clock keeps the pendulum running! A clock is essentially a motor: a device that uses energy from some source to drive the hands of the clock around and around. The source of the energy varies; it could be a tightly wound spring, or a weight dropping down after being raised to some height. The energy is dissipated in the friction in the various gears that are used to reduce the speed of the motor for the different hands. Left on its own, the speed of this motor would depend only on the friction in the various gears... The role of the pendulum is critical. In part of its motion back and forth, it stops the gear train from moving. As the pendulum moves further in its swing, it releases a tooth of the gear, which rotates a little until another part of the pendulum catches another tooth. So each swing of the pendulum allows the clock "motor" to rotate only a fixed number of teeth (usually exactly one tooth). Here's a simple example of an escapement: The next trick is to have the teeth of the gear give a little push to the pendulum as each tooth is released. This compensates for the friction in the pendulum, which would otherwise stop the pendulum in a few hours. So the energy source in the clock keeps the pendulum swinging, as the pendulum regulates the rotation of the gear... | {
"source": [
"https://physics.stackexchange.com/questions/187403",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/21904/"
]
} |
187,456 | When we ride in a car and tune in to an FM radio station, why doesn't our motion disturb the frequency?
Like the Doppler effect? | It does! However, it doesn't change the frequency enough to matter. An FM transmission is not a precise frequency. Instead it spans a range of about 100 or 200 kHz, depending on which country you are in. So your FM radio actually accepts a range of frequencies either side of the central frequency. Let's suppose you're travelling at the maximum speed permitted in the UK, which is 70 mph or just over 30 m/s. This will Doppler shift the frequency of the FM station by a factor of about 1.0000001. In the UK the FM frequency is around 100 MHz, so the shift in frequency is about 10 Hz. This is only 0.01% of the range of frequencies the transmission uses, so the frequency shift does not affect reception. To seriously affect reception you'd need to be travelling at around 100,000 miles per hour. For completeness I should probably add that modern radios auto-tune, and would automatically compensate for a change of frequency due to the Doppler shift. | {
"source": [
"https://physics.stackexchange.com/questions/187456",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/82747/"
]
} |
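The arithmetic above is a one-liner; here is a tiny Python sketch of my own reproducing the numbers in the answer:

```python
c = 3.0e8            # speed of light, m/s
f_station = 100e6    # typical UK FM carrier, Hz

def doppler_shift(speed_mps, f=f_station):
    """Non-relativistic Doppler shift for a receiver moving at speed_mps."""
    return f * speed_mps / c

mph = 0.44704  # metres per second in one mile per hour
print(f"70 mph:      {doppler_shift(70 * mph):8.1f} Hz")       # ~10 Hz
print(f"100,000 mph: {doppler_shift(100_000 * mph):8.1f} Hz")  # ~15 kHz
```

At 100,000 mph the shift is roughly 15 kHz, a noticeable fraction of the ~100 kHz channel, which is where reception would start to suffer.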
187,917 | I've heard many scientists, when giving interviews and the like, state that if one were falling into a black hole massive enough that the tidal forces at the event horizon weren't too extreme, that you wouldn't "notice" or "feel" anything, and so forth. Thinking about this for a few minutes, it seems to be quite wrong. If you're falling feet first for example, as your feet cross the horizon, your brain can no longer receive any information from them, as the information would have to travel faster than light. Once you are entirely within the horizon, no part of your body closer to the singularity can send any sort of signal to any part of your body that is further away, for the same reason. Even bloodflow would stop, as blood that is pumped downward towards your feet could never be pumped back up again. In other words, inside the event horizon is a series of even more event horizons, like the layers of an onion, infinitely thin. Am I missing something important? | This is a great question, because it's a subtle variation on the usual question about spaghettification and supermassive black holes, and shows somewhat deeper thinking. So let's assume the black hole is supermassive -- or more specifically that you are really tiny compared to the black hole -- so that we can ignore tidal effects. Tidal effects are the difference in gravitational "force" on two different parts of an object. In this case, I mean the difference between the acceleration of your feet and your head. Your feet are slightly closer to the center of the black hole, so they will experience a slightly greater acceleration than your head. You would feel this as a slight tug on your feet. The bigger the hole or the farther you are from it, the smaller the difference will be. At some point, it will be so small that it's "in the noise" and you don't even notice it. We're assuming that. If your head were somehow stuck just outside the horizon, † you would be right. I don't think anyone would claim you wouldn't feel anything if your head were attached to a rocket keeping you out, while your feet dangled inside the black hole. :) But those aren't tidal effects; they're acceleration effects. On the other hand, if you are falling into the supermassive black hole (even if you jumped off this crazy rocket just an instant earlier), things are very different. Your head and feet are being "accelerated" at basically the same rate (relative to some stationary coordinate system, let's say) because you are so small compared to the black hole. So your head is moving at roughly the same speed as your feet, which means that the signal doesn't have to actually move outward relative to these stationary coordinates (it can't). Instead, it just needs to move inward more slowly than your head. And that's entirely allowed everywhere, even well inside the black hole. You'll typically see this sort of thing represented by a graph of the light cones. And inside the horizon, those light cones "tip over" towards the singularity. This means that even light pointed outward can't actually move outward; the outward-pointing light ray will still be moving toward the singularity. But your head (and your feet) are moving toward the singularity faster, so your head enters into the light cone of your feet. Which means that relative to your head light can still move outward, as can a nerve impulse. Basically, think of two light rays given off by your feet: one directed toward the singularity, and the other directed away from it. 
You'll probably believe that they have different speeds. The speed of your feet is somewhere between those two, as is the speed of your head. So all that needs to happen is for your head to enter the future light cone of your feet before your head hits the singularity. Not a problem, since the black hole is so large and you've still got a while to go. Now, you might be concerned that your feet will hit the singularity before your head gets that first signal, which would seem weird. But then you remember that the concept of simultaneity is relative . Your head and feet are in the same reference frame -- at least far from the singularity -- so they experience things at basically the same rate, and nearly the same time as judged in their own reference frame. † Just as a side note, you should try to distinguish between an event horizon and an apparent horizon . Technically, you're talking about the latter, which is the local surface where light rays that are directed outward can't actually move outward. An event (or absolute) horizon, on the other hand, has nothing to do with local effects -- at least not directly. You can only know if something is an event horizon if you know the entire future history of the universe. Unfortunately, the term "event horizon" is thrown around in popular descriptions of black holes when it shouldn't be. They happen to be the same for certain special black holes, but they really are different concepts, and the right way to think about a horizon is different in the two cases. I just use the term "horizon", and anyone who knows the difference will figure it out. A good (and accurate) popular reference for all such things is Thorne's "Black holes and time warps" . The standard technical reference is Hawking & Ellis's "The large-scale structure of space-time". | {
"source": [
"https://physics.stackexchange.com/questions/187917",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/82958/"
]
} |
188,112 | I have read about why a candle flame is in the shape of a tear drop in the presence of gravity. The top portion, which is sharp, is directly above the wick, where the soot particles from the wick rise up due to buoyancy and burn. However, when I keep two flames side by side, with a space of about 3mm between the actual yellow boundaries of the two flames, I observe that the two flames are trying to merge and the widths of the flames increase in order to mix with each other. When I bring the flames a little closer, to the point where the initial yellow boundaries would have just touched, there is only a single flame now, a much bigger tear drop, with the highest "sharp" point being right in the middle of the two candles/lamps. Could someone please explain this? | I did some experimenting (playing? :-)). The effect is "ill conditioned" and, while the result when the wicks are in close proximity is always a joined flame, the results when the separation is increased slightly are very 'time variable'. Using even quite thin candles (thicker than tapers - about 10mm od), flame proximity could not be got close enough to cause flame joining when the bodies were vertical. I angled two candles (see photos) and mounted them on bases which allowed X (and Z) separations to be varied. I took about 90 photos. Best results for this purpose seemed to be given by using flash for the basic image and reducing shutter speed below what would usually be used, to get a degree of time averaging of the flame motion. At higher shutter speeds a flame that was flickering and that visually looked far taller than when the candles were widely separated was almost always far shorter in the photos than it appeared to the eye. At shutter speeds of 1/20s or longer the perceived and photographed flames appear similar. I believe that the mechanisms that I described originally (material below) were generally correct, but the impression given by experiment is that heat transfer between flames is the main factor in flame growth, which escalates into flame combination at very low separations. Apart from flame size there are no apparent gross indications of decreasing proximity. Flame colouration on the lower curve of each flame on the side facing the other candle tends to change, with more of the red outer layer that is consistently seen further up the flame, but this effect is variable with flame flickering from air currents or flame interactions. At very low separations and prior to joining, flame sizes suddenly increase substantially and flames may become very unstable with pronounced interactions between flames, but also may coexist stably for extended periods. [Somewhere in Brazil the Lorenz butterfly is enjoying itself]. Larger version here, and sequence. Much larger view here - 5442 x 1064. Shorter: In the region between the two candles a number of effects combine to produce increased vertical gas flow (of both air and combustion products) and higher temperatures at a given height. This raises the height of the combustion zones compared to elsewhere in the flame, and the results are "regenerative" and continue until a steady state is reached.
Factors which cause the above temperature rise and increased flow include: blocking of incoming radial air due to the other candle, two streams of approximately tangential air into the shared zone, radiative heating of incoming air further away than elsewhere due to two energy sources, greater convection in this zone due to increased energy input, and greater volatilised fuel feed (less so than for air feed). Longer: A candle flame is a high-temperature chemical reaction between atmospheric oxygen and gaseous 'fuel', consisting of volatilised solids - typically paraffin or bees' wax - that is liquefied by radiated energy from the reaction above it and drawn vertically by capillary action into the reaction zone to replace fuel that has reacted. The high temperature of reaction relative to the surrounding oxygen source produces low-density, high-temperature combustion products which undergo classic "convective heat transfer" as the hot, less dense combustion products rise vertically and are displaced by cold air which is input from all sides. Consider a single ideal candle burning in isolation in still air: The flame radiates energy down into the 'wax' below it, causing it to melt (aka solid-to-liquid phase change). The liquid is drawn up the provided "wick" by capillary action until the closeness to the combustion zone raises its temperature to vaporisation point (aka liquid-to-gaseous phase change). For an isolated, ideal, symmetrical candle with a vertical wick in otherwise still air, cool, relatively dense air flows into the combustion zone equally from all sides and hotter, less dense combustion products become 'buoyant' due to density differences, and as incoming air is entering from all radial directions, the hotter gases rise vertically above the centre of the candle. Fuel feed is vertical into the combustion zone, air feed is horizontal (& radially inwards) into the combustion zone, and the logical and only place left for the hot, less dense combustion products is 'up'. Radiant energy escapes radially in a rotationally symmetrical pattern as light and heat (the two are the same in nature, being differentiated only by wavelength). The radiant energy output is not symmetrical when viewed in cross section due to the flow of reactants, changes in temperature and varying opacity of the flame zone. Because the flame zone is "rotationally" symmetrical, it appears the same from any radial direction, but to a radially positioned observer (the safest place to be when looking at a candle) it appears wider than thick (ie wide and flattened in depth), as the width of the combustion zone is easily seen but the depth is hidden by the 'flame'. A non-ideal candle may not be truly symmetrical, the wick may be at an angle, and fuel flow up the wick may not enter the zone symmetrically relative to the candle body, but the above effects are observed to occur 'well enough' in everyday candles. Now consider two identical burning candles placed a distance "d" apart. When d is large the two candles burn independently and appear as before. As d is diminished the air between the two candles starts to be affected by both flames. Instead of drawing air from infinity, the centre air must be fed from "either side" of the centre line, as "there is a candle in the way out towards infinity". Also, the air between the candles starts to be preheated by two sources of radiation rather than one, so it is hotter than at other points the same distance from a candle centre.
As d is decreased, the air near the common zone becomes increasingly hotter than elsewhere, so air starts to rise convectively sooner, prior to meeting the main combustion zone, so that combustion happens higher up in the inter-candle region. This effect can be seen quantitatively in this crop from the main image below. As d is further reduced to, say, under a body diameter, there is no air path from the direction of the other candle and all air along the joining line must enter approximately tangentially. The incoming air is substantially preheated, both by radiation and by gases carried into this hotter, faster-rising zone from further around the candle, and as d further decreases, any point in the inter-candle region becomes essentially indistinguishable from a point at somewhat less distance from the centre elsewhere on the circumference. At low enough separations, two unavoidably become one. This image is non-ideal but shows, in the inter-candle zone: the lack of air path between the two candles (there's another candle in the way!), the transfer of radiative energy from two sources (place your finger that close and you know what would happen), and a higher level of equivalent combustion zones due to greater air flow, leading to ... Image is from here, unrelated - Wikipedia. ============================================================= And many more. Larger version here, about 3000 pixels wide - enough to see most usefully, should anyone care. | {
"source": [
"https://physics.stackexchange.com/questions/188112",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12841/"
]
} |
188,208 | I know that this machine does not work, via thermodynamics. I am asking for an analysis in terms of mechanics and magnetism. Anyway, so here is the machine: (source: cabinetmagazine.org ) The magnet (the red ball) pulls the ball up the ramp, and then it drops the ball through the hole, which then rolls down, and goes up the ramp again. Thermodynamics shows that this cannot work. From a mechanics and magnetism perspective, what happens when you do this, and why can't it happen? I have a source saying that it would work if it were frictionless and we didn't try to extract energy from it ( here ), so in a certain sense, it is very close to possible (they didn't provide a full analysis though.) Another Image: | There is no problem assuming that the ball will fall through the hole. Even if the magnetic force is large, it only needs to be larger than the gravity component along the inclined surface. This component is $mg \cos \theta$. Once the ball gets to the hole the gravity felt by the ball increases to $mg$, so it can happen that the ball that initially went up now goes down the hole. However, notice that as the ball moves downward the gravity force along the path starts to diminish again (the path is becoming more horizontal); also the magnetic force decreases as the ball moves away from the magnet. If you make a graph of both forces you will find that there is always a point on the ball's path where both forces have the same magnitude but opposite sign. That will be the equilibrium position of the ball, where it will stay at rest if you extract all its initial kinetic energy. Thus it is not a perpetuum mobile after all.
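To make the force-balance argument concrete, here is a minimal numerical sketch. All numbers are illustrative assumptions of mine (a dipole-like pull falling off as $1/r^4$, a 10 g ball, and $\theta$ taken as the angle between the ramp and the vertical, so the along-ramp gravity component is $mg\cos\theta$ as in the notation above):
```python
import numpy as np

# Illustrative parameters (assumed, not from the original post)
m, g = 0.01, 9.81            # ball mass (kg), gravity (m/s^2)
theta = np.radians(60)       # angle between the ramp and the vertical
k = 1e-6                     # strength of the dipole-like pull (N*m^4)

r = np.linspace(0.05, 0.5, 1000)   # distance from the magnet along the path (m)
F_mag = k / r**4                   # attraction toward the magnet
F_grav = m * g * np.cos(theta)     # gravity component along the ramp

# A sign change in the net force marks the equilibrium point
net = F_mag - F_grav
idx = np.argmin(np.abs(net))
print(f"forces balance near r = {r[idx]:.3f} m")
```
With these made-up numbers the two curves cross at a single point, which is exactly the resting place described above. | {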
"source": [
"https://physics.stackexchange.com/questions/188208",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/40394/"
]
} |
188,418 | I'm learning about work in my dynamics class right now. We have defined the work on a particle due to the force field from point A to point B as the curve integral over the force field from point A to B. From math I know that if a vector field has a potential, we only need to evaluate the potential at point B minus the potential at point A to get the result of the curve integral. In the text that I'm reading, it's explained that if the integral over a force-field is path-independent, then the force field $F = -{\rm grad}(V)$, where $V$ is the potential. Why is it defined as the negative gradient? Doesn't one determine the potential from $F$ mathematically? Why do we impose the sign on the potential? | We introduce a minus sign to equate the mathematical concept of a potential with the physical concept of potential energy. Take the gravitational field, for example, which we approximate as being constant near the surface of Earth. The force field can then be described by $\vec{F}(x,y,z)=-mg\hat{e_z}$, taking the up/down direction to be the $z$ direction. The mathematical potential $V$ would be $V(x,y,z) = -mgz+\text{Constant}$ and would satisfy $\nabla V=\vec{F}$. With this convention, the potential increases as the height decreases, which would force us to redefine mechanical energy as $T-V$ in order to maintain conservation. Instead of redefining mechanical energy, we introduce the minus sign $\vec{F} = -\nabla V$, which equates the physical notion of potential energy with the mathematical notion of the scalar potential.
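A quick symbolic check of the sign convention, as a minimal sympy sketch ($V = mgz$ here is the physical potential energy from the example above):
```python
import sympy as sp

x, y, z, m, g = sp.symbols('x y z m g', positive=True)

# Physical potential energy for the uniform field near Earth's surface
V = m * g * z

# F = -grad(V) recovers the downward gravitational force
F = [-sp.diff(V, c) for c in (x, y, z)]
print(F)   # [0, 0, -g*m]  ->  the force points straight down, as it should
```
Dropping the minus sign would instead give an upward "force" from a potential that grows with height, which is exactly the bookkeeping problem described above. | {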
"source": [
"https://physics.stackexchange.com/questions/188418",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/68592/"
]
} |
188,673 | I always see the label and it says 350G's withstandable. What would put this over 350G's? Is it even possible to hit 350Gs of force to a hard drive? | Is it even possible to hit 350Gs of force to a hard drive? Sure is. Drop it on the floor. You are thinking about sustained forces. 350g sustained won't happen even in rocket launches. But momentary forces can easily peak at this level. Note that the G limit on the drive is for when it's not running. No spinning drive will like 350g, except maybe in particular directions that will never happen in reality. If you drop your hard drive from $1~\text{m}$ it will hit the floor at around: $$
\sqrt{2\times 1~\text{m}\times 1~g}\approx 4.4~\text{m}\cdot\text{s}^{-1}
$$ At exactly $350~g$ it would come to a stop in: $$
\frac{\left(4.4~\text{m}\cdot\text{s}^{-1}\right)^2}{2\times 350~g}=2.8~\text{mm}
$$ (Note that due to the way the math works out, the stopping distance when dropping from a height $h$ is just $hg/a$). Since the actual impact will probably be a varying acceleration and the rigid case of the hard drive will probably deform less than $3~\text{mm}$, the actual peak acceleration can easily exceed $350~g$.
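The arithmetic above, as a short sketch:
```python
import math

g = 9.81          # m/s^2
h = 1.0           # drop height (m)
a = 350 * g       # rated deceleration

v = math.sqrt(2 * g * h)         # impact speed
d = v**2 / (2 * a)               # stopping distance at a constant 350 g
print(f"impact speed  = {v:.2f} m/s")      # ~4.4 m/s
print(f"stop distance = {d*1000:.1f} mm")  # ~2.9 mm (= h*g/a = h/350)
```
Any impact that brings the drive to rest over less than a few millimetres of deformation therefore exceeds the rating. | {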
"source": [
"https://physics.stackexchange.com/questions/188673",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/83275/"
]
} |
188,678 | I am looking to learn AP Physics C (if you are not acquainted with the curriculum see here ), both the mechanics part and the electricity/magnetism part. I have not had much exposure to an AP-style physics curriculum for AP Physics 1 or 2 (before C). Ideally this book would be some sort of AP-preparation book, but I am not taking a course and most of the books I have seen which are designed as "AP-prep" are meant to be supplements to an actual course. Ideally this book would also be comprehensive, easy to follow, and it would explain the intuition (if there is any) behind concepts. A book which I found to be the perfect example of this (at least for me) was Morris Tenenbaum's and Henry Pollard's book, Ordinary Differential Equations (maybe you have read it). | | {
"source": [
"https://physics.stackexchange.com/questions/188678",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/79000/"
]
} |
189,060 | I was reading a book on differential geometry in which it said that a problem early physicists such as Einstein faced was coordinates and they realized that physics does not obey man's coordinate systems. And why not? When I am walking from school to my house, I am walking on a 2D plane the set of $\mathbb{R} \times \mathbb{R}$ reals. The path of a plane on the sky can be characterized in 3D parameters. A point on a ball rotates in spherical coordinates. A current flows through an inductor via cylindrical coordinates. Why do we need coordinate-free description in the first place? What things that exist can be better described if we didn't have a coordinate system to describe it? | That's a very good question. While it may seem "natural" that the world is ordered like a vector space (it is the order that we are accustomed to!), it's indeed a completely unnatural requirement for physics that is supposed to be built on local laws only. Why should there be a perfect long range order of space, at all? Why would space extend from here to the end of the visible universe (which is now some 40 billion light years away) as a close to trivial mathematical structure without any identifiable cause for that structure? Wherever we have similar structures, like crystals, there are causative forces that are both local (interaction between atoms) and global (thermodynamics of the ordered phase which has a lower entropy than the possible disordered phases), which are responsible for that long range order. We don't have that causation argument for space (or time), yet. If one can't find an obvious cause (and so far we haven't), then the assumption that space "has to be ordered like it is" is not natural and all the theory that we build on that assumption is built on a kludge that stems from ignorance. "Why do we need coordinate free in the first place?"... well, it's not clear that we do. Just because we have been using them, and with quite some success, doesn't mean that they were necessary. It only means that they were convenient for the description of the macroscopic world. That convenience does, unfortunately, stop once we are dealing with quantum theory. Integrating over all possible momentum states in QFT is an incredibly expensive and messy operation that leads to a number of trivial and not so trivial divergences that we have to fight all the time. There are a few hints from nature and theory that it may actually be a fools errand to look at nature in this highly ordered way and that trying to order microscopically causes more problems than it solves. You can listen to Nima Arkani Hamed here giving a very eloquent elaboration of the technical (not just philosophical) problems with our obsession with space-time coordinates: https://www.youtube.com/watch?v=sU0YaAVtjzE . The talk is much better in the beginning when he lays out the problems with coordinate based reasoning and then it descends into the unsolved problem of how to overcome it. If anything, this talk is a wonderful insight into the creative chaos of modern physics theory. As a final remark I would warn you about the human mind's tendency to adopt things that it has heard from others as "perfectly normal and invented here". Somebody told you about $\mathbb R$ and you have adopted it as if it was the most natural thing in the world that an uncountable infinity of non-existing objects called "numbers" should exist and that they should magically map onto real world objects, which are quite countable and never infinite. Never do that! 
Not in physics and not in politics. | {
"source": [
"https://physics.stackexchange.com/questions/189060",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/81952/"
]
} |
189,600 | Why and how does negative velocity exist? I have read on the internet about negative velocity but I still don't understand how it can even exist since time is positive and so is length. By doing some math I came to the conclusion it can't and should not exist and yet there are so many papers and videos trying to explain it. | Velocity is a vector. Speed is its magnitude. Position is a vector. Length (or distance) is its magnitude. A vector points in a direction in the given space. A negative vector (or more precisely "the negative of a vector") simply points the opposite way. If I drive from home to work (defining my positive direction), then my velocity is positive if I go to work, but negative when I go home from work. It is all about direction seen from how I defined my positive axis. Consider an example where I end up further back than where I started. I must have had negative net velocity to end up going backwards (I end at a negative position). But only because backwards and forwards are clearly defined as the negative and positive directions, respectively, before I start. So, does negative velocity exist? Well, since it is just a matter of words that describe the event, then yes. Negative velocity just means velocity in the opposite direction than what would be positive. At the core of it, signs have no meaning in real life. Directions have meaning, and signs are a mathematical way to indicate or alter 1-dimensional direction. | {
"source": [
"https://physics.stackexchange.com/questions/189600",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/83711/"
]
} |
190,262 | I have come across a question that asked me to find the radius of curvature of a projectile. As far as I know, the path of a projectile is a parabola and I have found mention of the radius of curvature referring to lenses and mirrors. But in optics, the lens and mirrors were assumed to be part of a circle. My questions are: How can a parabola have a center from which a radius is to be measured? Does the radius of curvature change with the position of the body (in projectile motion)? In mechanics and mathematics, what is the radius of curvature and how does one calculate it (in the case of a parabola)? | So let's start with your last question: informally, the radius of curvature is a measure of how pointy a curve is and how sharp its corners are. Given a curve $y$, you can calculate its radius of curvature using this formula: $$\dfrac{\left[1+\left(\dfrac{dy}{dx}\right)^2\right]^{3/2}}{\left|\dfrac{d^2y}{dx^2}\right|}$$ You might ask what radii of circles have to do with curvature, so it's worthwhile explaining it. This is a parabola: As you can see, the sides of the parabola are pretty flat, whereas its vertex and the surrounding region (i.e. at $x=0$) have a relatively sharp corner. So the question is how to mathematically describe this property. Well, one way to do it is to use circles. The part of the curve that is pretty flat can be considered to be a section of a really large circle (as shown in the picture); this circle has a large radius, and hence we say this part of the curve has a large radius of curvature, that is, it's very flat. On the other hand, the vertex of the parabola and the surrounding region are relatively sharp and pointy, hence you'll notice it takes a circle with a small radius to fit this edgy section of the parabola; we say this region has a small radius of curvature. You'll also note that the radius of curvature for a curve changes from one point on the curve to another. You'll further notice that, when the region is flat, the rate of change of the radius of curvature is small (you can use a small number of huge circles to describe a flat region), whereas it takes a lot of circles with small radii to describe a sharp corner, and hence the rate of change of the radius of curvature is great at these regions. How can a parabola have a center from which a radius is to be measured? It does not, but every point on the parabola and the surrounding region can be regarded as a part of a circle with a certain radius. Does the radius of curvature change with the position of the body (in projectile motion)? Yes, as stated earlier, the radius of curvature changes from point to point on a curve; since the path of the projectile can be modeled as its position on a parabola, the radius of curvature will change with the change of position of the projectile.
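A brief sympy sketch applying the formula above to the parabola $y = x^2$ (my own choice of example curve):
```python
import sympy as sp

x = sp.symbols('x', real=True)
y = x**2                      # an example parabola

yp, ypp = sp.diff(y, x), sp.diff(y, x, 2)
R = (1 + yp**2)**sp.Rational(3, 2) / sp.Abs(ypp)

for x0 in (0, 1, 2):
    print(f"R at x = {x0}: {float(R.subs(x, x0)):.3f}")
# R at x = 0: 0.500,  x = 1: 5.590,  x = 2: 35.046
# R grows away from the vertex: the curve flattens out,
# exactly as the fitted-circle picture suggests.
```
| {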
"source": [
"https://physics.stackexchange.com/questions/190262",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
190,274 | Prior to the Big Bang all matter was compressed into a point of high density. Why isn't all matter already entangled? | | {
"source": [
"https://physics.stackexchange.com/questions/190274",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/84029/"
]
} |
190,308 | What do we know about accretion rates of micro black holes? Suppose a relatively small black hole (mass about $10^9$ kilograms) would be thrown into the sun. Eventually this black hole will swallow all matter into the star, but how much time will pass before this happens? Are there any circumstances where the black hole would trigger a gravitational collapse in the core, and result in a supernova? There seems to be some margin for the accretion heating to counter or exceed the heating from fusion, so it could throw the star over the temperature threshold for carbon-12 fusion and above. The black hole is converting nearly 80% - 90% of the rest-mass of the accretion matter to heat, while fusion is barely getting about 0.5% - 1%. Bonus question: Could this be used to estimate a bound on primordial micro black holes with the fraction of low-mass stars going supernova? | The micro black hole would be unable to accrete very quickly at all due to intense radiation pressure. The intense Hawking radiation would have a luminosity of $3.6 \times 10^{14}$ W, and a roughly isotropic flux at the event horizon of $\sim 10^{48}$ W m$^{-2}$. The Eddington limit for such an object is only $6 \times 10^{9}$ W. In other words, at this luminosity (or above), the accretion stalls as matter is driven away by radiation pressure. There is no way that any matter from the Sun would get anywhere near the event horizon. If the black hole was rotating close to the maximum possible then the Hawking radiation would be suppressed and accretion at the Eddington rate would be allowed. But this would then drop the black hole below its maximum spin rate, leading to swiftly increasing Hawking radiation again. As the black hole evaporates, the luminosity increases, so the accretion problem could only become more severe. The black hole will entirely evaporate in about 2000 years. Its final seconds would minutely increase the amount of power generated inside the Sun, but assuming that the ultra-high energy gamma rays thermalised, this would be undetectable. EDIT: The Eddington limit may not be the appropriate number to consider, since we might think that the external pressure of gas inside the Sun might be capable of squeezing material into the black hole. The usual Eddington limit is calculated assuming that the gas pressure is small compared with the radiation pressure. And indeed that is probably the case here. The gas pressure inside the Sun is $2.6 \times 10^{16}$ Pa. The outward radiation pressure near the event horizon would be $\sim 10^{40}$ Pa. The problem is that the length scales are so small here that it is unclear to me that these classical arguments will work at all. However, even if we were to go for a more macroscopic 1 micron from the black hole, the radiation pressure still significantly exceeds the external gas pressure. Short answer: we wouldn't even notice - nothing would happen. Bonus Question: The answer to this is it doesn't have a bearing on the supernova rate, because the mechanism wouldn't cause supernovae. Even if the black hole were more massive and could grow, the growth rate would be slow and no explosive nucleosynthesis would occur because the gas would not be dense enough to be degenerate. Things change in a degenerate white dwarf, where the enhanced temperatures around an accreting mini-black hole could set off runaway thermonuclear fusion of carbon, since the pressure in a degenerate gas is largely independent of temperature.
This possibility has been explored by Graham et al (2015) (thanks Timmy), who indeed conclude that type Ia supernova rates could constrain the density of micro black holes in the range $10^{16}$ to $10^{21}$ kg.
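The two luminosities quoted above follow from the standard Hawking and Eddington formulas; a short sketch in SI units:
```python
import math

hbar, c, G = 1.0546e-34, 2.998e8, 6.674e-11
m_p, sigma_T = 1.6726e-27, 6.6524e-29   # proton mass, Thomson cross-section
M = 1e9                                  # black hole mass (kg)

L_hawking = hbar * c**6 / (15360 * math.pi * G**2 * M**2)
L_eddington = 4 * math.pi * G * M * m_p * c / sigma_T

print(f"Hawking   : {L_hawking:.2e} W")    # ~3.6e14 W
print(f"Eddington : {L_eddington:.2e} W")  # ~6.3e9 W
```
The five-orders-of-magnitude gap between the two numbers is the whole argument: radiation wins and accretion stalls. | {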
"source": [
"https://physics.stackexchange.com/questions/190308",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/955/"
]
} |
191,189 | I'm working on something and I need to know the wavelength of the laser pointer that I'm using. Can you suggest me a way, using some optics formulae, or anything else to calculate the wavelength? | Your iPhone is a pretty good grating. I just did a simple experiment with an iPhone, a green laser pointer and a sheet of graph paper. This was the result: The display of the iPhone 6 has a resolution of 326 ppi - meaning we have a "grating spacing" of 25.4/326=0.0779 mm. Different models have different resolutions - make sure you find out what your phone has and don't just use the above. 6 Plus has 401 dpi, the 5 and 5s have 326 dpi. You can use pretty much any screen you happen to have lying around... if you can find the pixel size, you can use it. In the image I see 5 peak separations over 7 squares (of 1/4 inch each), making the spacing 8.9 mm* The grid paper was 127 cm from the face of the phone. We can calculate the wavelength by looking at the following diagrams: Similar triangles tell us that $\frac{s}{D}=\frac{\lambda}{d}$ from which it follows that $$\lambda = \frac{s\cdot d}{D}= 546 nm $$ That is pretty close to the 532 nm usually quoted for a laser pointer. Setting this up with a larger distance to the screen would have allowed more accurate estimation of the peak separation. Still - this got me to 3% without an optical bench (kitchen counter and kitchen ceiling, one hand holding laser pointer while taking picture with the other hand... Yes I would say 3% is OK and you can easily do better.) *Looking more closely at the image, the dot spacing is a little bit less than 5/7 - using a ruler on the image I get about 8.75 mm. That improves the estimate to 541 nm... getting within 2% of the actual value. I doubt my exercise book paper is more accurate than that. As @Benjohn pointed out you could try to use the front facing camera. It takes all kinds of things out of the equation but you lose some resolution. Here was my first attempt: I then repeated it with a 6 Plus (finer resolution screen): It looks possible to deduce the peak spacing directly from that... Afterword So I did play around a bit more with the data. First, I re-measured the distance from the kitchen counter to the ceiling and found that the width of my tape measure wasn't what I thought it was. This made the distance 1 cm larger than I originally had it; also, using some autocorrelation and filtering functions, I found the "true" peak spacing was 8.85 mm, and my new estimate of the wavelength from the first image was updated to 539 nm. Next I tried to use the last image - "self calibrated" image taken with the front facing camera of the 6 Plus. It is hard to get good specs on the camera: from metadata I found the focal length was 2.65 mm, but the pixel size is more elusive. I tried two different methods: in the first method I placed a ruler at exactly 12" (± 0.1") from the front of the camera, and could see 25 cm (± 3 mm). With 960 pixels across, this puts the angular resolution (angle / pixel) at about 0.87 mrad. Taking a picture of a ruler at this distance and analyzing the spacing between lines gave me a value of 0.88 mrad. This is within the error I expect from this measurement. The "blobs" in the last photo were hard to measure accurately - but again some Fourier magic came to my rescue and I determined them to be spaced about 10.1 pixels apart. With the iPhone 6 plus having a finer grid, this gave me a wavelength of 564 nm. Not as good as the other measurement - but not bad for such a blobby image. Re. 
the Fourier magic: this is the autocorrelation of the image after summing along the Y dimension and performing a convolution with a Ricker filter first: And a peak finding algorithm found the following peaks (after fitting to the central five points this was the residuals plot): It can be seen that the peak spacing in the blob image could be estimated with remarkable precision. I attribute the fact that the final answer was "not that great" to the lack of careful calibration of the camera - not the image obtained. There is hope for this method. CDs and DVDs: I was curious how well CDs and DVDs might work, so I rigged up a slightly better experiment. Distance from disk to screen was 163 cm, and the laser pointer was clamped to reduce motion. With the DVD (Blank Fujifilm 4.7 GB DVD-R), the first maxima were at 170 cm from the central spot, and it was quite easy to pick the location within a couple of mm (the spot was narrower in the direction I was measuring). There is some ghosting, but the central peak is not hard to pick out. For the CD (Very Best of Fleetwood Mac, disk 1), the angles of diffraction were smaller and I could see the first and second maximum on each side of the reflected central spot; however, the second one was so spread out it was not easy to pick a clear center: I am not sure if we are seeing unequal spacing between tracks at work, or multiple reflections in the CD coating - I suspect the latter as the effect was much stronger at the lower-angle second peak. At any rate, the deflection angle could be calculated for each case as $\tan^{-1}\frac{s}{D}$: DVD - 46.17°,
CD - 20.98°. These angles are no longer "small" so we need to be a bit more careful about our equations. We can see that $\frac{\lambda}{d}=\sin\theta$ and $\frac{S}{D}=\tan\theta$. If we assume the wavelength is known, we find the track spacing from this experiment: $$d = \frac{\lambda}{\sin\theta}$$ This gives DVD: 737 nm,
CD: 1486 nm. The nominal spacing for a DVD is 740 nm, and for a long-playing CD it can be 1500 nm - but CDs can vary quite a bit, depending on the recording length they want to achieve. Unless you know what your disk is, CDs should not be relied upon as accurate gratings. The 737 nm vs 740 nm is an astonishing 0.5% error; it may well be that the 1486 nm measured was in fact 1500 nm, and also within 1% error. If you had seen me balancing on a chair while measuring the distance between spots on the ceiling with a tape measure, you would not have expected me to get that close... One final word: The screen of an iPhone 6 is not perfectly flat, and if you happen to be measuring reflection of your laser pointer close to the edge it is possible you will get a different answer. To first order, all the diffraction peaks will be deflected by the same amount - but if there is appreciable curvature an accurate measurement will show a small difference. It would take a careful setup (proper clamps etc) to detect this; and it would detract from the "cool kitchen counter experiment" atmosphere of this one.
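The small-angle estimate from the first part of the answer, as a few lines (using the iPhone numbers quoted above; $D$ includes the 1 cm remeasurement correction):
```python
import math

d = 25.4e-3 / 326    # iPhone 6 pixel pitch (m), 326 ppi
s = 8.85e-3          # measured spot spacing (m)
D = 1.28             # phone-to-ceiling distance (m)

lam_small = s * d / D                          # similar-triangles formula
lam_exact = d * math.sin(math.atan(s / D))     # without the small-angle step
print(f"{lam_small*1e9:.0f} nm vs {lam_exact*1e9:.0f} nm")   # ~539 nm either way
```
At these tiny angles the two expressions agree to well under a nanometre, so the similar-triangles shortcut is safe. | {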
"source": [
"https://physics.stackexchange.com/questions/191189",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
191,201 | Why is it that $v\cdot sin(x)$ gives the vertical component and $v \cdot cos(x)$ gives the horizontal component, where $v$ is the speed? What logic is there behind it, or even better is there a proof to back it up? I know by drawing a right angled triangle you can find out the components, but I want a deeper explanation than that. | | {
"source": [
"https://physics.stackexchange.com/questions/191201",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/84402/"
]
} |
191,425 | Current nuclear power plants are essentially an enhanced version of a kettle, which seems like a stupidity caused by a lack of other options. We heat the water which turns to steam which rotates the turbine, which is a total waste of energy due to the several conversions. I googled a bit and found that actually there exists the thermoelectric effect which allows for converting heat to electricity directly. Yes, I didn't know about it until today. ;) Is it possible to turn the heat from the nuclear reactor directly to electricity? Have there been any attempts to do it? I am not asking why we do not use it currently, my question is about whether it's possible in principle and whether anyone has tried it. | The efficiency of a thermoelectric generator is around 5 - 8%. The efficiency of a large steam turbine power plant approaches 40%. In fact the thermodynamic efficiency of a large steam turbine power plant is over 90%, so it's about as efficient as anything could be. The maximum possible efficiency of a steam driven engine is given by the idealised model called a Carnot engine. The efficiency is ultimately limited by the difference in temperature of the hot and cold ends of the engine, and modern power plants get pretty close to this theoretical maximum. Thermoelectric generators tend to be used only where other restrictions force their use. For example the Curiosity rover uses a thermoelectric generator with an efficiency of about 6%. The lower efficiency is balanced out by a lack of moving parts, and of course the non-availability of water on Mars from which to make steam.
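For scale, here is the Carnot bound mentioned above, with steam and ambient temperatures that are my own rough assumptions rather than figures from the answer:
```python
T_hot, T_cold = 823.0, 300.0    # assumed steam and ambient temperatures (K)

eta_carnot = 1 - T_cold / T_hot
print(f"Carnot limit: {eta_carnot:.0%}")    # ~64% for these temperatures
# A plant at ~40% overall then sits at roughly 0.40 / 0.64 of the ideal;
# with hotter steam the achievable fraction climbs further.
```
| {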
"source": [
"https://physics.stackexchange.com/questions/191425",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/84517/"
]
} |
191,449 | I am having problems in comprehending the proof by contradiction used by Purcell in his book; ...We can now assert that $W^1$ must be zero at all points in space. For if it is not, it must have a maximum or minimum somewhere - remember that $W$ is zero at infinity as well as on all the conducting boundaries. If $W$ has an extremum at some point $P$, consider a sphere centered on that point. As we saw in chapter 2, the average over a sphere of a function that satisfies Laplace's equation is equal to its value at the center. This could not be true if the center is an extremum; it must therefore be zero everywhere.
$^1 W = \phi(x,y,z) - \psi(x,y,z)$, where the former term is the deduced solution & the latter term is the assumed solution, introduced in order to prove the contradiction. Extremum means local maximum or local minimum, right? Why can't the average be equal to an extremum value? If it is not equal to the average value, how does it ensure that $W$ is equal at all places? | | {
"source": [
"https://physics.stackexchange.com/questions/191449",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
191,777 | Underground atomic bomb tests are done in a deep, sealed hole. Not all underground tests eject material on the surface. In this case, they are only noticeable as earthquakes, according to German Wikipedia on nuclear tests . There seems to be no bulge above the explosion site afterwards. I assume the explosion creates a cavity. Also, I assume that rocks are not very compressible, more so for rocks deep under ground, without many gas-filled pores. I'd like to understand where the volume of the rock goes. Is it one or more of these:
- Rock is compressible, and the surrounding rock is just squished a little after the explosion.
- The ground is elastic, and there is no cavity after the explosion.
- There is a bulge on the surface; it's just too flat to be noticeable, but has a large volume.
- The cavity is so small that the bulge on the surface is so flat that it is not noticeable.
- There are enough pores filled with compressible gas in rocks generally, which end up with higher gas pressure after they lose some or most of their volume, with the total lost volume being the same as the cavity's volume.
Regarding compressibility,
"Geologic materials reduces in volume only when the void spaces are
reduced, which expel the liquid or gas from the voids." ( Wikipedia: Compressibility - Earth science ) The answer of LDC3 hints that it can be assumed that the ground chosen for nuclear tests is most probably not porous, to avoid migration of radioactive isotopes.
From this, it could be concluded that compressibility is not an important factor, which is certainly counterintuitive. There are probably some more options, and it may be more than one mechanism.
But where does that volume mainly come from? | There is an interesting diagram in the wiki article on underground nuclear testing - the picture file is here . This shows that the crater you get from a nuclear explosion depends on the depth of burial: I think the most interesting diagrams are the ones labeled (e) and (f) - where the explosion happens at great depth. In that case, you get a "tight packing" of the soil above in a way that I think is similar to the mechanism that causes sugar to "settle" if you first fill a bowl to the rim, and then tap the bowl gently. The shock wave that travels through the soil (or the sugar) causes individual grains to find a more energetically favorable orientation - so they are a little more tightly packed. This can result in a crater. Now whether you consider this "compacting voids" is a matter of opinion. But it's a real effect. Of course, very close to the nuclear reaction the heat will be so great that the rock will liquefy; as a liquid it might be able to pack more tightly, although that depends on many factors. | {
"source": [
"https://physics.stackexchange.com/questions/191777",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/42975/"
]
} |
191,786 | At the 109th UCLA Faculty Research lecture, Seth Putterman gave a talk on Sonoluminescence. During the lecture he emphasized that "The Navier Stokes equations cannot be derived from first principles [of physics]". In physics there are lots of first principles, and so the first question is what set of first principles would one expect to derive the Navier Stokes equations from? And the second, and main, question is why does a derivation fail? Are we missing some yet to be discovered set of first principles in this area of physics? | None of the interesting equations in physics can be derived from simpler principles, because if they could they wouldn't give any new information. That is, those simpler principles would already fully describe the system. Any new equation, whether it's the Navier-Stokes equations, Einstein's equations, the Schrodinger equation, or whatever, must be consistent with the known simpler principles but it also has to incorporate something new. In this case you appear to have the impression that an attempt to derive the Navier-Stokes equations runs into some impassable hurdle and therefore fails, but this isn't the case. If you search for derivations of the Navier-Stokes equations you will find dozens of such articles, including (as usual) one on Wikipedia. But these are not derivations in the sense that mathematicians will derive theorems from some initial axioms, because they require some extra assumptions, for example that the stress tensor is a linear function of the strain rates. I assume this is what Putterman means. Later: Phil H takes me to task in a comment, and he's right to do so. My first paragraph considerably overstates the case, as the number of equations that introduce a fundamentally new principle is very small. My answer was aimed at explaining why Putterman says the Navier-Stokes equations can't be derived, but actually they can be, as can most equations. Physics is based on reductionism, and while I hesitate to venture into deep philosophical waters, physicists basically mean by this that everything can be explained from a small number of basic principles. This is the reason we (some of us) believe that a theory of everything exists. If such a theory does exist then the Navier-Stokes equations could in principle, though not in practice, be derived from it. Actually the Navier-Stokes equations could in principle be derived from a statistical mechanics treatment of fluids. They don't require any new principles (e.g. relativity or quantum mechanics) that aren't already included in the theoretical treatment of ideal fluids. In practice they are not derivable because those derivations are based on a continuum approach rather than a truly fundamental treatment.
"source": [
"https://physics.stackexchange.com/questions/191786",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45613/"
]
} |
191,871 | Modern atomic clocks only use caesium atoms as oscillators. Why don't we use other atoms for this role? | "Because that is how the second is defined" is nice - but that immediately leads us to the question "why did Cesium become the standard"? To answer that we have to look at the principle of an atomic clock: you look at the frequency of the hyperfine transition - a splitting of energy levels caused by the magnetic field of the nucleus. For this to work you need: an atom that can easily be vaporized at a low temperature (in solids, Pauli exclusion principle causes line broadening; in hot gases, Doppler broadening comes into play) an atom with a magnetic field (for the electron - field interaction): odd number of protons/neutrons an atom with just one stable isotope (so you don't have to purify it, and don't get multiple lines) a high frequency for the transition (more accurate measurement in shorter time) When you put all the possible candidate elements against this table, you find that Cs-133 is your top candidate. Which made it the preferred element; then the standard; and now, pretty much the only one used. I found much of this information at http://www.thenakedscientists.com/forum/index.php?topic=12732.0 | {
"source": [
"https://physics.stackexchange.com/questions/191871",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/84730/"
]
} |
192,125 | From my understanding, gamma matrices transform under a Lorentz transformation $\Lambda$ as
\begin{equation}
\gamma^{\mu} \rightarrow S[\Lambda]\gamma^{\mu}S[\Lambda]^{-1} = \Lambda^{\mu}_{\nu}\gamma^{\nu}
\end{equation}
Where $S[\Lambda]$ is the corresponding Lorentz transformation in the bispinor representation. So my question is: When we change from one frame to another, are we allowed to write $\gamma'^{\mu} = \Lambda^{\mu}_{\nu} \gamma^{\nu}$ where $\gamma'^{\mu}$ is the transformed version of $\gamma^{\mu}$? If yes, then when we write down $\gamma^{\mu}$ explicitly (in some representation) as we do in any standard QFT textbook, like
\begin{equation}
\gamma^{\mu} = \begin{pmatrix}
0 & \sigma^{\mu} \\
\bar{\sigma}^{\mu} & 0
\end{pmatrix}
\end{equation}
do we assume any specific frame of reference? If so, which frame? Because if I apply a Lorentz transformation such as a boost along the $x$-direction I will have
\begin{equation}
\gamma'^{0} = \cosh(\eta)\gamma^0 + \sinh(\eta)\gamma^1 = \begin{pmatrix}
0 & \cosh(\eta) + \sigma^1 \sinh(\eta) \\
\cosh(\eta) - \sigma^1\sinh(\eta) & 0
\end{pmatrix}
\end{equation}
and similarly for $\gamma^i$. I understand that at the end any choice of reference frame does not matter because that's the point of a relativistic theory like QFT. A term like $\bar{\psi}\gamma^{\mu}\partial_{\mu}\psi$ in the theory will remain invariant under a Lorentz transformation. But just like how we have the momentum shell condition $p^{\mu}p_{\mu} = -m^2$ in all frames, but $p^{\mu}$ itself will change from one frame to the other, and in the particle's rest frame we have $p^{\mu} = (m,0,0,0)$, it seems to me that by writing down $\gamma^{\mu}$ explicitly as above we are picking a specific frame. Could someone please clarify this to me? | I think the clearest way to think about this is to say that the gamma matrices don't transform. In other words, the fact that they carry a vector index doesn't mean that they form a four vector. This is analogous to how the Pauli matrices work in regular quantum mechanics, so let me talk a little bit about that. Suppose you have a spin $1/2$ particle in some state $|\psi\rangle$. You can calculate the mean value of $\sigma_x$ by doing $\langle \psi | \sigma_x | \psi\rangle$. Now let's say you rotate your particle by an angle $\theta$ around the $z$-axis. (Warning: There is about a 50% chance my signs are incorrect.) You now describe your particle with a different ket, given by $|\psi'\rangle = \exp(-i \sigma_z \theta /2)|\psi\rangle$. Remember that we are leaving the coordinates fixed and rotating the system, as is usually done in quantum mechanics. Now the expectation value is given by $$\langle \psi' | \sigma_x | \psi' \rangle = \langle \psi |\, e^{i\sigma_z \theta /2}\, \sigma_x\, e^{-i \sigma_z \theta / 2}\, | \psi\rangle$$ There is a neat theorem, not too hard to prove, that says that $$e^{i\sigma_z \theta /2}\, \sigma_x\, e^{-i \sigma_z \theta / 2} = \cos \theta\, \sigma_x -\sin \theta\, \sigma_y$$ So it turns out that the expectation value for the rotated system is also given by $\langle \psi |\, \cos \theta\, \sigma_x -\sin \theta\, \sigma_y \, |\psi\rangle = \cos \theta\, \langle \sigma_x \rangle - \sin \theta\, \langle \sigma_y \rangle$. It's as if we left our particle alone and rotated the Pauli matrices. But note that if we apply the rotation to $|\psi\rangle$, then we don't touch the matrices. Also, I never said that I transformed the matrices. I just transformed the state, and then found out that I could leave it alone and rotate the matrices. The situation for a Dirac spinor is similar. The analogous identity is that $S(\Lambda) \gamma^\mu S^{-1}(\Lambda) \Lambda^\nu_{\ \mu} = \gamma^\nu$. This is just something that is true; nobody said anything about transforming $\gamma^\mu$. There's no $\gamma^\mu \to \dots$ here. Now let's take the Dirac equation, $(i \gamma^\mu \partial_\mu - m)\psi = 0$, and apply a Lorentz transformation. This time I will change coordinates instead of boosting the system, but there's no real difference. Let's say we have new coordinates given by $x'^\mu = \Lambda^\mu_{\ \nu} x^\nu$, and we want to see if the Dirac equation looks the same in those coordinates. The field $\psi'$ as seen in the $x'^\mu$ frame is given by $\psi' = S(\Lambda) \psi \iff \psi = S^{-1}(\Lambda) \psi'$, and the derivatives are related by $\partial_\mu = \Lambda^\nu_{\ \mu} \partial'_\nu$. Plugging in we get $(i\gamma^\mu \Lambda^\nu_{\ \mu} \partial'_\nu-m) S^{-1}(\Lambda)\psi' = 0$, which doesn't really look like our original equation. But let's multiply on the left by $S(\Lambda)$. $m$ is a scalar so $S$ goes right through it and cancels with $S^{-1}$.
And in the first term we get $S(\Lambda)\gamma^\mu S^{-1}(\Lambda) \Lambda^\nu_{\ \mu}$, which according to our trusty identity is just $\gamma^\nu$. Our equation then simplifies to $$(i\gamma^\mu \partial'_\mu - m)\psi'=0$$ This is the same equation, but written in the primed frame. Notice how the gamma matrices are the same as before; when you're in class and the teacher writes them on the board, you don't need to ask in what coordinate system they are valid. Everyone uses the same gamma matrices. They're not really a four-vector, but their "transformation law" guarantees that anything written as if they were a four vector is Lorentz invariant as long as the appropriate spinors are present.
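The Pauli rotation identity used above is easy to verify numerically; a small numpy sketch:
```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

theta = 0.7   # any angle works
# exp(+i sz theta/2) in closed form, since sz @ sz = identity
U = np.cos(theta / 2) * I2 + 1j * np.sin(theta / 2) * sz

lhs = U @ sx @ U.conj().T                      # e^{i sz t/2} sx e^{-i sz t/2}
rhs = np.cos(theta) * sx - np.sin(theta) * sy
print(np.allclose(lhs, rhs))                   # True
```
| {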
"source": [
"https://physics.stackexchange.com/questions/192125",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/65773/"
]
} |
192,185 | So I was watching Final Destination 5 and something caught my attention. There's a part where a bridge collapses and everything falls apart, so there's this bus that has a person inside (unaware of what was happening) and it falls vertically (the front of the bus is now pointing down and the back points to the sky). As the bus is falling, the person is shown slipping through the seats and finally ending up on the front window at the front of the bus. My question is, would that actually happen if someone was falling inside a vehicle? Or should they be pushed to the back? Or should both fall equally? Here's the clip of the movie: http://youtu.be/m01ICYfdLsA?t=1m7s | If the bus was in a vacuum (both inside and outside), then the passenger would float. However, the effects of air resistance on the two objects (passenger and bus) are probably not negligible in such an instance. The bus will be moving relative to the outside air, and so will be accelerating towards the ground at a rate less than $g$. If we then released an object inside the bus, from rest with respect to the bus, it would initially accelerate towards the ground at $g$ (since there is no air resistance on it.) Thus, the object would accelerate towards the ground at a rate $g$, and would therefore move towards the front of the bus. Effectively, the bus's acceleration will be $\vec{A} = \vec{g} + \vec{A}_\text{air}$, where $\vec{A}_\text{air}$ is the bus's acceleration due to air resistance. (Note that this latter vector points upwards.) An object of mass $m$ in this non-inertial reference frame will then obey a version of Newton's Law that's something like
$$
m \vec{a} = m \vec{g} + \vec{F}_\text{air} - m \vec{A} = \vec{F}_\text{air} - m \vec{A}_\text{air}.
$$
where $\vec{a}$ is the object's acceleration relative to the bus and $\vec{F}_\text{air}$ is now the force of air resistance on the object. Thus, we see that initially the acceleration will be in the opposite direction to $\vec{A}_\text{air}$ (i.e., downwards). If the object could fall for long enough relative to the bus, then eventually it would reach its own terminal velocity relative to the air in the bus; but it would still be falling in the downward direction relative to the bus. Finally, note that in the limit where the bus is falling at terminal velocity, the effects of air resistance and gravity would be just like those on the ground, since the bus would be moving with constant velocity. Thus, objects inside the bus would move (relative to the bus) just as they would if the bus was sitting on the earth. Oh, and this whole derivation ignores things like rotation of the bus; a passenger in a bus tumbling end over end would feel a centrifugal force away from the bus's center of mass, even in the absence of air resistance. But that's another kettle of fish that I don't have time to open just now.
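A crude numerical sketch of the early-time behaviour (quadratic drag on the bus only, no drag on the passenger yet, with made-up parameters):
```python
import numpy as np

g = 9.81
k_bus = 0.006        # assumed drag/mass coefficient of the bus (1/m)

dt, T = 0.01, 4.0
v_bus = v_rel = z_rel = 0.0    # passenger starts at rest relative to the bus

for _ in np.arange(0, T, dt):
    a_bus = g - k_bus * v_bus**2    # bus is slowed by air resistance
    a_obj = g                        # passenger: no drag yet inside the cabin
    v_rel += (a_obj - a_bus) * dt    # relative acceleration = k_bus * v_bus^2
    z_rel += v_rel * dt
    v_bus += a_bus * dt

print(f"after {T:.0f} s the passenger has drifted {z_rel:.1f} m toward the front")
```
With the bus nose-down, "down relative to the bus" means toward the windshield, which is what the movie shows; once the passenger picks up speed relative to the cabin air, their own drag (ignored here) caps the drift at a relative terminal velocity, as described above. | {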
"source": [
"https://physics.stackexchange.com/questions/192185",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/79485/"
]
} |
192,886 | When one starts learning about physics, vectors are presented as mathematical quantities in space which have a direction and a magnitude. This geometric point of view has encoded in it the idea that under a change of basis the components of the vector must change contravariantly such that the magnitude and direction remain constant. This restricts what physical ideas may be the components of a vector (something much better explained in Feynman's Lectures), so that three arbitrary functions do not form an honest vector $\vec{A}=A_x\hat{x}+A_y\hat{y}+A_z\hat{z}$ in some basis. So, in relativity a vector is defined "geometrically" as directional derivative operators on functions on the manifold $M$ and this implies that, if $A^{\mu}$ are the components of a vector in the coordinate system $x^\mu$, then the components of the vector in the coordinate system $x^{\mu'}$ are
$$A^{\mu'}=\frac{\partial x^{\mu'}}{\partial x^\mu}A^\mu$$
(this all comes from the fact that the operators $\frac{\partial}{\partial x^\mu}=\partial_\mu$ form a basis for the directional derivative operators, see Sean Carrol's Spacetime and Geometry) My problem is the fact that too many people use the coordinates $x^\mu$ as an example of a vector, when, on an arbitrary transformation,
$$x^{\mu'}\neq\frac{\partial x^{\mu'}}{\partial x^\mu}x^\mu$$
I understand that this equation is true if the transformation between the two coordinates is linear (as is the case of a Lorentz transformation between Cartesian coordinate systems) but I think it can't be true in general. Am I correct in that the position does not form a four-vector? If not, can you tell me why my reasoning is flawed? | You are correct. Position is a vector when you are working in a vector space, since, well, it is a vector space. Even then, if you use a nonlinear coordinate system, the coordinates of a point expressed in that coordinate system will not behave as a vector, since a nonlinear coordinate system is basically a nonlinear map from the vector space to $\mathbb{R}^n$, and nonlinear maps do not preserve the linear structure. On a manifold, there is no sense in attempting to "vectorize" points. A point is a point, an element of the manifold; a vector is a vector, an element of a tangent space at a point.
Of course you can map points into $n$-tuples; that is part of the definition of a topological manifold. But there is no reason why the inverse of this map should carry the linear structure over to the manifold. And now, for a purely personal opinion: While Carroll's book is really good, the physicist's way of attempting to categorize everything by "transformation properties" is extremely counterproductive, and leads to such misunderstandings as you have overcome here. If one learns proper manifold theory, this is clear from the start...
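To make this concrete, here is a small numerical check (a sketch added for illustration, using the familiar map to polar coordinates): the Jacobian applied to the Cartesian pair $(x,y)$ does not reproduce the actual new coordinates $(r,\theta)$.

```python
# Sketch: under the nonlinear map (x, y) -> (r, theta), the coordinate
# n-tuple does not transform with the Jacobian, i.e. in general
# x^{mu'} != (dx^{mu'}/dx^mu) x^mu.
import numpy as np

x, y = 3.0, 4.0
r, theta = np.hypot(x, y), np.arctan2(y, x)   # actual new coordinates

J = np.array([[x / r,     y / r],             # dr/dx,     dr/dy
              [-y / r**2, x / r**2]])         # dtheta/dx, dtheta/dy

print("actual (r, theta):        ", (r, theta))            # (5.0, 0.927...)
print("Jacobian applied to (x,y):", J @ np.array([x, y]))  # [5.0, 0.0]
```

The radial component happens to agree only because $r$ is homogeneous of degree one in $(x,y)$; the angular component does not, which is exactly the asker's point.
| {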
"source": [
"https://physics.stackexchange.com/questions/192886",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/69057/"
]
} |
192,891 | I've often heard that Einstein shattered the notion of absolute motion (i.e. all things move relative to one another) and that he established the speed of light as being absolute. That sounds paradoxical to me; I cannot understand how the two concepts can be reconciled. Before going further, I'd like to say: 1) Over the years, I've seen many layman's explanations on these topics (including the nice YouTube video by Vsauce, Would Headlights Work at Light Speed?). I understand everything that's said (or, at least, I think I do). Just nothing I've found seems to address this apparent contradiction. 2) More recently, I've tried to find the answer on my own. That includes searching the posts on this site. Some come close (like this one), but nothing I've been able to find seems to address specifically what I'm asking. Back to the question: Relativity shows us that there is no universal frame of reference by which to judge motion, so object A might be reckoned as moving at 10 m/s relative to object B or as stationary relative to object C. This is fine for me. I can grasp that the universe has no intrinsic coordinate system, that we only think that way on Earth because we have the ground to move over. Then there's the speed of light (in a vacuum). The speed of light is the ultimate "speed limit," it's often said. But if there is no universal frame of reference, how can there be any such speed? The very idea only makes sense if there is a universal frame. If one object is moving (uniformly) at 60% c and another object is also moving at 60% c, but in the exact opposite direction, then from the perspective of either one (if they could still see each other) the other would appear to violate that speed limit. All these spacetime bending consequences used to explain why nothing can move past this speed only seem to enshrine the concept that there is some ultimate speed standard. If there is only relative speed, then the concept of light having a specific speed in the vacuum should be a nonsensical one, since it having speed (x m/s) would only make sense when measured against some other body. Since I was very young, it has always sounded to me like motion is only mostly relative, that until you get close to the speed of light, the effects of an absolute frame of reference are negligible. Perhaps there is an actual fabric of space which everything moves relative to, which is why there is something to expand between galaxies (faster than light can propagate) in the metric expansion of space. Growing up, I always thought this would just start to make sense with time. Now I'm up to a first year (college) level in physics, I even know basic calculus, yet I'm still hopelessly confused. EDIT: Thank you to whoever suggested this may be a duplicate of What is the speed of light relative to? It and others are very much related and at least partially answer my question. Unfortunately, explaining that distances become shorter and time becomes slower as a way to stop you from exceeding the speed of light does not explain how that speed is not an absolute. By my reckoning, if all speed is relative, then no matter how fast you go light should always race away from you at the same apparent speed. I.e. there should be no speed limit. For there to be a speed which you cannot exceed or you would catch up (and make time irrelevant) requires the very concept of some external speed by which light can travel and nothing else can reach - thus my logical paradox continues unabated.
| It sounds like your confusion is coming from taking paraphrasing such as "everything is relative" too literally. Furthermore, this isn't really accurate. So let me try presenting this a different way: Nature doesn't care how we label points in space-time. Coordinates do not automatically have some real "physical" meaning. Let's instead focus on what doesn't depend on coordinate systems: these are geometric facts or invariants. For instance, our space-time is 4 dimensional. There are also things we can calculate, like the invariant length of a path in space-time, or angles between vectors. It turns out our spacetime has a Lorentzian signature: roughly meaning that one of the dimensions acts differently than the others when calculating the geometric distance. So there is not complete freedom to make "everything" relative. Some relations are a property of the geometry itself, and are independent of coordinate systems. I can't find the quote now, but I remember seeing once a quote where Einstein wished in reflection that instead of relativity it was the "theory of invariants" because those are what matter. Now, it turns out that the Lorentzian signature imposes a structure on spacetime. In nice Cartesian inertial coordinates with natural units, the geometric length of a straight path between two points is: $ds^2 = - dt^2 + dx^2 + dy^2 + dz^2$ Unlike space with a Euclidean signature, this separates pairs of points into three different groups: $> 0$, space like separated $< 0$, time like separated $= 0$, "null" separation, or "light like" No matter what coordinate system you choose, you cannot change these. They are not "relative". They are fixed by the geometry of spacetime. This separation (light cones if viewed as a comparison against a single reference point), is the causal structure of space time. It's what allows us to talk about event A causing B causing C, independently of a coordinate system. Now, back to your original question, let me note that speed itself is a coordinate system dependent concept. If you had a bunch of identical rulers and clocks, you could even make a giant grid of rulers and put clocks at every intersection, to try to build up a "physical" version of a coordinate system with spatial differences being directly read off of rulers, and time differences being read from clocks. Even in this idealized situation we cannot yet measure the speed of light. Why? Because we still need to specify one more piece: how remote clocks are synchronized. It turns out the Einstein convention is to synchronize them using the speed of light as a constant. So in this sense, it is a choice ... a choice of coordinate system. There are many coordinate systems in which the speed of light is not constant, or even depends on the direction. So, is that it? It's a definition? That is not a very satisfying answer, and not a complete one. What makes relativity work is the amazing fact that this choice is even possible. The modern statement of special relativity is usually something like: the laws of physics have Poincare symmetry (Lorentz symmetry + translations + rotations). It is because of the symmetry of spacetime that we can make an infinite number of inertial coordinate systems that all agree on the speed of light. It is the structure of spacetime, its symmetry, that makes special relativity. Einstein discovered this the other way around, postulating that such a set of inertial frames were possible, and derived Lorentz transformations from them to deduce the symmetry of space-time. 
So in conclusion: "If all motion is relative, how does light have a finite speed?" Not everything is relative in SR, and speed being a coordinate system dependent quantity can have any value you want with appropriate choice of coordinate system. If we design our coordinate system to describe space isotropically and homogeneously and describe time uniformly to get our nice inertial reference frames, the causal structure of spacetime requires the speed of light to be isotropic and finite and the same constant in all of the inertial coordinate systems.
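As a small numerical illustration of the invariants discussed above (a sketch with arbitrary numbers, in units with $c=1$): a Lorentz boost changes $dt$ and $dx$ separately, but leaves $-dt^2 + dx^2$ alone.

```python
# Sketch with arbitrary values: boost a separation (dt, dx) by velocity v
# (units with c = 1) and check that -dt^2 + dx^2 is unchanged.
import math

def boost(dt, dx, v):
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (dt - v * dx), gamma * (dx - v * dt)

dt, dx = 2.0, 5.0                   # a spacelike separation in some frame
for v in (0.0, 0.5, 0.9, -0.99):
    bt, bx = boost(dt, dx, v)
    print(f"v = {v:+.2f}: dt' = {bt:+7.3f}, dx' = {bx:+7.3f}, "
          f"interval = {-bt**2 + bx**2:+.6f}")   # +21 every time
```
| {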
"source": [
"https://physics.stackexchange.com/questions/192891",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/37035/"
]
} |
192,923 | In general relativity, when solving for the Schwarzschild solution, we set $T=0$. 1) Is it possible for the stress energy tensor to have nonzero value in a vacuum region? 2) Is the stress $T=0$ in a vacuum region in $f(R)$ modified gravity? | | {
"source": [
"https://physics.stackexchange.com/questions/192923",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/28952/"
]
} |
193,420 | (inspired by this question.) In every semiconductor that I can think of, the valence band maximum and conduction band minimum are at a high-symmetry point in the Brillouin Zone (BZ). Often the BZ center, or a corner, etc. In silicon, the CBM is not at any of those points, but it is on the straight-line path between the Gamma and X point, so it still has higher symmetry than an arbitrary point in the BZ. Why does that typically happen? Does it always happen, or are there any exceptions where a band extremum (any band, not just valence or conduction) occurs at a point in the BZ with the lowest-possible symmetry? (So that there are as many copies of the extremum as there are elements of the point group ... or something like that.) | $\newcommand{\ket}[1]{|#1\rangle}$
The basic logical connection here is
$$\text{symmetry} \rightarrow \text{degeneracy} \rightarrow \text{avoided crossing} \rightarrow \text{band gap} \, .$$ $\textrm{symmetry}\rightarrow \textrm{degeneracy}$ Consider an operator $S$ and let $T(t) = \exp[-i H t / \hbar]$ be the time evolution operator.
If
$$ [ T(t), S] = 0 $$
then $S$ is a symmetry transformation.
We can see why this commutation condition is a sensible definition of symmetry by considering an initial state $\ket{\Psi}$ and the transformed state $\ket{\Psi'} \equiv S \ket{\Psi}$.
If $[T, S] = 0$, then
\begin{align}
T S \ket{\Psi}
&= S T \ket{\Psi} \\
T \ket{\Psi'} &= S \ket{\Psi(t)} \\
\ket{\Psi'(t)} &= S \ket{\Psi(t)} \, .
\end{align}
This says that if we transform an initial state and then propagate it through time (left hand side), we get the same thing as if we propagate through time and then transform (right hand side).
Imagine a 1D Hamiltonian with left/right symmetry.
That symmetry means that e.g. a right moving wave packet is the mirror of a left moving wave packet.
In other words, if we move right for time $t$ and then mirror, we get the same thing as the left moving packet after time $t$. A simple way to find an operator $S$ which commutes with $T$ is to find one which commutes with $H$.
If $S$ commutes with $H$ then we have degeneracy because for an energy eigenstate $\ket{\Psi}$ with eigenvalue $E$ we have
$$H (S \ket{\Psi}) = S H \ket{\Psi} = E (S \ket{\Psi})$$
which says that $S\ket{\Psi}$ is also an eigenstate of $H$ with energy $E$.
Note that this also shows that the number of degenerate states is equal to the number of times you can multiply $S$ by itself before getting the identity. $\textrm{degeneracy} \rightarrow \textrm{avoided crossing}$ Suppose you have a Hamiltonian $H$ which depends on a parameter $\lambda$, and suppose for a particular value $\lambda_0$, $H$ has a symmetry and therefore a degeneracy.
This is illustrated by the dotted lines in the diagram which show the energies of the states $\ket{\Psi}$ and $S\ket{\Psi}$ as functions of $\lambda$; they cross at $\lambda_0$. If there is another term $V$ in the Hamiltonian which is not symmetric under $S$, then the degeneracy disappears and the energies for $\ket{\Psi}$ and $\ket{\Psi'}$ do not cross.
This famous "avoided level crossing" is indicated by the solid lines in the figure.$^{[a]}$
Calculation of the gap in the avoided level crossing is a standard problem in Hamiltonian mechanics and can be done using perturbation theory considering the two levels involved in the crossing. $\textrm{avoided crossing} \rightarrow \text{band gap}$ The Hamiltonian for an electron in a crystal has three parts: kinetic energy, potential energy, and electron-electron coupling.
Let's forget about electron-electron interactions entirely, and assume that the potential energy from the crystal is weak compared to the electron kinetic energies.
In this case, we can treat the kinetic energy as the strong part $H$ and the potential energy as the weak part $V$ of the Hamiltonian.
It turns out that if you compute the kinetic energies of the electron in a periodic lattice as a function of crystal momentum $\vec{k}$ there is degeneracy wherever $\vec{k}$ hits a Bragg plane.
Thinking now of $\vec{k}$ playing the role of $\lambda$, we have an energy crossing when the crystal momentum hits a Bragg plane. When we add in the potential energy of the lattice, it plays the role of $V$ and splits the degeneracy, producing what we call a band gap. $[a]$: Avoided level crossings are not a quantum effect.
A classical Hamiltonian with a strong part $H$ exhibiting a symmetry and a weaker part $V$ breaking that symmetry also exhibits avoided level crossing. Reference: I strongly recommend reading chapters 8 and 9 of Ashcroft and Mermin's Solid State Physics. The arguments presented here are explained in great mathematical detail.
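As a small numerical supplement (my own minimal example, not from the reference above): diagonalizing a two-level Hamiltonian with diagonal entries $\pm\lambda$ and a symmetry-breaking off-diagonal coupling $\Delta$ shows the gap open up.

```python
# Two-level avoided crossing: eigenvalues are +/- sqrt(lambda^2 + delta^2),
# so the minimum gap over lambda is 2*delta.
import numpy as np

for delta in (0.0, 0.2):
    gaps = []
    for lam in np.linspace(-1.0, 1.0, 201):
        H = np.array([[lam, delta], [delta, -lam]])
        e_lo, e_hi = np.linalg.eigvalsh(H)      # sorted eigenvalues
        gaps.append(e_hi - e_lo)
    print(f"delta = {delta}: minimum gap = {min(gaps):.3f}")  # 0.000 vs 0.400
```
| {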
"source": [
"https://physics.stackexchange.com/questions/193420",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3811/"
]
} |
193,522 | Why is ice more reflective (has higher albedo) than liquid water? They're both the same substance (water). Is something quantum mechanical involved? | In fact ice is slightly less reflective than water. The reflectivity is related to the refractive index (in a rather complicated way) and the refractive index of ice is 1.31 while the refractive index of water is 1.33. The slightly lower refractive index of ice will cause a slightly lower reflectivity. In both cases the reflectivity is about 0.05 i.e. at an air/water or air/ice surface about 5% of the light is reflected. Water generally has a relatively smooth surface so the light falling on the water only gets a chance to reflect back once. Any light that doesn't reflect off the surface propagates down into the water where it is eventually absorbed and converted to heat. The end result is that a large body water reflects only about 5% of the light. Ice is generally covered with some snow, and snow is made up of small ice crystals with air gaps between them. Light falling onto snow may be reflected at the first surface, but any light that isn't reflected will meet lots more ice/air interfaces as it travels through the snow, and at every surface more light will be reflected. The net result is that much more of the light is reflected from snow. So the difference isn't anything fundamental, it's just because water is continuous while snow isn't. It is possible to form an air water dispersion, for example foam or fog. Both foams and fogs reflect light far more efficiently than a large body of water. | {
"source": [
"https://physics.stackexchange.com/questions/193522",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/59393/"
]
} |
193,609 | I read in various places that frequency does not change with medium. Instead, wavelength changes in different media due to a change in speed. I understand why speed changes with medium, but I'm not sure why wavelength, not frequency, changes. One website said it was because of conservation of energy, but I read that the energy of a sound wave depends on its amplitude, not frequency. Is that correct? If so, why does frequency not depend on the medium? | Because the frequency of a sound wave is defined as "the number of waves per second." If you had a sound source emitting, say, 200 waves per second, and your ear (inside a different medium) received only 150 waves per second, the remaining 50 waves per second would have to pile up somewhere — presumably, at the interface between the two media. After, say, a minute of playing the sound, there would already be 60 × 50 = 3,000 delayed waves piled up at the interface, waiting for their turn to enter the new medium. If you stopped the sound at that point, it would still take 20 more seconds for all those piled-up waves to get into the new medium, at 150 waves per second. Thus, your ear, inside the different medium, would continue to hear the sound for 20 more seconds after it had already stopped. We don't observe sound piling up at the boundaries of different media like that. (It would be kind of convenient if it did, since we could use such an effect for easy sound recording, without having to bother with microphones and record discs / digital storage. But alas, it just doesn't happen.) Thus, it appears that, in the real world, the frequency of sound doesn't change between media. Besides, imagine that you switched the media around: now the sound source would be emitting 150 waves per second, inside the "low-frequency" medium, and your ear would receive 200 waves per second inside the "high-frequency" medium. Where would the extra 50 waves per second come from? The future? Or would they just magically appear from nowhere? All that said, there are physical processes that can change the frequency of sound, or at least introduce some new frequencies. For example, there are materials that can interact with a sound wave and change its shape, distorting it so that an originally pure single-frequency sound wave acquires overtones at higher frequencies. These are not, however, the same kinds of continuous shifts as you'd observe with wavelength, when moving from one medium to another with a different speed of sound. Rather, the overtones introduced this way are generally multiples (or simple fractions) of the original frequency: you can easily obtain overtones at two or three or four times the original frequency, but not at, say, 1.018 times the original frequency. This is because they're not really changing the rate at which the waves cycle, but rather the shape of each individual wave (which can be viewed as converting some of each original wave into new waves with two/three/etc. times the original frequency). | {
"source": [
"https://physics.stackexchange.com/questions/193609",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/85645/"
]
} |
193,621 | Could someone show me a simple and intuitive derivation of the Centripetal Acceleration Formula $a=v^2/r$, preferably one that does not involve calculus or advanced trigonometry? | Imagine a object steadily traversing a circle of radius $r$ centered on the origin. It's position can be represented by a vector of constant length that changes angle. The total distance covered in one cycle is $2\pi r$. This is also the accumulated amount by which position has changed.. Now consider the velocity vector of this object: it can also be represented by a vector of constant length that steadily changes direction. This vector has length $v$, so the accumulated change in velocity is $2 \pi v$. The magnitude of acceleration is then $\frac{\text{change in velocity}}{\text{elapsed time}}$, which we can write as:
$$a = \frac{2 \pi v}{\left(\frac{2\pi r}{v} \right)} = \frac{v^2}{r} \,.$$ Q.E.D. Aside: that derivation is used in a lot of algebra/trig based textbooks.
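For anyone who wants to check this numerically, here is a quick finite-difference sanity check (an added sketch; the values of $r$ and $v$ are arbitrary):

```python
# Numerical check: take second differences of a point moving uniformly on a
# circle and compare the result with v^2 / r.
import math

r, v = 2.0, 3.0                  # radius (m) and speed (m/s), arbitrary
omega, dt = v / r, 1e-5

def pos(t):
    return (r * math.cos(omega * t), r * math.sin(omega * t))

(x0, y0), (x1, y1), (x2, y2) = pos(0.0), pos(dt), pos(2 * dt)
ax = (x2 - 2 * x1 + x0) / dt**2          # finite-difference acceleration
ay = (y2 - 2 * y1 + y0) / dt**2
print(math.hypot(ax, ay), v**2 / r)      # both ~4.5 m/s^2
```
| {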
"source": [
"https://physics.stackexchange.com/questions/193621",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/85633/"
]
} |
193,663 | I am extremely interested in self-learning Einstein's theory of relativity, but I don't know where to start. Can I make general relativity my starting point, and later look at special relativity as a special case of GR? Is it doable for a person with average math skills? | Can I make GR my starting point, and look at SR later as a special case of GR? This would be like making differential geometry your starting point and then learning linear algebra as a special case --- or learning calculus as your starting point and then learning about straight lines as a special case. In other words, it's insane. | {
"source": [
"https://physics.stackexchange.com/questions/193663",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/43897/"
]
} |
193,684 | So my friend asked me why angles are dimensionless, to which I replied that it's because they can be expressed as the ratio of two quantities -- lengths. Ok so far, so good. Then came the question: "In that sense even length is a ratio. Of length of given thing by length of 1 metre. So are lengths dimensionless?". This confused me a bit, I didn't really have a good answer to give to that. His argument certainly seems to be valid, although I'm pretty sure I'm missing something crucial here. | Your friend's question is perceptive but not at odds with your earlier answer. When you compare the length of something with a unit (1 meter), the ratio is indeed a unitless number. But then all numbers (1.5, $\pi$, 42) are unitless. When you want to determine speed you divide displacement by time - each of which has units. But what you enter into your calculator are just the numbers - you handle the units separately. "The runner covered 100 meters in 10 seconds. What was his average speed?" is solved by calculating the numerical ratio 100/10 and adding the dimensional ratio m/s to preserve the units. Most calculators don't have (or need) a means to enter units (some sophisticated computer programs do - to help you avoid mistakes by mixing units). For some physical calculations you need to take the logarithm - when you do, you ALWAYS have to divide the quantity by some scale factor with the same units, as it is not possible to take the $\log$ of a unit. | {
"source": [
"https://physics.stackexchange.com/questions/193684",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/85677/"
]
} |
194,334 | I've been told that physicists and computer scientists are working on computers that could use quantum physics to significantly increase computational capabilities and break any cipher, so cryptography becomes meaningless. Is it true? | No, it is not. Quantum computers can factor large numbers efficiently, which would make it possible to break many of the commonly used public key cryptosystems such as RSA, which are based on the hardness of factoring. However, there are other cryptosystems such as lattice-based cryptography which are not based on the hardness of factoring, and which (to our current knowledge) would not be vulnerable to attack by a quantum computer. | {
"source": [
"https://physics.stackexchange.com/questions/194334",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/85964/"
]
} |
194,936 | The Law of Thermodynamics says that two bodies will eventually have equal temperatures. How is it possible that when you leave your car in the sun, it gets hotter in the car than it is outside? Why isn’t the car at the same temperature as the outside, as it should be according to the Law? | The Law of Thermodynamics says that two bodies eventually will have equal temperatures. That is not an absolute Law. There are conditions, and one of those conditions involves the energy input to the bodies. If this Law were absolute, then the Sun would be at the same temperature as the universe, about 2.7 K, because the universe is much larger than the Sun. But the Sun has an internal energy converter/source which raises its local temperature. The temperature in the interior of a closed car in the sunlight will be higher because of a greenhouse effect. The glass of the car is transparent to the visible light, so that energy is absorbed by the interior of the car (the seats, dashboard, and floor) increasing their temperature. Those items then emit infrared radiation, and the glass is fairly opaque to that radiation, so the energy stays in the car. So more energy comes in the glass than is escaping out of the glass. Because the trunk/boot doesn't have a glass opening to let radiation in, it will generally stay quite a bit cooler than the passenger compartment. Whatever radiation the trunk lid gets is reflected and radiated back out fairly efficiently. That's not to say it doesn't get hot, but it doesn't get as hot as the passenger compartment. | {
"source": [
"https://physics.stackexchange.com/questions/194936",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10408/"
]
} |
195,696 | When an exothermic reaction occurs, the energy in the chemical bonds of the reactants is partially transferred to the chemical bonds of the products. The remaining energy is released as heat. For example: $$\mathrm{N_2 + 3H_2 \to 2NH_3} \qquad \Delta G^\circ = -32.96 \,\rm kJ/mol$$ Therefore, when $1\,\rm mol$ of nitrogen reacts with $3\,\rm mol$ of hydrogen (under standard conditions), we get $32.96\,\rm kJ$ of heat. Now, applying $E=mc^2$, this works out to be $$m = \frac{E}{c^2} = \frac{3.296 \times 10^{4}\,\rm J}{(3 \times 10^{8}\,\rm m/s)^2} \approx 3.7 \times 10^{-13} \,\rm kg \quad \text{or} \quad 0.37\,\rm ng$$ Does this relationship hold? Do the products of an exothermic reaction really weigh ever so slightly less than the reactants? In a more general sense, does removing energy from a system decrease its mass (or vice versa)? | As far as the theory goes, you are absolutely correct, the (negative) binding energy between atoms in a molecule contributes to the total mass of that molecule, so a stable molecule is less massive than the sum of the masses of its constituent atoms. However (as you yourself calculated), the mass difference is absolutely tiny, and as far as I know, it has never been measured. But the principle is no different from the mass deficit that occurs in nuclear reactions and that, in turn, is readily measurable. Consider the atomic mass of deuterium ($2.01410178\,\rm u$) vs. helium ($4.002602\,\rm u$), which is about $0.64\%$ less than the mass of two deuterium atoms. The difference is the energy that would be released in a fusion reaction. So yes, in general, removing energy from a system decreases its mass, and conversely, adding energy to the system increases its mass. The most extreme example perhaps would be protons and neutrons: roughly $99\%$ of their masses come from the (positive) binding energy between their constituent quarks, and only about $1\%$ is attributed to the quark rest masses.
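For completeness, the two numbers above can be checked in a few lines (a quick sketch; the constants are rounded):

```python
# Mass equivalent of the reaction energy, and the deuterium -> helium
# mass deficit quoted above.
E = 32.96e3                        # J released per mole, from the question
c = 2.998e8                        # m/s
print(f"mass equivalent: {E / c**2:.2e} kg")     # ~3.7e-13 kg, i.e. ~0.37 ng

m_D, m_He = 2.01410178, 4.002602   # atomic masses in u
deficit = 2 * m_D - m_He
print(f"2 D vs He deficit: {100 * deficit / (2 * m_D):.2f} %")   # ~0.64 %
```
| {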
"source": [
"https://physics.stackexchange.com/questions/195696",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1650/"
]
} |
195,915 | Based on popular accounts of modern physics and black holes (articles, video lectures), I have come to understand the following: Black holes are predicted by General Relativity, a classical theory of gravity. We know that the universe is inherently quantum mechanical, so we believe General Relativity to be somehow incomplete or inaccurate. We do not have a quantum mechanical theory of gravity. We know that (on some level) General Relativity and Quantum Mechanics are incompatible. There is no direct experimental evidence of event horizons. If all of these things are true (and if they aren't, please correct me), why do we trust black hole physics? How can we talk about something like Hawking Radiation if it uses both General Relativity and Quantum Mechanics and we know that we don't know exactly how to unify them? When I read about or hear physicists talk about black hole related phenomena they speak with some considerable degree of certainty that these things actually exist and that they behave in the way the known physical laws describe them, so I'd like to understand why in the absence of direct evidence or a unified Quantum Mechanics/General Relativity framework we can be so confident in black hole physics. EDIT: I just want to point out in response to some of the answers that I am aware of the evidence of very massive objects which are very compact and are believed to be black holes. I do not doubt that there exist very massive objects which have a great effect on the propagation of light and distort space and so on. When I talk about "black hole physics" I specifically mean physics which is derived by combining quantum mechanics and GR such as Hawking Radiation, things relating to the Information Paradox, etc. That's also why I specifically mentioned event horizons. | At first many people didn't care much for black holes. But later people showed that they were pretty unavoidable features of the theory of general relativity and that theory made other quite precise predictions that were tested and found good. So when you are told that black holes are required if you have GR and GR looks like the best game in town then it becomes less bothersome. But there is more. Having a detailed classical theory of black holes gives limits on the sizes of neutron stars, and we see neutron stars. So you can look for neutron stars, look for evidence of their mass and now if you see one that is too large you can disprove GR. So people look. And GR wins again. And eventually we start to see objects that behave like we expect a black hole to behave. So it makes sense to refer to things as black holes. Because they are enough like them that theories about black holes can work. You still draw a line between what has been observed and what hasn't. And Hawking radiation is on the wrong side. But if someone talks about Hawking radiation with certainty they are probably trying to explicate a known theory's predictions rather than an experimentally confirmed fact. But it is always important to distinguish between new results and known results, so the apparent certainty is probably an attempt to say "I am not saying something new" and it just comes off badly. | {
"source": [
"https://physics.stackexchange.com/questions/195915",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/86722/"
]
} |
195,928 | Why is there a sharp cut-off of the charged region outside the depletion region, like in this image? For example, why don't electrons in the conduction band on the n-type side rush towards the positively charged area, making the whole piece somewhat positively charged, not just the area near the depletion region? The source of the confusion is that I know that if you charge up a regular conductor the internal currents will uniformly distribute the charge along the whole piece, while insulators are only locally charged up, since they cannot carry current.
Semiconductors here seem to act like insulators, but diodes do carry current when used. How? | | {
"source": [
"https://physics.stackexchange.com/questions/195928",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7743/"
]
} |
195,941 | I'll begin with a brief and familiar example to frame the question. EXAMPLE: When water waves pass through a double slit experiment everyone knows that an interference pattern is created. The interference pattern is simply a combination of crests and troughs, but the "dark bands" here represent flat water (no up/down motion). This means that water is still reaching the observed wall in these dark band regions. The interference pattern is thus defined with crests, troughs, and flats. QUESTION: When light passes through a double slit experiment, an interference pattern is created (with no recording instruments). Following the example above, the dark bands created should instead be horizontal 'flat light' (light which no longer exhibits wave properties, only the particle of light itself should be here). Thus, light should still be reaching the observed wall in these dark band regions if analogous. Why then is there no light reaching these "dark band" regions instead of a flat horizontal line of light or other expected outcome based on
standard wave/particle motion? I have many other questions and of course, more to read. But I think this is the most important start. The question has been slightly addressed here, but I welcome more complicated answers: Are double-slit patterns really due to wave-like interference? | | {
"source": [
"https://physics.stackexchange.com/questions/195941",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/86678/"
]
} |
196,100 | We have images of stars orbiting black holes or black holes destroying nearby stars, but why do we see the stars moving normally? I mean, if time dilation does exist, shouldn't we see that stars slow down and speed up? Why do we see stars orbiting at a normal rate? | Those objects are orbiting closely to SgrA${}^{*}$, certainly, but they are not orbiting closely enough to exhibit significant time dilation effects. In particular, consider the Schwarzschild spacetime. The innermost stable circular orbit around the central object is at $r = 6M$, three Schwarzschild radii away. This makes the time dilation factor: $$\sqrt{1-\frac{2M}{r}}= \sqrt{1-1/3} = \sqrt{2/3} = 0.82$$ So, even the innermost stable orbit is only running 18% slower than a distant clock. You can cheat at this a bit by giving the central black hole spin, which will draw in the innermost orbit, but generically, you don't see huge time dilation effects for orbiting bodies. Wikipedia gives the orbit of the closest of those stars, S2, as being 17 light-hours. We can now compare this distance to the Schwarzschild radius of the black hole to guess how much time dilation we should see. $$\begin{align}
r_{s}
&= \frac{2GM}{c^{2}} \\
&= \frac{2\times\bigl(6.67\times 10^{-11}\; {\rm N \cdot m^{2}/kg^{2}}\bigr)\times\bigl({10^6}\times(2\times 10^{30}\;{\rm kg})\bigr)}{(3\times 10^8\;{\rm m/s})^{2}} \approx 3.0 \times 10^{9}\; {\rm m} \\
r_\text{S2} &= 17 \;\text{light-hours} \times (3\times 10^{8} \;{\rm m/s})(60 {\rm \;s/min})(60 \;{\rm min/h}) = 1.8\times10^{13}\; {\rm m}
\end{align}$$ So, S2 is roughly six thousand Schwarzschild radii away from SgrA${}^{*}$, and no significant time dilation is expected. Now, you might ask "why is this evidence that there is a black hole there, then?" The reason why is that this is still a HUGE amount of mass in an area roughly the size of the solar system. General relativity predicts that there is no possible stable configuration of matter of this density that is NOT a black hole.
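The arithmetic above fits in a few lines (a sketch using the same rounded inputs, so expect order-of-magnitude agreement only):

```python
# Schwarzschild radius of the answer's ~1e6 solar mass black hole, S2's
# distance in those units, and the resulting time dilation factor.
G, c = 6.67e-11, 3.0e8             # SI, rounded
M = 1e6 * 2e30                     # kg, the mass used above
r_s = 2 * G * M / c**2
r_S2 = 17 * 3600 * c               # 17 light-hours in metres
print(f"r_s = {r_s:.2e} m, r_S2 / r_s = {r_S2 / r_s:.0f}")   # ~3e9 m, ~6000

factor = (1 - r_s / r_S2) ** 0.5   # Schwarzschild time dilation at r_S2
print(f"time dilation factor at S2: {factor:.5f}")           # ~0.99992
```
| {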
"source": [
"https://physics.stackexchange.com/questions/196100",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/86804/"
]
} |
196,101 | I've heard someone state that the double slit experiment can also be done with atoms, not just electrons or photons of light. | | {
"source": [
"https://physics.stackexchange.com/questions/196101",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/78292/"
]
} |
196,127 | Imagine there are two rooms kept at the same temperature but with different humidity levels. A person is asked to stay in each room for 5 minutes. At the end of the experiment, if we ask them which room was hotter, they will point to the room with the higher humidity. Correct, right? How does humidity cause this feeling of hotness? | When the ambient humidity is high, the effectiveness of evaporation over the skin is reduced, so the body's ability to get rid of excess heat decreases. Human beings regulate their body temperature quite effectively by evaporation, even when we are not sweating, thanks to our naked skin. (This, supposedly, is also what made it possible for early hominids to become hunters by virtue of being effective long-distance runners.) Humans are so good at this, we can survive in environments that are significantly hotter than our body temperature (e.g., desert climates with temperatures in the mid-40s degrees Celsius) so long as the humidity remains low and we are adequately hydrated. (Incidentally, this is also why we are more likely to survive being locked in a hot car on a summer day than our furry pets.) In contrast, when the humidity is very high, even temperatures that are still several degrees below normal body temperature can already be deadly. | {
"source": [
"https://physics.stackexchange.com/questions/196127",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41462/"
]
} |
196,136 | Note: For the purposes of my question, when I refer to free fall assume it takes place in a vacuum. From my (admittedly weak) understanding of the equivalence principle, falling in a gravitational field is physically indistinguishable from floating in interstellar space. This would make sense to me if gravity simply caused an object to move at a constant velocity. Moving at a constant speed, or floating in space, are just two different ways of describing an inertial frame, and are fundamentally no different. But free falling in a gravitational field means accelerating continuously, and doesn't an accelerating body experience a force? Then isn't free falling fundamentally different from floating in space? | It is incorrect to link the feeling of being accelerated to being accelerated itself. You can be under constant velocity or be continuously accelerated, yet you need not feel anything at all. Let me explain. The reason you feel compressed or stretched when you are accelerated in a lift is because of the presence of the normal force from the ground on you. The normal force pushes up on your feet while gravity pulls down at your center of mass. That's why your legs feel compressed in an accelerating lift. Your leg is under stress, and that's the feeling of being accelerated. A body in free fall doesn't feel any force, even though gravity acts on it, because there is no opposing force to induce any stress in it. In the absence of such a normal opposing force during free-fall, you do not feel anything. | {
"source": [
"https://physics.stackexchange.com/questions/196136",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/29924/"
]
} |
196,545 | Let's say we have a cloud of dust which is a lightyear across and someone shoots a beam of light from point A to B, why is it not possible for an observer far, far away to see the light while it travels through the cloud at the speed of light? | Sometimes we do, and the phenomenon is called a light echo. In images of one, what you are looking at is NOT moving gas; it's an "echo" exactly as you describe. The problem is that you need a pulse of light. If you have a constant stream of light, the "light echoes" will be exactly like what you see in fog on earth. | {
"source": [
"https://physics.stackexchange.com/questions/196545",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/86985/"
]
} |
197,102 | A superconductor has zero resistance. What about an electron in a vacuum? Could this simple system be considered superconducting? | You are right, electrons in vacuum can carry a current without resistivity.
However, superconductivity is not only zero resistance, but something more. Superconductivity is defined by zero resistivity and by the presence of the Meissner effect, i.e., the expulsion of magnetic fields from the system. The zero resistivity of electrons in vacuum is usually called ballistic conductivity, which means absence of scattering and, consequently, absence of resistivity. Ballistic conductivity is realized not only in vacuum, but also in other systems, for example carbon nanotubes. In these systems the electrons can travel without undergoing scattering, since an ideal nanotube has no impurities and is hollow at the center. You may think that this is only a matter of definitions, since both ballistic systems (electrons in vacuum, nanotubes) and superconductors exhibit zero resistance. However, the Meissner effect (expulsion of the magnetic field) is what differentiates superconductivity from ballistic conductance. Also, the physical origin of zero resistance is completely different in the two systems. In ballistic conductors (electrons in vacuum, nanotubes), zero resistance arises from the fact that electron scattering is negligible in the absence of impurities. Electrons simply move undisturbed. In superconductors, scattering is still present (superconductors can have impurities and disorder), but the weak attraction between electrons (mediated by the ions in the crystal) overcomes that, and therefore electrons move coherently without any dissipation. Another difference between ballistic conductivity and superconductivity is the presence of a phase transition at a critical temperature $T_c$. Superconductors exhibit superconductivity only below a certain temperature $T_c$, and the transition between the normal state (usually metallic) and superconductivity is very, very sharp. In vacuum, the presence of ballistic conductivity does not depend on temperature. In nanotubes, perfect ballistic conductivity is obtained at low temperatures, but there is no sharp transition between zero and non-zero resistivity. Actually the dependence on temperature is very smooth, and in fact carbon nanotubes are nearly ballistic even at room temperature. The increasing resistivity at higher temperatures is due to the fact that the system becomes more and more "disordered" as temperature increases. Addendum: I should stress that the Meissner effect is not a synonym of perfect diamagnetism. Electrons in vacuum (like a plasma) can exhibit perfect diamagnetism, which is a direct consequence of zero resistance. Zero resistance in fact implies that current loops (eddy currents) are generated as a response to a variation of the external magnetic field, and will exactly cancel this variation. In a perfectly diamagnetic system (such as the electron-in-vacuum system) the magnetic field can still be non-zero (for example, the electron moving in vacuum generates a finite magnetic field). Inside a superconductor, the magnetic field is always zero, at least below a certain critical field, where the superconductor has a phase transition to a normal system (metal) or to a more complicated mixed state (type II superconductors). The difference between the Meissner effect and diamagnetism is kind of subtle to understand but is physically well defined. I think a good introduction to this is here. 2nd Addendum: Another difference between ballistic conductivity and superconductivity is the presence of a superconducting order parameter $\Delta e^{\imath \varphi}$.
This is not only a mathematical object to describe the pairing between electrons, but has direct physical consequences. One example is the Josephson effect. Clearly no Josephson junction can be realized between two ballistic conductors (although it is possible to realize Josephson junctions between a superconductor and a ballistic conductor). 3rd Addendum: The weak attraction between electrons in conventional, low-temperature superconductors is described in the Bardeen-Cooper-Schrieffer (BCS) theory as an effective interaction between electrons mediated by the lattice distortions (phonons). In BCS, the weak attraction is described in the mean field approximation as an order parameter $\Delta e^{\imath \varphi}$. This mean field describes the phase transition between the superconducting state and the normal state (this is similar to the case of ferromagnets, which are described by a different order parameter, namely, the magnetization).
No interaction plays a role in a system made of a single electron in vacuum. This is also the case in a system made of many electrons in vacuum (as long as the density is low) or in a ballistic system (nanotubes). | {
"source": [
"https://physics.stackexchange.com/questions/197102",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/74534/"
]
} |
197,110 | If two stars of any type were to form near each other, how closely can they form before something prevents them from being two distinct stars? | There is a database of visual binaries - that's a good place to start. It includes the following plot of period and eccentricity: The bottom left corner represents a system with a period of $10^{-1.6}\approx 0.025$ years or just over 9 days. Now from Kepler's Laws, we have that the square of the period scales as the cube of the average distance. If the two stars have the same mass as our Sun, we can estimate the distance using the correction for reduced mass given on hyperphysics: $$T^2 = \frac{a^3}{m_1+m_2}$$ where $a$ is expressed in a.u., $m$ in solar masses, and $T$ is calculated in years. Using this, we end up with an estimated distance of $a = 0.11$ a.u. That is "very close" - much closer than the orbit of Mercury. For reference, the radius of the Sun is 0.0046 a.u. Tidal forces would be enormous. There may be better ways to estimate all of this - I will try to ping a few people who know much more about these things.
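The same estimate in code (a quick sketch using the solar-units form of Kepler's third law quoted above):

```python
# Kepler's third law in solar units: T^2 = a^3 / (m1 + m2), with T in
# years, a in au, and masses in solar masses.
T = 10 ** -1.6          # shortest period in the catalogue plot, in years
m1 = m2 = 1.0           # assume two Sun-like stars
a = (T**2 * (m1 + m2)) ** (1.0 / 3.0)
print(f"T = {T:.4f} yr = {T * 365.25:.1f} days, a = {a:.2f} au")
print(f"that is about {a / 0.0046:.0f} solar radii apart")   # ~23 R_sun
```
| {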
"source": [
"https://physics.stackexchange.com/questions/197110",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/74534/"
]
} |
197,470 | For instance, conservation of momentum, does it take time to propagate between two or more objects? If it does, then there would be some moment that the momentum is not conserved. If it doesn't take any time at all, since the law itself is information, then doesn't it prove that information can travel faster than light? | Conservation laws don't "propagate". They are inevitable consequences of symmetries of the dynamics by Noether's theorem , and the dynamics propagate with whatever finite speed they do. | {
"source": [
"https://physics.stackexchange.com/questions/197470",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/87328/"
]
} |
197,487 | Like many others, I have marveled at the images made available from the Hubble Space Telescope over the years. But, I have always had a curiosity about the color shown in these images. An example is shown below. Are the colors we see, such as the yellows, blues, and so on, the true colors, or are they applied by some kind of colorization method to enhance the image quality for realism? | Sort of. As Space.com writes, The raw Hubble images, as beamed down from the telescope itself, are black and white. But each image is captured using three different filters: red, green and blue. The Hubble imaging team combines those three images into one, in a Technicolor process pioneered in the 1930s. (The same process occurs in digital SLRs, except that in your camera, it's automatic.) Why are the original images in black and white? Because if Hubble's eye saw in color, the light detector would have to have red, green and blue elements crammed into the same area, taking away crucial resolving capability. Without those different elements, Hubble can capture images with much more detail. As an interesting aside, the Wide Field Camera 3 sees in wavelengths other than visible light, as do the Cosmic Origins Spectrograph and the Space Telescope Imaging Spectrograph. NASA goes into a little detail about the process here, as well as some of the rationale behind choosing some colors. Some of the reasons for using artificial colors include showcasing elements whose emission lines are out of the visible spectrum, and showing features that are too dim at visible wavelengths. Remember, CCD detectors usually don't see the same things that humans do, and Hubble can see outside the visible spectrum.
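The compositing step itself is simple; here is a rough sketch (an added illustration; the file names below are placeholders, and real Hubble pipelines also involve calibration, alignment and stretching):

```python
# Stack three single-filter grayscale frames into one RGB image.
# The input file names below are placeholders, not real Hubble products.
import numpy as np
from PIL import Image

def load(name):
    return np.asarray(Image.open(name).convert("L"), dtype=np.float32) / 255.0

r = load("red_filter.png")
g = load("green_filter.png")
b = load("blue_filter.png")

rgb = np.clip(np.dstack([r, g, b]), 0.0, 1.0)          # H x W x 3
Image.fromarray((rgb * 255).astype(np.uint8)).save("composite.png")
```
| {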
"source": [
"https://physics.stackexchange.com/questions/197487",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/59161/"
]
} |
199,602 | A friend of mine has the idea that drinking cold water and eating cold food will assist them in losing weight. The core temperature of a human body is 37$^{\circ}$ C. If they drink water, at a temperature as cold as they can stand it, say normal tap water with lots of ice in it, will they lose weight, over a long period of time, by the heat energy used by the body in increasing the temperature of the water they have ingested? So every time they ingest a kilogram of water, I assume that their body will attempt to raise it to something in the region of their core temperature and will need to expend energy increasing the water temperature. 1 kcal is needed to raise 1 litre of water 1$^{\circ}$ Celsius. But over time, if they drank say a maximum of 3 litres a day (and all evidence based diet programmes do stress the long term nature of the effort involved), it may be worth it.
Also, if they kept to a regime of as cold as possible food, they may increase this energy loss to say 200kcal a day, which in my opinion, would be significant, long term. | Sort of, yes. Ice water is, in fact, a negative-calorie foodstuff and could be used to lose some weight. Fats contain about 37 kJ/gram of energy, drinking one glass of ice water will burn about 37 kJ or up to three times more if you eat some crushed ice as part of drinking the water: so that's 1 gram of fat burned per drink, up to 2-3 if you eat ice. The normal advice of "drink 8 glasses of water per day" in principle leads to a direct weight loss of 3kg/year. There are other negative-calorie foodstuffs, like celery. You generally have to look at a whole metabolic effect to see it, so you have to consider the cost of the whole digestion process and its effect on the body. For example, totally black coffee might raise your baseline metabolic rates enough that it loses more energy than it contains; it is very hard to know without an experiment. But that misunderstands the problem. The thermodynamics of weight loss is really easy and 100% correct. However it is not adequate for understanding the problem. If you have a complex system and you don't know all of the inputs, tweaking a dial labeled "more energy out!" will not necessarily discharge a battery that you see elsewhere in the system, and could potentially even charge it further, if you don't know what you're doing. If you've been in physics for long enough you've seen feedback loops , at least in the cool feedback-based circuits you can make with op-amps, like analog integrators and analog derivative-takers. Biophysics has to deal with the exact same loops; they are a core part of how any living organism maintains homeostasis and collects energy. Ipso facto, your body contains several of these feedback loops operating within it, and any weight loss plan needs to take these feedback loops into account. When you digest food, most nutrients get sent to the liver. Dextrose/glucose can more or less be forwarded on as-is; anything else needs to be turned into sugars so that you can use them. (In particular, there is a myth that fats go straight to your gut that is just not true.) There are a lot of processes that happen, but the most important one is related to a substance called glycogen , which is basically a "hairball" where the "hairs" are glucose sugar molecules. Every cell in your body can use the glucose to "compress" a phosphate group onto ADP to produce ATP, and these phosphate "springs" are then used as your direct mechanism of energy transport: complex proteins will often accept some of these "compressed springs" and, when they have the components they need, will then unleash them back into ADP to get the energy to actually perform whatever job the protein does. Your liver basically maintains a large store of glycogen, and you can basically think of this like one big "cup" of fluid. When that cup is "overflowing", the liver stops filling it and starts to produce fat and stores it in fat cells in the adipose tissues . When it is under-filled, the liver sends signals to your brain to make you feel "drained", as you feel after a hard workout or after a day of fasting. Your body starts to remove fat from the fat cells and "burn" it again into ATP and glycogen etc. to have energy available. So, there are three caches of energy: ATP, sugars like glycogen, and fat. 
When you run out of active ATP "fuel" your body seamlessly makes more from the glycogen available; in addition to inter-borrowing, the glycogen cache makes you feel "drained" as it depletes. It borrows from the triglycerides in the fat cells, which have their "backbone" ripped off and the three resulting "fatty acids" do a similar job to the glycogen: but the fat cells, as they deplete, make you feel "hungry." [It's a little more complicated than that, but basically all of your fat cells all the time are transmitting a message saying "I'm satisfied" (the hormone called "leptin"), which contradicts the messages from your GI tract saying "I'm hungry" (other hormones in the "ghrelin" family), and your brain gradually acclimates to whatever the "balance" is between the two hormones, as "normal". From there, if you are losing weight, you become a little bit more hungry overall but a lot more susceptible to existing hunger cravings from your GI tract.] So by itself, this loss in energy due to water will release signals triggering you to exercise less and eat more , and because of that , the small magnitude of "one gram per glass of ice water" is likely to get lost in the noise of "ten more/less grams of food per meal." The same is true of a half-hour workout every few days: 300 kcal of exercise will burn 34 grams of fat in the short term, but it will also move you off-equilibrium to the point where you're probably eating 100 grams more food per day, which will balance it out. For many people this is "snacking" but it can also easily be larger portions per meal. This is why diet and obesity intervention needs to be a "lifestyle change". It's not that thermodynamics is wrong; rather it's 100% right for its limited part of the picture: but thermodynamics does not model complex systems like hormonal feedback to the brain very well. In particular, with respect to this diet-intervention: drinking 8 glasses of water per day tends to "flush" your stomach contents into your intestines, which can increase ghrelin and make you more hungry. | {
"source": [
"https://physics.stackexchange.com/questions/199602",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
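A quick numerical check of the figures in the answer above, assuming a 250 mL glass warmed from 0 degC to body temperature and a fat energy density of 37 kJ/g:

```python
# Energy the body spends warming one glass of ice-cold water (rough estimate)
mass_g = 250.0    # one glass of water, in grams (assumed size)
c_water = 4.184   # J/(g*K), specific heat of liquid water
dT = 37.0         # K, from 0 degC up to body temperature
latent = 334.0    # J/g, melting ice (if you chew the ice instead)

heating_kJ = mass_g * c_water * dT / 1000               # ~38.7 kJ per glass
with_ice_kJ = mass_g * (latent + c_water * dT) / 1000   # ~122 kJ if it starts as ice

fat_kJ_per_g = 37.0
print(heating_kJ / fat_kJ_per_g)                    # ~1 g of fat per glass
print(8 * 365 * heating_kJ / fat_kJ_per_g / 1000)   # ~3 kg/year at 8 glasses/day
```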
199,632 | In the case of a supernova explosion it is possible to create heavy elements through fusion. Supernovae have a tremendous amount of energy in a very small volume but not as much energy per volume as there was in our early universe. So, what is the major difference? Why didn't the Big Bang create heavy elements? | Heavy elements couldn't form right after the Big Bang because there aren't any stable nuclei with 5 or 8 nucleons. Source: Wikipedia (user Pamputt) In Big Bang nucleosynthesis, the main product was $^4He$, because it is the most stable light isotope: 20 minutes after the Big Bang, helium-4 represented about 25% of the mass of the Universe, and the rest was mostly $^1H$. There was only 1 nucleus of deuterium and helium-3 for each $10^5$ protons, and 1 nucleus of $^7Li$ for each $10^9$ protons. Given these abundances, the most probable reactions to yield heavier elements would be $^1H + {}^4He$ and $^4He + {}^4He$, but neither produces stable nuclei. So instead we have only $^2H + {}^7Li \to {}^9Be$ and $^4He + {}^7Li \to {}^{11}B$. These reactions are extremely unlikely, since lithium was so scarce. It is predicted that only one of these nuclei formed for every $10^{16}$ protons. The scarcity of these precursor elements and the cooling of the universe prevented the formation of even heavier elements. On the other hand, in the first stars carbon formed in the triple alpha process, which is only possible with the density and helium abundance found in stars, and takes a lot of time. Subsequent nuclear fusions create heavier elements up to iron, and the energy released in the supernova explosion allows the synthesis of even heavier elements. References Alain Coc, Jean-Philippe Uzan, Elisabeth Vangioni: Standard big bang nucleosynthesis and primordial CNO Abundances after Planck, JCAP10(2014)050, arXiv:1403.6694 | {
"source": [
"https://physics.stackexchange.com/questions/199632",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/74534/"
]
} |
199,730 | I'm watching a movie, The Hurt Locker, and the first scene shows an IED explosion which kills a soldier. Of course movies don't depict explosions with maximum realism, but I noticed the debris and smoke / flame didn't reach him, and it made me curious about whether invisible aspects of an explosion - heat or concussive blast can be lethal (without carrying shrapnel). How strong are the unseen forces from an explosion such as a road side bomb? Strong enough to be lethal? | Blast can definitely kill you, although it is only lethal at much shorter ranges compared to shrapnel. A building can be destroyed by 5psi overpressure while a Human can withstand up to 45psi and live. Some data here: A 5 psi blast overpressure will rupture eardrums in about 1% of
subjects, and a 45 psi overpressure will cause eardrum rupture in about 99% of all subjects. The threshold for lung damage occurs at about 15 psi blast overpressure. A 35-45 psi overpressure may cause 1% fatalities, and 55 to 65 psi overpressure may cause 99% fatalities.
(Glasstone and Dolan, 1977; TM 5-1300, 1990) BTW, damage in Humans mainly occurs at the interface of areas of different density eg lungs and eardrums. It is essentially a spallation effect like Newton's Cradle in tissue. At much higher pressures the shock wave tends to tear tissue. Here is a FEMA report on TNT equivalent blast overpressures and distances . However, there is the question of "impulse". For example, high explosives (HE) typically create very high over pressures for very short duration. That's why a Human can easily survive 5psi overpressure. This is equivalent to around a tonne of pressure on the body. Obviously, if that duration was in seconds instead of milliseconds the person would die. HE creates a shattering effect called brisance, which is more damaging to hard and rigid materials than soft ones. Thermobaric explosions, OTOH, create lower overpressures but for much longer duration. | {
"source": [
"https://physics.stackexchange.com/questions/199730",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/75876/"
]
} |
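A back-of-envelope check of the "around a tonne of pressure on the body" claim above; the 0.5 m^2 frontal area is an assumed round number:

```python
psi_to_pa = 6894.76
overpressure_pa = 5 * psi_to_pa   # ~34.5 kPa
body_area_m2 = 0.5                # assumed frontal area of a human torso

force_N = overpressure_pa * body_area_m2
print(force_N)          # ~1.7e4 N
print(force_N / 9.81)   # ~1.8 tonnes-force -- same order as the quoted "tonne"
```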
199,899 | Now that the ESA has landed a probe on a comet, namely Rosetta's Philae Probe, could we possibly land a probe on Halley's comet? Fuel seems to be a limiting factor for interstellar expeditions - could using Halley's comet as a free taxi, shutting down all equipment once landed to save fuel and energy, aid in such expeditions? | There seems to be a fundamental misunderstanding as to how movement in space works. In space there is no air friction; that is, once you are moving toward your destination, you don't need a continuous source of power to keep going. Landing on a comet doesn't buy you anything, since in order to land you must first match the comet's orbit, at which point the comet could disappear and you could still shut down your engines and follow the exact same path the comet would have taken you on. | {
"source": [
"https://physics.stackexchange.com/questions/199899",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/87674/"
]
} |
199,903 | To elaborate, it is mathematically proven that triangles are the strongest shape. I don't know what strong materials there are in the world but I have heard of carbon nanotubes. In the case of nanotubes, as well as other carbon based compounds that I have seen, they seem to form hexagonal patterns as seen in this picture: So when I ask if chemical bonds abide by widely accepted geometric principles , I am asking if I would be right to assume some of the strongest materials would be those that form triangle patterns? Sorry if this question seems dumb and simplistic, depending on the complexity of the answer I may take a larger interest in the subject. ALso I didn't know what tag to use so I listed as soft question but I still desire the nitty gritty if you don't mind (>~<) | There seems to be a fundamental misunderstanding as to how movement in space works. In space there is no air friction, that is, once you are moving toward your destination, you don't need a continuous source of power to keep going. Landing on a comet doesn't buy you anything, since in order to land you must first match the comet's orbit, at which point the comet could disappear and you could still shut down your engines and follow the exact same path it would have taken you. | {
"source": [
"https://physics.stackexchange.com/questions/199903",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/95830/"
]
} |
200,078 | How do lenses produce 2-dimensional images, if a lens bends all incoming rays of light to intersect at the focal point? Shouldn't this produce a single dot of light on a screen placed at the focal length? This is basically the standard diagram that always shows up in textbooks: I know this doesn't happen in real life--I used to use telescopes pretty frequently for work. The most in-focus image would be the one with the smallest diameters for the stars, which, of course, we think of point sources at infinity. But--the light from all stars (despite being at infinity) is not all focused at a single point. Instead, each star's light is focused at its own $(x,y)$ point on a 2D image. I can't reconcile the theory as I understand it with any of my real-life experience with optics. What am I missing? | ...if a lens bends all incoming rays of light to intersect at the focal point? Shouldn't this produce a single dot of light...? (In your diagram, the source image is at infinity. I will continue the analysis along that idea.) It is true that all rays parallel to the axis focus to that single dot. Not all rays, however, are parallel to the axis: Rays coming from different angles focus to different points. That is how an image is formed. | {
"source": [
"https://physics.stackexchange.com/questions/200078",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/89119/"
]
} |
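The geometry in the answer above can be verified with paraxial ray-transfer matrices; this generic sketch (focal length and ray parameters are assumed values) shows that all rays sharing a tilt angle land at the same focal-plane point, x = f*theta, regardless of entry height:

```python
import numpy as np

f = 0.1                                      # focal length in metres (assumed)
lens = np.array([[1, 0], [-1/f, 1]])         # thin-lens ray-transfer matrix
to_focal_plane = np.array([[1, f], [0, 1]])  # propagate distance f after lens

for theta in (0.0, 0.01, 0.02):              # parallel bundles, different tilts
    for height in (-0.01, 0.0, 0.01):        # rays entering at different heights
        ray_in = np.array([height, theta])   # paraxial ray = (height, angle)
        ray_out = to_focal_plane @ lens @ ray_in
        print(f"tilt {theta:5.2f} rad, entry height {height:+.2f} m "
              f"-> focal-plane height {ray_out[0]:+.4f} m")
# All rays of a given tilt land at the same point, x = f*theta,
# independent of where they entered the lens: one image point per direction.
```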
200,083 | My homework is that : a container contains two half parts X and Y separated by a plate P. Part X contains ideal gas, while part Y is vacuum. Then the plate P is removed, so gas from X can spread out the whole container. But the most confusing part is : it is said that X does no external work, although the gas expands. Can anyone explains it to me? I think that because vacuum has little gas molecule so gas from X cannot interact or collide with any other particles => it cannot pass on energy. Is it right? | ...if a lens bends all incoming rays of light to intersect at the focal point? Shouldn't this produce a single dot of light...? (In your diagram, the source image is at infinity. I will continue the analysis along that idea.) It is true that all rays parallel to the axis focus to that single dot. Not all rays, however, are parallel to the axis: Rays coming from different angles focus to different points. That is how an image is formed. | {
"source": [
"https://physics.stackexchange.com/questions/200083",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/89121/"
]
} |
200,198 | Please forgive my lack of artistic ability, but here's my question:
Consider that a skydiver, without using his parachute, were to fall exactly parallel to a giant curved slide that starts at $90\,^\circ$ perpendicular to the ground and gradually curves until it is parallel to the ground. Can he survive? My thinking tells me that if I stood at the top of the slide and slid down, making sure to keep contact with the slide, I would (if the top of the slide was high enough) eventually get to almost terminal velocity, yet when the slide starts to curve I would begin to feel an increase in G-force and friction, but no impact and thus would survive. So then, if I were to jump directly above the slide given that I had enough time to adjust myself to be perfectly aligned with the slide as it started to shallow, (or even better, if I was able to have my body or part of my body scraping the slide) the impact when the slide moves from 90 degrees to 89 degrees would be soft enough for me to survive - and so forth until I'm actually sliding and no longer falling with the slide. | The answer is Yes and your thinking is correct. You are trying to distinguish between an impact and sliding on a curve. In fact the impact is just a sudden large force, while a curved (i.e. circular) motion similarly applies a force, just a much smaller one over a longer period of time. The key in surviving any fall is to reduce the force on your body at "impact". A pillow does that. A curved slide does that. And they both do it by extending the impact duration. Remember first Newton's 2nd law: $$\sum\vec F=\frac{d\vec p}{dt}\approx \frac{\Delta\vec p}{\Delta t}$$ Smaller momentum change $\Delta \vec p$ (that would be smaller speed or a lighter skydiver) or larger duration $\Delta t$ will reduce the total force. A soft material like a mattress will extend $\Delta t$. And a curved slide does as well: as you explain yourself, it causes the momentum change over a much longer period of time. | {
"source": [
"https://physics.stackexchange.com/questions/200198",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/89167/"
]
} |
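A rough numerical illustration of the Delta-p/Delta-t argument above, with assumed round numbers for the jumper's mass, speed, and tolerable load:

```python
m = 80.0    # jumper mass, kg (assumed)
v = 55.0    # roughly terminal speed, m/s (assumed)
g = 9.81

# Same momentum change, very different stopping times:
for dt in (0.05, 2.0):    # hard impact vs. a long, gentle curve
    F = m * v / dt        # average force from F ~ dp/dt
    print(f"stop in {dt:4.2f} s: average force {F/1000:6.1f} kN "
          f"(~{F/(m*g):5.1f} g)")

# A curved slide can also be sized for a chosen load: r = v^2 / a_max
a_max = 5 * g             # tolerate ~5 g (assumed)
print("needed radius of curvature:", v**2 / a_max, "m")   # ~62 m
```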
200,209 | My question is where can I find a good book, review, online course, or all of them for self-teaching Green's function in quantum many-body problems (if it has problems with solutions for self-evaluating the concepts the better). As I start to dig in the field, I find very nice books on one-particle green's function (e.g. Economou's book ). However, the many-body non-equilibrium Green's function is presented always in a cloud of mysticism understandable only for people in field. I am acquaintance with one particle Green's function at Sakurai's (scattering) level, but no more than that. | The answer is Yes and your thinking is correct. You try to differ between impact and sliding on a curve . In fact the impact is just a sudden large force, while a curved (e.i. circular) motion similarly applies a force, just much smaller but also over a longer period of time. The key in surviving any fall is to reduce the force on your body at "impact". A pillow does that. A curved slide does that. And they both do it by extending the impact duration . Remember first Newton's 2nd law: $$\sum\vec F=\frac{d\vec p}{dt}\approx \frac{\Delta\vec p}{\Delta t}$$ Smaller momentum change $\Delta \vec p$ (that would be smaller speed or lighter skydiver) or larger duration $\Delta t$ will reduce the total force. A soft material like a mattress will extend $\Delta t$ . And a curved slide will as well, as you explain it yourself, cause the momentum change over a much longer period of time. | {
"source": [
"https://physics.stackexchange.com/questions/200209",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/42691/"
]
} |
200,635 | My question is simple: as the title says, do I exert a gravitational force on distant objects, for example, Pluto? Although it is a very small force, it is there, right? This leads me to the question, am I exerting a gravitational force on everything in the Universe, for example the farthest galaxy that we know of? | While the atoms that make up your body are exerting gravitational force on very distant objects, you as an entity are only exerting gravitational force on objects out to about 14 light years distance (assuming the age shown in your profile is correct), because the "speed of gravity" is the speed of light. And toward the outer edge of that sphere the forces are controlled by your birth mass rather than your current mass. | {
"source": [
"https://physics.stackexchange.com/questions/200635",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/82064/"
]
} |
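For scale, the "very small force" in the question above can be computed directly from Newton's law of gravitation; the person's mass and the Earth-Pluto distance are assumed round values:

```python
G = 6.674e-11     # m^3 kg^-1 s^-2
m_person = 70.0   # kg (assumed)
m_pluto = 1.31e22 # kg
r = 5.9e12        # m, roughly the mean Sun-Pluto distance (assumed)

F = G * m_person * m_pluto / r**2
print(F)          # ~1.8e-12 N -- real, but utterly negligible
```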
200,868 | I read somewhere that most things 1 emit all kinds of radiation, just very little of some kinds. So that made me wonder whether there is a formula to calculate how many X-rays a 100 W incandescent light bulb would emit, for example in photons per second. For example, we already know that it emits infrared and visible light. I find it hard to describe what I have tried. I searched on the internet for a formula, but couldn't find it. Yet I thought this was an interesting question, so I posted it here. 1 Black holes don't emit any radiation except for Hawking radiation, if I get it right. | The formula you want is called Planck's Law. Copying Wikipedia: The spectral radiance of a body, $B_{\nu}$, describes the amount of energy it
gives off as radiation of different frequencies. It is measured in terms of the power emitted per unit area of the body, per unit solid
angle that the radiation is measured over, per unit frequency. $$ B_\nu(\nu, T) = \frac{ 2 h \nu^3}{c^2} \frac{1}{e^\frac{h\nu}{k_\mathrm{B}T} - 1} $$ Now to work out the total power emitted per unit area per solid angle by our lightbulb in the X-ray part of the EM spectrum we can integrate this to infinity: $$P_{\mathrm{X-ray}} = \int_{\nu_{min}}^{\infty} \mathrm{B}_{\nu}d\nu,
$$ where $\nu_{min}$ is where we (somewhat arbitrarily) choose the lowest frequency photon that we would call an X-ray photon. Let's say that a photon with a 10 nm wavelength is our limit. Let's also say that a 100 W bulb has a surface temperature of 3,700 K, the melting temperature of tungsten. This is a very generous upper bound - it seems like a typical number might be 2,500 K. We can simplify this to: $$
P_{\mathrm{X-ray}} = 2\frac{k^4T^4}{h^3c^2} \sum_{n=1}^{\infty} \int_{x_{min}}^{\infty}x^3e^{-nx}dx,
$$ where $x = \frac{h\nu}{kT}$. wythagoras points out we can express this in terms of the incomplete gamma function, to get $$
2\frac{k^4T^4}{h^3c^2}\sum_{n=1}^{\infty}\frac{1}{n^4} \Gamma(4, n\, x_{min})
$$ Plugging in some numbers reveals that the n = 1 term dominates the other terms, so we can drop higher n terms, resulting in $$
P \approx 10^{-154} \ \mathrm{Wm^{-2}}.
$$ This is tiny . Over the course of the lifetime of the universe you can expect on average no X-Ray photons to be emitted by the filament. More exact treatments might get you more exact numbers (we've ignored the surface area of the filament and the solid angle factor for instance), but the order of magnitude is very telling - there are no X-ray photons emitted by a standard light bulb. | {
"source": [
"https://physics.stackexchange.com/questions/200868",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/81048/"
]
} |
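The 10^-154 estimate above is easy to check numerically. Since e^(-x_min) underflows ordinary floats, this sketch tracks base-10 logarithms of the dominant n = 1 term, using the large-x asymptotic Gamma(4, x) ~ x^3 e^(-x):

```python
import math

h, c, k = 6.626e-34, 2.998e8, 1.381e-23
T = 3700.0     # filament temperature, K
lam = 10e-9    # X-ray cutoff wavelength, m

x_min = h * c / (lam * k * T)   # ~389: deep in the exponential tail
prefactor = 2 * k**4 * T**4 / (h**3 * c**2)

# log10 of P ~ prefactor * x_min^3 * exp(-x_min)  (leading large-x behaviour)
log10_P = (math.log10(prefactor) + 3 * math.log10(x_min)
           - x_min / math.log(10))
print(x_min)     # ~389
print(log10_P)   # ~ -155: the same ballpark as the 10^-154 quoted above
```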
200,932 | Why are golden mirrors yellow? Do they add a yellow component to the spectrum or absorb non-yellow components? If they absorb, then why are they used in telescopes being imperfect? If they add a yellow component, then where do they take energy for it from? JWST mirrors are coated with gold Do they add some corrections in the on-board computer to compensate for the color of gold? | If you look at the reflectivity of gold (vs silver or aluminum) you can see a plateau at wavelengths below 500 nm source : If blue wavelengths are not reflected as well as other colors, the resulting image will look "more yellow" - which is what you see. At longer wavelengths, gold is a very good reflector (better than the other two above 600 nm). It also doesn't tarnish, so its reflectivity is less affected by atmospheric contamination. If you need anything approaching accurate measurement, you have to calibrate your system at any rate - beside the mirrors and lenses, you need to consider the response of the detector, effects of the atmosphere, and pretty much everything in (or near) your optical path. Serious photometry needs serious calibration, as Chris White pointed out in the comment. | {
"source": [
"https://physics.stackexchange.com/questions/200932",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7612/"
]
} |
201,504 | We all know when fan starts moving faster, we cannot see its blades. Why is this? First I assumed persistence of vision may be the reason. But that can happen with blade also right? Image of blade can remain in our memory and moving fan can appears as a circular plane with blade color. Why only image of rear side of fan is remaining in our memory? Note: I tried with fan whose blade area is almost same as that of non blade area | The eyes are measuring the number of photons of each color that are hitting a given point of the retina – that are coming from some direction. This is a function of time, $f(t)$, for each point. However, when this function is changing too quickly, the eye can't see the changes. Effectively, the eye may also see the average of $f(t)$ in each period of time which is as short as 1/50 second or so. That's why 24 or 25 or 30 or 50 frames per second are usually enough for a TV screen. If the fan frequency is at least 1 blade per 1/50 second, which is the same as 10 rotations per second for a 5-blade fan, for example, the following is true: During 1/50 seconds, each point of the image where fan blade may either be or not be sees a full period, so the perception is no different from the perception in which the color is averaged over those 1/50 seconds. But the averaged color of each point is pretty much the same. It's a weighted average of the (RGB) color of the objects behind the fan at the given point; and the color of the fan blade. The weights in the weighted average are determined by the thickness of the fan blades (relatively to the circumference), and these weights may actually depend on the radial coordinate $r$. So what we see is not "quite" transparent – the contrast is lower – but it's enough to see what's behind; the color of the things behind the fan is mixed with the color of the blades; and this mixing occurs pretty much independently of the location relatively to the axis of the fan (if the fan blades' color is uniform), and independently of time (because of the averaging over the 1/50 second time intervals). Note that the 1/50 second resolution depends on the neurology – abilities of the eye, nerves, brain etc. However, even if the brain were perfect, there would exist certain limitations that couldn't be beaten. The number of photons coming to each retina cells per second is finite and the inverse of this number basically determines the best possible time resolution one can have for the given "pixel". | {
"source": [
"https://physics.stackexchange.com/questions/201504",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41462/"
]
} |
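The averaging argument above can be mimicked numerically: if the eye reports the mean of f(t) over one ~1/50 s window, a point swept fast enough by blades shows only the duty-cycle-weighted mix of blade and background. All parameter values below are assumed for illustration:

```python
import numpy as np

fps_eye = 50.0        # effective averaging rate of the eye, Hz (assumed)
blade_rate = 250.0    # blades passing per second (5 blades * 50 rev/s, assumed)
duty = 0.3            # fraction of each period a blade covers the point

t = np.linspace(0, 1/fps_eye, 10000)            # one "eye frame"
blade_present = (t * blade_rate) % 1.0 < duty   # square wave: blade there or not

blade_brightness, background_brightness = 0.2, 1.0
f_t = np.where(blade_present, blade_brightness, background_brightness)
print(f_t.mean())  # ~0.76 = duty*0.2 + (1-duty)*1.0: a washed-out, see-through blur
```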
201,973 | Analogous to the tides of Earths oceans, do the Moon and Sun cause our atmosphere to bulge in what could be described as a low and high tide? | The differential force of gravity on the atmosphere works the same as it does for the rest of the earth (the oceans etc). However, moving the equipotential surface by a few m will be almost undetectable on the atmosphere, since the density of the atmosphere decreases so gradually – over many km. Contrast this with the surface of the ocean, which is crisp. So while it might be theoretically possible to look for small changes in the height of an isobar of, say, $10^4\,\mathrm{Pa}$ , I don't think that it will be possible to measure such a change in practice. See for example this graph from the Australian weather service showing pressure changes over four days. The units on the left are $\mathrm{hPa}$ – you expect tidal variations to be much smaller. It may take a while (many cycles) to pick out the lunar variations - although I am sure it has been done. There is a thing called "lunar atmospheric tides" - see Wikipedia which describes the math behind this. And it describes it as "weak". So the short answer is "yes". For a good (27 page) review of the subject, see this 1979 article by Lindzen The introduction of that article states: 1 INTRODUCTION Atmospheric tides refer to those oscillations in the atmosphere whose periods are integral fractions of a lunar or solar day. The 24-hour Fourier component is referred to as a diurnal tide, the 12-hour component as a semidiurmal tide. The total tidal variation is referrred to as the daily variation. Although atmospheric tides are, in small measure, gravitationally forced, they are primarily forced by daily variations in solar insolation. So – the main cause of daily variation is solar heating. There is a (much) smaller component due to gravity: ... atmospheric tides are, in small measure, gravitationally forced... | {
"source": [
"https://physics.stackexchange.com/questions/201973",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/74534/"
]
} |
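For a sense of scale of the gravitational part of the atmospheric tide discussed above, the standard tidal-acceleration formula 2GMR/d^3 for the Moon gives:

```python
G = 6.674e-11
M_moon = 7.35e22    # kg
R_earth = 6.371e6   # m
d = 3.844e8         # m, mean Earth-Moon distance

a_tidal = 2 * G * M_moon * R_earth / d**3
print(a_tidal)         # ~1.1e-6 m/s^2
print(a_tidal / 9.81)  # ~1e-7 of surface gravity -- why the signal is so weak
```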
202,213 | Near where I live, local fishermen often bring cans of castor oil with them, to calm the water around their boats, if they feel bad weather is due. They claim this method of sea calming works, (possibly because it worked for their fathers, and their father's fathers.....). Does this idea have a factual basis, or is it a tall tale/superstition? The physics department of my local university is currently conducting sea trials regarding the process, so they seem to take it seriously enough. Have we any evidence that it does work, even as a possibly beneficial effect from serious adverse environmental events such as oil tanker leaks? Is there any theory regarding the mechanism involved? EDIT Below the answer from Floris, Chris White wrote this comment, which I just had to steal: An amazing quote from Ben Franklin via Tanford's Ben Franklin Stilled the Waves quoted in the first article, attesting to the efficacy of this method: "the oil, though not more than a teaspoonful, produced an instant calm over a space of several yards square, which spread amazingly, and extended itself gradually till it reached the lee side, making all that quarter of the pond, perhaps half an acre, as smooth as a looking glass." This from a teaspoon of presumed olive oil! | Yes it works. But let's not use it on a massive scale, lest we damage the ecosystem (tip of the hat to @phi1123). A hint to the mechanism can be found in Behroozi et al (Am J Phys, 2007) They state in the abstract: From the attenuation data at frequencies between 251 and 551Hz, we conclude that the calming effect of oil on surface waves is principally due to the dissipation of wave energy caused by the Gibbs surface elasticity of the monolayer , with only a secondary contribution from the reduction in surface tension. Our data also indicate that the surface-dilational viscosity of the oil monolayer is negligible and plays an insignificant role in calming the waves. (my emphasis) Dissipation of wave energy. The key to waves getting big is that a) wave energy is added by the motion of air over the surface, and b) the energy imparted is not immediately dissipated. In a sense, the oil acts as a "Q spoiler" - a little bit of energy dissipation in each cycle means that the wave just doesn't get a chance to build up. A similar thing is explained in the book "Waves on Fluids" by James Lighthill (Cambridge University Press, 2001). On page 237 it states: It is departures of the surface tension $T$ from its equilibrium value that can result in such surface dissipation. In a fluid such
that small wave motions generate small variations in $T$, a net $X$-component of force $$(\partial T/\partial x) \delta x$$ must act on a strip of surface of width $\delta x$ with frontiers of unit length parallel to the $y$-axis, even though on linear theory the same small variations make no change to the $z$-component. In the surface boundary layer, therefore, the tangential stress changes from (80) not to zero but to the value $$p_{xx} = -\partial T/\partial x$$ needed to balance the $x$-component $(\partial T/\partial x)$ of
surface force per unit area. There are conditions when in the surface boundary layer the tangential stresses increase in magnitude so enormously from the internal value (80) to the surface value (87) that the resulting surface dissipation (extra viscous dissipation due to enhanced shearing stresses within the surface boundary layer) greatly exceeds the rate of internal dissipation. This is the mechanism responsible for the proverbial calming effect of 'oil on troubled waters'. In other words - the thin layer of oil causes a rapid change in tangential stresses near the surface, leading to energy dissipation. This prevents the buildup of wave energy - especially at the shorter wavelengths. This not only makes the water appear smoother ("smooth as a looking glass", in the Franklin quote) but in the process reduces the "grip" of the wind on the water - making energy transfer from wind to water more difficult. | {
"source": [
"https://physics.stackexchange.com/questions/202213",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
202,284 | I know that the light is reflected from a object to my eyes, but I don't understand exactly how. The photons appear from the light source and disappear in my eye! Can someone explain the phenomenon of where the photons go and do to allow us to see? | From the wiki article on color vision as an illustration of how photons are absorbed: Perception of color begins with specialized retinal cells containing pigments with different spectral sensitivities, known as cone cells. In humans, there are three types of cones sensitive to three different spectra, resulting in trichromatic color vision. Each individual cone contains pigments composed of opsin apoprotein, which is covalently linked to either 11-cis-hydroretinal or more rarely 11-cis-dehydroretinal. So it is molecules with different absorption spectra that absorb the optical photons and start the sequence of giving a signal to the brain. It is not a simple matter and belongs more to biology than to physics. The physics part is just that the photon hits a molecule and raises an electron to a higher level which generates a series of reactions that finally register in the brain. | {
"source": [
"https://physics.stackexchange.com/questions/202284",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/90023/"
]
} |
202,628 | I got this question as my physics class homework for tomorrow. Anyone please help me out. If Earth constantly rotates and revolves, then how can we call an object in a state of rest? | "A state of rest" is a relative term. Relative means - measured in comparison to the things around it. When you sit in a train and sip from a cup of coffee, you can do so because the cup is still relative to you even though both of you might be hurtling through the countryside at 200 km/h. For most experiments, objects can be considered "at rest" if they don't move relative to the things around them. But the "frame of reference" (the thing that you consider "stationary") does matter. For example - if you sit in a car that accelerates, you might be "at rest" relative to the car, but you can feel yourself being pushed into the seat of the car by an invisible force. Similarly, there are measurable effects on earth that are due to the fact that Earth rotates about its axis (for example - the way air rotates around a low pressure region is a consequence of the rotation of the Earth), and even effects that relate to the motion around the sun (including the tides). Physicists call accelerating and rotating frames of reference "non-inertial", and say that observations in such frames "give rise to fictitious forces" - that is, if you think your non-inertial frame is stationary, you will also think that a force has appeared out of thin air. Such as the force that pushes you into the seat of the accelerating car. Or the "force" that makes you spill your drink when the car goes over a bump (and your "frame of reference" suddenly accelerates). But for many "in classroom" experiments, we can ignore all these things. Much of physics (and science) is about knowing what you can ignore, and when you can ignore it. | {
"source": [
"https://physics.stackexchange.com/questions/202628",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45837/"
]
} |
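To see why the classroom usually gets away with treating the lab as stationary, here is the size of the centrifugal acceleration at the equator due to Earth's rotation, a small correction to g:

```python
import math

omega = 2 * math.pi / 86164.0   # Earth's sidereal rotation rate, rad/s
R = 6.378e6                     # equatorial radius, m

a_centrifugal = omega**2 * R
print(a_centrifugal)            # ~0.034 m/s^2
print(a_centrifugal / 9.81)     # ~0.35% of g -- usually safe to ignore
```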
202,823 | In this video an airport worker (in blue) tries to prevent a Boeing 737 from sliding on ice in heavy winds: Did he even have a chance? On one side of the argument, the airplane is sliding due to the force of the wind being stronger than the force of friction against the ice. If the worker can counter that force to prevent it from rising above the static coefficient of friction with the ice, then he could prevent the plane from sliding further. Even if the wind could apply 1 Newton of force over the static coefficient of friction of the tires, the plane would slide. The worker should be able to apply a few tens of Newtons to counteract a force of that magnitude. On the other hand, how much force could he reasonably apply? Certainly no more than his own static friction with the ice would allow. Maybe the fact that he could dig heels into the ice would make his contribution significant? Of course I don't expect the worker to stop the sliding plane, but could he reasonably have prevented the plane from sliding further? | Simple (Wrong) Analysis Shoes Assuming the coefficient of friction on the ice is approximately the same for the tires and shoes, it would do just as much good to get into the plane as to try to push it. Both would increase the frictional force by at most $\mu\,m\,g$. Having established an upper bound for the effectiveness of pushing, we can compare this to the magnitudes of the forces already in place. An empty 737 has a mass around 30,000 kg. A human on the larger side has a mass around 100 kg. So the human pushing on the plane could increase its resistance to sliding by about 0.3%. Considering wind generally comes in gusts that vary by more than 0.3%, the plane would still slide during the gusts, though in theory slightly less than it otherwise would have slid. Ice Cleats Suppose ice cleats were available. Now the human's force to resist movement is not limited by friction but by strength. Unfortunately, the human squat record is for a force less than 6 kN. The coefficient of friction between rubber and ice is around 0.2, so the plane was already dealing with 60 kN of force. So in this case the human could increase the resistance to sliding by no more than 10%; this may be close to the variation in wind in non-extreme weather situations, and might have actually helped. However, it would have been extremely dangerous, and likely could have only helped for a very limited time. If it were me, I'd just shove the ice cleat under the tire and be done with it. A more complicated Analysis Having watched the video, it appears that the aircraft is rotating rather than sliding sideways. This is important as it seems the rear tires are keeping traction and only the front tire is slipping. Thinking about the weight distribution of an aircraft, most of the weight is on the rear tires. In fact, for a 737 the front wheel weight seems to be around 15 kN, so using the 0.2 coefficient of friction that's only 3 kN of friction. It seems that not only could a cleated (very strong) human help out, but he could in fact push the plane back into place. Someone in just shoes could still help out to the tune of around 5-10%, which could be significant. Note though it would be most effective to push near the nose as that would give the longest lever arm and thus the most torque. | {
"source": [
"https://physics.stackexchange.com/questions/202823",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/4877/"
]
} |
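The percentages quoted in the answer above follow from a few lines of arithmetic, using the same assumed masses and friction coefficient:

```python
g = 9.81
mu = 0.2            # rubber on ice (as assumed in the answer)
m_plane = 30000.0   # empty 737, kg
m_human = 100.0     # large human, kg

print(m_human / m_plane)                 # ~0.003: the "about 0.3%" in shoes
print(mu * m_plane * g / 1000)           # ~59 kN total sliding resistance
print(6.0 / (mu * m_plane * g / 1000))   # ~0.10: 6 kN cleated push vs. total

front_wheel_weight_kN = 15.0
print(mu * front_wheel_weight_kN)        # ~3 kN: friction at the nose wheel only
```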
202,842 | I (and we all) know that acceleration due to gravity $g\propto\frac{1}{r^2}$.Now my question is can I use this for depth.If not,why?If we can use it for depth or not struck me when I was trying to prove that acceleration due to gravity at center of earth is zero (hence weightlessness).If I use the above formula that the value of $g$ tends to infinity and not zero(I put radius equal to 0).Please help me with the limitations of the above formula and how to prove that $g$ is zero at the center of the earth where radius is zero. | Simple (Wrong) Analysis Shoes Assuming the coefficient of friction on the ice is approximately the same for the tires and shoes. It would do just as much good to get into the plane as to try to push it. Both would increase the frictional force by at most $\mu\,m\,g$ Having established an upper bound for the effectiveness of pushing we can compare this to the magnitudes of the forces already in place. An empty 737 has a mass around 30,000 kg. A human on the larger side has a mass around 100 kg. So the human pushing on the plane could increase it's resistance to sliding by about 0.3%. Considering wind generally comes in gusts that vary by more than 0.3%, the plane would still slide during the gusts, though in theory slightly less than it otherwise would have slid. Ice Cleats Suppose ice cleats were available. Now the human's force to resist movement is not limited by friction but by strength. Unfortunately, the human squat record is for a force less than 6 KN. The coefficient of friction between rubber and ice is around 0.2 so the plane was already dealing with 60 KN of force. So in this case the human could increase the resistance to sliding by no more than 10%, this may be close to the variation in wind in non-extreme weather situations, and might have actually helped. However, it would have been extremely dangerous, and likely could have only helped for a very very limited time. It it were me, I'd just shove the ice cleat under the tire and be done with it. A more complicated Analysis Having watched the video it appears that the aircraft is rotating rather than sliding sideways. This is important as it seems the rear tires are keeping traction and only the front tire is slipping. Thinking about the weight distribution of an aircraft, most of the weight is on the rear tires. In fact, for a 737 the front wheel weight seems to be around 15 KN, so using the 0.2 coefficient of friction that's only 3KN of friction. It seems that not only could a cleated (very strong) human help out, but could in fact push the plane back into place. Someone in just shoes could still help out to the tune of around 5-10% which could be significant. Note though it would be most effective to push near the nose as that would give the longest lever arm to give the most torque. | {
"source": [
"https://physics.stackexchange.com/questions/202842",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/78126/"
]
} |
202,846 | To make a hologram a film is exposed to an incident plane wave and wave from the object to record the interference pattern on the film. The principle is commonly explained in a way like that in p.1212 of "University Physics" ( http://books.google.com.hk/books?id=7S1yAgAAQBAJ&pg=PA1211&lpg=PA1211&dq ) What I don't understand is why a 3D image can be made by shining a plane wave through the film. The film is grating so at some points constructive interference can produce the point representing the object. But why the overall wave is diverged (show in 36.29b, p.1212) ? | Simple (Wrong) Analysis Shoes Assuming the coefficient of friction on the ice is approximately the same for the tires and shoes. It would do just as much good to get into the plane as to try to push it. Both would increase the frictional force by at most $\mu\,m\,g$ Having established an upper bound for the effectiveness of pushing we can compare this to the magnitudes of the forces already in place. An empty 737 has a mass around 30,000 kg. A human on the larger side has a mass around 100 kg. So the human pushing on the plane could increase it's resistance to sliding by about 0.3%. Considering wind generally comes in gusts that vary by more than 0.3%, the plane would still slide during the gusts, though in theory slightly less than it otherwise would have slid. Ice Cleats Suppose ice cleats were available. Now the human's force to resist movement is not limited by friction but by strength. Unfortunately, the human squat record is for a force less than 6 KN. The coefficient of friction between rubber and ice is around 0.2 so the plane was already dealing with 60 KN of force. So in this case the human could increase the resistance to sliding by no more than 10%, this may be close to the variation in wind in non-extreme weather situations, and might have actually helped. However, it would have been extremely dangerous, and likely could have only helped for a very very limited time. It it were me, I'd just shove the ice cleat under the tire and be done with it. A more complicated Analysis Having watched the video it appears that the aircraft is rotating rather than sliding sideways. This is important as it seems the rear tires are keeping traction and only the front tire is slipping. Thinking about the weight distribution of an aircraft, most of the weight is on the rear tires. In fact, for a 737 the front wheel weight seems to be around 15 KN, so using the 0.2 coefficient of friction that's only 3KN of friction. It seems that not only could a cleated (very strong) human help out, but could in fact push the plane back into place. Someone in just shoes could still help out to the tune of around 5-10% which could be significant. Note though it would be most effective to push near the nose as that would give the longest lever arm to give the most torque. | {
"source": [
"https://physics.stackexchange.com/questions/202846",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/48904/"
]
} |
202,849 | Fourier transformations: $$\phi(\vec{k}) = \left( \frac{1}{\sqrt{2 \pi}} \right)^3 \int_{r\text{ space}} \psi(\vec{r}) e^{-i \mathbf{k} \cdot \mathbf{r}} d^3r$$ for momentum space and $$\psi(\vec{r}) = \left( \frac{1}{\sqrt{2 \pi}} \right)^3 \int_{k\text{ space}} \phi(\vec{k}) e^{i \mathbf{k} \cdot \mathbf{r}} d^3k$$ for position space. How do we know that $\psi$ is not the Fourier transform of $\phi$ but we suppose that its the other way around ($\psi$ would be proportional to $\exp[-ikr]$ and $\phi$ would be proportional to $\exp[ikr]$)? If there was no difference in the signs, wouldn't there be a problem in the integration from minus inf. to plus inf. if the probability is asymmetric around zero? What is the physical reason that in the integral for momentum space we have $\exp[-ikr]$? I agree about the exponent for position space which can be explained as follows: its the sum of all definite momentum states of the system, but what about the Fourier of the momentum space? How can we explain the integral (not mathematically)? | Simple (Wrong) Analysis Shoes Assuming the coefficient of friction on the ice is approximately the same for the tires and shoes. It would do just as much good to get into the plane as to try to push it. Both would increase the frictional force by at most $\mu\,m\,g$ Having established an upper bound for the effectiveness of pushing we can compare this to the magnitudes of the forces already in place. An empty 737 has a mass around 30,000 kg. A human on the larger side has a mass around 100 kg. So the human pushing on the plane could increase it's resistance to sliding by about 0.3%. Considering wind generally comes in gusts that vary by more than 0.3%, the plane would still slide during the gusts, though in theory slightly less than it otherwise would have slid. Ice Cleats Suppose ice cleats were available. Now the human's force to resist movement is not limited by friction but by strength. Unfortunately, the human squat record is for a force less than 6 KN. The coefficient of friction between rubber and ice is around 0.2 so the plane was already dealing with 60 KN of force. So in this case the human could increase the resistance to sliding by no more than 10%, this may be close to the variation in wind in non-extreme weather situations, and might have actually helped. However, it would have been extremely dangerous, and likely could have only helped for a very very limited time. It it were me, I'd just shove the ice cleat under the tire and be done with it. A more complicated Analysis Having watched the video it appears that the aircraft is rotating rather than sliding sideways. This is important as it seems the rear tires are keeping traction and only the front tire is slipping. Thinking about the weight distribution of an aircraft, most of the weight is on the rear tires. In fact, for a 737 the front wheel weight seems to be around 15 KN, so using the 0.2 coefficient of friction that's only 3KN of friction. It seems that not only could a cleated (very strong) human help out, but could in fact push the plane back into place. Someone in just shoes could still help out to the tune of around 5-10% which could be significant. Note though it would be most effective to push near the nose as that would give the longest lever arm to give the most torque. | {
"source": [
"https://physics.stackexchange.com/questions/202849",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/75628/"
]
} |
203,576 | I am gazing through my office window into a heavy rain. I am thinking that raindrops are like small lenses that bend the light. Thus I am surprised, that I can clearly see other buildings through the window. So, why is it that we can see through the rain? Is the density of raindrops simply too low? | Many of the photons coming from nearby objects will travel to your eye without striking a rain drop. However, photons traveling from more distant objects have a greater chance of hitting a rain drop before reaching you. This makes more distant objects seem dimmer or more difficult to see. | {
"source": [
"https://physics.stackexchange.com/questions/203576",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1485/"
]
} |
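A crude photon mean-free-path estimate supports the answer above. All the rain parameters below are assumed, typical heavy-rain values:

```python
import math

rain_rate = 25e-3 / 3600   # 25 mm/h of rain, in m/s of water depth
v_fall = 6.5               # terminal speed of ~1 mm-radius drops, m/s (assumed)
r_drop = 1e-3              # drop radius, m (assumed)

water_fraction = rain_rate / v_fall               # volume fraction of air that is water
n = water_fraction / (4/3 * math.pi * r_drop**3)  # drops per m^3
sigma = 2 * math.pi * r_drop**2                   # extinction cross-section (~2x geometric)

mfp = 1 / (n * sigma)
print(n)                     # ~250 drops per cubic metre
print(mfp)                   # ~600 m mean free path between drop hits
print(math.exp(-100 / mfp))  # ~0.85: most light survives a 100 m path
```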
203,697 | In many places in statistical physics we use the partition function. To me, the explanations of their use are clear, but I wonder what their physical significance is. Can anyone please explain with a good example without too many mathematical complications? | The partition function is a measure of the volume occupied by the system in phase space. Basically, it tells you how many microstates are accessible to your system in a given ensemble. This can be easily seen starting from the microcanonical ensemble. In the microcanonical ensemble, where every microstate with energy between $E$ and $E+\Delta E$ is equally probable, the partition function is $$Z_{mc}(N,V,E)= \frac 1 {N! h^{3N}}\int_{E<\mathcal H(\{p,q\})<E+\Delta E} d^{3N}p \ d^{3N} q \tag{1}$$ where the integral is just the hypervolume of the region of phase space where the energy (hamiltonian) $\mathcal H$ of the system is between $E$ and $E+\Delta E$, normalized by $h^{3N}$ to make it dimensionless. The factor $N!^{-1}$ takes into account the fact that by exchanging the "label" on two particles the microstate does not change. The Boltzmann equation $$S=k_B \log(Z_{mc})\tag{2}$$ tells you that the entropy is proportional to the logarithm of the total number of microstates corresponding to the macrostate of your system, and this number is just $Z_{mc}$. In the canonical and grand-canonical ensembles the meaning of the partition function remains the same, but since the energy is no longer fixed the expression is going to change. The canonical partition function is $$Z_c(N,V,T)= \frac 1 {N! h^{3N}}\int e^{-\beta \mathcal H(\{p,q\})} d^{3N}p \ d^{3N} q\tag{3}$$ In this case, we integrate over all the phase space, but we assign to every point $\{p,q\}=(\mathbf p_1, \dots \mathbf p_N, \mathbf q_1, \dots \mathbf q_N)$ a weight $\exp(-\beta \mathcal H)$, where $\beta=(k_B T)^{-1}$, so that those states with energy much higher than $k_B T$ are less probable. In this case, the connection with thermodynamics is given by $$-\frac{F}{T}=k_B \log(Z_c)\tag{4}$$ where $F$ is the Helmholtz free energy. The grand canonical partition function is $$Z_{gc}(\mu,V,T)=\sum_{N=0}^\infty e^{\beta \mu N} Z_c(N,V,T)\tag{5}$$ where this time we are also summing over all the possible values of the number of particles $N$, weighting each term by $\exp(\beta \mu N)$, where $\mu$ is the chemical potential. The connection with thermodynamics is given by $$\frac{PV}{T} = k_B \log (Z_{gc}) \tag{6}$$ | {
"source": [
"https://physics.stackexchange.com/questions/203697",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20190/"
]
} |
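As a concrete illustration of "counting accessible microstates", here is the canonical partition function of a toy two-level system with an assumed unit level spacing: Z runs from ~1 (only the ground state thermally accessible) to ~2 (both states accessible) as the temperature rises.

```python
import numpy as np

eps = 1.0                        # level spacing in units of k_B (assumed)
energies = np.array([0.0, eps])  # a two-level toy system

for T in (0.1, 1.0, 10.0, 100.0):
    beta = 1.0 / T               # k_B = 1 in these units
    weights = np.exp(-beta * energies)
    Z = weights.sum()            # canonical partition function
    p = weights / Z              # Boltzmann occupation probabilities
    E_avg = (p * energies).sum()
    print(f"T={T:6.1f}  Z={Z:.4f}  p={p.round(4)}  <E>={E_avg:.4f}")
# Z climbs from ~1 to ~2: a direct count of the thermally accessible states.
```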
203,974 | I was thinking about a completely unrelated problem (Quantum Field Theory Peskin & Schroeder kind of unrelated!) when the diagram below sprang into my mind for no apparent reason. After some thinking, I can't figure out why it wouldn't work, other than the theoretical reason that it systematically decreases entropy in a closed system: We have a thick, insulating barrier between two "ventricles" of a closed-off system. The only opening in this barrier is made by a solid shaft snugly (without permitting the transfer of heat between the two compartments) but frictionlessly. On the left ventricle we have a paddle attached, and on the right we have a coiled wire, as illustrated in the diagram. There are fixed magnets surrounding the coil of wire, providing a constant magnetic field through it. Let's say the whole contraption is so small that a single air molecule hitting the paddle will contribute a small but not immaterial amount of angular momentum to the shaft (again, I never said this machine was practical!). Therefore, random chance will make air molecules hit the paddle so that it will start turning in Brownian-motion style. This rotation is damped by the energy dissipated when the coil turns the current induced by the magnets moving relative to the turning reference frame of the coil into heat through a resistor in the coil. Thus, the air molecules contribute a small portion of their kinetic energy to the paddle, which is then expended as heat on the other side of the border, making the air molecules on the left colder, while air molecules on the right heat up. Doesn't this mean a decrease in entropy? (To see that it can't be an increase in entropy, take away the barrier, and note that the molecules go back to thermal equilibrium naturally, meaning that entropy increases naturally when undoing our actions). To further show this mythical contraption to be an impossibility, we could create a barrier with two pieces of equipment forming a passage between the two ventricles. One port-hole of energy would be the paddle already envisioned, the other would be a Carnot engine taking energy from the hot right to the cold left. The paddle would take energy from the left to the right effortlessly for a period of time, and then the Carnot engine would move the heat back the other way, gaining energy that came from nowhere in the process!! Where has my logic gone wrong? Clearly entropy must not decrease, and energy cannot be created by the fundamental axioms of physics. Why does this paddle fail to transfer energy from one ventricle to the other? An explanation of what has gone wrong with my reasoning would be greatly appreciated! | Thus, the air molecules contribute a small portion of their kinetic energy to the paddle, which is then expended as heat on the other side of the border, making the air molecules on the left colder, while air molecules on the right heat up. Doesn't this mean a decrease in entropy? Yes it does. However, we need to take the thermal noise of the resistor into account. Hot resistors make noise As discovered by John B. Johnson in 1928 and theoretically explained by Harry Nyquist , a resistor at temperature $T$ exhibits a non-zero open circuit voltage.
This voltage is stochastic and characterized by a (single sided) spectral density $$S_V(f) = 4 k_b T R \frac{h f / k_b T}{\exp \left(h f / k_b T \right) - 1} \, . \tag{1}$$ At room temperature we find $k_b T / h = 6 \times 10^{12} \, \text{Hz}$, which is a ridiculously high frequency for electrical systems.
Therefore, for the loop of wire and resistor circuit in the device under consideration, we can roughly assume that $$\exp(h f / k_b T) \approx 1 + h f /k_b T$$ so that $$S_V(f) \approx 4 k_b T R \tag{2}$$ which we traditionally call the "Johnson noise" formula.
If we short circuit the resistor as in the diagram where its ends are connected by a simple wire, then the current noise spectral density is (just divide by $R^2$) $$S_I(f) = 4 k_b T / R \, .\tag{3}$$ Another way to think about this is that the resistor generates random current which is Gaussian distributed with standard deviation $\sigma_I = \sqrt{4 k_b T B / R}$ where $B$ is the bandwidth of whatever circuit is connected to the resistor. Johnson noise keeps the system in equilibrium Anyway, the point is that the little resistor in the machine actually generates random currents in the wire!
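To attach a number to Eq. (3): for an everyday resistor at room temperature the thermally driven current is tiny but nonzero. The resistance and bandwidth below are assumed illustration values.

```python
import math

k_b = 1.381e-23
T = 300.0   # K, room temperature
R = 1e3     # ohm (assumed)
B = 1e3     # Hz measurement bandwidth (assumed)

sigma_I = math.sqrt(4 * k_b * T * B / R)
print(sigma_I)   # ~1.3e-10 A: ~0.1 nA of thermally driven current
```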
These little currents cause the rod to twist back and forth for exactly the same reason that the twists in the rod induced by air molecules crashing into the paddles caused currents in the resistor (i.e. Faraday's law).
Therefore, the thermal noise of the resistor shakes the paddles and heats up the air. So, while heat travels from the air on the left side to the resistor on the right, precisely the opposite process also occurs: heat travels from the resistor on the right to the air on the left.
The heat flow is always occurring in both directions.
By definition, in equilibrium the left-to-right flow has the same magnitude as the right-to-left flow and both sides just sit at equal temperature; no entropy flows from one side to the other. Fluctuation-dissipation: Note that the resistor is both dissipative and noisy.
The resistance $R$ means that the resistor turns current/voltage into heat; the power dissipated by a resistor is $$P = I^2 R = V^2 / R \, . \tag{4}$$ The noise is characterized by a spectral density given in Eq. (1).
Note the conspicuous appearance of the dissipation parameter $R$ in the spectral density.
This is no accident.
There is a profound link between dissipation and noise in all physical systems.
Using thermodynamics (or actually even quantum mechanics!) one can prove that any physical system which acts as a dissipator of energy must also be noisy.
The link between noisy fluctuations and dissipation is described by the fluctuation-dissipation theorem , which is one of the most interesting laws in all of physics. The machine originally looked like it moved entropy from the left to the right because we assumed the resistor was dissipative without being noisy , but as explained via the fluctuation-dissipation theorem this is entirely impossible; all dissipative systems exhibit noisy fluctuations. P.S. I really, really like this question. | {
"source": [
"https://physics.stackexchange.com/questions/203974",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/90803/"
]
} |
204,090 | It is usually said that the points on the surface of the Bloch sphere represent the pure states of a single 2-level quantum system. A pure state being of the form:
$$
|\psi\rangle = a |0\rangle+b |1\rangle
$$
And typically the north and south poles of this sphere correspond to the $|0\rangle$ and $|1\rangle$ states. Image: ("Bloch Sphere" by Glosser.ca - Own work. Licensed under CC BY-SA 3.0 via Commons - https://commons.wikimedia.org/wiki/File:Bloch_Sphere.svg#/media/File:Bloch_Sphere.svg ) But isn't this very confusing? If the north and south poles are chosen, then both states are on the same line and not orthogonal anymore, so how can one choose an arbitrary point $p$ on the surface of the sphere and possibly decompose it in terms of $0,1$ states in order to find $a$ and $b$? Does this mean that one shouldn't regard the Bloch sphere as a valid basis for our system and that it's just a visualization aid? I have seen decompositions in terms of the internal angles of the sphere, in the form of: $a=\cos{\theta/2}$ and $b=e^{i\phi}\sin{\theta/2}$ with $\theta$ the polar angle and $\phi$ the azimuthal angle. But I am clueless as to how these are obtained when $0,1$ states are on the same line. | The Bloch sphere is beautifully minimalist. Conventionally, a qubit has four real parameters; $$|\psi\rangle=a e^{i\chi} |0\rangle + b e^{i\varphi} |1\rangle.$$ However, some quick insight reveals that the a -vs- b tradeoff only has one degree of freedom due to the normalization a 2 + b 2 = 1, and some more careful insight reveals that, in the way we construct expectation values in QM, you cannot observe χ or φ themselves but only the difference χ – φ , which is 2 π -periodic. (This is covered further in the comments below but briefly: QM only predicts averages $\langle \psi|\hat A|\psi\rangle$ and shifting the overall phase of a wave function by some $|\psi\rangle\mapsto e^{i\theta}|\psi\rangle$ therefore cancels itself out in every prediction.) So if you think at the most abstract about what you need, you just draw a line from 0 to 1 representing the a -vs- b tradeoff: how much is this in one of these two states? Then you draw circles around it: how much is the phase difference? What stops it from being a cylinder is that the phase difference ceases to matter when a = 1 or b = 1, hence the circles must shrink down to points. And voila , you have something which is topologically equivalent to a sphere. The sphere contains all of the information you need for experiments, and nothing else. It’s also physical, a real sphere in 3D space. This is the more shocking fact. Given only the simple picture above, you could be forgiven for thinking that this was all harmless mathematics: no! In fact the quintessential qubit is a spin-½ system, with the Pauli matrices indicating the way that the system is spinning around the x , y , or z axes. This is a system where we identify $$|0\rangle\leftrightarrow|\uparrow\rangle, \\
|1\rangle\leftrightarrow|\downarrow\rangle,$$ and the phase difference comes in by choosing the + x -axis via $$|{+x}\rangle = \sqrt{\frac 12} |0\rangle + \sqrt{\frac 12} |1\rangle.$$ The orthogonal directions of space are not Hilbert-orthogonal in the QM treatment, because that’s just not how the physics of this system works. Hilbert-orthogonal states are incommensurate: if you’re in this state, you’re definitely not in that one. But this system has a spin with a definite total magnitude of $\sqrt{\langle L^2 \rangle} = \sqrt{3/4} \hbar$ , but only $\hbar/2$ of it points in the direction that it is “most pointed along,” meaning that it must be distributed on some sort of “ring” around that direction. Accordingly, when you measure that it’s in the + z -direction it turns out that it’s also sort-of half in the + x , half in the – x direction. (Here “sort-of” means: it is, if you follow up with an x -measurement, which will “collapse” the system to point → or ← with angular momentum $\hbar/2$ and then it will be in the corresponding “rings” around the x -axis.) Spherical coordinates from complex numbers So let’s ask “which direction is the general spin-½ $|\psi\rangle$ above, most spinning in?” This requires constructing an observable. To give an example observable, if the + z -direction is most-spun-in by a state $|\uparrow\rangle$ then the observable for $z$ -spin is the Pauli matrix $$\sigma_z = |\uparrow\rangle\langle\uparrow| - |\downarrow\rangle\langle\downarrow|=\begin{bmatrix}1&0\\0&-1\end{bmatrix},$$ which is +1 in the state it's in, -1 in the Hilbert-perpendicular state $\langle \downarrow | \uparrow \rangle = 0.$ Similarly if you look at $$\sigma_x = |\uparrow\rangle \langle \downarrow | + |\downarrow \rangle\langle \uparrow |=\begin{bmatrix}0&1\\1&0\end{bmatrix},$$ you will see that the $|{+x}\rangle$ state defined above is an eigenvector with eigenvalue +1 and similarly there should be a $|{-x}\rangle \propto |\uparrow\rangle - |\downarrow\rangle$ satisfying $\langle {+x}|{-x}\rangle = 0,$ and you can recover $\sigma_x = |{+x}\rangle\langle{+x}| - |{-x}\rangle\langle{-x}|.$ So, let’s now do it generally. The state orthogonal to $|\psi\rangle = \alpha |0\rangle + \beta |1\rangle$ is not too hard to calculate as $|\bar \psi\rangle = \beta^*|0\rangle - \alpha^* |1\rangle,$ so the observable which is +1 in that state or -1 in the opposite state is: $$
\begin{align}
|\psi\rangle\langle\psi| - |\bar\psi\rangle\langle\bar\psi| &= \begin{bmatrix}\alpha\\\beta\end{bmatrix}\begin{bmatrix}\alpha^*&\beta^*\end{bmatrix} - \begin{bmatrix}\beta^*\\-\alpha^*\end{bmatrix} \begin{bmatrix}\beta & -\alpha\end{bmatrix}\\
&=\begin{bmatrix}|\alpha|^2 - |\beta|^2 & 2 \alpha\beta^*\\
2\alpha^*\beta & |\beta|^2 - |\alpha|^2\end{bmatrix}
\end{align}$$ Writing this as $v_i \sigma_i$ where the $\sigma_i$ are the Pauli matrices we get: $$v_z = |\alpha|^2 - |\beta|^2,\\
v_x + i v_y = 2 \alpha^* \beta.$$ Now here's the magic, let's allow the Bloch prescription of writing $$\alpha=\cos\left(\frac\theta2\right),~~\beta=\sin\left(\frac\theta2\right)e^{i\varphi},$$ we find out that these are: $$\begin{align} v_z &= \cos^2(\theta/2) - \sin^2(\theta/2) &=&~ \cos \theta,\\
v_x &= 2 \cos(\theta/2)\sin(\theta/2) ~\cos(\phi) &=&~ \sin \theta~\cos\phi, \\
v_y &= 2 \cos(\theta/2)\sin(\theta/2) ~\sin(\phi) &=&~ \sin \theta~\sin\phi.
\end{align}$$ So the Bloch prescription uses a $(\theta, \phi)$ which are simply the spherical coordinates of the point on the sphere which such a $|\psi\rangle$ is “most spinning in the direction of.” So instead of being a purely theoretical visualization, we can say that the spin-½ system, the prototypical qubit, actually spins in the direction given by the Bloch sphere coordinates! (At least, insofar as a spin-up system spins up.) It is ruthlessly physical: you want to wave it away into a mathematical corner and it says, “no, for real systems I’m pointed in this direction in real 3D space and you have to pay attention to me.” How these answer your questions: Yes, N and S are spatially parallel but in the Hilbert space they are orthogonal. This Hilbert-orthogonality means that a system cannot be both spin-up and spin-down. Conversely the lack of Hilbert-orthogonality between, say, the z and x directions means that when you measure the z-spin you can still have nonzero measurements of the spin in the x-direction, which is a key feature of such systems. It is indeed a little confusing to have two different notions of “orthogonal,” one for physical space and one for the Hilbert space, but it comes from having two different spaces that you’re looking at. One way to see why the angles are physically very useful is given above. But as mentioned in the first section, you can also view it as a purely mathematical exercise of trying to describe the configuration space with a sphere: then you naturally have the phase difference, which is $2\pi$-periodic, as a naturally ‘azimuthal’ coordinate; therefore the way that the coordinate lies along 0/1 should be a ‘polar’ coordinate with 0 mapping to $|0\rangle$ and π mapping to $|1\rangle$. The obvious way to do this is with $\cos(\theta/2)$ mapping from 1 to 0 along this range, as the amplitude for the $|0\rangle$ state; the fact that $\cos^2 + \sin^2 = 1$ means that the $|1\rangle$ state must pick up a $\sin(\theta/2)$ amplitude to match it.
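As a quick, hedged numerical check of the algebra above (the angle values are arbitrary illustrations, not from the original answer):

```python
import numpy as np

theta, phi = 1.2, 0.7   # arbitrary illustrative angles
alpha = np.cos(theta / 2)
beta  = np.sin(theta / 2) * np.exp(1j * phi)

# Bloch vector from the amplitudes, as derived above:
# v_z = |alpha|^2 - |beta|^2 and v_x + i v_y = 2 alpha* beta
v_z  = abs(alpha)**2 - abs(beta)**2
v_xy = 2 * np.conj(alpha) * beta
v = np.array([v_xy.real, v_xy.imag, v_z])

# Spherical coordinates of the same point on the unit sphere
v_sph = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])

print(np.allclose(v, v_sph))   # True
```
| {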
"source": [
"https://physics.stackexchange.com/questions/204090",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/90886/"
]
} |
204,099 | When a black hole absorbs matter, is it destroying that mass, thereby destroying energy, therefore violating the first law of thermodynamics? | Do black holes violate the first law of thermodynamics? No. See Wikipedia re the first law of thermodynamics: "The first law of thermodynamics is a version of the law of conservation of energy, adapted for thermodynamic systems. The law of conservation of energy states that the total energy of an isolated system is constant; energy can be transformed from one form to another, but cannot be created or destroyed". If you throw a one-kilogram object into a zillion-kilogram black hole, the black hole mass increases by one kilogram. The object might get destroyed, but you can't destroy energy. Or create it. There are no perpetual motion machines. Energy is fundamental. Everything is made of it, including light and matter, and black holes. When a black hole absorbs matter, is it destroying that mass, thereby destroying energy, therefore violating the first law of thermodynamics? No. It destroys the matter, but the total mass stays the same, as does the total energy. In the scenario above, you start with one zillion and one kilograms, and you end up with one zillion and one kilograms. The black hole's gravitational field increases a little because you increased its mass-energy by one kilogram. | {
"source": [
"https://physics.stackexchange.com/questions/204099",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/90892/"
]
} |
205,738 | Is the net angular momentum vector of our solar system pointing in roughly the same direction as the Milky Way galaxy's net angular momentum vector? If yes or no, is that common for most stars in the galaxy? | There is no alignment between the Sun or the Solar System's net angular momentum and the "spin axis" of the Galaxy. Think for a moment about whether the line of the ecliptic (which marks the "equatorial line" of the Solar System) and the Milky Way (which roughly marks the plane of the Galaxy) are lined up? If this were so, then you would always see the planets (Jupiter, Mars, etc.) projected against the Milky Way. In fact, the spin axes of the Solar System and Galaxy are inclined at an angle of 63 degrees with respect to each other (see cartoon below - note, the Solar System is not drawn to scale compared with the Galaxy!). We do not know much about the alignments of other solar systems. Both the Doppler shift discovery method and the transit discovery method have a rotational ambiguity about the plane of the exoplanets' orbits. In other words, if we were to observe a transiting planet, we know that the inclination is close to 90 degrees to the line of sight, but we could rotate the system around our line of sight by any angle, and would see the same observational signatures. The general assumption is that there is no relationship between the angular momentum directions of stars (and their planetary systems) and the Galaxy. Turbulence in molecular clouds on relatively small scales compared with the dimensions of the Milky way randomises the angular momentum vectors of collapsing prestellar cores. A possible alignment mechanism could come about through the threading of giant molecular clouds by the Galactic magnetic field. If we knew what fraction of stars had close-in, potentially transiting planets, we could use the numbers of detected transiting exoplanets in the Kepler field to say whether that number was consistent with random orientations or not. Alternatively, if we had another Kepler field pointing in a different Galactic direction, but with similar sensitivity to the original Kepler field, then the relative numbers of detected transiting planets in the two fields might tell us of any non-random orientations. For example, if orbital planes were all aligned with the Galactic plane, then no transits would be seen for any star viewed out of the Galactic plane. (I think this extreme possibility can already be ruled out.) | {
"source": [
"https://physics.stackexchange.com/questions/205738",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/74534/"
]
} |
205,810 | The clips that I have seen of rockets launching all seem to be carried out during daytime. However, we learnt at school that rockets are fired closer to the equator and towards the east to take maximum advantage of Earth's rotational motion, getting a boost in speed of roughly $460m/s$ at the equator. If we go to such lengths to gain a boost of speed of $460m/s$, why don't we take advantage of the fact that the earth orbits the sun at the massive speed of $30km/s$ to get an extra boost in velocity relative to other stellar objects? But to take advantage of both the rotational speed of the earth about its own axis and the rotational speed of the earth about the sun the rocket would have to be launched at nighttime: as the only times when the boost from the purple rotation was in line with the boost from the red rotation is during the night-time. It seems that this would greatly reduce the fuel needed to reach distant objects (for instance Pluto: New Horizons seems to have launched during the daytime). | When launching into a low Earth orbit only your velocity relative to the Earth matters, as seen from the non-rotating reference frame of the Earth. Your velocity relative to the sun does not matter, because once you are in the orbit your velocity vector relative to the Earth will oscillate between pointing towards and away from the velocity vector of the Earth relative to the sun. When performing an interplanetary transfer the Earth's velocity does matter. Usually such a transfer is performed from low Earth orbit. So if you want to travel to space outside Earth's orbit, then you want to leave Earth's "gravity" in the same direction as its velocity relative to the sun, also called prograde. But because the Earth will also slightly curve your escape trajectory you will have to burn while near the trailing side of the Earth (where the sun is setting) such that you pass behind Earth's night side. The opposite is true when you want to go to space inside Earth's orbit.
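For scale, a back-of-envelope check of the two speeds quoted in the question (added sketch; the rounded constants are standard values, not from the original answer):

```python
import math

R_earth = 6.378e6        # equatorial radius, m
sidereal_day = 86164.0   # s
AU = 1.496e11            # m
year = 3.156e7           # s

v_rot = 2 * math.pi * R_earth / sidereal_day   # ~465 m/s from Earth's spin
v_orb = 2 * math.pi * AU / year                # ~2.98e4 m/s around the Sun

print(v_rot, v_orb, v_orb / v_rot)   # the orbital speed is ~64x larger,
# but, as the answer explains, it only helps for interplanetary transfers
```
| {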
"source": [
"https://physics.stackexchange.com/questions/205810",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/60080/"
]
} |
206,248 | If I go for a walk at, say 4 km/hour, unless there is a breeze blowing, I probably won't notice the air around me at all. If I go for a swim though, I will immediately notice the viscosity of the water and the effort needed to move through it. On that sort of scale, I wonder is it possible to estimate how normal still air applies in terms of viscosity, to a mosquito or other similar sized insect, utilising standard fluid dynamics techniques? I don't wish to ask a biology based question, or how any insect actually flies, which can be found at Insect Flight. This article implies that insect flight is still a subject of active investigation. The range of Reynolds number in insect flight is about 10 to $10^4$, which lies in between the two limits that are convenient for theories: inviscid steady flows around an airfoil and Stokes flow experienced by a swimming bacterium. For this reason, this intermediate range is not well understood. Instead I wonder do we know, compared to the human experience with respect to the fluid viscosity difference between still air and water, what air "feels" like to move through for an insect, such as a mosquito? In other words, is it possible to scale up the insect flying "experience" to the human level, and get an idea of what the human equivalent of the viscosity involved is? I appreciate it may be impossible to answer this question without referring back to the flight dynamics of insects, in which case my apologies as there may be no current answer. | What you need to compare when looking at bodies of different sizes and asking how the forces relate is, in general, the Reynolds number, as you included in your question. This is defined as: $$ Re = \frac{u L}{\nu} $$ where $u$ is the fluid velocity, $L$ is a representative length scale and $\nu$ is the kinematic viscosity of the fluid. This can also be thought of as the ratio of the inertial forces to the viscous forces. So, when this number is small, the viscous forces dominate and when it is large, the inertial forces dominate. The hardest part is picking an $L$. In this case though, it's not so bad. Let's assume that a mosquito is approximately a sphere. Adults rarely exceed 16 mm in length, so let's just approximate and say they are 10 mm long, so as a sphere they would have a radius of 5 mm. Let's then take a normal day at standard temperature and pressure (STP) so that the kinematic viscosity of air is $\nu = 15.11\times10^{-6}\ \mathrm{m^2/s}$. And let's assume a light breeze, say 5 m/s. This gives us a Reynolds number of (which hey, also matches the range you posted -- good start!): $$ Re = \frac{u L}{\nu} = \frac{5 \times 0.005}{15.11\times10^{-6}} \approx 1655 $$ Okay, so now if we want a human to feel the same inertial-to-viscous force ratio, we want to keep the Reynolds number the same. We can pretend a human is a cylinder. And we can further say that an average human is, roughly, 0.4 meters wide which would give a radius of 0.2 meters. We'll assume the Reynolds number is the same and the air viscosity is the same and solve for a wind velocity to give a similar feel: $$ u = \frac{\nu Re}{L} = \frac{15.11\times10^{-6} \times 1655}{0.2} \approx 0.12\ \mathrm{m/s}$$ Counter-intuitive maybe, but what we're considering here is what velocities are required to feel the same ratio of inertial to viscous forces. In this case, we altered the wind speed but we could also alter the viscosity.
If we wanted to do that, let's say we held the speed the same, we would get: $$ \nu = \frac{u L}{Re} = \frac{5 \times 0.2}{1655} \approx 0.0006\ \mathrm{m^2/s}$$ This number is almost 40 times larger than the viscosity of air. This means that for a human to feel an equivalent set of forces, they would have to be in a 5 m/s flow of something like hot asphalt, SAE 150 gear oil or diesel fuel. None of which sounds very pleasant, but honestly neither does flying around as a mosquito.
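The arithmetic above, condensed into a short Python sketch (same illustrative numbers as in the answer):

```python
def reynolds(u, L, nu):
    """Reynolds number for speed u (m/s), length scale L (m),
    kinematic viscosity nu (m^2/s)."""
    return u * L / nu

nu_air = 15.11e-6                      # m^2/s at STP
Re = reynolds(5.0, 0.005, nu_air)      # mosquito as a 5 mm sphere -> ~1655

# Matched-Re human (0.2 m radius): either slow the air down...
u_human = nu_air * Re / 0.2            # ~0.12 m/s
# ...or keep 5 m/s and thicken the fluid:
nu_human = 5.0 * 0.2 / Re              # ~6e-4 m^2/s, ~40x air

print(Re, u_human, nu_human)
```
| {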
"source": [
"https://physics.stackexchange.com/questions/206248",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
206,440 | My question is set in the following situation: You have a completely empty universe without boundaries. In this universe is a single gun which holds one bullet. The gun fires the bullet and the recoil sends both flying in opposite directions. For simplicity I'll take the inertial frame of reference of the gun. The gun fired the bullet from its center of mass so it does not rotate. We now have a bullet speeding away from the gun. There is no friction. The only things in this universe to exert gravity are the gun and the bullet. Would, given a large enough amount of time, the bullet fall back to the gun? Or is there a limit to the distance gravity can reach? | Does a gun exert enough gravity on the bullet it fired to stop it? No. Would, given a large enough amount of time, the bullet fall back to the gun? No. Or is there a limit to the distance gravity can reach? No. But the bullet's velocity exceeds escape velocity. See Wikipedia where you can read that escape velocity at a given distance is calculated by the formula $$v_e = \sqrt{\frac{2GM}{r}}$$ Imagine you play this scenario in reverse. You have a bullet and a gun, a zillion light years apart, motionless with respect to one another. You watch and wait, and after a gazillion years you notice that they're moving towards one another due to gravity. (To simplify matters we'll say the gun is motionless and the bullet is falling towards the gun). After another bazillion years you've followed the bullet all the way back to the gun, and you notice that they collide at 0.001 m/s. You check your sums and you work out that this is about right, given that if the gun was as massive as the Earth's 5.972 × 10$^{24}$ kg, the bullet would have collided with it at 11.2 km/s (assuming an Earth-sized radius). Escape velocity is the final speed of a falling body that starts out at an "infinite" distance. If you launch a projectile from Earth with more than escape velocity, it ain't ever coming back. OK, now let's go back to the original scenario. You fire the gun, and the bullet departs at 1000 m/s. When the bullet is a zillion light years away, its speed has reduced to 999.999 m/s. Because the gun's escape velocity is 0.001 m/s. The gun's gravity is never going to be enough to stop that bullet, even if it had all the time in the world and all the tea in China.
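A hedged numerical sketch of the escape-velocity formula (the 2 kg gun and 0.2 m starting separation are made-up illustrative values, not from the original answer):

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def v_escape(M, r):
    """Escape velocity at distance r (m) from a mass M (kg)."""
    return math.sqrt(2 * G * M / r)

# Hypothetical 2 kg gun, bullet starting 0.2 m from its centre of mass:
print(v_escape(2.0, 0.2))            # ~3.7e-5 m/s, hopelessly below 1000 m/s

# Sanity check with Earth's mass and radius:
print(v_escape(5.972e24, 6.371e6))   # ~1.12e4 m/s
```
| {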
"source": [
"https://physics.stackexchange.com/questions/206440",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/38828/"
]
} |
206,443 | How can I rotate a state $|\psi>=\alpha|0>+\beta|1>$ to $|\psi'>=\delta|0>+\gamma|1>$ using a unitary U, where the values of $\alpha, \beta, \gamma, \delta$ are known? | {
"source": [
"https://physics.stackexchange.com/questions/206443",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/92508/"
]
} |
206,669 | Oftentimes you pass by an electrical box on an electrical pole and you hear a distinct hum emanating from it. What causes that tone? Does the flow of electricity itself have a sound? Or does the flow rattle the metal parts at a certain frequency, causing the sound? | Varying (due to AC) electromagnetic forces exerted on the components cause them to vibrate, thereby causing the hum. Components that typically hum noticeably are transformers (where the coils and cores vibrate/magnetostrict in the varying magnetic field), and under certain circumstances capacitors (typically they have higher resonant frequencies and audible capacitor hum is typical for electronic equipment, e.g. computers; here the forces on the plates cause the dielectric to oscillate mechanically). The frequency of the typical hum is the mains frequency of (in Europe) $50\,\mathrm{Hz}$ or $100\,\mathrm{Hz}$ (if it is due to magnetostriction or there are rectifiers that effectively double the lowest frequency), respectively $60\,\mathrm{Hz}$ ($120\,\mathrm{Hz}$) in the US. In certain cases the most audible frequency might also be a higher harmonic (due to nonlinearity, resonance phenomena and the response of the human ear). The mechanical oscillations due to magnetostriction double the frequency, because the length change does not depend on the direction of the magnetization (so it will have one extremum when the flux through the transformer core reaches zero and the other when the flux is maximal or minimal).
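A one-line way to see the frequency doubling (added for clarity; taking the length change quadratic in the magnetization is the simplest model consistent with the symmetry argument above): with $\Delta L \propto M^2$ and $M(t) = M_0 \sin(\omega t)$,
$$\Delta L(t) \propto M_0^2 \sin^2(\omega t) = \frac{M_0^2}{2}\bigl(1 - \cos(2\omega t)\bigr),$$
which oscillates at $2\omega$, i.e. at $100\,\mathrm{Hz}$ on a $50\,\mathrm{Hz}$ mains.
| {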
"source": [
"https://physics.stackexchange.com/questions/206669",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/84975/"
]
} |
206,822 | Imagine a large-diameter piston filled with water connected to a small funnel. When you press on the piston slowly but with considerable force the water will move very quickly from the funnel in the form of a jet. But how is it possible on a molecular level? Water molecules are constantly moving about in the piston with various speeds and directions, bumping into each other and exchanging momentum like billiard balls; however, water molecules from the funnel are moving uniformly at great speed. I want to know how it is possible for slow molecules to be adding momentum to the ones that are already moving faster than the average. In the billiard-ball analogy, a slow-moving ball moving in the same direction would never catch up with the faster one to further increase its momentum, and if it was moving in the opposite direction then it could only receive momentum from the faster one and therefore only slow it down. Now I imagine that this question probably sounds silly but I can't find any answer after searching for it, so I decided to ask here. | Adjacent molecules in a liquid all repel each other because of the electron clouds that surround the nuclei that they contain. In that sense these molecules never even 'touch' each other (at least not in the intuitive sense of the word). When you apply pressure to the liquid you're squeezing them into a (very slightly) smaller volume, thereby increasing the repulsive forces between them. Now allow an outlet (your funnel or the hole in the milk carton of the previous answer) and these increased repulsive forces now propel molecules through the outlet in a macroscopic flow. The higher the pressure, the more the volume is decreased (and thus inter-molecular distances are reduced), the more the repulsive forces are increased and the higher the macroscopic flow rate through the outlet.
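A hedged macroscopic cross-check (not in the original answer): Bernoulli's relation gives the jet speed you would expect from a given excess pressure; the one-atmosphere figure below is an arbitrary illustration.

```python
import math

rho = 1000.0    # kg/m^3, density of water
dp = 101325.0   # Pa, assumed excess pressure (1 atm, illustrative)

# Bernoulli: dp = (1/2) rho v^2  =>  v = sqrt(2 dp / rho)
v_jet = math.sqrt(2 * dp / rho)
print(v_jet)    # ~14 m/s, even though the piston itself moves slowly
```
| {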
"source": [
"https://physics.stackexchange.com/questions/206822",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/38438/"
]
} |
207,295 | While making a cup of tea in the office kitchen, a colleague asked me this question and neither of us could answer with any certainty. We're guessing it has something to do with the pressure of the column of water, or temperature differences between the top and bottom, but does anyone know the real reason? | Energy is needed to convert water to steam. This is called the latent heat of vapourisation and for water it is 2.26 MJ/kg. So to boil away 1 kg (about a litre) of water at 100 °C the kettle would need to supply 2.26 MJ. Assuming the kettle has a power of 1 kW this would take 2260 seconds. Given the unexpected interest in this question let me expand a bit on what happens to the water. Suppose we start with water at room temperature and we turn the kettle on. We'll take the power of the element to be $W$ (units of joules per second) so we have $W$ J/s going into the water. This power can be used for two purposes: (1) to heat the water, and (2) to evaporate (boil away) the water. Let the rate of temperature increase per second be $\Delta T$, then the power used for this increase is $C\,\Delta T$, where $C$ is the specific heat of the water. Let the rate of evaporation be $\Delta M$ kg/s, then the power used to evaporate the water is $L\,\Delta M$, where $L$ is the latent heat of vapourisation. These two must add up to the power being supplied so: $$ W = C\,\Delta T + L\,\Delta M $$ When we start heating, and the water is cool, the rate of evaporation is very low so we can ignore it and say $\Delta M \approx 0$. In that case we find the water heats up at a rate of: $$ \Delta T = \frac{W}{C} $$ When the water is boiling the rate of temperature increase is zero because the water can't get (much) hotter than 100 °C so $\Delta T = 0$. In that case we find the water evaporates at a rate of: $$ \Delta M = \frac{W}{L} $$ So at the start the water is mainly getting hotter at a rate of $W/C$ degrees per second, and when boiling the water is turning to steam at a rate of $W/L$ kilograms per second. In between the water will be both getting hotter and evaporating at some rate lower than these two limits.
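The two limiting timescales in numbers (added sketch; the 1 kW kettle, 1 kg of water and 20 °C starting temperature are assumptions for illustration):

```python
C_water = 4186.0    # J/(kg K), specific heat of water
L_vap   = 2.26e6    # J/kg, latent heat of vapourisation
W       = 1000.0    # W, assumed kettle power
m       = 1.0       # kg of water

t_heat = m * C_water * (100 - 20) / W   # heating 20 C -> 100 C: ~335 s
t_boil = m * L_vap / W                  # boiling it all away: 2260 s

print(t_heat, t_boil)   # boiling dry takes ~7x longer than reaching the boil
```
| {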
"source": [
"https://physics.stackexchange.com/questions/207295",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/85324/"
]
} |
207,598 | Recent pictures from the New Horizons spacecraft, shown below, seem to indicate that, at Pluto's distance, we are entering a twilight zone, with a distinct lack of colors, although that could be due to the planet terrain itself or the camera used to take the picture. I will call Pluto a planet, I grew up being told it was one, so it's a habit. How far out from the Sun is visible light still sufficient to read a book? Could we expect colors at this distance? Obviously there is no sharp cutoff point, so I use the criterion, "still sufficient to read a book" as a rough indication of the solar luminosity available at the furthest distance possible. I ask this question from pure curiosity, as I am completely amazed that the Sun can still provide so much light for New Horizons' camera at a distance of 4,787,131,862 kilometres (give or take a bit). Regarding colour, Chris White's comment clarifies the matter: It's starting to become more appreciated, but isn't fully widespread, that all space exploration and astronomy images are grayscale. Different color channels are combined in post-production, either by scientists or by press offices, but they are always taken separately. Indeed, all digital consumer cameras are grayscale imagers too, just they automatically take three filtered images and combine them for you, to make your life easier. NASA's Pluto time calculator from zibadawa timmy in comments below. | This is very rough and based on eyeballing without careful measurements: I've got a four-watt nightlight. I can read by it (not comfortably) at a distance of about a meter. The sphere of radius 1 meter has a surface area of about 12 square meters, so it appears that 1/3 of a watt per square meter will (barely) suffice for reading. The earth gets about 1400 watts per square meter from the sun. This falls off, of course, like the inverse square of the distance, which means Pluto (if I've done this right) should get about 1 watt per square meter, or about three times what I get when I'm reading uncomfortably 1 meter in front of my nightlight. If you multiply the distance to Pluto by about $\sqrt{3}\approx 1.7$, you'll get to a place (about 8 billion km out) where you're down to what I get from my nightlight.
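A numeric version of the estimate (added sketch; the 1400 W/m² and 1/3 W/m² figures come from the answer; keeping the unrounded Pluto flux pushes the "reading limit" from the answer's ~8 billion km to nearer 10 billion km, well within the spirit of this back-of-envelope argument):

```python
import math

S_earth = 1400.0          # W/m^2 at Earth, value used in the answer
AU_km   = 1.496e8         # km per astronomical unit
d_pluto = 4.787131862e9   # km, the distance quoted in the question

flux_pluto = S_earth / (d_pluto / AU_km)**2       # ~1.4 W/m^2
# Distance at which the flux drops to the ~1/3 W/m^2 "nightlight" level:
d_read = d_pluto * math.sqrt(flux_pluto / (1/3))  # ~9.7e9 km

print(flux_pluto, d_read)
```
| {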
"source": [
"https://physics.stackexchange.com/questions/207598",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
207,644 | We know that the sum of the masses of the quarks in a proton is approximately $9.4^{+1.9}_{-1.3}~\text{MeV}/c^2$, whereas the mass of a proton is $\approx938~\text{MeV}/c^2$. This extra mass is attributed to the kinetic energy of the confined quarks and the confining field of the strong force. Now, when we talk about energetically favourably bound systems, they have a total mass-energy less than the sum of the mass-energies of the constituent entities. How does a proton, a bound system of quarks with its mass-energy so much more than its constituent entities, remain stable? The strong force and other energetic interactions supposedly contribute this mass-energy by the mass-energy equivalence principle, but how exactly does this occur? | You say: Now, when we talk about energetically favourably bound systems, they have a total mass-energy less than the sum of the mass-energies of the constituent entities. and this is perfectly true. For example if we consider a hydrogen atom then its mass is 13.6 eV less than the mass of a proton and electron separated to infinity - 13.6 eV is the binding energy. It is generally true that if we take a bound system and separate its constituents then the total mass will increase. This applies to atoms, nuclei and even gravitationally bound systems. It applies to quarks in a baryon as well, but with a wrinkle. For atoms, nuclei and gravitationally bound systems the potential goes to zero as the constituents are separated so the behaviour at infinity is well defined. If the constituents of these systems are separated to be at rest an infinite distance apart then the total mass is just the sum of the individual rest masses. So the bound state must have a mass less than the sum of the individual rest masses. As Hritik explains in his answer, for the quarks bound into a baryon by the strong force the potential does not go to zero at infinity - in fact it goes to infinity at infinity. If we could (we can't!) separate the quarks in a proton to infinity the resulting system would have an infinite mass. So the bound state does have a total mass less than the separated state. It's just that the separated state does not have a mass equal to the masses of the individual particles. You can look at this a different way. To separate the electron and proton in a hydrogen atom we need to add energy to the system so if the added energy is $E$ the mass goes up by $E/c^2$. As the separation goes to infinity the energy $E$ goes to 13.6 eV. If we try to separate the quarks in a proton by a small distance we have to put energy in and the mass also goes up by $E/c^2$ just as in any bound system. But with the strong force the energy keeps going up as we increase the separation and doesn't tend to any finite limit.
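For contrast, a worked number for the "normal" case (added illustration, using the standard values $m_p \approx 938.272~\text{MeV}/c^2$ and $m_e \approx 0.511~\text{MeV}/c^2$): the hydrogen-atom mass deficit is
$$\frac{\Delta m}{m_p + m_e} = \frac{13.6~\text{eV}}{938.8~\text{MeV}} \approx 1.4\times10^{-8},$$
a part in $10^8$, whereas in the proton the quark rest masses are only about 1% of the total, so the interaction and kinetic energy make up roughly 99% of the mass rather than a tiny correction.
| {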
"source": [
"https://physics.stackexchange.com/questions/207644",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/83476/"
]
} |
207,647 | Assume a conductor in the shape of a rectangle, for simplicity. Now, if I choose only one side of this rectangle, having length L, and apply an external electric field ∑ only to it (along the wire), what EMF would I create on the conductor? I would simply say ∑*L; however, then I had the following idea, and I started to doubt whether I create 2∑*L instead. Here is what bugs me (say ∑*L = I*R, Kirchhoff's law; R is the conductor's resistance, ∑ is the field we apply):
Since the nuclei of atoms are essentially immobile, most current will be due to electron movement, with electrons accelerating due to the force of the electric field. Then the electrons will obviously create a current I. However, there is a topic we covered in a semiconductors class at university that is called hole current. Since electrons move from one atom to another atom, the destination atom should initially be positively charged to be able to accept the electron. When the electron completes its movement, the destination atom is now neutral, but the source atom is positively charged. Although only one electron moved physically, there is also a positively charged 'hole' that moved in the opposite direction, which doubles the equivalent current, making it 2I. Then it means we have created an equivalent voltage of 2∑*L on the semiconductor by applying only an ∑ electric field. Do we have ∑*L or 2∑*L voltage as a result of this experiment? And does this change depending on the material we use? For example, if I use a metal or a semiconductor, would the result change? Here is the wiki page for the electron hole: https://en.wikipedia.org/wiki/Electron_hole Here it's stated that metals and semiconductors are treated differently (we ignore holes in metals): https://en.wikipedia.org/wiki/Charge_carrier | {
"source": [
"https://physics.stackexchange.com/questions/207647",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/80854/"
]
} |
207,991 | Say one has a mechanical system with hamiltonian $H$, and two other arbitrary observables $f,g$. $H$ is super useful since $\{H, \cdot\} = \frac{d}{dt}$. But does $\{f,g\}$ give any useful information in and of itself? I'm currently going through "Lectures on Quantum Mechanics for Mathematics Students" by Faddeev and Yakubovskii (with not terribly much background in classical physics). | Well, $\{f, \cdot \}$, similarly to $\{H,\cdot\}$, computes the derivative of the argument $\cdot$ with respect to the action of the one-parameter group of canonical transformations generated by $f$ (see the note below for the complete definition) $$\phi_a^{(f)} : F \to F\:,\quad a \in \mathbb R\:,$$
satisfying
$$\phi_a^{(f)} \circ \phi_b^{(f)}= \phi_{a+b}^{(f)}\:, \quad\phi_{-a}^{(f)} = (\phi_a^{(f)})^{-1} \:, \quad \phi_0^{(f)}= id$$
Here $F$ is the space of phases. Indeed it holds (see below) $$\{f,g\}(x)= \frac{d}{da}|_{a=0} g(\phi_a^{(f)}(x))\:,\tag{1}$$
where $g: F \to \mathbb R$ is sufficiently regular. Therefore, $\{f,g\}(x)=0$ everywhere in $F$ means that $g$ is invariant under the group of transformations generated by $f$ (the fact that the derivative is computed at $a=0$ is immaterial, as the group structure implies that the derivative vanishes for every value of $a$). In particular $\{f,H\}=0$ means that the Hamiltonian function is invariant under the action generated by $f$. This fact is remarkable because it gives rise to the Hamiltonian version of Noether theorem . As a matter of fact, since $\{H,f\}=- \{f,H\}=0$, invariance of $H$ under the action of $f$ is equivalent to the invariance of $f$ under the action of $H$ (i.e. under time evolution ). In other words, $H$ is invariant under the action of the one-parameter group of canonical transformations generated by $f : F \to \mathbb R$ if and only if $f$ is constant along the motion of the physical system. Finally, let $X_h$ be the vector field over $F$ tangent to the orbits of the curves $\mathbb R \ni a \mapsto \phi_a^{(h)}(x)$ for every $x\in F$ (this vector field is fully defined in the note below).
Since $$[X_f,X_g]=X_{\{f,g\}} \tag{1'}\:,$$ $\{H,f\}=0$ implies that, if $t \mapsto x(t)$ solves Hamilton equations, $t \mapsto \phi^{(f)}_a(x(t))$ does for every value of $a$. In other words, $\{H,f\} =0$ also implies that the group of canonical transformations generated by $f$ transforms motions of the physical system to motions of the system as well. (a) $\{f,g\}=0$ implies, via (1') and using $X_0=0$, that (b) the action of the group of transformations on the states of the system (points in $F$) and on observables (real valued functions on $F$) generated by $f$ and the one generated by $g$ commute. Since $X_h=X_l$ if and only if $h=l + const.$, the two statements (a) and (b) are not completely equivalent. This non-equivalence turns out to be fundamental in quantization procedures since it permits to deal with CCR and central extensions of groups. NOTE regarding used definitions [1] if $\omega$ is the symplectic form on $F$,
the Hamiltonian field associated to $f\in C^\infty(F,\mathbb R)$ is defined as the unique vector field, $X_f$, such that
$$\omega_x(X_f,u)= \langle df_x, u\rangle \tag{2}$$
for every vector $u \in T_xF$. $X_f$ is uniquely defined this way since $\omega$ is non-degenerate by definition. [2] The one-parameter group of canonical diffeomorphisms $\phi^{(f)}$ generated by $f$ is properly defined as follows.
$$\mathbb R \ni a \mapsto \phi_a^{(f)}(x) =: y_x(a)\in F \tag{3}\:,\quad \forall x \in F$$
where $y_x$ is the unique (maximal) solution of the Cauchy problem $$\frac{dy}{da} = X_f(y(a))\:, \quad y(0) =x \tag{4}$$
(I am assuming that the solution is complete, as happens if $f$ is compactly supported or $F$ itself is compact; otherwise some subtleties regarding domains are to be fixed and $\phi_a^{(f)}(x)$ is only locally defined in the variable $a$.) [3] The Poisson bracket is defined as
$$\{f,g\}:= \omega(X_f,X_g) \quad f,g \in C^\infty(F,\mathbb R)\:.\tag{5}$$ With these definitions, (3) and (4) imply, as asserted in the main text, that $X_f$ is tangent to the curves
$\mathbb R \ni a \mapsto \phi_a^{(f)}(x)$. Next (4) and (5) easily produce (1).
An explicit expression of the action of $\phi^{(f)}$ on a function $g : F \to \mathbb R$,
$$\left(\phi^{(f)*}_{a}[g]\right)(x):= g(\phi^{(f)}_{a}(x))$$
is provided by the formula
$$\phi^{(f)*}_{a}[g] = \sum_{n=0}^{+\infty} \frac{a^n}{n!}\{f,\:\:\}^n g\:.$$
This identity holds if $f,g$ are real analytic and not only smooth. It is finally worth stressing that the equations in (4) are nothing but the standard Hamilton equations when $f$ is the Hamiltonian $H$.
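A small SymPy check of Eq. (1) for one explicit example (added illustration; the coordinate expression of the bracket below has its signs chosen so that Eq. (1) holds, and the harmonic-oscillator flow is a textbook case, not part of the original answer):

```python
import sympy as sp

q, p, a = sp.symbols('q p a', real=True)

def bracket(f, g):
    # Poisson bracket with signs matching Eq. (1): {f,g} = d/da g(phi_a^(f))|_{a=0}
    return sp.diff(f, p)*sp.diff(g, q) - sp.diff(f, q)*sp.diff(g, p)

f = (q**2 + p**2) / 2      # harmonic-oscillator Hamiltonian
g = q * p                  # some observable

# The flow of f (solving dq/da = df/dp, dp/da = -df/dq) is a phase-space rotation:
q_a = q*sp.cos(a) + p*sp.sin(a)
p_a = p*sp.cos(a) - q*sp.sin(a)

lhs = bracket(f, g)
rhs = sp.diff(g.subs({q: q_a, p: p_a}, simultaneous=True), a).subs(a, 0)
print(sp.simplify(lhs - rhs))   # 0, i.e. Eq. (1) holds for this example
```
| {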
"source": [
"https://physics.stackexchange.com/questions/207991",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/93263/"
]
} |
208,234 | How can we know that North Korea and Iran (to name a few) are exploding nuclear weapons if no inspectors have ever been granted access to suspected nuclear sites in these countries? How can we passively detect a secret detonation of a nuclear warhead? What are the telltale signs of a nuclear detonation? | There are many ways to detect a nuclear explosion, and there are people working to detect it. The Comprehensive Nuclear-Test-Ban Treaty Organization or CTBTO is one such organization. They are using a global network called the International Monitoring System. This is capable of detecting any nuclear detonations anywhere on the Earth (underwater, or in the atmosphere, or deep underground). First of all, please note that a nuclear explosion releases a lot of energy, and I mean a lot of energy. Ionizing radiation, shock-waves, and heat are all generated in a nuclear explosion. Let's talk first about how to detect shock waves from a nuclear detonation. These waves can travel through the air, water, and ground at the respective speed of sound in each medium. The CTBTO uses an Infrasound Monitoring System to detect shock waves traveling through the atmosphere. Infrasound is generated in exploding volcanoes, earthquakes, meteors, storms and auroras; nuclear, mining and large chemical explosions, as well as aircraft and rocket launches in the man-made arena. Nuclear-generated shock waves in the air are easily detected since a nuclear explosion is much, much more powerful than pretty much anything else in the natural world. Underwater nuclear explosions can be detected using a Hydroacoustic Monitoring System. These are essentially microphones at the bottom of the ocean. These microphones also pick up sounds from underwater earthquakes, underwater volcanic eruptions, submarines, whales, and, of course, nuclear detonations. As with an above-ground explosion, nothing beats nuclear-generated shock waves in terms of intensity. Therefore, an underwater nuclear explosion is also quite easy to detect. Underground nuclear detonations can be detected using seismometers. Seismometers are used mainly to detect earthquakes. Different from the Infrasound Monitoring System, which detects sounds in the air, seismometers detect sounds in the ground. Scientists have documented the various characteristics of shock waves created by everything from earthquakes to volcanic activity to even plane crashes. Thus, if powerful unrecognized waves are detected, they can link their observations to a nuclear detonation. Shock waves can be detected by multiple sensors as described above. Scientists can use this result to triangulate and correctly pinpoint the site of a nuclear explosion. This is how we know that North Korea secretly detonated nuclear weapons in 2006, 2009, and 2013. Lastly, to be sure whether an explosion is nuclear, scientists need to confirm the presence of a radioactive dust cloud using Radionuclide Detection. However, in the case of a perfectly contained nuclear detonation, radioactive dust cannot be detected. Therefore, teams of inspectors have to be sent to the suspected site. | {
"source": [
"https://physics.stackexchange.com/questions/208234",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
208,315 | I have encountered $\gg$ in many physics text books where it's used as a relation between constants or functions but in none of the text books I have read is it properly defined anywhere. If $A \gg B$ where $A$ and $B$ are constants, or $f(x) \gg g(x)$ does this simply mean that $A \geq 10B$ and $f(x) \geq 10g(x)$ for all $x$? Note: I'm asking this here because mathematicians don't use the much greater than relation. At least not that I know of. | There is a consistent definition, but it involves a couple of arbitrary thresholds, so I doubt you'd consider it rigorous. The construction $X \gg Y$ means that the ratio $\frac{Y}{X}$ is small enough that subleading terms in the series expansion for $f\bigl(\frac{Y}{X}\bigr) - f(0)$ can be neglected, where $f$ is some relevant function involved in the calculation. This depends on what you mean by "can be neglected", of course; typically that will be determined by the uncertainties of your data or your theoretical parameters. Technically it depends on $f$ , too, but the functions we use in physics normally have power series with coefficients that are close to 1, or within a couple orders of magnitude at least. As long as the coefficients don't grow exponentially (by which I mean $f_n \approx f_0 k^n$ for some $k$ ), they don't have much of an effect on which terms in the series are negligible. As an example, consider a typical condition in relativity, $v \ll c$ . (Or $c \gg v$ if you prefer.) The relevant function is, of course, the gamma factor: $$\gamma\biggl(\frac{v}{c}\biggr) = \frac{1}{\sqrt{1 - v^2/c^2}} = 1 + \frac{1}{2}\biggl(\frac{v}{c}\biggr)^2 + \frac{3}{8}\biggl(\frac{v}{c}\biggr)^4 + \cdots$$ so we should examine the subleading term of $\gamma\bigl(\frac{v}{c}\bigr) - \gamma(0)$ , which is $\frac{3}{8}\bigl(\frac{v}{c}\bigr)^4$ , to see if it's negligible relative to the leading term, which is $\frac{1}{2}\bigl(\frac{v}{c}\bigr)^2$ . This is equivalent to checking how the ratio of the two terms, $\frac{3}{4}\bigl(\frac{v}{c}\bigr)^2$ , compares to $1$ . I've included the ratio of all subleading terms to the leading term, as well as the ratio of only the first subleading term to the first one, but for slow speeds they're nearly the same anyway. And for high speeds you clearly have no business claiming $v \ll c$ in the first place. Here is where the most obvious ambiguity comes in: where do you draw the line? Interestingly enough there can be an actual line: you draw a horizontal line across the plot at whatever level you're comfortable considering "negligible", and where that crosses the curve, that tells you what value of $v$ sufficiently satisfies $v \ll c$ . The value depends on your requirements, of course, but you can see that for $v \lesssim 0.1c$ , the curves are nearly indistinguishable from
the bottom of the plot. So it's very common to consider $v \ll c$ satisfied when $v \lesssim 0.1c$.
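The size of the neglected term, as a quick added sketch (the three sample speeds are arbitrary; the threshold you accept is, as the answer stresses, up to you):

```python
def subleading_ratio(beta):
    """Ratio of the first subleading to the leading term in the
    gamma-factor expansion: (3/8 b^4) / (1/2 b^2) = (3/4) b^2."""
    return 0.75 * beta**2

for beta in (0.01, 0.1, 0.5):
    print(beta, subleading_ratio(beta))
# 0.01 -> 7.5e-05, 0.1 -> 0.0075, 0.5 -> ~0.19 (no longer negligible)
```
| {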
"source": [
"https://physics.stackexchange.com/questions/208315",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
208,321 | What is the explanation for the apparent size difference of North America in these two photos from NASA? Image source Image source | This is a perspective effect. In essence, the second image is taken from a lower orbit which is closer to Earth, and the Earth only looks spherical because of the use of a fisheye lens that strongly distorts the edges of the image. This means that the field of view is a lot smaller. The Earth still looks like a circle on the page, though from close up the edges can look a bit distorted. In the second image there is no land to be distorted in the edges, and there are effects from the camera lens which can look weird to the human eye (to make the apparent sizes match you're comparing a very wide-angle lens with a much narrower one). However, this effect is not photoshop magic. (That said, the first image is, in fact, a very carefully reconstructed mosaic that is made from images taken at much lower altitudes, in a painstaking process that is explained in detail in this Earth Observatory post. It's important to emphasize that, from whatever altitude Simmon simulated, this is indeed the continental layout that you would observe with your naked eye. The original posting of this image clearly identifies it as a mosaic: NASA is always very careful to precisely label every image it publishes in a correct fashion.) I can't find, unfortunately, the altitude that Simmon used to simulate the first image. Any brave takers care to dig through the documentation and source files to see if it's there? The second image, referenced here, was taken by Suomi NPP from an altitude of ~830 km, from where the perspective looks roughly like this, where it is obvious that the wide field of view is only possible because of the fisheye lens, with its associated distortions.
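A small geometric check of the field-of-view claim (added sketch; the 1.5 million km far vantage point is an arbitrary choice for contrast, since the first image's simulated altitude is unknown):

```python
import math

R = 6371.0   # km, mean Earth radius

def full_disc_angle(h):
    """Full angular diameter (degrees) of Earth's disc seen from altitude h (km)."""
    return 2 * math.degrees(math.asin(R / (R + h)))

print(full_disc_angle(830))      # ~124 deg: the whole disc needs a fisheye lens
print(full_disc_angle(1.5e6))    # ~0.5 deg: fits easily in a narrow field of view
```
| {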
"source": [
"https://physics.stackexchange.com/questions/208321",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/88914/"
]
} |
208,344 | Relatively recent measurements indicate that the Sun is nearly the roundest object ever measured. If scaled to the size of a beach ball, it would be so round that the difference between the widest and narrowest diameters would be much less than the width of a human hair. I do appreciate that the above result is just one measurement, and I looked for confirmation of the result. However, Wikipedia accepts its validity: By this measure, the Sun is a near-perfect sphere with an oblateness estimated at about 9 millionths, which means that its polar diameter differs from its equatorial diameter by only 10 kilometres (6.2 mi). Two questions on this subject: To me at least, it is a completely counter-intuitive result. Can anyone explain what causes this symmetry to emerge? Is it the combination of a slow rotation rate with a highly isotropic central gravitational field? I thought there would be an equatorial bulge, even though the rotation rate is slow. Does this result, for just one ordinary, as far as I know, star indicate that asymmetrical stellar collapses are much less likely than may have been previously envisaged? Admittedly, it is just one star out of countless billions, but on the other hand, as it is a random sample, it may well be indicative of many more similar "extremely" (if I can use that word) spherical objects. | The symmetry of the Sun has got very little to do with any symmetry in its formation. The Sun has had plenty of time to reach an equilibrium between its self-gravity and its internal pressure gradient. Any departure from symmetry would imply a difference in pressure in regions at a similar radius but different polar or azimuthal angles. The resultant pressure gradient would trigger fluid flows that would erase the asymmetry. Possible sources of asymmetry in stars could include rapid rotation or the presence of a binary companion, both of which break the symmetry of the effective gravitational potential, even if the star were spherically symmetric. The Sun has neither of these (the centrifugal acceleration at the equator is only about 20 millionths of the surface gravity, and Jupiter is too small and far away to have an effect) and simply relaxes to an almost spherically symmetric configuration. The relationship between oblateness/ellipticity and rotation rate is treated in some detail here for a uniform-density, self-gravitating spheroid and the following analytic approximation is obtained for the ratio of equatorial to polar radius
$$ \frac{r_e}{r_p} = \frac{1 + \epsilon/3}{1-2\epsilon/3}, $$
where $\epsilon$, the ellipticity, is related to rotation and mass as
$$\epsilon = \frac{5}{4}\frac{\Omega^2 a^3}{GM}$$
and $a$ is the mean radius, $\Omega$ the angular velocity. Putting in numbers for the Sun (using the equatorial rotation period), I get $\epsilon=2.8\times10^{-5}$ and hence $r_e/r_p =1.000028$ or $r_e-r_p = \epsilon a = 19.5$ km. Thus this simple calculation gives the observed value to within a small factor, but is obviously only an approximation because (a) the Sun does not have a uniform density and (b) it rotates differentially with latitude in its outer envelope. A final thought. The oblateness of a single star like the Sun depends on its rotation. You might ask, how typical is the (small) rotation rate of the Sun that leads to a very small oblateness? More rapidly rotating sun-like (and especially more massive) stars do exist; very young stars can rotate up to about 100 times faster than the Sun, leading to significant oblateness. However, Sun-like stars spin down through a magnetised wind as they get older. The spin-down rate depends on the rotation rate and this means that single stars (or at least stars that are not in close, tidally locked binary systems) converge to a close-to-unique rotation-age relationship at ages beyond a billion years. Thus we expect (it remains to be proven, since stellar ages are hard to estimate) that all Sun-like stars with a similar age to the Sun should have similar rotation rates and similarly small oblateness.
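Plugging the numbers in explicitly (added sketch; the 24.47-day equatorial rotation period and standard solar constants are the values consistent with the figures quoted above):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M = 1.989e30         # kg, solar mass
a = 6.957e8          # m, mean solar radius
P = 24.47 * 86400.0  # s, equatorial rotation period

Omega = 2 * math.pi / P
eps = 1.25 * Omega**2 * a**3 / (G * M)

print(eps, eps * a / 1e3)   # ~2.8e-5 and ~19.5 km, as quoted
```
| {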
"source": [
"https://physics.stackexchange.com/questions/208344",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |