source_id (int64, 1-4.64M) | question (string, 0-28.4k chars) | response (string, 0-28.8k chars) | metadata (dict)
---|---|---|---|
90,592 | I've seen many science popularisation documentaries and read a few books (obviously not being a scientist myself). I am able to process and understand the basic ideas behind most of these. However, for general relativity there is this one illustration, which is used over and over (image from Wikipedia): I always thought that general relativity gives another way to describe gravity. However, for this illustration to work, there needs to be another force pulling the object down (referring to a direction in the attached image). If I put two non-moving objects in the image, what force will pull them together? So where is my understanding incorrect? Or is general relativity not about explaining gravity, and does it just describe how heavy objects bend spacetime (in which case the analogy is not being used correctly, in my opinion)? UPDATE Thank you for the answers and comments. Namely, the XKCD comic is spot on. I understand that the analogy with the bent sheet of fabric is pretty bad, but it seems that it can be fixed if you don't bend the fabric, but just distort the drawn grid. Would you be so kind as to answer the second part of the question as well - whether general relativity explains the gravitational force. To me it seems that it does not (bending of spacetime simply cannot affect two non-moving objects). However, most of the time it is presented as if it does. | You're quite correct that the metaphor is misleading, and indeed you'll find professional relativists tend to be rather scornful of it. There are a number of problems with it, of which the problem you mention is just one. For example, the diagram implies only space is bent, while the bending is of spacetime, so time is bent as well. The diagram also implies there is a third dimension out of the plane in which the bending occurs.
Applied to our 4D spacetime this would mean there would have to be a fifth dimension for spacetime to bend in, but this isn't the case: the type of bending that occurs is called intrinsic curvature and needs no extra dimensions. The problem is that GR is really, really unintuitive. If you want to know more than the hints suggested by the rubber sheet metaphor, the only course is to roll up your sleeves and start learning the maths. It would be nice if there were some intermediate course between the misleading but simple rubber sheet metaphor and the maths, but I don't know of anything. I think the problem is that you won't get anywhere without first understanding coordinate invariance, and this is a really tough idea to understand. If you really want to learn more I'd start with special relativity, as this contains the seeds of the ideas you'll need. Response to comment: In your edit you say bending of spacetime simply can not affect two non-moving objects. I'm guessing that you're thinking about objects rolling around on a curved surface as shown in the common metaphors for GR. The question is then why objects that aren't rolling around should experience a force. The reason for this is that an apparently stationary object is moving, because it's moving in time. For the usual 3D velocities we see around us we describe velocity as a 3-vector $\vec{v} = (v_x, v_y, v_z)$. But remember that spacetime is four dimensional, and the velocity for objects in relativity is a 4-vector called the 4-velocity that includes the change in the time coordinate. The reason a stationary object experiences a force is that the time coordinate is curved just like the space coordinates. This brings me back to one of my criticisms of the rubber sheet analogy, i.e. that it cannot show that the time coordinate is curved just like the spatial coordinates. At the risk of getting repetitive, it's hard to explain why curvature in time causes the force without getting into the maths.
The simplest explanation I've seen is in twistor59's answer to What is the weight equation through general relativity? This shows, with the bare minimum of algebra, why a stationary object in a gravitational field experiences a force. | {
"source": [
"https://physics.stackexchange.com/questions/90592",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/35998/"
]
} |
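The answer above says a "stationary" object feels gravity because its time coordinate is curved. The standard result behind twistor59's weight equation is the proper acceleration a static observer needs to hover at radius $r$ outside a mass $M$ in Schwarzschild geometry, $a = GM/\bigl(r^2\sqrt{1-2GM/(rc^2)}\bigr)$, which reduces to Newton's $GM/r^2$ when the relativistic correction is tiny. A minimal numerical sketch (Earth values chosen for illustration; not part of the original thread):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s

def hover_acceleration(M, r):
    """Proper acceleration of a static (hovering) observer at radius r
    outside mass M, from the Schwarzschild metric."""
    rs = 2 * G * M / c**2              # Schwarzschild radius
    return G * M / (r**2 * math.sqrt(1 - rs / r))

M_earth, r_earth = 5.972e24, 6.371e6   # kg, m
g_newton = G * M_earth / r_earth**2    # Newtonian surface gravity
g_gr = hover_acceleration(M_earth, r_earth)

# For Earth the GR correction is only ~1 part in 10^9
print(g_newton, g_gr)
```

For Earth the two numbers agree to nine digits, which is why Newtonian gravity works so well in everyday life; near a compact object the square-root factor makes the hovering force blow up.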
90,597 | Assume the fluid to be air and the container to be a car, will there be a pressure gradient which will result the fluid to stack at one end of the car or there is something else that I'm missing? | | {
"source": [
"https://physics.stackexchange.com/questions/90597",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36000/"
]
} |
90,646 | At the end of this nice video ( https://youtu.be/XiHVe8U5PhU?t=10m27s ), she says that an electromagnetic wave is a chain reaction of electric and magnetic fields creating each other, so the wave moves forward. I wonder where the photon is in this explanation. What is the relation between an electromagnetic wave and a photon? | Both the wave theory of light and the particle theory of light are approximations to a deeper theory called Quantum Electrodynamics (QED for short). Light is neither a wave nor a particle; instead it is an excitation in a quantum field. QED is a complicated theory, so while it is possible to do calculations directly in QED we often find it simpler to use an approximation. The wave theory of light is often a good approximation when we are looking at how light propagates, and the particle theory of light is often a good approximation when we are looking at how light interacts, i.e. exchanges energy with something else. So it isn't really possible to answer the question where the photon is in this explanation. In general if you're looking at a system, like the one in the video, where the wave theory is a good description of light you'll find the photon theory to be a poor description of light, and vice versa. The two ways of looking at light are complementary. For example if you look at the experiment described in Anna's answer (which is one of the seminal experiments in understanding diffraction!) the wave theory gives us a good description of how the light travels through the Young's slits and creates the interference pattern, but it cannot describe how the light interacts with the photomultiplier used to record the image. By contrast the photon theory gives us a good explanation of how the light interacts with the photomultiplier but cannot describe how it travelled through the slits and formed the diffraction pattern. | {
"source": [
"https://physics.stackexchange.com/questions/90646",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/27310/"
]
} |
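The complementarity described above can be made concrete with numbers: the wave picture gives light a frequency $\nu = c/\lambda$, while the photon picture says energy is exchanged in quanta $E = h\nu$. A quick sketch estimating the photon rate of a small laser (the 1 mW green laser is an illustrative choice, not from the original thread):

```python
h = 6.626e-34   # Planck constant, J s
c = 2.998e8     # speed of light, m/s

wavelength = 532e-9            # green laser pointer, m
nu = c / wavelength            # wave picture: frequency of the light
E_photon = h * nu              # particle picture: energy per quantum

power = 1e-3                   # a 1 mW beam
photons_per_second = power / E_photon
print(E_photon, photons_per_second)
```

At ~10^15 photons per second the beam's granularity is utterly invisible, which is why the smooth wave description works so well for propagation; only at the detector, where energy is exchanged one quantum at a time, does the photon picture take over.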
90,979 | We all know that the gravitational force between two small (not heavenly) bodies is negligible. We give the reason that their mass is VERY small. But according to the inverse square law, as $r\to 0$, $F\to \infty$. But in real life we observe that even if we bring two objects very close, no such force is seen. Why is this so? | The inverse-square law holds for spherically symmetric objects, but in that case the main problem is that $r$ is the distance between their centers. So "very close" spheres are still quite a bit apart--$r$ would be at least the sum of their radii. For two spheres of equal density and size just touching each other, the magnitude of the gravitational force between them is
$$F = G\frac{M^2}{(2r)^2} = \frac{4}{9}G\pi^2\rho^2r^4\text{,}$$
which definitely does not go to infinity as $r\to 0$ unless the density $\rho$ is increased, but ordinary matter has densities of only up to $\rho \sim 20\,\mathrm{g/cm^3}$ or so. Tests of Newton's law for small spheres began with the Cavendish experiment , and this paper has a collection of references to more modern $1/r^2$ tests. | {
"source": [
"https://physics.stackexchange.com/questions/90979",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36025/"
]
} |
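The closed form above, $F = \frac{4}{9}G\pi^2\rho^2 r^4$, can be checked against the direct $GM^2/(2r)^2$ computation; a sketch with two 10 cm lead spheres as an illustrative example (values not from the original thread):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def force_touching_spheres(rho, r):
    """Gravitational attraction between two equal spheres of density rho
    and radius r whose surfaces touch, so their centres are 2r apart."""
    M = (4/3) * math.pi * rho * r**3
    return G * M**2 / (2 * r)**2

def force_closed_form(rho, r):
    """Same force via the closed form (4/9) G pi^2 rho^2 r^4."""
    return (4/9) * G * math.pi**2 * rho**2 * r**4

rho_lead = 11340.0                          # kg/m^3
F = force_touching_spheres(rho_lead, 0.1)   # two touching 10 cm lead spheres
print(F)
```

The result is a few micro-Newtons, and since $F \propto r^4$ it vanishes rather than diverges as the spheres shrink, exactly as the answer states.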
91,243 | I have a bottle of vodka that has a load of gold flakes suspended in it. It has sat still for over 24 hours and the flakes are all still suspended within the liquid: they have not risen to the surface or sunk to the bottom. Any ideas as to the physics behind this? | The viscosity of water-ethanol mixtures isn't especially high, though the wetting properties of vodka may make it seem oily. Actually water-ethanol mixtures are highly non-ideal: both water and ethanol have a viscosity of about 1 mPa.s at room temperature, but a mixture can achieve a viscosity of over 3 mPa.s. See this paper or Google for many such tables. The real reason gold leaf will stay suspended for so long is that it is extraordinarily light. The thickness of gold leaf is around 100 nm, and since the density of gold is 19300 kg/m$^3$, a flake of gold leaf 1 mm by 1 mm weighs just 2 $\mu$g, so the downward force due to gravity is about 20 nano-Newtons. At such low forces the liquid is viscous enough to keep the gold suspended for long periods. | {
"source": [
"https://physics.stackexchange.com/questions/91243",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/34427/"
]
} |
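The mass and weight estimate in the answer above takes one line each to reproduce; a minimal sketch using the stated flake dimensions:

```python
rho_gold = 19300.0        # density of gold, kg/m^3
thickness = 100e-9        # gold leaf thickness, ~100 nm
side = 1e-3               # flake is 1 mm x 1 mm
g = 9.81                  # gravitational acceleration, m/s^2

volume = side * side * thickness   # m^3
mass = rho_gold * volume           # kg: ~2 micrograms
weight = mass * g                  # N:  ~20 nano-Newtons
print(mass * 1e9, weight * 1e9)    # micrograms, nano-Newtons
```

Two micrograms spread over a square millimetre is an enormous area-to-weight ratio, which is why even a modest-viscosity liquid can hold the flakes up for so long.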
91,501 | I understand that energy and mass can change back and forth according to Einstein. It is fluid; it can go from one to the other. So, what keeps mass from just turning into energy? Is there some force holding a subatomic particle together? What keeps mass in its state? I hope this is not a silly question but I am clueless. Thanks | This is inevitably going to be an unsatisfactory answer because your question is vastly more complicated than you (probably) realise. I'll attempt an answer in general terms, but you have to appreciate this is a pale shadow of the physics that describes this area. Anyhow, Einstein was the first to spot that energy and mass were equivalent, and you've no doubt heard of his famous equation $E = mc^2$. These days we write this as: $$ E^2 = p^2c^2 + m^2c^4 $$ where $p$ is the momentum and $m$ is the rest mass. However relativity does not explain how matter and energy can be interchanged. That had to wait several decades for the development of quantum field theory (QFT for short). If you have never encountered QFT it will strike you as a very odd way of looking at the world. We are used to thinking of particles like electrons as objects, much like macroscopic objects except smaller and fuzzier. However in QFT there is an electron field that pervades the whole universe, and what we think of as an electron is an excitation in this field. Similarly there is a photon field, and photons are excitations in the photon field. In fact all elementary particles are excitations in their corresponding quantum field. QFT explains matter-energy conversion because you can, for example, add energy to the electron field to excite it and thereby create an electron. Alternatively an excitation in the electron field, i.e. an electron, can disappear by transferring energy to something else.
So, for example, in the Large Hadron Collider two quarks meet with huge kinetic energies and they can transfer some of this energy into excitations of various quantum fields to produce a shower of particles. But this can't happen in any way you please. QFT gives us the equations to describe how the kinetic energy of particles can excite quantum fields and thereby create matter. This is why, to return to your question, mass can't just keep turning into energy. Quantum field excitations only occur in specific ways described by quantum field theory. And that I think is about all that can be said at this level. | {
"source": [
"https://physics.stackexchange.com/questions/91501",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36385/"
]
} |
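The energy-momentum relation quoted in the answer above, $E^2 = p^2c^2 + m^2c^4$, is easy to evaluate; a sketch for an electron (values illustrative, not from the original thread):

```python
import math

c = 2.998e8          # speed of light, m/s
m_e = 9.109e-31      # electron rest mass, kg

def total_energy(p, m):
    """Relativistic total energy from E^2 = p^2 c^2 + m^2 c^4."""
    return math.sqrt((p * c)**2 + (m * c**2)**2)

rest = total_energy(0.0, m_e)      # p = 0 recovers E = m c^2 (~511 keV)
E = total_energy(m_e * c, m_e)     # p = m c gives E = sqrt(2) m c^2
print(rest, E / rest)
```

The $p=0$ case recovers the familiar $E = mc^2$; as momentum grows the kinetic term dominates, which is how collider kinetic energy becomes available to excite other quantum fields and create particles.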
91,685 | It is known fact, that boiling point of water decreases by decreasing of pressure. So there is a pressure at which water boils at room temperature.
Would it be possible to cook, e.g., pasta at room temperature in a vacuum chamber with low enough pressure? Or is the "magic" of cooking pasta not in the boiling, so that we would be able to cook pasta at 100°C without boiling the water (at high pressure)? | No. Boiling itself doesn't mean that the water will cook anything. If you have boiling water at 30°C you could touch it (if we forget that it's at really low pressure) and nothing would happen. Boiling is not what cooks; temperature is. In fact, if you want to purify water at high altitudes, you need to boil it for a longer time because it will be at a lower temperature. | {
"source": [
"https://physics.stackexchange.com/questions/91685",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36458/"
]
} |
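The premise of the question above can be quantified with a vapour-pressure curve. A sketch using the Antoine equation for water (the standard constants for roughly 1-100 °C, with pressure in mmHg converted to kPa; an approximation, not from the original thread) to find the pressure at which water boils at room temperature:

```python
def water_vapour_pressure_kpa(T_celsius):
    """Saturation (boiling) pressure of water via the Antoine equation.
    Valid roughly 1-100 C; constants give mmHg, converted to kPa."""
    log10_p_mmhg = 8.07131 - 1730.63 / (233.426 + T_celsius)
    return 10**log10_p_mmhg * 0.133322

p_boil_25 = water_vapour_pressure_kpa(25.0)    # water boils at 25 C here
p_boil_100 = water_vapour_pressure_kpa(100.0)  # sanity check: ~1 atm
print(p_boil_25, p_boil_100)
```

So at about 3 kPa (3% of atmospheric pressure) water boils at 25 °C, but, as the answer explains, it is still only 25 °C water: boiling without temperature cooks nothing.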
91,895 | Why is a proton assumed to be always at the center while applying the Schrödinger equation ? Isn't it a quantum particle? | There is a rigorous formal analysis which lets you do this. The true problem, of course allows both the proton and the electron to move. The corresponding Schrödinger equation thus has the coordinates of both as variables. To simplify things, one usually transforms those variables to the relative separation and the centre-of-mass position. It turns out that the problem then separates (for a central force) into a "stationary proton" equation and a free particle equation for the COM. There is a small price to pay for this: the mass for the centre of mass motion is the total mass - as you'd expect - but the radial equation has a mass given by the reduced mass $$\mu=\frac {Mm}{M+m}=\frac{m}{1+m/M} ,$$
which is close to the electron mass $m$ since the proton mass $M$ is much greater. It's important to note that an exactly analogous separation holds for the classical treatment of the Kepler problem . Regarding self-interactions, these are very hard to deal with without invoking the full machinery of quantum electrodynamics. Fortunately, in the low-energy limits where hydrogen atoms can form, it turns out you can completely neglect them. | {
"source": [
"https://physics.stackexchange.com/questions/91895",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/540/"
]
} |
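The reduced-mass formula in the answer above is a one-liner to evaluate, and it shows how good the "stationary proton" picture is for hydrogen:

```python
m_e = 9.10938e-31    # electron mass, kg
m_p = 1.67262e-27    # proton mass, kg

# Reduced mass mu = M m / (M + m) that enters the radial equation
mu = m_e * m_p / (m_e + m_p)
ratio = mu / m_e     # how close mu is to the bare electron mass
print(ratio)
```

The ratio is about 0.9995: treating the proton as fixed shifts the effective mass, and hence the energy levels, by only ~0.05%, a correction that is nevertheless measurable in spectroscopy (it distinguishes hydrogen from deuterium lines).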
92,051 | In inelastic collisions, kinetic energy changes, so the velocities of the objects also change. So how is momentum conserved in inelastic collisions? | I think all of the existing answers miss the real difference between energy and momentum in an inelastic collision. We know energy is always conserved and momentum is always conserved so how is it that there can be a difference in an inelastic collision? It comes down to the fact that momentum is a vector and energy is a scalar. Imagine for a moment there is a "low energy" ball traveling to the right. The individual molecules in that ball all have some energy and momentum associated with them: The momentum of this ball is the sum of the momentum vectors of each molecule in the ball. The net sum is a momentum pointing to the right. You can see the molecules in the ball are all relatively low energy because they have a short tail. Now after a "simplified single ball" inelastic collision here is the same ball: As you can see, each molecule now has a different momentum and energy but the sum of all of their momenta is still the same value to the right. Even if the individual momentum of every molecule in the ball is increased in the collision, the net sum of all of their momentum vectors doesn't have to increase. Because energy isn't a vector, increasing the kinetic energy of molecules increases the total energy of the system. This is why you can convert kinetic energy of the whole ball to other forms of energy (like heat) but you can't convert the net momentum of the ball to anything else. | {
"source": [
"https://physics.stackexchange.com/questions/92051",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36604/"
]
} |
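The point of the answer above, that momentum survives an inelastic collision while bulk kinetic energy does not, can be shown in a few lines for a perfectly inelastic 1-D collision (the masses and speeds are illustrative):

```python
# Perfectly inelastic 1-D collision: the two bodies stick together.
m1, v1 = 2.0, 3.0     # kg, m/s
m2, v2 = 1.0, 0.0     # the second body starts at rest

p_before = m1 * v1 + m2 * v2
ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2

v_final = p_before / (m1 + m2)        # momentum conservation fixes v_final
p_after = (m1 + m2) * v_final
ke_after = 0.5 * (m1 + m2) * v_final**2

print(p_before, p_after, ke_before, ke_after)
```

Momentum is identical before and after (6 kg·m/s), but a third of the kinetic energy (9 J down to 6 J) has gone into internal, randomly directed molecular motion, i.e. heat: the vector sum is preserved while the scalar bulk energy is not.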
92,244 | Could you give me an idea of what bound states mean and what is their importance in quantum-mechanics problems with a potential (e.g. a potential described by a delta function)? Why, when a stable bound state exists, the energies of the related stationary wavefunctions are negative? I figured it out, mathematically (for instance in the case of a potential described by a Delta function), but what is the physical meaning? | If you have a copy of Griffiths, he has a nice discussion of this in the delta function potential section. In summary, if the energy is less than the potential at $-\infty$ and $+\infty$, then it is a bound state, and the spectrum will be discrete:
$$
\Psi\left(x,t\right) = \sum_n c_n \Psi_n\left(x,t\right).
$$
Otherwise (if the energy is greater than the potential at $-\infty$ or $+\infty$), it is a scattering state, and the spectrum will be continuous:
$$
\Psi\left(x,t\right) = \int dk \ c\left(k\right) \Psi_k\left(x,t\right).
$$
For a potential like the infinite square well or harmonic oscillator, the potential goes to $+\infty$ at $\pm \infty$, so there are only bound states. For a free particle ($V=0$), the energy can never be less than the potential anywhere***, so there are only scattering states. For the hydrogen atom, $V\left(r\right) = - a / r$ with $a > 0$, so there are bound states for $E < 0$ and scattering states for $E>0$. Update *** @Alex asked a couple questions in the comments about why $E>0$ for a free particle, so I thought I'd expand on this point. If you rearrange the time independent Schrödinger equation as
$$
\psi''= \frac{2m}{\hbar^2} \left(V-E\right) \psi
$$
you see that $\psi''$ and $\psi$ would have the same sign for all $x$ if $E < V_{min}$, and $\psi$ would not be normalizable (can't go to $0$ at $\pm\infty$). But why do we discount the $E<V_{min}=0$ solutions for this reason, yet keep the $E>0$ solutions, $\psi = e^{ikx}$, when they too aren't normalizable? The answer is to consider the normalization of the total wave function at $t=0$, using the fact that if a wave function is normalized at $t=0$, it will stay normalized for all time (see argument starting at equation 147 here ): $$
\left<\Psi | \Psi\right> = \int dx \ \Psi^*\left(x,0\right) \Psi\left(x,0\right) = \int dk' \int dk \ c^*\left(k'\right) c\left(k\right) \left[\int dx \ \psi^*_{k'}\left(x\right) \psi_k\left(x\right)\right]
$$ For $E>0$, $\psi_k\left(x\right) = e^{ikx}$ where $k^2 = 2 m E / \hbar^2$, and the $x$ integral in square brackets is $2\pi\delta\left(k-k'\right)$, so $$
\left<\Psi | \Psi\right> = 2\pi \int dk \ \left|c\left(k\right)\right|^2
$$
which can equal $1$ for a suitable choice of $c\left(k\right)$. For $E<0$, $\psi_k\left(x\right) = e^{kx}$ where $k^2 = - 2 m E / \hbar^2$, and the $x$ integral in square brackets diverges, so $\left<\Psi | \Psi\right>$ cannot equal $1$. | {
"source": [
"https://physics.stackexchange.com/questions/92244",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36121/"
]
} |
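For the finite square well mentioned alongside Griffiths's discussion, the discrete bound-state energies come from a transcendental condition; in Griffiths's dimensionless notation the even states satisfy $z\tan z = \sqrt{z_0^2 - z^2}$, where $z_0$ measures the well's depth and width. A sketch solving for the ground state by bisection (the value $z_0 = 8$ is an arbitrary illustrative choice):

```python
import math

z0 = 8.0   # dimensionless well strength (depends on m, V0, and well width)

def f(z):
    """Even-parity quantisation condition for the finite square well:
    z tan z - sqrt(z0^2 - z^2) = 0 in Griffiths's dimensionless variables."""
    return z * math.tan(z) - math.sqrt(z0**2 - z**2)

# The ground-state root always lies in (0, pi/2); bisect on that bracket.
lo, hi = 0.1, 1.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
z = 0.5 * (lo + hi)

# Bound-state energy relative to the top of the well: negative, as expected.
E_over_V0 = (z / z0)**2 - 1.0
print(z, E_over_V0)
```

The solver returns an isolated root, one member of the finite, discrete set of bound states with $E < 0$; above $E = 0$ the spectrum becomes the continuum of scattering states described in the answer.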
92,314 | My lecturer told me that $\mu$, the chemical potential, is zero or negative, and in the following example it mathematically acts as a normalization constant. But is there any physical insight into why the chemical potential of a boson gas can be zero or negative? I think it is due to the fact that photons can pop up from nowhere (i.e. vacuum fluctuations). $$
f_{BE}(\varepsilon)=\dfrac{1}{e^{(\varepsilon-\mu)/(k_B T)}-1}$$ | The chemical potential can be thought of as how accepting the system is of new particles -- how much work you have to do to stick a new particle in the system. Since you can stick as many bosons in a given state as you want, the system is always accepting of new particles. At worst, you have to do zero work to add a boson (corresponding to $\mu=0$), and often the system is happy to take in a new particle (corresponding to $\mu<0$). In contrast, you can only put one fermion in a given state. If you have a fermion with a certain energy, and you want to add it to a system where the state of that energy is already occupied, the system has to play musical chairs to make that happen. You may have to push that fermion in there, in which case the system will not be happy about it; you'd have to do some work ($\mu>0$). In the case of photons, the system will take any energy you give it, but it won't reward you for it; it just doesn't care. $\mu=0$. It would be weird if $\mu$ were negative, because that would make it suck in all the photons (energy) it could get its hands on. Edit in response to question in comment: Why is $\mu$ the energy needed to stick in another particle? Let's work with a Maxwell-Boltzmann distribution because it's simpler. (Truthfully, I'm not sure how to do it with Bose-Einstein or Fermi-Dirac, but I'm not going to lose sleep over it; you can have fun with that.) Say you have states of energy $\epsilon_i$, $N$ particles, and $E$ total energy. You then have two normalization conditions: $$N=\sum_in\left(\epsilon_i\right)=\sum_ie^{\alpha+\beta\epsilon_i}$$
$$E=\sum_i\epsilon_in\left(\epsilon_i\right)=\sum_i\epsilon_ie^{\alpha+\beta\epsilon_i}$$ (I like this notation better; here $\beta=-\left(k_BT\right)^{-1}$ and $\alpha=-\beta\mu$, so that $e^{\alpha+\beta\epsilon_i}=e^{-\left(\epsilon_i-\mu\right)/\left(k_BT\right)}$ is the Maxwell-Boltzmann occupancy.) We want to show that the chemical potential is the change in system energy when increasing the number of particles: $\frac{\partial E}{\partial N}=\mu$. Starting off: $$N=e^\alpha\sum_ie^{\beta\epsilon_i}$$
$$e^\alpha=\frac{N}{Z}$$ where $Z=\sum_ie^{\beta\epsilon_i}$. Note that $\frac{\partial Z}{\partial\beta}=\sum_i\epsilon_ie^{\beta\epsilon_i}$. Then $$E=\frac{N}{Z}\sum_i\epsilon_ie^{\beta\epsilon_i}$$
$$E=\frac{N}{Z}\frac{\partial Z}{\partial\beta}$$
$$E=N\frac{\partial}{\partial\beta}\ln{Z}$$
$$\frac{\partial E}{\partial N}=\frac{\partial}{\partial\beta}\ln{Z}$$ Putting it all together in terms of $\mu$: $$e^{-\beta\mu}=\frac{N}{Z}$$
$$-\beta\mu=\ln{N}-\ln{Z}$$
$$\ln{Z}=\ln{N}+\beta\mu$$
$$\frac{\partial}{\partial\beta}\ln{Z}=\mu \quad \text{(holding } N \text{ and } \mu \text{ fixed)}$$
$$\frac{\partial E}{\partial N}=\mu$$
QED | {
"source": [
"https://physics.stackexchange.com/questions/92314",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/23103/"
]
} |
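The constraint $\mu \le \varepsilon_{\min}$ implicit in the answer above falls straight out of the Bose-Einstein distribution: as $\mu$ approaches a level's energy from below, that level's occupancy diverges. A sketch in units where $k_BT = 1$ (the specific numbers are illustrative):

```python
import math

def f_BE(eps, mu, kT=1.0):
    """Bose-Einstein occupancy; only sensible for mu < eps."""
    return 1.0 / (math.exp((eps - mu) / kT) - 1.0)

# As mu rises toward the level at eps = 1, the occupancy explodes:
occ_far  = f_BE(1.0, mu=-2.0)   # mu well below the level: small occupancy
occ_near = f_BE(1.0, mu=0.999)  # mu just below the level: huge occupancy
print(occ_far, occ_near)
```

A very negative $\mu$ means the system barely populates the level (it "doesn't care" about extra particles), while $\mu \to \varepsilon$ pushes the occupancy toward infinity, which is exactly the onset of Bose-Einstein condensation and why $\mu$ can never exceed the lowest level.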
92,969 | I could understand that the definition of a second wouldn't have an uncertainty when related to the transition of the Cs atom, so it doesn't have an error because it's an absolute reference and we measure other stuff using the physical definition of a second, like atomic clocks do. But why doesn't the speed of light have uncertainty? Isn't the speed of light something that's measured physically? Check it out at NIST. | The second and the speed of light are precisely defined, and the metre is then specified as a function of $c$ and the second. So when you experimentally measure the speed of light you are effectively measuring the length of the metre, i.e. the experimental error is the error in the measurement of the metre, not the error in the speed of light or the second. It may seem odd to treat the metre as variable and the speed of light as a fixed quantity, but it's not as odd as you may think. The speed of light is not just some number, it's a fundamental property of the universe and is related to its geometry. By contrast the metre is just a length that happens to be convenient for humans. See What is so special about speed of light in vacuum? for more info.
"source": [
"https://physics.stackexchange.com/questions/92969",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/30124/"
]
} |
93,830 | Question: As we know, (1) the macroscopic spatial dimension of our universe is 3, and (2) gravity attracts massive objects together, and the gravitational force is isotropic, without directional preferences. Why do we have spiral, 2D plane-like galaxies instead of spherical or elliptic-like ones? Input:
Gravity is (at least, seems to be) isotropic from its force law (Newtonian gravity). It should show no directional preferences, from the form of the force vector $\vec{F}=\frac{G\,M(\vec{r}_1)\,m(\vec{r}_2)}{|\vec{r}_1-\vec{r}_2|^2} \hat{r}_{12}$. Einstein gravity also does not show directional dependence, at least microscopically. If gravity attracts massive objects together isotropically, and the macroscopic space dimension is 3-dimensional, it seems natural for massive objects to gather into a spherical shape. For example, globular clusters (GCs) are roughly spherical star clusters, as shown in the Wiki picture: However, my impression is that, even if we have observed some more spherical or ball-like elliptical galaxies, it is more common to find more planar spiral galaxies such as our Milky Way? (Is this statement correct? Let me know if I am wrong.) Also, have a look at this more planar spiral galaxy, NGC 4414: Is there some physics or maths theory that explains why a galaxy turns out to be planar-like (or spiral-like) instead of spherical-like? See also a somewhat related question on a smaller scale: Can I produce a true 3D orbit? p.s. Other than the classical stability of a 2D plane perpendicular to a classical angular momentum, is there an interpretation in terms of a quantum theory of vortices in a macroscopic manner (just my personal speculation)? Thank you for your comments/answers! | Short answer: A spiral galaxy is, in fact, spherical-like. To understand how, let us as a starting point look at Wikipedia's sketch of the structure of a spiral galaxy: A spiral galaxy consists of a disk embedded in a spheroidal halo. The galaxy rotates around an axis through the centre, parallel to the GNP$\leftrightarrow$GSP axis in the image. The spheroidal halo consists mostly of Dark Matter (DM), and the DM makes up $\sim90\%$ of the mass of the Milky Way. Dynamically, it is the DM that, ehrm, matters.
And DM will always arrange itself in an ellipsoid configuration. So the question should rather be: Why is there even a disk; why isn't the galaxy just an elliptical? The key to answering this lies in the gas content of a galaxy. Both stars and Dark Matter particles - whatever they are - are collisionless; they only interact with each other through gravity. Collisionless systems tend to form spheroid or ellipsoid systems, like we are used to from elliptical galaxies, globular clusters etc., all of which share the characteristic that they are very gas-poor. With gas it is different: gas molecules can collide, and do so all the time. These collisions can transfer energy and angular momentum. The energy can be turned into other kinds of energy, which can escape, through radiation, galactic winds etc., and as energy escapes, the gas cools and settles down into a lower energy configuration. The gas's angular momentum, however, is harder to transfer out of the galaxy, so this is more or less conserved. The result - a collisional system with low energy but a relatively high angular momentum - is the typical thin disk of a spiral galaxy. (Something similar, but not perfectly analogous, happens in the formation of protoplanetary disks.) Stars also do not collide, so they should in theory also make up an ellipsoid shape. And some do in fact: the halo stars, including but not limited to the globular clusters. These are all very old stars, formed when the gas of the galaxy hadn't settled into the disk yet (or, for a few, formed in the disk but later ejected due to gravitational disturbances). But the large majority of stars are formed in the gas after it has settled into the disk, and so the large majority of stars will be found in the same disk. Elliptical galaxies So why are there even elliptical galaxies? Elliptical galaxies are typically very gas-poor, so gas dynamics is not important in these; they are rather a classical gravitational many-body system like a DM halo.
The gas is depleted from these galaxies due to many different processes such as star formation, collisions with other galaxies (which are quite common), gas ejection due to radiational pressure from strongly star forming regions, supernovae or quasars, etc. etc. - many are the ways for a galaxy to lose its gas. If colliding galaxies are sufficiently gas-depleted (and the collision results in a merger), then the resulting galaxy will not have any gas which can settle into a disk, and kinetic energy of the stars in the new galaxy will tend to be distributed randomly due to the chaotic nature of the interaction. (This picture is simplified, as the whole business of galactic dynamics is quite hairy, but I hope it gets the fundamentals right and more or less understandable). | {
"source": [
"https://physics.stackexchange.com/questions/93830",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/12813/"
]
} |
94,001 | As you know, it is quite obvious that bicycle spokes attach the hub in the center to the rim. What else do they do? If you compare the wheels today with the ones from ancient times, there are more spokes now on motor bikes and bicycles than of a wheel of a chariot. Why is that? What effect does it have on the vehicle if there is a higher number of spokes on the wheels? | In comparing wheels of today to those in history, there are traditionally more spokes now. However, that's because wheels in the past (even large wagon wheels in not-so-ancient times) used relatively thick wooden spokes that behaved like a column and dealt with the load of the wheel with compression. However, modern spokes are very thin. Far too thin to actually support any compressive load without buckling. Modern metal spokes are very easy to bend. However, when wheels are built, the spokes are threaded into nipples and the nipples are tightened so that the spokes are in tension at all times. A rod under tension does not buckle, so the instability is gone from the spoke. How much tension is very important of course but this isn't an answer about wheel building (although if you want information on that, hit me up in chat or ask over on Bicycles.SE , I've built many, many bicycle wheels). The real benefit to using thin spokes is two-fold. First, they are considerably lighter weight than the giant column-type spokes used before. Second, they are also considerably more comfortable because they do flex some under loading, how much can be tuned by the number, material, lacing pattern, and tension of the spokes. So there's a much greater control over the characteristics of the wheel with modern spokes than the giant column-type spokes of yesteryear. In addition to carrying the load, bicycle and motorcycle wheels have to handle the transfer of power. On a wagon or chariot, the wheels just respond and have to roll. 
On a bicycle or motorcycle, the power from the rider or engine is transmitted to the hub, forcing the hub to rotate. The spokes then need to transfer that power to the rim to make the wheel spin. This shearing rotation is why rear wheels rarely have a radial spoke pattern and instead have spokes that are at various angles to the hub. | {
"source": [
"https://physics.stackexchange.com/questions/94001",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36966/"
]
} |
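The buckling claim in the answer above ("a rod under tension does not buckle; a thin spoke cannot carry compression") can be made quantitative with Euler's column formula $P_{cr} = \pi^2 E I / L^2$. This is my own illustration, not from the answer, and the spoke dimensions and moduli below are rough assumptions:

```python
import math

def euler_buckling_load(E, d, L):
    """Critical compressive load (N) at which a pin-ended round column
    of Young's modulus E (Pa), diameter d (m) and length L (m) buckles."""
    I = math.pi * d**4 / 64               # second moment of area, circular cross-section
    return math.pi**2 * E * I / L**2

# Rough illustrative numbers (assumptions, not from the answer):
steel_spoke = euler_buckling_load(E=200e9, d=2e-3, L=0.30)    # thin modern steel spoke
wooden_spoke = euler_buckling_load(E=10e9, d=30e-3, L=0.30)   # thick old wooden spoke

print(f"steel spoke buckles under ~{steel_spoke:.0f} N of compression")
print(f"wooden spoke buckles under ~{wooden_spoke:.0f} N of compression")
```

A limit of roughly 17 N for the thin spoke is far below the hundreds of newtons a loaded wheel sees, which is why modern spokes must be pre-tensioned, while the thick wooden spoke comfortably carries such loads in compression.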
94,181 | I've been looking at the Planck 2013 cosmological parameters paper, trying to update my toy cosmology simulator with the most recent data. Most of the interesting values such as $H_0$, $\Omega_m$, and $\Omega_\Lambda$ can be found in Table 2 on page 12, but the one thing I didn't find was an estimate of the energy density of radiation. Can this be derived from some other parameters in these data? | The radiation density has two components: the present-day photon density $\rho_\gamma$ and the neutrino density $\rho_\nu$ . The photon density as a function of frequency can be derived directly from the CMB: the photon number density follows the Planck law $$
n(\nu)\,\text{d}\nu = \frac{8\pi\nu^2}{c^3}\frac{\text{d}\nu}{e^{h\nu/k_B T_0}-1},
$$ with $k_B$ the Boltzmann constant, and $T_0$ the current CMB temperature. The photon energy density is then $$
\rho_\gamma\, c^2 = \int_0^{\infty}h\nu\,n(\nu)\,\text{d}\nu = a_B\, T_0^4,
$$ where $$
a_B = \frac{8\pi^5 k_B^4}{15h^3c^3} = 7.56577\times 10^{-16}\;\text{J}\,\text{m}^{-3}\,\text{K}^{-4}
$$ is the radiation energy constant. With $T_0=2.7255\,\text{K}$ , we get $$
\rho_\gamma = \frac{a_B\, T_0^4}{c^2} = 4.64511\times 10^{-31}\;\text{kg}\,\text{m}^{-3}.
$$ The neutrino density is related to the photon density: in Eq. (1) on page 5 in the paper, you see that $$
\rho_\nu = 3.046\frac{7}{8}\left(\frac{4}{11}\right)^{4/3}\rho_\gamma.
$$ This relation can be derived from physics in the early universe, when neutrinos and photons were in thermal equilibrium. So $$
\rho_\nu = 3.21334\times 10^{-31}\;\text{kg}\,\text{m}^{-3},
$$ and the total present-day radiation density is $$
\rho_{R,0} = \rho_\gamma + \rho_\nu = 7.85846\times 10^{-31}\;\text{kg}\,\text{m}^{-3}.
$$ We can also express this relative to the present-day critical density $$
\rho_{c,0} = \frac{3H_0^{2}}{8\pi G} = 1.87847\,h^{2}\times 10^{-26}\;\text{kg}\,\text{m}^{-3},
$$ where the Hubble constant is expressed in terms of the dimensionless parameter $h$ , as $$
H_0 = 100\,h\;\text{km}\,\text{s}^{-1}\,\text{Mpc}^{-1},
$$ so we get $$
\begin{align}
\Omega_{\gamma}\,h^2 &= \dfrac{\rho_\gamma}{\rho_{c,0}}h^2 = 2.47282\times 10^{-5},\\
\Omega_{\nu}\,h^2 &= \dfrac{\rho_\nu}{\rho_{c,0}}h^2 = 1.71061\times 10^{-5},\\
\Omega_{R,0}\,h^2 &= \Omega_{\gamma}\,h^2 + \Omega_{\nu}\,h^2 = 4.18343\times 10^{-5}.
\end{align}
$$ For a Hubble value $h=0.673$, one finds $\Omega_{R,0} = 9.23640\times 10^{-5}$. I should point out that the formula for the primordial neutrinos is only valid when they are relativistic, which was true in the early universe. Since neutrinos have a tiny mass, they are probably no longer relativistic in the present-day universe, and behave now like matter instead of radiation. Therefore, neutrinos only contributed to the radiation density in the early universe, while the present-day radiation density only consists of photons. | {
"source": [
"https://physics.stackexchange.com/questions/94181",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/19917/"
]
} |
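The numbers in the answer above are easy to verify with a few lines of Python. This is a sketch of mine using standard CODATA constants and the IAU metre-per-megaparsec value (assumptions not stated in the answer):

```python
import math

h = 6.62607015e-34            # Planck constant, J s
c = 2.99792458e8              # speed of light, m/s
kB = 1.380649e-23             # Boltzmann constant, J/K
G = 6.6743e-11                # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0856775814913673e22   # metres per megaparsec

T0 = 2.7255                   # present-day CMB temperature, K

aB = 8 * math.pi**5 * kB**4 / (15 * h**3 * c**3)   # radiation energy constant
rho_gamma = aB * T0**4 / c**2                      # photon mass density, kg/m^3
rho_nu = 3.046 * (7 / 8) * (4 / 11)**(4 / 3) * rho_gamma
rho_R = rho_gamma + rho_nu

H100 = 100e3 / Mpc                                  # 100 km/s/Mpc in s^-1
rho_crit_over_h2 = 3 * H100**2 / (8 * math.pi * G)  # critical density for h = 1

Omega_R_h2 = rho_R / rho_crit_over_h2
print(rho_gamma)               # ≈ 4.645e-31 kg/m^3
print(Omega_R_h2)              # ≈ 4.18e-5
print(Omega_R_h2 / 0.673**2)   # ≈ 9.24e-5
```

The printed values reproduce the answer's $\rho_\gamma$, $\Omega_{R,0}\,h^2$ and $\Omega_{R,0}$ to the quoted precision.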
94,235 | I was laying on my bed, reading a book when the sun shone through the windows on my left. I happened to look at the wall on my right and noticed this very strange effect. The shadow of my elbow, when near the pages of the book, joined up with the shadow of the book even though I wasn't physically touching it. Here's what I saw: The video seems to be the wrong way up, but you still get the idea of what is happening. What is causing this? Some sort of optical illusion where the light gets bent?
Coincidentally, I have been wondering about a similar effect recently where if you focus your eye on a nearby object, say, your finger, objects behind it in the distance seem to get curved/distorted around the edge of your finger. It seems awfully related... EDIT: I could see the bulge with my bare eyes to the same extent as in the video! The room was well lit and the wall was indeed quite bright. | As said by John Rennie, it has to do with the shadows' fuzziness. However, that alone doesn't quite explain it. Let's do this with actual fuzziness: I've simulated shadow by blurring each shape and multiplying the brightness values 1 . Here's the GIMP file, so you can see exactly how, and move the shapes around yourself. I don't think you'd say there's any bending going on; at least to me the book's edge still looks perfectly straight. So what's happening in your experiment, then? Nonlinear response is the answer. In particular in your video, the directly-sunlit wall is overexposed, i.e. regardless of the "exact brightness", the pixel-value is pure white. For dark shades, the camera's noise suppression clips the values to black. We can simulate this for the above picture: Now that looks a lot like your video, doesn't it? With bare eyes, you'll normally not notice this, because our eyes are kind of trained to compensate for the effect, which is why nothing looks bent in the unprocessed picture. This only fails at rather extreme light conditions: probably, most of your room is dark, with a rather narrow beam of light making for a very large luminosity range. Then, the eyes also behave too non-linearly, and the brain cannot reconstruct how the shapes would have looked without the fuzziness anymore. Actually of course, the brightness topography is always the same, as seen by quantising the colour palette: 1 To simulate shadows properly, you need to use convolution of the whole aperture, with the sun's shape as a kernel.
As Ilmari Karonen remarks, this does make a relevant difference: the convolution of a product of two sharp shadows $A$ and $B$ with blurring kernel $K$ is $$\begin{aligned}
C(\mathbf{x}) =& \int_{\mathbb{R}^2}\!\mathrm{d}{\mathbf{x'}}\:
\Bigl(
A(\mathbf{x} - \mathbf{x}') \cdot B(\mathbf{x} - \mathbf{x'})
\Bigr) \cdot K(\mathbf{x}')
\\ =& \mathrm{IFT}\left(\backslash{\mathbf{k}} \to
\mathrm{FT}\Bigl(\backslash\mathbf{x}' \to
A(\mathbf{x}') \cdot B(\mathbf{x}')
\Bigr)(\mathbf{k})
\cdot \tilde{K}(\mathbf{k})
\right)(\mathbf{x})
\end{aligned}
$$ whereas seperate blurring yields $$\begin{aligned}
D(\mathbf{x}) =& \left( \int_{\mathbb{R}^2}\!\mathrm{d}{\mathbf{x'}}\:
A(\mathbf{x} - \mathbf{x}')
\cdot K(\mathbf{x}') \right)
\cdot \int_{\mathbb{R}^2}\!\mathrm{d}{\mathbf{x'}}\:
B(\mathbf{x} - \mathbf{x'})
\cdot K(\mathbf{x}')
\\ =& \mathrm{IFT}\left(\backslash{\mathbf{k}} \to
\tilde{A}(\mathbf{k}) \cdot \tilde{K}(\mathbf{k})
\right)(\mathbf{x})
\cdot \mathrm{IFT}\left(\backslash{\mathbf{k}} \to
\tilde{B}(\mathbf{k}) \cdot \tilde{K}(\mathbf{k})
\right)(\mathbf{x}).
\end{aligned}
$$ If we carry this out for a narrow slit of width $w$ between two shadows (almost a Dirac peak), the product's Fourier transform can be approximated by a constant proportional to $w$, while the $\mathrm{FT}$ of each shadow remains $\mathrm{sinc}$-shaped, so if we take the Taylor series for the narrow overlap it shows the brightness will only decay as $\sqrt{w}$, i.e. stay brighter at close distances, which of course suppresses the bulging. And indeed, if we properly blur both shadows together, even without any nonlinearity, we get much more of a "bridging effect": But that still looks nowhere near as "bulgy" as what's seen in your video. | {
"source": [
"https://physics.stackexchange.com/questions/94235",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/32287/"
]
} |
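The blur-then-clip mechanism described in the answer above can be sketched in one dimension with NumPy. The profile values, gap width and kernel size below are illustrative assumptions of mine, not taken from the answer's images:

```python
import numpy as np

x = np.arange(101)
# Two sharp 1-D shadow profiles with a narrow lit slit between them
# (1.0 = lit wall, 0.1 = in shadow; values are illustrative)
a = np.where(x < 48, 0.1, 1.0)   # one shadow, covering the left side
b = np.where(x > 52, 0.1, 1.0)   # the other shadow, covering the right side

kernel = np.ones(15) / 15        # finite-size light source -> fuzzy penumbra
a_blur = np.convolve(a, kernel, mode="same")
b_blur = np.convolve(b, kernel, mode="same")

linear = a_blur * b_blur         # what an ideal linear detector would record
# Hard clipping, mimicking camera over-/under-exposure:
clipped = np.where(linear > 0.5, 1.0, 0.0)

print(linear[50])   # ~0.49: the narrow slit is still partly lit
print(clipped[50])  # 0.0: after clipping, the two shadows appear joined
```

In the linear image the slit between the shadows stays visibly brighter than the shadow cores, but the nonlinearity pushes it to pure black, so the two shadows seem to merge before the edges touch, which is the "bulging" effect in the video.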
94,416 | In many experiments in quantum mechanics, a single photon is sent to a mirror which it passes through or bounces off with 50% probability, then the same for some more similar mirrors, and at the end we get interference between the various paths. This is fairly easy to observe in the laboratory. The interference means there is no which-path information stored anywhere in the mirrors. The mirrors are made of 10^20-something atoms, they aren't necessarily ultra-pure crystals, and they're at room temperature. Nonetheless, they act on the photons as very simple unitary operators. Why is it that the mirrors retain no or very little trace of the photon's path, so that very little decoherence occurs? In general, how do I look at a physical situation and predict when there will be enough noisy interaction with the environment for a quantum state to decohere? | Nobody is answering this question, so I'll take a stab at it. Consider the mirror. Suppose you started your experiment by (somehow) putting it in a nearly-exact momentum state, meaning there is a large uncertainty in its position. Now, when you send a photon at it, the photon either bounces off or passes through. If the photon bounces off the mirror, it will change the momentum of the mirror. You could theoretically measure the "which-way" information by measuring the momentum of the mirror after you've done the experiment. In this scenario, there wouldn't be any interference. However, you didn't do that. You started the mirror off in a thermal state at room temperature. This state can be considered as a superposition of different momentum states of the mirror 1 , with a phase associated to each one. If you change the momentum by a small amount, the phase associated to this state in the superposition only changes by a small amount. Now, let $p_\gamma$ and $p_m$ be the original momenta of the photon and the mirror, and let $\Delta p_\gamma$ be the change in the momentum when the photon bounces off the mirror. 
When you send the photon towards the mirror, the original state $p_m$ (photon passes through) will end up in the same configuration as the original state $p'_m = p_m - \Delta p_\gamma$ (photon bounces off). These two states $p_m$ and $p'_m$ had nearly the same phase before you aimed the photon at the mirror, so they will interfere, and if the phase on these two states are really close, the interference will be nearly perfect. Of course, a change in momentum isn't the only way for the mirror to gain which-way information. However, I think what happens when you consider the other ways is that they behave much like this, only not anywhere near as cleanly, so they're harder to work with. 1 Technically, it's a mixed state, i.e., a density matrix, and not a pure state. But the basic idea of the above explanation still holds. | {
"source": [
"https://physics.stackexchange.com/questions/94416",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/74/"
]
} |
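A back-of-the-envelope estimate (my addition, not part of the answer above) shows just how small a single photon's momentum kick is compared with the thermal momentum spread of a macroscopic mirror; the mirror mass and photon wavelength below are arbitrary illustrative choices:

```python
import math

h_planck = 6.626e-34   # Planck constant, J s
kB = 1.381e-23         # Boltzmann constant, J/K

wavelength = 500e-9    # visible photon (assumption)
m_mirror = 0.01        # 10 g mirror (assumption)
T = 300.0              # room temperature, K

p_photon = h_planck / wavelength          # momentum kick from one reflection
p_thermal = math.sqrt(m_mirror * kB * T)  # thermal momentum spread of the mirror

print(p_photon)              # ~1.3e-27 kg m/s
print(p_thermal)             # ~6.4e-12 kg m/s
print(p_photon / p_thermal)  # ~2e-16
```

With the kick some sixteen orders of magnitude smaller than the thermal spread, the mirror states before and after a reflection overlap almost perfectly, consistent with the phase argument in the answer: essentially no which-path information is recorded.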
94,471 | Today a friend's six year old sister asked me the question "why don't people on the other side of the earth fall off?". I tried to explain that the Earth is a huge sphere and there's a special force called "gravity" that tries to attract everything to the center of the Earth, but she doesn't seem to understand it. I also made some attempts using a globe, saying that "Up" and "Down" are all local perspective and people on the other side of the Earth feel they're on the top, but she still doesn't get it. How can I explain the concept of gravity to a six year old in a simple and meaningful way? | Having my own 6-year-old and having successfully explained this, here's my advice from experience: Don't try to explain gravity as a mysterious force. It doesn't make sense to most adults (sad, but true! talk to non-physicists about it and you'll see), it won't make sense to a 6yo. The reason this won't work is that it requires inference from general principles to specific applications, plus it requires advanced abstract thinking to even grasp the concept of invisible forces. Those are not skills a 6-year-old has at their fingertips. Most things they're figuring out right now is piecemeal and they won't start fitting their experiences to best-fit conscious models of reality for a few years yet. Do exploit 6-year-old's tendency to take descriptions of actions-that-happen at face value as simple piecemeal facts. Stuff pulls other stuff to itself. When you have a lot of stuff, it pulls other things a lot . The bigger things pull the smaller things to them. Them having previously understood the shape of the solar system and a loose grasp of the fact of orbits (not how they work—that's a different piece—just that planets and moons move in "circular" tracks around heavier things like the Sun and Earth) may be useful before embarking on these parts of the conversation. I'm not sure, but that was a thing my 6yo already had started to grasp at this point. 
These conversations were also mixed in with our conversations about how Earth formed from debris, and how the pull was involved in making that happen, and how it made the pull more and more. So, I can't really separate out that background; it may also help/be necessary. Don't try to correct a 6-year-old's confusion about up and down being relative, but use it instead. There's a lot of Earth under us, and it pulls us down when we jump. If we jumped off the side, it would pull us back sideways. If we fell off the bottom, it would pull us back up. You can follow this up later with a Socratic dialogue about the relative nature of up and down, but don't muddy the waters with that immediately. That won't have any purchase until they accept the fact that Earth will pull you "back up" if you fall off. Build it up over a series of conversations. They won't get it the first time, or the tenth, but pieces of it will stick. Don't try to instill a grasp of the overall working model. If you can successfully give them some single, disconnected facts that they actually believe, putting them together will happen as they age and mature and get more exposure to this stuff. All this is assuming a decently smart but not prodigious child, of course. (A 6-year-old prodigy can probably grasp a lay adult's model of gravity, but if that's who you're dealing with then you don't need to adjust your teaching.) For some more context, this was also after my child's class started experimenting with magnets at school. I was inspired to attempt to explain gravity when my kid told me that trees didn't float off into space because the Earth was a giant magnet. (True! But not why trees don't float away.) 
Comparing gravity and magnetism might help, to give them an example of invisible pull that they can feel, but it might just confuse the subject a lot too since I had a lot of work (over multiple conversations) to convince my own that trees aren't sticking to the ground because of magnetism, even if the Earth is a giant magnet. And, a final piece of advice that's incidental, but can help: Once you've had a few of these conversations, play Kerbal Space Program while they watch. (Again, this comes from experience. My kid loves to watch KSP.) Seeing a practical example of gravity at work in it natural environment will go a long way to cementing the previous conversations. It may sound like a sign-off joke, but seeing a system moving and being manipulated makes a huge difference to a young child's comprehension, because it is no longer abstract or requires building mental abstractions to grasp, like showing them a globe does. | {
"source": [
"https://physics.stackexchange.com/questions/94471",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/28097/"
]
} |
95,254 | Can anyone explain why the error for $\ln (x)$ (where for $x$ we have $x\pm\Delta x$) is simply said to be $\frac{\Delta x}{x}$? I would very much appreciate a somewhat rigorous rationalization of this step. Additionally, is this the case for other logarithms (e.g. $\log_2(x)$), or how would that be done? | Simple error analysis assumes that the error $\Delta f(x)$ of a function caused by a given error $\Delta x$ of the input argument is approximately
$$
\Delta f(x) \approx \frac{\text{d}f(x)}{\text{d}x}\cdot\Delta x
$$
The mathematical reasoning behind this is the Taylor series and the character of $\frac{\text{d}f(x)}{\text{d}x}$ describing how the function $f(x)$ changes when its input argument changes a little bit. In fact this assumption only makes sense if $\Delta x \ll x$ (see Emilio Pisanty's answer for details on this) and if your function isn't too nonlinear at the specific point (in which case the presentation of a result in the form $f(x) \pm \Delta f(x)$ wouldn't make sense anyway). Note that sometimes $\left| \frac{\text{d}f(x)}{\text{d}x}\right|$ is used to avoid getting negative errors. Since
$$
\frac{\text{d}\ln(x)}{\text{d}x} = \frac{1}{x}
$$
the error would be
$$
\Delta \ln(x) \approx \frac{\Delta x}{x}
$$ For arbitrary logarithms we can use the change of the logarithm base:
$$
\log_b x = \frac{\ln x}{\ln b}\\
(\ln x = \log_\text{e} x)
$$
to obtain
$$
\Delta \log_b x \approx \frac{\Delta x}{x \cdot \ln b}
$$ | {
"source": [
"https://physics.stackexchange.com/questions/95254",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/33768/"
]
} |
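The rule in the answer above is easy to check numerically. The finite-difference helper below is my own illustration (not from the answer); it estimates $\Delta f \approx |f'(x)|\,\Delta x$ and compares it with the closed forms:

```python
import math

def propagate(f, x, dx, h=1e-8):
    """First-order error estimate |f'(x)| * dx, with f'(x) taken from a
    central finite difference (illustrative helper, not from the answer)."""
    dfdx = (f(x + h) - f(x - h)) / (2 * h)
    return abs(dfdx) * dx

x, dx = 50.0, 2.0
print(propagate(math.log, x, dx))    # ≈ dx/x = 0.04
print(propagate(math.log2, x, dx))   # ≈ dx/(x ln 2) ≈ 0.0577
print(dx / (x * math.log(2)))        # closed form for log2, same value
```

Both numerical estimates agree with the formulas $\Delta \ln(x) \approx \Delta x / x$ and $\Delta \log_b(x) \approx \Delta x / (x \ln b)$ derived in the answer.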
95,366 | Recently, I read in the journal Nature that Stephen Hawking wrote a paper claiming that black holes do not exist. How is this possible? Please explain it to me because I didn't understand what he said. References: Article in Nature News: Stephen Hawking: 'There are no black holes' (Zeeya Merali, January 24, 2014). S. Hawking, Information Preservation and Weather Forecasting for Black Holes, arXiv:1401.5761. | The paper by Dr. Stephen Hawking doesn't say that black holes don't exist. What he says is that black holes can exist without "event horizons". To understand what an event horizon is, we first have to understand what is meant by escape velocity. The latter is the speed you need to escape a body. Now, here is where the event horizon and the escape velocity come into play: the event horizon is the boundary between where the speed needed to escape a black hole is less than that of light, and where the speed needed to escape a black hole is greater than the speed of light. So Hawking says that instead of an event horizon, there may be "apparent horizons" that would hold light and information only temporarily before releasing them back into space in a "garbled form". | {
"source": [
"https://physics.stackexchange.com/questions/95366",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/38121/"
]
} |
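To make the escape-velocity picture in the answer above concrete (a numerical aside of mine, not from the answer): the radius at which the Newtonian escape speed $\sqrt{2GM/r}$ reaches the speed of light is the Schwarzschild radius $2GM/c^2$. For a solar-mass object:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def escape_velocity(M, r):
    return math.sqrt(2 * G * M / r)

# Radius at which the (Newtonian) escape speed equals c:
r_s = 2 * G * M_sun / c**2
print(r_s)                              # ≈ 2.95e3 m: about 3 km for a solar mass
print(escape_velocity(M_sun, r_s) / c)  # 1.0 by construction
```

Inside that radius the escape speed exceeds $c$, which is the boundary the answer describes (the coincidence of the Newtonian result with the general-relativistic Schwarzschild radius is well known, though the full treatment requires general relativity).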
95,815 | This question is a consequence of another question of mine which is about spin. Here is my spin question . What is the difference between these two fields? How do they occur? Am I right if I say that a magnetic field is about photons (because they occur between N and S poles of a magnet) and an electric field is about electrons? How they are related? Finally; when and why do we use the word "electromagnetism"? | Electric forces are attractive or repulsive forces between "charged objects", e.g. comb and dry hair after some friction. Charged objects are those that carry some nonzero electric charge $Q$. The lightest – and therefore easiest to move – charged particle is the electron so the surplus or deficit of electrons is the most typical reason why some objects are charged. Magnetic forces are attractive or repulsive forces between magnets, like magnetized pieces of iron. The amount of "magnetic dipole" carried by a magnet is completely independent of its electric charge. They're as independent as the gravitational and electrostatic forces i.e. as independent as the mass and the charge of an object. For centuries, these two forces were thought of as independent. Only a few centuries ago, due to Faraday and others, relationships between the electric and magnetic forces began to be uncovered. Magnets may be produced by coils – by electric charges moving in loops. They become indistinguishable from bar magnets. Similarly, moving magnets produce electric fields. In the middle of the 19th century, because of these "mutual influences" between electricity and magnetism, a unified theory was gradually found. Because electricity and magnetism influence each other, we need to talk about a whole – electromagnetism or, to point out that magnetism is related to moving electric charges, electrodynamics (dynamics sort of means "motion" or "reasons for motion"). 
James Clerk Maxwell wrote the unified equations for electricity and magnetism which exhibited a near perfect symmetry between electricity and magnetism. They are two independent "siblings" but they affect one another and the inner mechanisms in them are analogous. Maxwell's theory also implied that there are electromagnetic waves – disturbances in space where the electric field goes up and down and so does the magnetic field which is excited by the electric one and vice versa. Moreover, he proved that light was a special example of the electromagnetic wave. In the 20th century, it was realized that the existence of the other force follows from one force (e.g. magnetism followed from electricity) due to a symmetry between inertial observers who are moving relatively to each other, i.e. due to the Lorentz symmetry which underlies Einstein's special relativity. It was also found out that the electromagnetic waves may be thought of as collections of photons and that the exchange of the photon is the "reason" behind electric as well as magnetic forces. So the photons are the messengers of electromagnetism – both electricity and magnetism. Electrons are the most important carriers of the electric charge which means that they're the most important particles that produce the electric and magnetic (when electrons are moving or spinning) fields. These fields arise and affect other pieces of matter (especially electrons) due to the "messenger role" of the photons. Photons are "units" of the electromagnetic waves. | {
"source": [
"https://physics.stackexchange.com/questions/95815",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36632/"
]
} |
95,833 | Why turning velocity of star towards point "A" makes it's satelites, change their apoapsis to side opposite of velocity vector? Sentence above turned a bit crazy or not understandable (possible reasons: my vocabulary isn't that good; I don't know what the heck I am talking about). So I mean this: http://phet.colorado.edu/sims/my-solar-system/my-solar-system_en.html The first you get is star (yellow) and planet (purple). If you closely, you notice circle around star. It's velocity, if you move it in direction opposite to purple star. Purple planet get's crazy and flies away, as if star turns it's magnetic polar towards that planet. |  | {
"source": [
"https://physics.stackexchange.com/questions/95833",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/38547/"
]
} |
95,912 | Is it possible to describe the physical meaning of Maxwell's equations and show how they lead to electromagnetic wave, with little involvement of mathematics ? | Is it possible to describe the physical meaning of Maxwell's equations and show how they lead to electromagnetic wave, with little involvement of mathematics ? No. But don't be put off, I'll go through the mathematics slowly. Starting with Maxwell's equations. Maxwell's equations are relatively old and don't take quantum mechanics into account; they are used (like Newton's law of gravity) because they get the right answer (pretty much), while the better theory, quantum field theory (general relativity in the case of gravity), is too maths-heavy to be practical for most uses. $$
\nabla \cdot \vec E = \frac{\rho}{ \epsilon}
$$ $$
\nabla \cdot \vec B = 0
$$ $$
\nabla \times \vec E = - \frac {\partial \vec B}{\partial t}
$$ $$
\nabla \times \vec B = \mu \vec J + \epsilon \mu \frac {\partial \vec E}{\partial t}
$$ Where: $\vec B$ is the magnetic field $\vec E$ is the electric field The arrows $\vec E$ and $\vec B$ indicate that this quantity is a vector and points (flows) in a given direction $\rho$ is the charge density (amount of charge) $\vec J$ is current (density) $\epsilon , \mu$ are just constants $\nabla$ (upside down triangle [aka del]) on its own is the gradient operator (just a mathematical thing [that does stuff to functions]) (it describes how something changes in space) $\nabla \cdot$ (up-side-down triangle followed by a dot [aka del dot]) is the divergence operator (how something changes in space [in an expanding or contracting kind of way]) $\nabla \times$ (up-side-down triangle followed by a multiplication sign "x" [aka del cross]) is the curl operator (how something changes in space [in a turning kind of way]) $\frac{\partial}{\partial t}$ is the time derivative operator (how something changes in time).
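A quick numerical aside (my addition): later in this answer the wave speed $v = 1/\sqrt{\epsilon \mu}$ is derived, so these "just constants" already fix the speed of light. Plugging in the measured vacuum (CODATA) values:

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m (CODATA)
mu0 = 1.25663706212e-6    # vacuum permeability, N/A^2 (CODATA)

v = 1 / math.sqrt(eps0 * mu0)
print(v)   # ≈ 2.99792e8 m/s: the speed of light
```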
Let's start with the easiest one: $$
\nabla \cdot \vec B = 0
$$ This just says the divergence of the magnetic field is always zero, no matter what. This means that the magnetic $\vec B$ field can be thought of as a tank of water where it can move but never will there be bubbles of no water or areas of higher density water (unlike air, which can compress if you push it hard enough). Which can also be written as: $$
\iint_{S} \vec B \cdot d \vec s = 0
$$ Which says the same thing in a different way, that if you add up $\iint$ all of the $\vec B$ field escaping through the tiny bits of area $d\vec s$ of a closed surface $S$ then they will add to zero. So basically if some $\vec B$ field enters at one point it has to leave again at another point. Next: $$
\nabla \cdot \vec E = \frac{\rho}{ \epsilon}
$$ This says a very similar thing: if there is no charge then the $\vec E$ field is divergence-free, i.e. an incompressible fluid. However if there is a charge e.g. a proton then it acts like the end of a hose pipe and $\vec E$ "fluid" bursts out in all directions (referred to as a source). If there is a negative charge then the $\vec E$ "fluid" gets sucked in (referred to as a sink).
This can be rewritten as: $$
\iint_{S} \vec E \cdot d\vec s = \frac {1}{\epsilon} \iiint_{V}\, \rho dv
$$ This way of saying it means that if you add up $\iint$ all of the $\vec E$ field escaping through the tiny bits of area $d\vec s$ of a closed surface $S$ then they will be equal to what you get if you add up $\iiint$ the amount of charge density $\rho$ times all of the little volumes $dv$ inside the volume $V$ of the surface $S$. So basically you can tell the number of hose ends within an area by the amount of water that flows from that area. If there is no charge (or the same number of sucking hoses as blowing hoses) then this adds to zero. Next $$
\nabla \times \vec E = - \frac {\partial \vec B}{\partial t}
$$ This says that if $\vec B$ changes in time $\dfrac {\partial \vec B}{\partial t}$ then it will cause $\vec E$ to move in space $\nabla \times$ i.e. that a changing magnetic field will cause an electric field which if it were in a wire we would call a voltage. And finally: $$
\nabla \times \vec B = \mu \vec J +\epsilon \mu \frac {\partial \vec E}{\partial t}
$$ This has three parts so let's break it down. Note that $\vec J$ appears for the first time; this is current density (the movement of charge carriers). If there are no charge carriers (i.e. no protons and electrons), the way there are no (maybe there are, I'm still waiting on you LHC) magnetic charge carriers (called mono-poles), then this law looks very like the previous one. If $$
\vec J = \vec 0
$$ Then: $$
\nabla \times \vec B = \epsilon \mu \frac {\partial \vec E}{\partial t}
$$ Looks a lot like: $$
\nabla \times \vec E = - \frac {\partial \vec B}{\partial t}
$$ This is actually one of the conditions for light to propagate. We want to prove that Maxwell's equations have wave solutions. We can prove this by producing the wave equation: $$
\frac {\partial^2 \vec A}{\partial x^2}= \frac{1}{v^2}\frac {\partial^2 \vec A}{\partial t^2}
$$ Or in 3 dimensions: $$
\nabla^2 \vec A = \frac{1}{v^2}\frac {\partial^2 \vec A}{\partial t^2}
$$ Where $\nabla^2 $ is the 3D analog of $\frac {\partial^2}{\partial x^2}$ i.e. $\frac {\partial^2}{\partial x^2}+\frac {\partial^2}{\partial y^2}+\frac {\partial^2}{\partial z^2}$ Any system that obeys the above wave equation behaves with wavelike properties. To prove that Maxwell's equations have these solutions we also have to let: $$
\rho=0
$$ Giving: $$
\nabla \cdot \vec E = 0
$$ We then take the curl of both sides of: $\nabla \times \vec E = - \frac {\partial \vec B}{\partial t}$ To get: $$
\nabla \times \nabla \times \vec E = -\nabla \times \frac {\partial \vec B}{\partial t}
$$ A theorem in calculus states that the curl of a curl of a field can be written as: $$
\nabla \times \nabla \times \vec A = \nabla (\nabla \cdot \vec A) - \nabla^2 \vec A
$$ So $$
\nabla \times \nabla \times \vec E = \nabla (\nabla \cdot \vec E) - \nabla^2 \vec E
$$ But $$
\nabla \cdot \vec E = 0
$$ So $$
\nabla \times \nabla \times \vec E = - \nabla^2 \vec E
$$ Replacing that in $\nabla \times \nabla \times \vec E = -\nabla \times \frac {\partial \vec B}{\partial t}$ $$
- \nabla^2 \vec E = -\nabla \times \frac {\partial \vec B}{\partial t}
$$ Cancelling the minus signs and interchanging $\nabla \times$ with $\frac {\partial}{\partial t}$: $$
\nabla^2 \vec E = \frac {\partial}{\partial t} \nabla \times \vec B
$$ From Maxwell's equations we know that: $$
\nabla \times \vec B = \epsilon \mu \frac {\partial \vec E}{\partial t}
$$ Replacing that we get $$
\nabla^2 \vec E = \frac {\partial}{\partial t} (\epsilon \mu \frac {\partial \vec E}{\partial t})
$$ Or $$
\nabla^2 \vec E =\epsilon \mu \frac {\partial^2\vec E}{\partial t^2}
$$ Comparing this to our original wave equation. $$
\nabla^2 \vec A = \frac{1}{v^2}\frac {\partial^2 \vec A}{\partial t^2}
$$ We see that $\vec E$ corresponds to $\vec A$ and $\epsilon \mu$ corresponds to $\frac{1}{v^2}$, so the speed at which this wave travels is $v = \frac{1}{\sqrt{\epsilon \mu}}$. This speed is dubbed $c$. This derivation can be repeated for the magnetic field $\vec B$ to show that in wave solutions the two fields always come in pairs perpendicular to each other and to the direction of propagation. So basically a changing electric field creates a changing magnetic field which creates a changing electric field, etc. This also tells us the speed of light is $c = \dfrac{1}{\sqrt{\epsilon \mu}}$. Note: Einstein thought the interesting thing was not which terms show up in the equation for the speed of light but the term that is missing: this speed is independent of how fast you are travelling, and he thought that everybody, regardless of speed, would measure this speed to be the same. Next we no longer set $\vec J = \vec 0$. This time we are going to let $\vec E$ be constant in time so that $$
\dfrac {\partial \vec E}{\partial t} = \vec 0
$$ . In this case: $$
\nabla \times \vec B = \mu \vec J
$$ The current (density) $\vec J$ creates a magnetic field (that changes in space but not time), and since this magnetic field does not change in time it cannot create an $\vec E$ field via the equation: $$
\nabla \times \vec E = - \dfrac {\partial \vec B}{\partial t}
$$ This is how electromagnets work. The current (density) $\vec J$ creates a magnetic field around the wire. This magnetic field can be intensified by winding the wire into loops, as in a solenoid; the "soft" iron core of the solenoid helps the magnetic flux to flow by raising the permeability $\mu$, making it easier to channel the flux through the core instead of the air. Back to: $$
\nabla \times \vec B = \mu \vec J +\epsilon \mu \dfrac {\partial \vec E}{\partial t}
$$ If $\vec J$ is zero on average over time, so that the charge carriers only move back and forth, we can ignore $\vec J$ (but charge carriers do exist this time): $$
\nabla \times \vec B = \epsilon \mu \dfrac {\partial \vec E}{\partial t}
$$ It is possible for the change in electric field caused by the oscillating electrons to create a changing magnetic field (since the rate of change of the electric field is not constant: the electrons are not just getting faster and faster, they get faster, then slower, then faster again). So the changing $\vec E$ field creates a changing $\vec B$ field, which creates a changing $\vec E$ field on the far side of the transformer core. (It also creates loops of current [eddy currents] inside the core; that is why the core is made of layers instead of one solid piece, because we don't want current there: it would generate heat and waste energy.) It is possible to define a displacement current $\vec J_D$ that has the same units as $\vec J$ but also takes into account the energy that flows through the capacitor or transformer.
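As a quick numerical sanity check of the result $c = \frac{1}{\sqrt{\epsilon \mu}}$ (using the standard SI vacuum values for $\epsilon_0$ and $\mu_0$; the plane-wave test function below is chosen purely for illustration):

```python
import math

# Vacuum permittivity and permeability (SI values).
eps0 = 8.8541878128e-12    # F/m
mu0 = 4 * math.pi * 1e-7   # N/A^2

# The wave speed predicted by the derivation: v = 1/sqrt(eps * mu).
c = 1 / math.sqrt(eps0 * mu0)
print(c)  # ~2.998e8 m/s: the speed of light

# Check that a plane wave E(x, t) = sin(kx - wt) with w = ck satisfies
# the wave equation d^2E/dx^2 = (1/c^2) d^2E/dt^2, via central differences.
k = 1.0
w = c * k
E = lambda x, t: math.sin(k * x - w * t)

x0, t0 = 0.3, 0.0
h = 1e-4       # spatial step
ht = h / c     # time step, scaled so both phase increments match
d2E_dx2 = (E(x0 + h, t0) - 2 * E(x0, t0) + E(x0 - h, t0)) / h ** 2
d2E_dt2 = (E(x0, t0 + ht) - 2 * E(x0, t0) + E(x0, t0 - ht)) / ht ** 2
assert abs(d2E_dx2 - d2E_dt2 / c ** 2) < 1e-6
```

The assertion passing is exactly the statement that the two sides of the wave equation agree for a disturbance travelling at $c$.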
"source": [
"https://physics.stackexchange.com/questions/95912",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/38578/"
]
} |
95,974 | I'm a philosophy student (I, regrettably, don't know calculus or much physics). Last year I spent some time learning how work, power, speed, velocity, energy, force, and acceleration relate. But I was never able to fit my understanding of acceleration into my understanding of the world. I think my biggest challenge was understanding how you can have (m/s)/s. I.e. the 'per-second per-second' part doesn't make sense to me — and how that represents an increase in speed. Does it express, every second a thing travels X meters/second faster ? That's my best guess, but I see a lot of problems with that guess, so I presume that it's incorrect. In words, why is acceleration expressed as (m/s)/s? How does that expression relate (if at all) to the everyday notion of acceleration? | Maybe it will be even clearer to you if one explained it in a more fundamental way, but for this, we need a little bit of senior grade mathematics. I am assuming you have heard of derivatives; if my assumption is false, I am sorry for that, but in this case this answer might not be helpful to you. Let's get clear about something important (but rather philosophical) first. This speed and acceleration stuff isn't real . It is some sort of thought experiment that is quite useful in that it helps describe our world. Let's take some object - without restriction and for the sake of simplicity, let's assume it's an apple - and push it around (in your head). What is happening? The position of the object changes over time , so here we've got a connection of two fundamental physical units, distance and time . You can speak of distance as a function over time (that means, you can plot it with the x axis being the time axis and the distance at a given time being the y value). Now, let's have a look at the speed (and now, again for the sake of simplicity, assume the object is travelling in a straight line, otherwise you'll get some more general vector spaces that might be nasty to imagine).
How do you calculate average speed ? So, if the apple would have been travelling at the same speed all the time, how big would this speed need to be? Basically, the formula is $v_{average} = \frac{\text{distance}}{\text{time}}$ (quite intuitive, I think). But again, this is already of theoretical nature. It is not some sort of "inherent property", but physicists have "invented" it to describe processes. If you don't want to calculate the average speed for the whole distance, but only for a certain period of time, the formula is still $v_{average} = \frac{\text{distance}}{\text{time}}$, but of course, you have to change the values for time and distance accordingly. Here's a picture: $\Delta$ is the Greek letter Delta and means "difference" - difference between start and end distance and start and end time. The straight line in the picture is called a secant and its slope is equal to the average speed. (Just believe me on this one - I don't know how to make it appear more plausible at the moment.) Now you can ask the question of the speed at a certain moment, and you have to realize that the equation above won't work any more. Looking at only one point, the difference between start and end time and start and end position is zero. Now, you are not allowed to divide by zero, and that's a problem. Imagine this geometrically: you are moving one of the two points along the curve until the two points are identical. The secant from above has always been dependent on two points. Now, there's only one, so theoretically, there's an infinite number of lines that go through this one point. However, only one line (well, supposing all this is differentiable - ignore that) actually gives us what we want. It should be the tangent to the curve. Now, that's what we call speed. All the slopes of the tangents at the points of the curve form a new graph which gives you speed over time , which is the derivative of position with respect to time.
Completely analogously, if something is travelling, you might want to know how the speed changed. For example, imagine an inclined plane with our apple on it. Depending on the material of the plane and the slope of it, the apple might become faster (friction not so big), remain constant (friction equal to the gravitation that pushes the apple "downwards") or may become slower (lots of friction). This is described with acceleration . If the speed is constant, acceleration is zero, because nothing happens. If the object becomes faster, the acceleration is positive, because acceleration is the rate of change of speed. Similarly, if the apple becomes slower, there is negative acceleration. Now, to measure the average acceleration, we do the same thing as above: $\text{time} * \text{acceleration} = \text{change in velocity} \implies \text{acceleration} = \frac{\text{change in velocity}}{\text{time}}$. Now just look at the units: On the right side, you already have meters per second for speed, and now you are looking at the change of this speed over time. This gives you (meters per second) per second. By the way, you can apply exactly the same ideas I mentioned before (secant, tangent, derivative) to the velocity graph and you will see that acceleration is the derivative of speed. By the way, I would really encourage you to keep reading and thinking about physics, mathematics and the other sciences. It is always good to work in an interdisciplinary way and I think it is crucial for a philosopher to know what those science people seem to "know" about everything out there. I have seen too many philosophers building theories that just - well - didn't match reality. I think this youtube series on the topic is quite well done and you might enjoy watching it.
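The secant-to-tangent picture can also be played with numerically. Here is a small sketch (the position function $s(t)=t^2$ is just an example of constant-acceleration motion): the average speed over a shrinking interval approaches the tangent slope, and the speed grows by the same number of m/s every second, which is exactly the "(m/s)/s":

```python
def s(t):
    """Position in metres at time t (seconds); an example motion s = t^2."""
    return t ** 2

def v(t):
    """Instantaneous speed, the tangent slope (derivative of s): 2t m/s."""
    return 2 * t

t = 3.0
# Secant slopes: average speed = distance / time, over shrinking intervals.
for dt in (1.0, 0.1, 0.001):
    avg = (s(t + dt) - s(t)) / dt
    print(dt, avg)  # 7.0, then ~6.1, then ~6.001: approaching v(3) = 6 m/s

# Acceleration: each second, the speed grows by the same number of m/s,
# which is why its unit is (m/s)/s.
a = v(t + 1) - v(t)  # change in speed over one second
print(a)  # 2.0
```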
"source": [
"https://physics.stackexchange.com/questions/95974",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/23290/"
]
} |
95,976 | I am having a difficult time solving this. Say that electrons are emitted from a source S at a very slow rate. If both slits S1 and S2 are observed, we would have roughly 50% probability of detecting an electron at one of the two slits. The interference pattern is lost and the intensity distribution will appear as the sum of two individual sources: I = I1 + I2. But what if only one slit (S1) is observed? The observed slit (S1) will appear to produce a normal distribution, but what about the unobserved slit? This experiment has been performed with individual electrons, so we know that if both S1 and S2 are unobserved the intensity distribution contains an oscillating term for each electron. Does concluding that an electron must have passed through the unobserved slit count as an observation, and therefore destroy the interference pattern? Edit: changed the source to electrons | {
"source": [
"https://physics.stackexchange.com/questions/95976",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/38594/"
]
} |
96,045 | $SU(2)$ is the covering group of $SO(3)$. What does it mean and does it have a physical consequence? I heard that this fact is related to the description of bosons and fermions. But how does it follow from the fact that $SU(2)$ is the double cover of $SO(3)$? | Great, important question. Here's the basic logic: We start with Wigner's Theorem which tells us that a symmetry transformation on a quantum system can be written, up to phase, as either a unitary or anti-unitary operator on the Hilbert space $\mathcal H$ of the system. It follows that if we want to represent a Lie group $G$ of symmetries of a system via transformations on the Hilbert space, then we must do so with a projective unitary representation of the Lie group $G$. The projective part comes from the fact that the transformations are unitary or anti-unitary "up to phase," namely we represent such symmetries with a mapping $U:G\to \mathscr U(\mathcal H)$ such that for each $g_1,g_2\in G $, there exists a phase $c(g_1, g_2)$ such that
\begin{align}
U(g_1g_2) = c(g_1, g_2) U(g_1) U(g_2)
\end{align}
where $\mathscr U(\mathcal H)$ is the group of unitary operators on $\mathcal H$. In other words, a projective unitary representation is just an ordinary unitary representation with an extra phase factor that prevents it from being an honest homomorphism. Working with projective representations isn't as easy as working with ordinary representations since they have the pesky phase factor $c$, so we try to look for ways of avoiding them. In some cases, this can be achieved by noting that the projective representations of a group $G$ are equivalent to the ordinary representations of $G'$ its universal covering group , and in this case, we therefore elect to examine the representations of the universal cover instead. In the case of $\mathrm{SO}(3)$, the group of rotations, we notice that its universal cover, which is often called $\mathrm{Spin}(3)$, is isomorphic to $\mathrm{SU}(2)$, and that the projective representations of $\mathrm{SO}(3)$ match the ordinary representations of $\mathrm{SU}(2)$, so we elect to examine the ordinary representations of $\mathrm{SU}(2)$ since it's more convenient. This is all very physically important. If we had only considered the ordinary representations of $\mathrm{SO}(3)$, then we would have missed the "half-integer spin" representations, namely those that arise when considering rotations on fermionic systems. So, we must be careful to consider projective representations, and this naturally leads to looking for the universal cover. Note : The same sort of thing happens with the Lorentz group in relativistic quantum theories. We consider projective representations of $\mathrm{SO}(3,1)$ because Wigner says we ought to, and this naturally leads us to consider its universal cover $\mathrm{SL}(2,\mathbb C)$. | {
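One concrete way to see the projective phase at work: using the closed form $\exp(-i\theta\,\sigma_z/2) = \mathrm{diag}(e^{-i\theta/2}, e^{+i\theta/2})$, a rotation by $2\pi$ is represented by $-1$ on a spin-$1/2$ doublet, and only a $4\pi$ rotation returns to the identity. A small numerical sketch (the helper functions are purely illustrative):

```python
import math

I2 = ((1, 0), (0, 1))

def U(theta):
    """exp(-i*theta*sigma_z/2) = diag(e^{-i theta/2}, e^{+i theta/2})."""
    e = complex(math.cos(theta / 2), -math.sin(theta / 2))
    return ((e, 0), (0, e.conjugate()))

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(2) for j in range(2))

# det U = 1 for every angle, as required for SU(2).
u = U(1.0)
assert abs(u[0][0] * u[1][1] - u[0][1] * u[1][0] - 1) < 1e-12

# A 2*pi rotation acts as -1 on the doublet (not the identity!) ...
assert close(U(2 * math.pi), ((-1, 0), (0, -1)))
# ... and only a 4*pi rotation returns to the identity.
assert close(U(4 * math.pi), I2)
```

The $U(2\pi) = -1$ sign is exactly the phase $c(g_1,g_2)$ showing up: on the half-integer-spin representations, a full rotation is the identity only up to sign.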
"source": [
"https://physics.stackexchange.com/questions/96045",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36793/"
]
} |
96,074 | In thermodynamics, the first law can be written in differential form as
$$dU = \delta Q - \delta W$$
Here, $dU$ is the differential $1$-form of the internal energy but $\delta Q$ and $\delta W$ are inexact differentials, which is emphasized with the replacement of $d$ with $\delta $. My question is why we regard heat (or work) as differential forms? Suppose our system can be described with state variables $(x_1,\ \cdots ,\ x_n)$. To my understanding, a general $1$-form is written as
$$df = f_1\ dx_1 + f_2\ dx_2 + \cdots + f_n\ dx_n$$
$$ In particular, differential forms are linear functionals of our state variables. Is there any good reason to presuppose that $\delta Q$ is linear in the state variables? In other words, if the infinitesimal heat transferred when moving from state $\mathbf{x}$ to $\mathbf{x} + d\mathbf{x}_1$ is $\delta Q_1$ and from $\mathbf{x}$ to $\mathbf{x} + d\mathbf{x}_2$ is $\delta Q_2$, is there any physical reason why the heat transferred from $\mathbf{x}$ to $\mathbf{x} + d\mathbf{x}_1 + d\mathbf{x}_2$ should be $\delta Q_1 + \delta Q_2$? I apologize if the question is a bit unclear, I am still trying to formulate some of these ideas in my head. P.S. I know that for quasi-static processes, we have $\delta Q = T\ dS$ (and $\delta W = p\ dV$) so I have shown $\delta Q$ is a differential form in this case. I guess my question is about non-quasi-static processes in general. | We want to show that "infinitesimal" changes in heat along a given path in thermodynamic state space can be modeled via a differential 1-form conventionally called $\delta Q$. The strategy. We introduce a certain kind of mathematical object called a cochain. We argue that in thermodynamics, heat can naturally be modeled by a cochain. We note a mathematical theorem which says that to every sufficiently well-behaved cochain, there corresponds exactly one differential form, and in fact that the cochain is given by integration of that differential form. We argue that the differential form from step 3 is precisely what we usually call $\delta Q$ and has the interpretation of modeling "infinitesimal" changes in heat. Some math. In order to introduce cochains which we will argue should model heat, we need to introduce some other objects, namely singular cubes and chains. I know there is a lot of formalism in what follows, but bear with me because I think that understanding this stuff pays off in the end. Cubes and chains. Let the state space of the thermodynamic system be $\mathbb R^n$ for some positive integer $n$.
A singular $k$-cube in $\mathbb R^n$ is a continuous function $c:[0,1]^k\to \mathbb R^n$. In particular, a singular 1-cube is simply a continuous curve segment in $\mathbb R^n$. Let $S_k$ be the set of all singular $k$-cubes in $\mathbb R^n$, and let $C_k$ denote the set of all functions $f:S_k\to\mathbb Z$ such that $f(c) = 0$ for all but finitely many $c\in S_k$. Each such function is called a $k$-chain . The set of chains is a module. It turns out that the set of $k$-chains can be made into a module in the following simple way. For each $f,g\in C_k$, we define their sum $f+g$ as $(f+g)(c) = f(c) + g(c)$, and for each $a\in \mathbb Z$, we define the scalar multiple $af$ as $(af)(c) = af(c)$. I'll leave it to you to show that $f+g$ and $af$ are $k$-chains if $f$ and $g$ are. These operations turn the set $C_k$ into a module over the ring of integers $\mathbb Z$, the module of $k$-chains! Ok so what the heck is the meaning of these chains? Well, if for each singular $k$-cube $c\in S_k$ we abuse notation a bit and let it also denote a corresponding $k$-chain $f$ defined by $f(c) = 1$ and $f(c') = 0$ for all $c'\neq c$, then one can show that every singular $k$-chain can be written as a finite linear combination of singular $k$-cubes:
\begin{align}
a_1c_1 + a_2c_2 + \cdots + a_Nc_N
\end{align}
For $k=1$, namely if we consider 1-chains, then it is relatively easy to visualize what these guys are. Recall that each singular 1-cube $c_i$ in the chain is just a curve segment. We can think of each scalar multiple $a_i$ of a given cube $c_i$ in the chain as an assignment of some number, a sort of signed magnitude, to that cube in the chain. We then think of adding the different cubes in the chain as gluing the different cubes (segments) of the chain together. We are left with an object that is just a piecewise-continuous curve in $\mathbb R^n$ such that each curve segment that makes up the curve is assigned a signed magnitude. Drumroll please: introducing cochains! Now here's where we get to the cool stuff. Recall that the set $C_k$ of all $k$-chains is a module. It follows that we can consider the set of all linear functionals $F:C_k\to \mathbb R$, namely the dual module of $C_k$. This dual module is often denoted $C^k$. Every element of $C^k$ is then called a $k$-cochain (the "co" here being reminiscent of "covector", which is usually used synonymously with the term "dual vector"). In summary, cochains are linear functionals on the module of chains. Heat as a $1$-cochain. I'd now like to argue that heat can naturally be thought of as a $1$-cochain, namely a linear functional on $1$-chains. We do so in steps. For each piecewise-continuous path $c$ (aka a $1$-chain) in thermodynamic state space, there is a certain amount of heat that is transferred to a system when it undergoes a quasistatic process along that path. Mathematically, then, it makes sense to model heat as a functional $Q:C_1\to\mathbb R$ that associates a real number to each path and that physically represents how much heat is transferred to the system when it moves along the path. If $c_1+c_2$ is a $1$-chain with two segments, then the heat transferred to the system as it travels along this chain should be the sum of the heat transfers as it travels along $c_1$ and $c_2$ individually;
\begin{align}
Q[c_1+c_2] = Q[c_1] + Q[c_2]
\end{align}
In other words, the heat functional $Q$ should be additive. If we reverse the orientation of a chain, which physically corresponds to traveling along a path in state space in the reverse direction, then the heat transferred to the system along this reversed path should have the opposite sign;
\begin{align}
Q[-c] = -Q[c]
\end{align} If we combine steps 2 and 3, we find that $Q$ is a linear functional on chains; it is a cochain! To see why this is so, let a chain $a_1c_1 + a_2c_2$ be given. Since $a_1$ and $a_2$ are integers, we can rewrite this chain as
\begin{align}
a_1c_1 + a_2c_2 = \mathrm{sgn}(a_1) \underbrace{(c_1 + \cdots + c_1)}_{\text{$|a_1|$ terms}} +\mathrm{sgn}(a_2) \underbrace{(c_2 + \cdots + c_2)}_{\text{$|a_2|$ terms}}
\end{align}
and we can therefore compute:
\begin{align}
Q[a_1c_1 + a_2c_2]
&= Q[\mathrm{sgn}(a_1) (c_1 + \cdots + c_1) +\mathrm{sgn}(a_2) (c_2 + \cdots + c_2)] \\
&= \mathrm{sgn}(a_1)|a_1|Q[c_1] + \mathrm{sgn}(a_2)|a_2|Q[c_2] \\
&= a_1Q[c_1] + a_2 Q[c_2]
\end{align}
In summary, by thinking about heat as a functional on paths, and by imposing physically reasonable constraints on that functional, we have argued that heat is a $1$-cochain. From cochains to differential forms. Now that we have argued that heat can be thought of as a $1$-cochain, let's show how this leads to modeling "infinitesimal" changes in heat with a differential $1$-form. This is where things get really mathematically interesting. We first recall the definition of the integral of a differential $k$-form over a $k$-chain. If $\omega$ is a $k$-form, and $c = a_1c_1 + \cdots + a_Nc_N$ is a $k$-chain, then we define the integral of $\omega$ over $c$ as follows:
\begin{align}
\int_c\omega = a_1\int_{c_1} \omega + \cdots + a_N\int_{c_N}\omega.
\end{align}
In other words, we integrate $\omega$ over each $k$-cube $c_i$ in the chain multiplied by the appropriate signed magnitude $a_i$ associated with that cube, and then we add up all of the results to get the integral over the chain as a whole. For example, if $k=1$ then we have an integral of a $1$-form over a $1$-chain which is usually just called a line integral . Now notice that given any $k$-form $\omega$, there exists a corresponding cochain, which we'll call $F_\omega$, defined by
\begin{align}
F_\omega[c] = \int_c\omega
\end{align}
for any $k$-chain $c$. In other words, integration of a form over a chain can simply be thought of as applying a particular linear functional to that chain. But here's the really cool thing. The construction we just exhibited shows that to every differential form, there corresponds a cochain $F_\omega$ given by integration of $\omega$. A natural question then arises: is there a mapping that goes the other way? Namely, if $F$ is a given $k$-cochain, is there a corresponding $k$-form $\omega_F$ such that $F$ can simply be written as integration of $\omega_F$? The answer is yes! (provided we make suitable technical assumptions). In fact, there is a mathematical theorem which basically says the following: Given a sufficiently smooth $k$-cochain $F$, there is a unique differential form $\omega_F$ such that
\begin{align}
F[c] = \int_c\omega_F
\end{align}
for all suitably non-pathological chains $c$. If we apply this result to the heat $1$-cochain $Q$, then we find that there exists a unique corresponding differential $1$-form $\omega_Q$ such that for any reasonable chain $c$, we have
\begin{align}
Q[c] = \int_c \omega_Q
\end{align}
This is precisely what we want. If we identify $\omega_Q$ as $\delta Q$, then we have shown the following: The heat transferred to a system that moves along a given path ($1$-chain) in thermodynamic state space is given by the integral of a differential $1$-form $\delta Q$ along the path. This is a precise formulation of the statement that $\delta Q$ is a one-form that represents "infinitesimal" heat transfers. Note. This is a new, totally revamped version of the answer that actually answers the OP's question instead of just reformulating it mathematically. Most of the earlier comments pertain to older versions. Acknowledgement. I did not figure this all out on my own. In the original form of the answer, I reformulated the question in a mathematical form, and I essentially posted this mathematical question on math.SE: https://math.stackexchange.com/questions/658214/when-can-a-functional-be-written-as-the-integral-of-a-1-form That question was answered by user studiosus who found that the theorem on cochains and forms to which I refer was proven by Hassler Whitney roughly 60 years ago in his book Geometric Integration Theory . In attempting to understand the theorem, and especially the concept of cochains, I found the paper "Isomorphisms of Differential Forms and Cochains" written by Jenny Harrison to be very illuminating. In particular, her discussion of the theorem on forms and chains to which I refer above is nice.
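The two defining properties, $Q[c_1+c_2]=Q[c_1]+Q[c_2]$ and $Q[-c]=-Q[c]$, are exactly the properties of a line integral, and they can be checked numerically for the quasistatic case $\delta Q = T\,dS$. In the sketch below the temperature function $T(S,V)$ and the paths are arbitrary, chosen only for illustration:

```python
import math

def T(S, V):
    """A made-up, smooth temperature as a function of state (illustration only)."""
    return 300.0 + 10.0 * S - 2.0 * V

def Q(path, n=20000):
    """Line integral of the 1-form delta_Q = T dS along path(u), u in [0, 1]."""
    total = 0.0
    S_prev, V_prev = path(0.0)
    for k in range(1, n + 1):
        S, V = path(k / n)
        # Midpoint rule: evaluate T at the midpoint of each small step.
        total += T(0.5 * (S + S_prev), 0.5 * (V + V_prev)) * (S - S_prev)
        S_prev, V_prev = S, V
    return total

c1 = lambda u: (u, math.sin(u))           # first segment of a path in (S, V)
c2 = lambda u: (1 + u, math.sin(1 + u))   # second segment, glued on at u = 1
c12 = lambda u: (2 * u, math.sin(2 * u))  # the concatenated chain c1 + c2
rev = lambda u: (1 - u, math.sin(1 - u))  # c1 with reversed orientation, -c1

# Q[c1 + c2] = Q[c1] + Q[c2]  (additivity over chains)
assert abs(Q(c12) - (Q(c1) + Q(c2))) < 1e-6
# Q[-c] = -Q[c]  (orientation reversal flips the sign)
assert abs(Q(rev) + Q(c1)) < 1e-6
```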
"source": [
"https://physics.stackexchange.com/questions/96074",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3762/"
]
} |
96,327 | So I was thinking... If heat I feel is just lots of particles going wild and transferring their energy to other bodies, why am I not burned by the wind? When I thought about it more I figured out that wind usually carries some humidity, and since particles of liquid are moving same speed as the wind, they are basically static relative to each other, so no energy is transferred between them (wind and water particles). And if that water sticks to my skin and wind blows, it'll evaporate thus taking energy from my skin and make me feel cold. Thing is, I don't think that's really the case but even if it is, if I somehow dry out the wind, will it burn me if it's strong enough? And winds can reach some pretty high velocities (though I must admit I'm not sure if they are comparable to movement of atoms in warm bodies etc...). So. Bottom line. Can I be burned by wind in some perfect scenario? | Air molecules $(\require{mhchem}\ce{N2_}$ and $\ce{O_2})$ have an average speed of around $500\text{ m/s}$, varying some depending on the temperature. This means that a nice $5\text{ m/s}$ wind is a hundred times slower, and the energy represented by wind is 10,000 times smaller than the thermal energy . Therefore, wind does not have considerably more energy than calm air and will not burn you. Very high-speed winds, such as those in tornadoes, hurricanes, or the wind you would experience while sky-diving, are still only around $50\text{ m/s}$, so the energy density in the wind is still just 1% of the thermal energy density. Likewise, the ram pressure the air exerts on you would be small compared to the homogenous atmospheric pressure, so no large effects should be observed. Thus, one would not expect even high winds to burn you. The transfer of heat between you and the air is fairly complicated, and does not depend solely on the energy density of the air. Wind usually makes you feel colder, in fact. Heat travels across gradients of temperature. 
The air right next to your skin will be at the same temperature as your skin, but the air a small distance away will be at the ambient temperature. This creates a gradient of temperature, and heat travels across the gradient. When there is wind, the difference in temperature between your skin and the ambient air is the same, but the temperature falls down to the ambient temperature a shorter distance from your skin. This increases the temperature gradient, so that you cool down faster with a wind. Humidity also plays a role; heat transfer is not very simple. However, I think this suffices to explain why we should not expect wind to burn you. You will burn up if you travel through the air at extremely-high velocity. This happens to meteors and other astronomical objects moving at orbital velocities ( $\sim10^4\text{ m/s}$ ) when they enter Earth's atmosphere. It is also relevant for fast-moving aircraft, which do experience winds as fast as the thermal velocities of the molecules in the air. I've heard it said that the SR-71 Blackbird , the fastest airplane ever built, heated up so much due to aerodynamic heating that it had to be built to be loose at low speed so that the parts would fit together at top speed. See "Aerodynamic heating" for more. | {
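The numbers quoted above can be reproduced from the kinetic-theory formula $v_{rms}=\sqrt{3k_BT/m}$. A short illustrative script (the temperature and wind speed are example values, not measurements):

```python
import math

k_B = 1.380649e-23           # Boltzmann constant, J/K
m_N2 = 28 * 1.66053907e-27   # mass of one N2 molecule, kg

def v_rms(T):
    """Root-mean-square thermal speed of a gas molecule at temperature T."""
    return math.sqrt(3 * k_B * T / m_N2)

v_thermal = v_rms(293.0)  # room temperature
print(v_thermal)          # ~510 m/s: the "around 500 m/s" quoted above

wind = 5.0  # a nice 5 m/s breeze
# Kinetic energy scales as v**2, so the energy ratio is (v_thermal/wind)**2:
ratio = (v_thermal / wind) ** 2
print(ratio)  # ~10^4: wind carries ~10,000x less energy than thermal motion
```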
"source": [
"https://physics.stackexchange.com/questions/96327",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/29972/"
]
} |
96,362 | What do we really mean when we say that the neutron and proton wavefunctions together form an $\rm SU(2)$ isospin doublet? What is the significance of this? What is this transformation really doing to the wavefunctions (or fields)? | Two particles forming an $SU(2)$ doublet means that they transform into each other under an $SU(2)$ transformation. For example a proton and neutron (which form such a doublet) transform as,
\begin{equation}
\left( \begin{array}{c}
p \\
n
\end{array} \right) \xrightarrow{SU(2)} \exp \left( - \frac{ i }{ 2} \theta_a \sigma_a \right) \left( \begin{array}{c}
p \\
n
\end{array} \right)
\end{equation}
where $ \sigma _a $ are the Pauli matrices.
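As a quick numeric sanity check of this transformation (a hypothetical sketch, not part of the original answer; the angles are made up), the matrix $\exp(-\frac{i}{2}\theta_a\sigma_a)$, built here from its closed-form expansion, is unitary with unit determinant and genuinely mixes the proton and neutron components:

```python
import numpy as np

# Pauli matrices, stacked as a (3, 2, 2) array.
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def su2(theta_vec):
    """U = exp(-i theta_a sigma_a / 2), via the closed form
    cos(|theta|/2) 1 - i sin(|theta|/2) (n . sigma)."""
    theta_vec = np.asarray(theta_vec, dtype=float)
    theta = np.linalg.norm(theta_vec)
    if theta == 0:
        return np.eye(2, dtype=complex)
    n = theta_vec / theta
    n_dot_sigma = np.tensordot(n, sigma, axes=1)
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n_dot_sigma

U = su2([0.3, -1.1, 0.7])                        # an arbitrary isospin rotation
assert np.allclose(U.conj().T @ U, np.eye(2))    # unitary
assert np.isclose(np.linalg.det(U), 1.0)         # determinant 1
print(U @ np.array([1.0, 0.0]))  # a pure "proton" acquires a neutron component
```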
It turns out the real world obeys certain symmetry properties. For example, the equations describing the strong interactions of protons and neutrons are approximately invariant under unitary transformations with determinant 1 (the transformation shown above) between the proton and neutron. This didn't have to be the case, but it turns out that it is. Since the strong interaction is invariant under such transformations, each interaction term in the strong interaction Lagrangian is highly restricted. For one thing, this is useful since it allows one to make simple predictions about proton and neutron systems. In order to get a better understanding of this transformation and why the symmetry holds, consider the QCD Lagrangian for the up and down quarks (which, as for the proton and neutron, also make up an isospin doublet):
\begin{equation}
{\cal L} _{ QCD} = \bar{\psi} _{u,i} i \left( \left( \gamma ^\mu D _\mu \right) _{ ij} - m _u \delta _{ ij} \right) \psi _{u,j} + \bar{\psi} _{ d ,i} i \left( \left( \gamma ^\mu D _\mu \right) _{ ij} - m _d \delta _{ ij} \right) \psi _{d ,j}
\end{equation}
where $ D ^\mu $ is the covariant derivative and the sum over $ i,j $ is a sum over the color indices. Notice that if $ m _{ u} \approx m _d \equiv m $ we can write this Lagrangian in a more convenient form,
\begin{equation}
{\cal L} _{ QCD} = \bar{\psi} _{i} i \left( \left( \gamma ^\mu D _\mu \right) _{ ij} - m \delta _{ ij} \right) \psi _{j}
\end{equation}
where $ \psi \equiv \left( \psi _u \, \psi _d \right) ^T $. This Lagrangian is now invariant under transformations between up and down quarks ("isospin") since the color generators commute with the isospin generators. Since the proton and neutron only differ in their ratio of up to down quarks (the more precise statement is that their quantum numbers correspond to those of $uud$ and $udd$ respectively), we would expect these particles to behave very similarly when QED can be neglected (which is often the case because QED is much weaker than QCD at low energies). As an explicit example of the use of the symmetry consider the reactions:
\begin{align}
& 1) \quad p p \rightarrow d \pi ^+ \\
& 2) \quad p n \rightarrow d \pi ^0
\end{align}
where $ d $ is deuterium, an isospin singlet, and the pions form an isospin triplet. For the first interaction, the initial isospin state is $ \left| 1/2, 1/2 \right\rangle \otimes \left| 1/2, 1/2 \right\rangle = \left| 1, 1 \right\rangle $. The products have isospin $ \left| 0,0 \right\rangle \otimes \left| 1,1 \right\rangle = \left| 1,1 \right\rangle $. The second interaction has an initial isospin state, $ \frac{1}{\sqrt{2}} \left( \left| 0,0 \right\rangle + \left| 1,0 \right\rangle \right) $, and final isospin, $ \left| 0,0 \right\rangle $. Since both cases have some overlap between the isospin wavefunctions, both can proceed. However, the second process has a suppression factor of $ 1/ \sqrt{2} $ when contracting the isospin wavefunctions. To get the probabilities this will need to be squared. Thus one can conclude,
\begin{equation}
\frac{ \mbox{Rate of 1} }{ \mbox{Rate of 2}} \approx 2
\end{equation} Notice that even without knowing anything about specifics of the system we were able to make a very powerful prediction. All we needed to know is that the process occurs through QCD. | {
"source": [
"https://physics.stackexchange.com/questions/96362",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36793/"
]
} |
96,622 | The gravity at the centre of a star is zero as in the case of any uniform solid sphere with some mass. When a massive star dies, why does it give rise to a black hole at its centre? I know how to derive the field equations for gravity inside a star assuming the star as a uniform solid sphere of mass M and radius R. I need to know how to find the expression for the total pressure due to gravity at the centre. | It's because the value of the gravitational field at the center of a star is not the relevant quantity to describe gravitational collapse. The following argument is Newtonian. Let's assume for simplicity that the star is a sphere with uniform density $\rho$. Consider a small portion of the mass $m$ of the star that's not at its center but rather at a distance $r$ from its center. This portion feels a gravitational interaction towards the other mass in the star. It turns out, however, that all of the mass at distances greater than $r$ from the center will contribute no net force on this portion. So we focus on the mass at distances less than $r$ away from the center. Using Newton's Law of Gravitation, one can show that the net result of this mass is to exert a force on $m$ equal in magnitude to
\begin{align}
F = \frac{G m \left(\tfrac{4}{3}\pi r^3 \rho\right)}{r^2} = \frac{4}{3}\pi G m\rho r
\end{align}
and pointing toward the center of the star. It follows that unless there is another force on $m$ equal in magnitude to $F$ but pointing radially outward, the mass will be pulled towards the center of the star. This is basically what happens when stars exhaust their fuel; there no longer is sufficient outward pressure to counteract gravity, and the star collapses. | {
"source": [
"https://physics.stackexchange.com/questions/96622",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36746/"
]
} |
97,713 | Could someone please explain to me the idea that Schrödinger was trying to illustrate by the cat in his box? I understand that he was trying to introduce the notion of the cat being both alive and dead at the same time. But why was it necessary to introduce this thought experiment and what did it achieve? | First, a historical subtlety: Schrödinger actually stole the idea of the cat from Einstein. Second, both men – Einstein and Schrödinger – used the thought experiment to "explain" a point that was wrong. They thought it was absurd for quantum mechanics to say that the state $a|{\rm alive}\rangle+b|{\rm dead}\rangle$ was possible in Nature (it was claimed to be possible in quantum mechanics) because it allowed both "incompatible" types of the cat to exist simultaneously. Third, they were wrong because quantum mechanics does imply that such superpositions are totally allowed and must be allowed, and this fact can be experimentally verified – not really with cats but with objects of a characteristic size that has been increasing. Macroscopic objects have already been put into similar "general superposition states". The men introduced it to fight against the conventional, Copenhagen-like interpretations of quantum mechanics, and that's how most people are using the meme today, too. But the men were wrong, so from a scientifically valid viewpoint, the thought experiment shows that superpositions are indeed always allowed – it is a postulate of quantum mechanics – even if such states are counterintuitive. Similar superpositions of common-sense states are measured so that only $|a|^2$ and $|b|^2$ from the coefficients matter and may be interpreted as (more or less classical) probabilities. Due to decoherence, the relative phase is virtually unmeasurable for large, chaotic systems like cats, but in principle, even the relative phase matters.
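The statement that only $|a|^2$ and $|b|^2$ remain accessible once decoherence suppresses the relative phase can be illustrated numerically (a hypothetical two-level "cat" in numpy; the amplitudes are made up):

```python
import numpy as np

a, b = 0.6, 0.8                 # made-up amplitudes: |a|^2 = 0.36, |b|^2 = 0.64
psi = np.array([a, b], dtype=complex)   # a|alive> + b|dead>
rho = np.outer(psi, psi.conj())         # pure-state density matrix

rho_dec = np.diag(np.diag(rho))  # decoherence: off-diagonal terms suppressed

# The "alive"/"dead" probabilities are untouched by decoherence:
print(np.real(np.diag(rho)))       # [0.36 0.64]
print(np.real(np.diag(rho_dec)))   # [0.36 0.64]

# ...but an interference-sensitive observable still sees the relative phase:
sx = np.array([[0, 1], [1, 0]], dtype=complex)
print(np.trace(rho @ sx).real)      # ~0.96, phase-dependent
print(np.trace(rho_dec @ sx).real)  # exactly 0 after decoherence
```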
Quite generally, the people who are wrong – who have a problem with quantum mechanics – like to say that the superposition means that the cat is alive "and" dead. But the right, quantum answer is that the addition in the wave function doesn't mean "and". Instead, it means a sort of "or", so the superposition simply says that the cat is dead or alive, with the appropriate probabilities (quantum mechanics determines not only the probabilities but also their complex phases, and those may matter for other questions). | {
"source": [
"https://physics.stackexchange.com/questions/97713",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/37050/"
]
} |
98,241 | Dimensional analysis, and the notion that quantities with different units cannot be equal, is often used to justify very specific arguments, for example, you might use it to argue that a particular formula cannot possibly be the correct expression for a particular quantity. The usual approach to teaching this is to go "well kids, you can't add apples and oranges!" and then assume that the student will just find it obvious that you can't add meters and seconds. I'm sorry, but... I don't. I'm not convinced. $5$ meters plus $10$ seconds is $15$! Screw your rules! What are the units? I don't know, I actually don't understand what that question means. I'm specifically not convinced when this sort of thing is used to prove that certain formulae can't possibly be right. Maybe the speed of a comet is given by its period multiplied by its mass. Why not? It's a perfectly meaningful operation - just measure the quantities, multiply them, and I claim that the number you get will always equal the current speed of the comet. I don't see how "but it doesn't make sense to say mass times time is equal to distance divided by time" can be a valid counterargument, particularly because I don't really know what "mass times time" is, but that's a different issue. If it's relevant, I'm a math student and know extremely little about physics. | Physics is independent of our choice of units And for something like a length plus a time, there is no way to uniquely specify a result that does not depend on the units you choose for the length or for the time. Any measurable quantity belongs to some set $\mathcal{M}$ . Often, this measurable quantity comes with some notion of "addition" or "concatenation". For example, the length of a rod $L \in \mathcal{L}$ is a measurable quantity. You can define an addition operation $+$ on $\mathcal{L}$ by saying that $L_1 + L_2$ is the length of the rod formed by sticking rods 1 and 2 end-to-end. 
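In code form (a hypothetical Python sketch previewing the argument developed below), a genuine law like $p = mv$ admits a compensating rescaling of the output for any rescaling of the input units, while "length plus time" does not:

```python
def p(m, v):
    """A momentum-style law: transforms coherently under unit changes."""
    return m * v

def f(l, t):
    """The hypothetical 'length plus time' quantity."""
    return l + t

# Rescale units: kg -> g (x1000) and m/s -> cm/s (x100).
m, v = 2.0, 3.0
assert p(1000 * m, 100 * v) == 1e5 * p(m, v)   # one factor works for ALL inputs

# Rescale m -> cm (x100), leave seconds alone: no single Lambda works.
l, t = 5.0, 10.0
lam = f(100 * l, 1 * t) / f(l, t)    # Lambda forced by this particular (l, t)
assert f(100 * 1.0, 1 * 1.0) != lam * f(1.0, 1.0)   # ...fails for another pair
```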
The fact that we attach a real number to it means that we have an isomorphism $$
u_{\mathcal{M}} \colon \mathcal{M} \to \mathbb{R},
$$ in which $$
u_{\mathcal{M}}(L_1 + L_2) = u_{\mathcal{M}}(L_1) + u_{\mathcal{M}}(L_2).
$$ A choice of units is essentially a choice of this isomorphism. Recall that an isomorphism is invertible, so for any real number $x$ you have a possible measurement $u_{\mathcal{M}}^{-1}(x)$ . I'm being fuzzy about whether $\mathbb{R}$ is the set of real numbers or just the positive numbers; i.e. whether these are groups, monoids, or something else. I don't think it matters a lot for this post and, more importantly, I haven't figured it all out. Now, since physics should be independent of our choice of units, it should be independent of the particular isomorphisms $u_Q$ , $u_R$ , $u_S$ , etc. that we use for our measurables $Q$ , $R$ , $S$ , etc. A change of units is an automorphism of the real numbers; given two units $u_Q$ and $u'_Q$ , the change of units is $$
\omega_{u,u'} \equiv u'_Q \circ u_Q^{-1}$$ or, equivalently, $$
\omega_{u,u'} \colon \mathbb{R} \to \mathbb{R} \ni \omega(x) = u'_Q(u_Q^{-1}(x)).
$$ Therefore, \begin{align}
\omega(x+y) &= u'_Q(u_Q^{-1}(x+y)) \\
&= u'_Q(u_Q^{-1}(x)+u_Q^{-1}(y)) \\
&= u'_Q(u_Q^{-1}(x)) + u'_Q(u_Q^{-1}(y)) \\
&= \omega(x) + \omega(y).
\end{align} So, since $\omega$ is an automorphism of the reals, it must be a rescaling $\omega(x) = \lambda x$ with some relative scale $\lambda$ (As pointed out by @SeleneRoutley, this requires the weak assumption that $\omega$ is a continuous function -- there are everywhere discontinuous solutions as well. Obviously units aren't useful if they are everywhere discontinuous; in particular, so that instrumental measurement error maps an allowed space of $u_\mathcal{M}$ into an interval of $\mathbb{R}$ . If we allow the existence of an order operation on $\mathcal{M}$ , or perhaps a unit-independent topology, this could be made more precise.). Consider a typical physical formula, e.g., $$
F \colon Q \times R \to S \ni F(q,r) = s,
$$ where $Q$ , $R$ , and $S$ are all additive measurable in the sense defined above. Give all three of these measurables units. Then there is a function $$
f \colon \mathbb{R} \times \mathbb{R} \to \mathbb{R}
$$ defined by $$
f(x,y) = u_S(F(u_Q^{-1}(x),u_R^{-1}(y))).
$$ The requirement that physics must be independent of units means that if the units for $Q$ and $R$ are scaled by some amounts $\lambda_Q$ and $\lambda_R$ , then there must be a rescaling of $S$ , $\lambda_S$ , such that $$
f(\lambda_Q x, \lambda_R y) = \lambda_S f(x,y).
$$ For example, imagine the momentum function taking a mass $m \in M$ and a velocity $v \in V$ to give a momentum $p \in P$ . Choosing $\text{kg}$ for mass, $\text{m/s}$ for velocity, and $\text{kg}\,\text{m/s}$ for momentum, this equation is $$
p(m,v) = m*v.
$$ Now, if the mass unit is changed to $\text{g}$ , it is scaled by $1000$ , and if the velocity is changed to $\text{cm/s}$ , it is scaled by $100$ . Unit dependence requires that there be a rescaling of momentum such that $$
p(1000m,100v) = \lambda p(m,v).
$$ This is simple -- $10^5 mv = \lambda mv$ and so $\lambda = 10^5$ . In other words, $$
p[\text{g} \, \text{cm/s}] = 10^5 p[\text{kg} \, \text{m/s}].
$$ Now, let's consider a hypothetical situation where we have a quantity called "length plus time", defined that when length is measured in meters and time in seconds, and "length plus time" in some hypothetical unit called "meter+second", the equation for "length plus time" is $$
f(l,t) = l + t.
$$ This is what you've said - $5 \text{ m} + 10 \text{ s} = 15 \text{ "m+s"}$. Now, is this equation invariant under a change of units? Change the length scale by $\lambda_L$ and the time scale by $\lambda_T$ . Is there a number $\Lambda$ such that $$
f(\lambda_L l, \lambda_T t) = \lambda_L l + \lambda_T t
$$ is equal to $$
\Lambda f(l,t) = \Lambda(l+t)
$$ for all lengths and times $l$ and $t$ ? No! Therefore, this equation $f = l + t$ cannot be a valid representation in real numbers of a physical formula. | {
"source": [
"https://physics.stackexchange.com/questions/98241",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10549/"
]
} |
98,462 | Question. In the context of QM, I hear the phrases 'complete set of states' and 'complete basis' (among other similar expressions) thrown around rather a lot. What exactly is meant by 'complete'? Further remarks. I understand the term 'complete set', vaguely speaking, to mean a 'set from which all elements of our space can be constructed by linear combination'. However, to me this seems completely indistinct from the term basis . I thought at first that perhaps the word basis wasn't applicable to the infinite dimensional vector spaces that we often meet in QM, but having come across the existence of Schauder bases I no longer believe this to be the case. Is it then so that 'complete set of states' is a poorly-defined, somewhat redundant expression, or does it have a precise meaning distinct from 'constitutes a basis'? Two definitions that I have seen before (in the context of function spaces) are as follows: the functions $\{\phi_n\}$ are a 'complete set' or 'complete basis' if for all functions $f(x)$ there exists a set $\{a_n\}$ such that $$ \int_a^b \left| f(x) - \sum_n a_n \phi_n(x) \right|^2 w(x)\, dx = 0 \,, $$ where $w(x)$ is the weight function used in defining the norm on the space. The second definition is:
$$ \sum_n \phi_n(x) \phi_n^*(x') = \frac{1}{w(x)}\delta(x-x') \,.$$ So now I ask: are these definitions correct, and are they equivalent? Further, why are these definitions useful? When we talk about complete sets of states in QM, the relevance (in so far as I understand it) is that such sets can be used to construct all other states. If this is so, is not the term 'basis' more appropriate, since it directly expresses such a property of a set? Do the definitions above coincide with the definition of a Schauder basis for an infinite dimensional function space? Or are they different in some subtle way? I've asked several mathematicians this question; none have known the precise meaning in the sense I've described, but rather only in the sense of Cauchy sequence convergence in metric spaces. Hence my asking this on physics.SE. Thank you for reading. | There are at least three notions of basis depending on the mathematical structure you are considering. I will quickly discuss three cases relevant in physics (topological vector spaces are relevant too, but I will not consider them for the sake of brevity). (1) Pure algebraic structure (i.e. vector space structure over the field $\mathbb K=$ $\mathbb R$ or $\mathbb C$; actually the definition applies also to modules). Basis in the sense of Hamel. Given a vector space $V$ over the field $\mathbb K$, a set $B \subset V$ is called an algebraic basis or Hamel basis if its elements are linearly independent and every $v \in V$ can be decomposed as: $$v = \sum_{b \in B} c_b b$$ for a finite set of non-vanishing numbers $c_b$ in $\mathbb K$ depending on $v$. Completeness of $B$ means here that the set of finite linear combinations of elements in $B$ includes (in fact coincides with) the whole space $V$. Remarks. This definition applies to infinite dimensional vector spaces, too. Existence of algebraic bases arises from Zorn's lemma. It is possible to prove that all algebraic bases have the same cardinality.
Decomposition of $v$ over the basis $B$ turns out to be unique . (2) Banach space structure (i.e. the vector space over $\mathbb K$ admits a norm $||\:\:|| : V \to \mathbb R$ and it is complete with respect to the metric topology induced by that norm). Basis in the sense of Schauder. Given an infinite dimensional Banach space $V$ over the field $\mathbb K = \mathbb C$ or $\mathbb R$, a countable ordered set $B := \{b_n\}_{n\in \mathbb N} \subset V$ is called a Schauder basis if every $v \in V$ can be uniquely decomposed as: $$v = \sum_{n \in \mathbb N} c_n b_n\quad (2)$$ for a set, generally infinite, of numbers $c_n \in \mathbb K$ depending on $v$, where the convergence of the sum is understood both with respect to the Banach space topology and with respect to the order used in labelling $B$. Identity (2) means:
$$\lim_{N \to +\infty} \left|\left|v - \sum_{n=1}^N c_{n} b_n\right|\right| =0$$ Completeness of $B$ means here that the set of countably infinite linear combinations of elements in $B$ includes (in fact coincides with) the whole space $V$. Remarks. The elements of a Schauder basis are linearly independent (both for finite and infinite linear combinations). An infinite dimensional Banach space also admits Hamel bases since it is a vector space too. However it is possible to prove that Hamel bases are always uncountable, differently from Schauder ones. Not all infinite dimensional Banach spaces admit Schauder bases. A necessary, but not sufficient, condition is that the space must be separable (namely it contains a dense countable subset). (3) Hilbert space structure (i.e. the vector space over $\mathbb K$ admits a scalar product $\langle \:\:| \:\:\rangle : V \times V \to \mathbb K$ and it is complete with respect to the metric topology induced by the norm
$||\:\:||:= \sqrt{\langle \:\:| \:\:\rangle }$). Basis in the sense of Hilbert (Riesz- von Neumann). Given an infinite dimensional Hilbert space $V$ over the field $\mathbb K = \mathbb C$ or $\mathbb R$, a set $B \subset V$ is called Hilbert basis , if the following conditions are true: (1) $\langle z | z \rangle =1$ and $\langle z | z' \rangle =0$
if $z,z' \in B$ and $z\neq z'$, i.e. $B$ is an orthonormal system ; (2) if $\langle x | z \rangle =0$ for all $z\in B$ then $x=0$ (i.e. $B$ is maximal with respect to the orthogonality requirement). Hilbert bases are also called complete orthonormal systems (of vectors). The relevant properties of Hilbert bases are fully encompassed within the following pair of propositions. Proposition. If $H$ is a (complex or real) Hilbert space and $B\subset H$ is an orthonormal system (not necessarily complete) then, for every $x \in H$ the set of non-vanishing elements $\langle x| z \rangle$ with $z\in B$ is at most countable. Theorem. If $H$ is a (complex or real) Hilbert space and $B\subset H$ is a Hilbert basis, then the following identities hold, where the order employed in computing the infinite sums (in fact countable sums due to the previous proposition) does not matter: $$||x||^2 = \sum_{z\in B} |\langle x| z\rangle|^2\:, \qquad \forall x \in H\:,\qquad (3)$$ $$\langle x| y \rangle = \sum_{z\in B} \langle x|z \rangle \langle z| y\rangle\:, \qquad \forall x,y \in H\:,\qquad (4)$$ $$\lim_{N \to +\infty} \left|\left| x - \sum_{n=0}^N z_n \langle z_n|x \rangle \right|\right| =0\:, \qquad \forall x \in H \:,\qquad (5)$$ where the $z_n$ are the elements in $B$ with $\langle z|x\rangle \neq 0$. If an orthonormal system verifies one of the three requirements above then it is a Hilbert basis. Completeness of $B$ means here that the set of infinite linear combinations of elements in $B$ includes (in fact coincides with) the whole space $H$. Remarks. The elements of a Hilbert basis are linearly independent (both for finite and infinite linear combinations). All Hilbert spaces admit corresponding Hilbert bases. In a fixed Hilbert space all Hilbert bases have the same cardinality. An infinite dimensional Hilbert space is separable (i.e. it contains a dense countable subset) if and only if it admits a countable Hilbert basis.
An infinite dimensional Hilbert space also admits Hamel bases, since it is a vector space as well. In a separable infinite dimensional Hilbert space a Hilbert basis is also a Schauder basis (the converse is generally false). FINAL COMMENTS. Identities like this:
$$ \sum_n \phi_n(x) \phi_n^*(x') = \frac{1}{w(x)}\delta(x-x') \,\qquad (6)$$
stands for the completeness property of a Hilbert basis in $L^2(X, w(x)dx)$: Identity (6) is nothing but a formal version of equation (4) above.
However such an identity is completely formal and, in general, it does not hold if $\{\phi_n\}$ is a Hilbert basis of $L^2(X, w(x)dx)$ (also because the value of $\phi_n$ at $x$ does not make any sense in $L^2$ spaces, as its elements are defined up to zero measure sets and $\{x\}$ has zero measure). That identity sometimes holds rigorously if (1) the functions $\phi_n$ are sufficiently regular and (2) the identity is understood in distributional sense, working with suitably smooth test functions like ${\cal S}(\mathbb R)$ in $\mathbb R$. In $L^2(\mathbb R^n, d^nx)$ spaces all Hilbert bases are countable . Think of the basis of eigenvectors of the Hamiltonian operator of a harmonic oscillator in $L^2(\mathbb R)$ (in $\mathbb R^n$ one may use an $n$-dimensional harmonic oscillator). However, essentially for practical computations, it is also convenient to speak of formal eigenvectors of, for example, the position operator: $|x\rangle$. In this case, $x \in \mathbb R$ so it could seem that $L^2(\mathbb R)$ also admits uncountable bases. It is false! $\{|x\rangle\}_{x\in \mathbb R}$ is not an orthonormal basis. It is just a formal object, (very) useful in computations.
If you want to make these objects rigorous, you should picture the space of the states as a direct integral over $\mathbb R$ of finite dimensional spaces $\mathbb C$, or as a rigged Hilbert space . In both cases, however, $\{|x\rangle\}_{x\in \mathbb R}$ is not an orthonormal Hilbertian basis, and $|x\rangle$ does not belong to $L^2(\mathbb R)$. Hilbert bases are not enough to state and prove the spectral decomposition theorem for normal operators in a complex Hilbert space. Normal operators $A$ are those verifying $AA^\dagger= A^\dagger A$; unitary and self-adjoint ones are particular cases.
The notion of Hilbert basis is however enough for stating the said theorem for normal compact operators or normal operators whose resolvent is compact. In that case, the spectrum is a pure point spectrum (with only a possible point in the continuous part of the spectrum). It happens, for example, for the Hamiltonian operator of the harmonic oscillator. In general one has to introduce the notion of spectral measure or PVM (projector valued measure) to treat the general case. | {
"source": [
"https://physics.stackexchange.com/questions/98462",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/30217/"
]
} |
98,651 | How can the speed of sound be calculated for temperatures below 0 °C (down to -40 °C)? Does the calculation $v=331\ \frac{m}{s} + 0.6 \frac{m}{s°C} \times T$ still hold (where T's unit is °C)? | The speed of sound in an ideal gas is given by $$a = \sqrt{\gamma R T}$$ where $\gamma = \frac{C_p}{C_v}$, $R$ is the specific ideal gas constant and $T$ is the absolute temperature. Taking standard values for air, this makes a graph like this: The linear approximation is plotted by your formula, $a = 331\ \frac{m}{s}\ +\ 0.6 \frac{m}{sK} (T - 273\ K)$, with the 273 K to convert it to the Kelvin scale. As you can see, the linear approximation is nearly equal to the actual value in the range marked by the two black lines, from $T \approx 240\space\mathrm{K}$ to $T \approx 350\space\mathrm{K}$. If you don't care about accuracy so much, you could even stretch your definition to $T \in [200\space\mathrm{K},375\space\mathrm{K}]$, as shown by the green lines. The error is $\approx +1.3\%$ at $T=200\space\mathrm{K}$ and $\approx +1.0\%$ at $T=375\space\mathrm{K}$, as seen in the following graph of the percentage error of your approximation between $173\space\mathrm{K}$ and $473\space\mathrm{K}$. Of course, at low temperatures air doesn't behave like an ideal gas, so it all breaks down, but for the purposes of this question, I believe it's a fair assumption. | {
"source": [
"https://physics.stackexchange.com/questions/98651",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/40343/"
]
} |
98,703 | In textbooks, it is sometimes written that a mixed state can be represented as mixture of $N$ (I assume here $N<+\infty$) quantum pure states $|\psi_i\rangle$ with classical probabilities $p_i$:
$$\rho = \sum_{i=1}^N p_i |\psi_i \rangle \langle \psi_i| \tag{1}\:.$$
Above $p_i \in (0,1]$ and $\sum_i p_i =1$ and I do not necessarily assume that $\langle \psi_i|\psi_j\rangle =0$ if $i\neq j$ but I require that $\langle\psi_i |\psi_i\rangle =1$ so that $\rho \geq 0$ and $tr(\rho)=1$. (There is another procedure to obtain mixed states using a partial trace on a composite system, but I am not interested in this here). I am not sure that it makes any sense to distinguish between classical probabilities embodied in the coefficients $p_i$ and quantum probabilities included in the pure states $|\psi_i\rangle$ representing the quantum part of the state. This is because, given $\rho$ as an operator, there is no way to uniquely extract the numbers $p_i$ and the states $|\psi_i\rangle$. I mean, since $\rho = \rho^\dagger$ and $\rho$ is compact, it is always possible, for instance, to decompose it on a basis of its eigenvectors (and there are many different decompositions leading to the same $\rho$ whenever $\rho$ has degenerate eigenspaces). Using non-orthogonal decompositions many other possibilities arise. $$\rho = \sum_{j=1}^M q_j |\phi_j\rangle \langle \phi_j|\tag{2}$$ where again $q_j \in (0,1]$ and $\sum_j q_j =1$ and now $\langle \phi_i|\phi_j\rangle =\delta_{ij}$. I do not think there is a physical way to decide, a posteriori , through suitable measurements of observables if $\rho$ has been constructed as the incoherent superposition (1) or as the incoherent superposition (2). The mixed state has no memory of the procedure used to construct it. To pass from (1) to (2) one has, in a sense, to mix (apparently) classical and quantum probabilities. So I do not think that it is physically correct to associate a classical part and a quantum part to a mixed state, since there is no unique physical way to extract them from it. Perhaps my impression is simply based on a too naive theoretical interpretation of the formalism. I would like to know your opinions about this issue.
| Yes, the density matrix reconciles all quantum aspects of the probabilities with the classical aspect of the probabilities so that these two "parts" can no longer be separated in any invariant way. As the OP states in the discussion, the same density matrix may be prepared in numerous ways. One of them may look more "classical" – e.g. the method following the simple diagonalization from equation 1 – and another one may look more quantum, depending on states that are not orthogonal and/or that interfere with each other – like equations 2. But all predictions may be written in terms of the density matrix. For example, the probability that we will observe the property given by the projection operator $P_B$ is
$$ {\rm Prob}_B = {\rm Tr}(\rho P_B) $$
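A small numeric check of this decomposition-independence (a hypothetical qubit example in numpy, not part of the original answer): two different "classical mixtures" that build the same $\rho$ assign identical values of ${\rm Tr}(\rho P_B)$ to every projector.

```python
import numpy as np

ket0 = np.array([1.0, 0.0]); ket1 = np.array([0.0, 1.0])
plus  = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

def proj(k):
    """Rank-1 projector |k><k| for a normalized vector k."""
    return np.outer(k, k.conj())

# Two different preparation procedures...
rho_a = 0.5 * proj(ket0) + 0.5 * proj(ket1)    # classical mix of |0>, |1>
rho_b = 0.5 * proj(plus) + 0.5 * proj(minus)   # classical mix of |+>, |->

assert np.allclose(rho_a, rho_b)   # ...the very same density matrix,

# ...hence identical probabilities for ANY projective question P_B:
rng = np.random.default_rng(0)
for _ in range(5):
    k = rng.normal(size=2) + 1j * rng.normal(size=2)
    P_B = proj(k / np.linalg.norm(k))
    assert np.isclose(np.trace(rho_a @ P_B).real, np.trace(rho_b @ P_B).real)
```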
So whatever procedure produced $\rho$ will always yield the same probabilities for anything. Unlike other users, I do think that this observation by the OP has a nontrivial content, at least at the philosophical level. In a sense, it implies that the density matrix with its probabilistic interpretation should be interpreted exactly in the same way as the phase space distribution function in statistical physics – and the "quantum portion" of the probabilities inevitably arises out of this generalization because the matrices don't commute with each other. Another way to phrase the same interpretation: In classical physics, everyone agrees that we may have an incomplete knowledge about a physical system and use the phase space probability distribution to quantify that. Now, if we also agree that probabilities of different, mutually exclusive states (eigenstates of the density matrix) may be calculated as eigenvalues of the density matrix, and if we assume that there is a smooth formula for probabilities of some properties, then it also follows that even pure states – whose density matrices have eigenvalues $1,0,0,0,\dots$ – must imply probabilistic predictions for most quantities. Except for observables' or matrices' nonzero commutator, the interference-related quantum probabilities are no different and no "weirder" than the classical probabilities related to the incomplete knowledge. | {
"source": [
"https://physics.stackexchange.com/questions/98703",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/35354/"
]
} |
98,835 | Saying that ice is slippery is like saying that water is wet -- it's something we've known for as long as we can be said to have known anything. Presumably, humans as a species knew ice was slippery before we knew fire was hot, or that it existed. But ask anyone why, and they won't be able to give you any better explanation than one of those cave people would have. | Apparently this is a simple question with a not-so-simple answer. I believe the general consensus is that there is a thin layer of liquid water on the surface of the ice. This thin layer and the solid ice below it are responsible for the slipperiness of ice; the water easily moves on the ice. (Well, why is that ? Perhaps another SE question.) However, this is no real agreement as to why there is a thin layer of liquid water on the surface of ice to begin with. See here for a 2006 NYT article. And if are interested in the actual physics paper that the news article is based on, see here ( DOI ). One idea states that the molecules on the surface of ice vibrate more than the inner molecules, and that this is an intrinsic property of water ice. Since the outer molecules are vibrating faster, they're more likely to be in a liquid state. Another idea is that the movement of an object over ice causes heating, though I found conflicting sources as to whether there's a consensus on this. There is a popular idea that many hold but doesn't appear to hold water. (Heh.) This idea posited that the added pressure on the ice from a foot or skate causes the melting point to rise, which would cause the thin layer of liquid to form. However, calculating the resulting pressure and increase in melting point doesn't line up with observation; the melting point certainly does rise, but not enough. | {
"source": [
"https://physics.stackexchange.com/questions/98835",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36851/"
]
} |
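The pressure-melting idea dismissed at the end of the answer above can be checked with a rough Clausius-Clapeyron estimate. All the numbers below are my own illustrative assumptions (a 70 kg skater, a ~3 cm² blade contact patch), not figures from the answer:

```python
# All inputs below are illustrative values for a back-of-the-envelope check.
T = 273.15         # melting point of ice at 1 atm, K
L = 334e3          # latent heat of fusion, J/kg
v_water = 1e-3     # specific volume of liquid water, m^3/kg
v_ice = 1 / 917.0  # specific volume of ice, m^3/kg

# Clausius-Clapeyron: dT/dP = T * (v_water - v_ice) / L  (negative for ice,
# since ice is less dense than liquid water).
dT_dP = T * (v_water - v_ice) / L

# A 70 kg skater on a blade contact patch of roughly 3 cm^2:
P = 70 * 9.81 / 3e-4               # ~2.3 MPa of extra pressure
dT = dT_dP * P
print(f"melting point shifts by {dT:.2f} K")   # about -0.17 K
```

A shift of a fraction of a kelvin cannot explain slippery ice at, say, -10 °C, which is why the pressure explanation falls short.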
99,051 | Unlike rotations, the boost transformations are non-unitary. Therefore, the boost generators are not Hermitian. When boosts induce transformations in the Hilbert space, will those transformation be unitary? I think no. If that is the case, what is the physical significance of such non-unitary transformations corresponding to boosts in the Hilbert space? | On the actual Hilbert space of a consistent relativistic quantum mechanical system, the Lorentz transformations including boosts actually are unitary – which also means that the generators $J_{0i}$ are as Hermitian as the generators of rotations $J_{ij}$. We say that the Hilbert space forms a unitary representation of the Lorentz group. What the OP must be confused by is the fact that the ordinary vector representation composed of vectors $(t,x,y,z)$ is not a unitary representation of $SO(3,1)$. The $SO(3,1)$ transformations don't preserve any positively definite quadratic invariant constructed out of the coordinates $(t,x,y,z)$. After all, we know that an indefinite form, $t^2-x^2-y^2-z^2$, is conserved by the Lorentz transformations. So on a representation like the vector space of such $(t,x,y,z)$, the generators $J_{0i}$ would end up being anti-Hermitian rather than Hermitian. But if you take a Lorentz-invariant theory with a positive definite Hilbert space, like QED, the formula for $J_{0i}$ makes it manifest that it is a Hermitian operator, which means that $\langle \psi |\psi\rangle$ is preserved by the Lorentz boosts! The complex probability amplitudes for different states $c_i$ behave differently than the coordinates $t,x,y,z$ above. Note that the (non-trivial) unitary transformations of $SO(3,1)$ are inevitably infinite-dimensional. Finite-dimensional reps may be constructed out of the fundamental vector representation above and they are as non-unitary as the vector representation. But that's not true for infinite-dimensional reps. 
For example, the space of one-scalar-particle states in a QFT is a unitary representation of the Lorentz group. For each $p^\mu$ obeying $p^\mu p_\mu=m^2$, and there are infinitely (continuously) many values of such a vector (on the mass shell), the representation contains one basis vector (which are normalized to the Dirac delta function). The boosts just "permute them" along the mass shell which makes it obvious that the positively definite form is preserved when normalized properly. | {
"source": [
"https://physics.stackexchange.com/questions/99051",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36793/"
]
} |
99,113 | I noticed yesterday that my neighbor’s car had water coming out of the exhaust pipe in the morning. My first thought was that since the hot exhaust is hotter than the cold tail pipe, heat is transferred from the hot exhaust through the pipe, and with enough moisture in the exhaust, enough heat leaves that the humid air condenses, forming water drops. Is my thinking correct here? If not, please correct me. Does the water come from (i) a chemical byproduct of gasoline combustion or (ii) the humidity of the air? (I've only noticed this in the winter, not the summer, so I assume that during the summer this would not happen since the tail pipe is “already warm enough.”) | It can't be from the moisture in the air. If there was enough moisture in the air to produce condensation then it would be condensing on everything. There would actually be less of it condensing on the tailpipe, because the tailpipe is quite warm. In fact the water is generated by the combustion of the fuel in the car. It comes from the hydrogen in the fuel, plus some of the oxygen from the air. For example, the combustion of octane is
$$
\mathrm{2C_8H_{18}+25O_2 \to 16CO_2 + 18H_2O + \text{heat}}.
$$
This is just the net result of an extremely complex series of reactions, and motor fuel is not just octane, but ultimately burning fuel in a car will produce carbon dioxide and water in roughly equal amounts, plus much smaller amounts of a whole bunch of other things. Usually the $\mathrm{H_2O}$ will be in the form of water vapour, but if it's cold then this will condense, and this is the liquid water you see coming out of the tailpipe. | {
"source": [
"https://physics.stackexchange.com/questions/99113",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/11361/"
]
} |
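The combustion equation in the answer above can be sanity-checked, and the amount of water it implies per kilogram of fuel computed, in a few lines (the atomic masses are standard values; the script itself is an editorial sketch):

```python
# Atomic masses in g/mol (standard values).
M_C, M_H, M_O = 12.011, 1.008, 15.999

# 2 C8H18 + 25 O2 -> 16 CO2 + 18 H2O: every element balances.
assert 2 * 8 == 16               # carbon atoms
assert 2 * 18 == 18 * 2          # hydrogen atoms
assert 25 * 2 == 16 * 2 + 18     # oxygen atoms

M_octane = 8 * M_C + 18 * M_H    # ~114.2 g/mol
M_water = 2 * M_H + M_O          # ~18.0 g/mol

# The reaction yields 18 mol of water per 2 mol of octane:
water_per_kg_fuel = (18 * M_water) / (2 * M_octane)
print(f"~{water_per_kg_fuel:.2f} kg of water per kg of octane burned")  # ~1.42
```

So burning a full tank of fuel produces tens of kilograms of water vapour, which is why a visible trickle at a cold tailpipe is unsurprising.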
99,590 | Do atoms have any uniquely identifying characteristic besides their history? For example, if we had detailed information about a specific carbon atom from one of Planck's fingerprints, and could time-travel to the cosmic event in which the atom formed, would it contain information with which we could positively identify that the two are the same? | Fundamental particles are identical. If you have two electrons, one from the big bang and the other freshly minted from the LHC, there is no experiment you can do to determine which one is which. And if there was an experiment (even in principle) that could distinguish the electrons then they would actually behave differently. Electrons tend to the lowest energy state, which is the innermost shell of an atom. If I could get a pen and write names on all of my electrons then they would all fall down into this state. However, since we can't tell one electron from another, only a single electron (well, actually two, since there are two spin states of an electron) will fit in the lowest energy state; every other electron has to fit in a unique higher energy level. Edit: people are making a lot of comments on the above paragraph and what I meant by making electrons distinguishable, so I will give a concrete example: If we have a neutral carbon atom it will have six electrons in orbitals 1s² 2s² 2p². Muons and tauons are fundamental particles with very similar properties to the electron but different masses. Muons are ~200 times more massive than electrons and tauons are ~3477 times more massive than an electron. If we replace two of the electrons with muons and two of the electrons with tauons all of the particles would fall into the lowest energy shell (which can fit two of each kind because of spin). If in theory these particles only differed in mass by 1% or even 0.0000001% they would still be distinguishable and so all fit on the lowest energy level.
Now atoms are not fundamental particles; they are composite, i.e. composed of "smaller" particles: electrons, protons and neutrons. Protons and neutrons are themselves composed of quarks. But because of the way that quarks combine, they tend to always be in the lowest energy level, so all protons can be considered identical, and similarly for neutrons. To take the example of carbon, there are several different isotopes (different numbers of neutrons) of carbon: mostly ¹²C, but also ~1% ¹³C and ~0.0000000001% ¹⁴C (the latter decays with a half-life of ~5,730 years [carbon dating] but is replenished by cosmic-ray reactions in the upper atmosphere). Suppose we take two ¹²C atoms and force all of the spins to be the same. This is not too difficult for the electrons of the atom, since the inner electrons do not have a choice of spin because every spin in every level is already full; only the outer electrons matter. The nucleons also have spin. With our two ¹²C atoms with all of the same spins, we now have two indistinguishable particles, and if you set up an appropriate experiment (similar in principle to the electrons not being able to occupy the same state) we will be able to experimentally demonstrate that these two atoms are indistinguishable. Answer time: Are atoms unique? No. Do atoms have any uniquely identifying characteristic besides their history? The history of a particle does not affect it*. No particles are unique. Atoms may have isotopes or spin to identify one from another, but these are not unique from another particle with the same properties. Would it contain information with which we could positively identify that the two are the same? Yes, but only because we could positively identify that this carbon atom is the same as almost every other carbon atom in existence. *Unless it does, in which case it may be considered a different particle with different properties. | {
"source": [
"https://physics.stackexchange.com/questions/99590",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/38453/"
]
} |
100,443 | I heard from my lecturer that the electron has a dual nature. For instance, in Young's double-slit experiment the electron behaves as a particle at the source and the detector, but acts as a wave in between: it undergoes diffraction and bends. But we don't see a rise in energy. It should release about 500 keV of energy (please correct me if my approximation is wrong) according to the mass-energy equivalence relation, since a wave is a form of pure energy and, as the diffraction experiment shows, doesn't display the properties of having mass. So where has the mass gone? | I don't really like the whole wave-particle duality business because it obscures the more startling truth about particles: they aren't sometimes waves and sometimes particles, and they also don't transform into waves sometimes before reforming as particles; they are something completely different. It's like the story of the blind men and the elephant : a group of blind men are trying to describe an elephant by touch, but each man is touching a different part of the elephant. The man who touches the elephant's side says it's like a rough wall, the man who touches its leg says it's like a pillar, the man who touches its tail says it's like a rope, and so on. All the men are right, of course, but they simply have incomplete pictures of the elephant because they cannot observe its full character. Similarly, when we observe the behavior of things like electrons or photons, we sometimes think they are acting as particles, and other times that they are acting as waves. But really, they are neither waves nor particles, but something new that has properties of both ; it is just that in many situations only their particle-like or wave-like behavior happens to be relevant, and so we treat them as such.
So while the phrase "wave-particle duality" might make it seem as if particles can become waves (hence your question), what actually exists is a strange sort of object that always has properties of both particles and waves, only one of which may be easily observable in some cases, and the question of where an electron's mass goes when it becomes a wave isn't really applicable, since it doesn't become a wave at all. EDIT: It's worth noting, as Anna pointed out, that the wave-like character of objects isn't quite the same as a normal wave, as there isn't really any physical substance that's waving in any real sense. Instead, the "wave" is a probability function that assigns a probability to each point in space, representing the likelihood of finding the particle there; it just so happens that this function takes the mathematical form of a wave. This is a deep subject, so I'll refer you to Anna's answer for further information. | {
"source": [
"https://physics.stackexchange.com/questions/100443",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41171/"
]
} |
100,444 | Why is the moment of inertia (w.r.t. the center) of a hollow sphere higher than that of a solid sphere (with the same radius and mass)? I have no idea at all; I am asking because it is an interesting question that popped into my head while doing physics homework. | A hollow sphere will have a much larger moment of inertia than a uniform sphere of the same size and the same mass. If this seems counterintuitive, you probably carry a mental image of creating the hollow sphere by removing internal mass from the uniform sphere. This is an incorrect image, as such a process would create a hollow sphere of much lighter mass than the uniform sphere. The correct mental model corresponds to moving internal mass to the surface of the sphere. | {
"source": [
"https://physics.stackexchange.com/questions/100444",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/28934/"
]
} |
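The point of the answer above can be checked numerically with the standard formulas $I_{\text{solid}} = \frac{2}{5}MR^2$ and $I_{\text{hollow}} = \frac{2}{3}MR^2$: building the solid sphere out of thin shells shows exactly how much less the interior mass contributes (the script is an editorial sketch):

```python
import numpy as np

M, R = 1.0, 1.0

# Decompose the solid sphere into thin shells of radius r: each shell of
# mass dm contributes dI = (2/3) dm r^2, the thin-shell moment of inertia.
r = np.linspace(0.0, R, 20_001)
dm_dr = 3.0 * M * r**2 / R**3          # mass per unit radius, uniform density
integrand = (2.0 / 3.0) * dm_dr * r**2

# Trapezoidal integration by hand (portable across NumPy versions).
I_solid = np.sum((integrand[1:] + integrand[:-1]) * np.diff(r)) / 2.0

I_hollow = (2.0 / 3.0) * M * R**2      # all the mass sits at radius R

print(round(I_solid, 4))               # 0.4  -> the familiar (2/5) M R^2
print(round(I_hollow / I_solid, 2))    # 1.67 -> hollow is 5/3 times larger
```

The inner shells carry factors of $r^2$ both in their mass and in their lever arm, so moving that mass out to radius $R$ raises the total by the factor 5/3.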
100,951 | Does anyone know what Feynman was referring to in this interview which appears at the beginning of The Feynman Tips on Physics ? Note that he is referring to something that did not appear in the Feynman lectures. I didn't like to do the second year, because I didn't think I had
great ideas about how to present the second year. I felt that I
didn't have a good idea on how to do lectures on electrodynamics.
But, you see, in these challenges that had existed before about
lectures, they had challenged me to explain relativity, challenged me
to explain quantum mechanics, challenged me to explain the relation of
mathematics to physics, the conservation of energy. I answered every
challenge. But there was one challenge which nobody asked, which I
had set myself, because I didn't know how to do it. I've never
succeeded yet. Now I think I know how to do it. I haven't done it,
but I'll do it someday. And that is this: How would you explain
Maxwell's equations? How would you explain the laws of electricity
and magnetism to a layman, almost a layman, a very intelligent person,
in an hour lecture? How do you do it? I've never solved it. Okay,
so give me two hours of lecture. But it should be done in an hour of
lecture, somehow -- or two hours. Anyhow I've now cooked up a much better way of presenting the electrodynamics, a much more original and much more powerful way than
is in the book. But at that time I had no new way, and I complained
that I had nothing extra to contribute for myself. But they said, "Do
it anyway," and they talked me into it, so I did. Did this approach to teaching electrodynamics appear in any of his later writing? | I spent a long time researching this question for Carver Mead (mentioned by Art Brown) in 2008, because we were both curious what Feynman meant. Carver thought Feynman's "better way of presenting electrodynamics" would be something along the lines of his own "Collective Electrodynamics," but that turned out to be only partly true, as I discovered in four pages of Feynman's notes, written during the year he was teaching the FLP lectures on electrodynamics, which briefly explains his new program. [These notes can be found in The Caltech Archives: Box 62, Folder 8 of The Feynman Papers, "Working Notes And Calculations: Alternate Way to Handle Electrodynamics, 13 Dec 1963."] I asked Matt Sands if he knew anything about it, and he told me that in about the middle of the 2nd year of the FLP lectures, Feynman started to complain that he was disappointed that he had been unable to be more original. He explained that he thought he had now found the "right way to do it" -- unfortunately too late. He said that he would start with the vector and scalar potentials, then everything would be much simpler and more transparent. The notes are much more detailed than that. Unfortunately I don't have the right to publish them myself (without asking Caltech's permission)... but there is a plan to digitize the Feynman Papers and put them online - funding is being sought for that now. Mike Gottlieb:
Editor, The Feynman Lectures on Physics & Co-author, Feynman's Tips on Physics
P.S. As mentioned in my comment below, the notes have been posted. They can now be found here .
"source": [
"https://physics.stackexchange.com/questions/100951",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/40349/"
]
} |
101,461 | Here is why I pose this question: as I advance in physics, this idea haunts me more and more, and it has in fact helped me figure out and memorize many physical laws. It seems to me that there is a relation between all the laws of physics, a general law that determines how all other laws should be stated. I can't express this law in a complete sentence, and I'm not able to verify whether it has an exception or not, but I will try to explain this by examples to show what I really mean: Mechanics: Oscillations : It seems that the whole process happens because the object is trying to get back to its initial state, i.e. the origin. Electricity: Lenz's law : An induced electromotive force (emf) always gives rise to a current whose magnetic field opposes the original change in magnetic flux. In other words: the current is trying to keep its initial state by resisting the change of current. Atoms: Radiation and nuclear reactions happen because the atom is trying to be stable. Of course there are many other examples, but I'm just stating a few to make my point. The question is: is there something like that in physics? If not, can you give me a counterexample? | The principle of stationary action is what you're looking for. You can construct a quantity called the Lagrangian , which is the kinetic energy of the system minus its potential energy, namely: $$\mathcal{L} = T-V$$ It is a function of position and velocity; for example, for a particle on a line with a force acting on it such that $F = -\frac{dV}{dx}$, you have
$$\mathcal{L} (x,\dot{x})= \frac{1}{2}m\dot{x}^2 - V(x)$$
If this wasn't already abstract enough, the Lagrangian is important, because we're interested in its integral from time $t_1$ to $t_2$, namely: $$\mathcal{A} = \int\limits_{t_1}^{t_2} \mathcal{L} (x,\dot{x}) dt$$
It is called the action , and it's the "thing" nature tries to minimize or, more precisely, to make stationary.
What does that mean?
Well, it means that, for a particular system, nature chooses the trajectory for which the action, i.e. the Lagrangian integrated between two fixed endpoints, takes a stationary value. So, as you might have guessed, the goal of the game is to find the trajectory which makes the action stationary. The Lagrangian for a particle on a line is an extremely simple case; in general it doesn't have to be kinetic minus potential energy, and it can also be an explicit function of time, which means that some terms can depend on time directly, not just through the time dependence of position and velocity. How do you get the equations of motion out of a Lagrangian? You use the Euler-Lagrange equations :
$$\frac{d}{dt} \frac{\partial \mathcal{L}}{\partial \dot{q_i}} = \frac{\partial \mathcal{L}}{\partial q_i}$$
What's that $q$? Those are generalized coordinates , they can be Cartesian coordinates but they can be all sorts of different coordinates, whatever works the best. Try out the equations on my example of a particle on a line. You might be thinking, why the Lagrangian, what does that have to do with anything and how do we even get them?
Well... mostly quantum mechanics and guesswork. After all, classical mechanics is only a limit of quantum mechanics and therefore it has to obey its underlying principles. Although the Lagrangian is also used in quantum mechanics, there's an even more elegant concept, the Hamiltonian and Hamiltonian mechanics formalism, which basically sets the rules. Bottom line, you can view it as this:
$$\text{constructing a theory} \longleftrightarrow \text{finding the Lagrangian}$$ If you want a classical intuition for why is it kinetic energy minus the potential energy, you might want to read the article "Gravity, Time, and Lagrangians", Huggins, Elisha, Physics Teacher, v48 n8 p512-515 Nov 2010. | {
"source": [
"https://physics.stackexchange.com/questions/101461",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/25814/"
]
} |
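The stationary-action principle described above can be demonstrated numerically for a harmonic oscillator: the true path between fixed endpoints gives a smaller action than nearby wiggled paths. This is an editorial sketch with arbitrary discretization choices ($m = k = 1$, 10,001 grid points), not code from the answer:

```python
import numpy as np

m = k = 1.0
t = np.linspace(0.0, 1.0, 10_001)
dt = t[1] - t[0]

def action(x):
    v = np.gradient(x, dt)                        # xdot by finite differences
    lagrangian = 0.5 * m * v**2 - 0.5 * k * x**2  # T - V
    return np.sum(lagrangian) * dt                # integral of L dt

# True path solving xddot = -x with x(0) = 0, x(1) = 1:
x_true = np.sin(t) / np.sin(1.0)
# A perturbation that vanishes at both endpoints:
bump = np.sin(np.pi * t)

S0 = action(x_true)
S_eps = action(x_true + 0.01 * bump)
print(S0 < S_eps)   # True: wiggling the path away from the truth raises S
```

The first-order change in the action vanishes along the true path, so any small endpoint-preserving perturbation changes the action only at second order, and here that second-order change is positive.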
101,895 | The speed of sound depends on the density of the medium in which it is travelling and increases when the density increases. For example, sound travels faster in solids than in liquids, and faster in liquids than in gases, and the density is highest in solids, lower in liquids and lowest in gases. Now, iron has a density of about $7\,800\ \mathrm{kg/m^3}$ , while mercury has $13\,600\ \mathrm{kg/m^3}$ , but the speed of sound is $1\,450\ \mathrm{m/s}$ in mercury and $5\,130\ \mathrm{m/s}$ in iron, so mercury has a higher density, but sound travels slower in it. Why is this? | The speed of sound in a liquid is given by: $$ v = \sqrt{\frac{K}{\rho}} $$ where $K$ is the bulk modulus and $\rho$ is the density. The bulk modulus of mercury is $2.85 \times 10^{10}\ \mathrm{Pa}$ and the density is $13534\ \mathrm{kg/m^3}$ , so the equation gives $v = 1451\ \mathrm{m/s}$ . The speed of sound in solids is given by: $$ v = \sqrt{\frac{K + \tfrac{4}{3}G}{\rho}} $$ where $K$ and $G$ are the bulk modulus and shear modulus respectively. The bulk modulus of iron is $1.7 \times 10^{11}\ \mathrm{Pa}$ , the shear modulus is $8.2 \times 10^{10}\ \mathrm{Pa}$ and the density is $7874\ \mathrm{kg/m^3}$ , so the equation gives $v = 5956\ \mathrm{m/s}$ . You give a slightly different figure for the speed of sound in iron, but the speed does depend on the shape, and the figure you give, $5130\ \mathrm{m/s}$ , is the speed in a long thin rod. There are more details in the Wikipedia article I've linked. | {
"source": [
"https://physics.stackexchange.com/questions/101895",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/99505/"
]
} |
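The two formulas in the answer above can be evaluated directly with the quoted moduli and densities (the numbers are the answer's; the script is an editorial check):

```python
import math

# Mercury: v = sqrt(K / rho), with the answer's numbers.
K_hg, rho_hg = 2.85e10, 13534.0
v_hg = math.sqrt(K_hg / rho_hg)
print(round(v_hg), "m/s")            # 1451 m/s

# Iron: v = sqrt((K + 4G/3) / rho).
K_fe, G_fe, rho_fe = 1.7e11, 8.2e10, 7874.0
v_fe = math.sqrt((K_fe + 4.0 * G_fe / 3.0) / rho_fe)
print(round(v_fe), "m/s")            # 5956 m/s
```

The comparison makes the point of the answer explicit: iron's much larger stiffness (its bulk plus shear moduli) more than compensates for mercury's higher density.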
101,913 | I was trying to unlock my car with a keyfob, but I was out of range. A friend of mine said that I had to hold the transmitter next to my head. It worked, so I tried the following later that day: (1) walked away from the car until I was out of range; (2) put the key next to my head (it worked); (3) put the key on my chest (it worked); (4) put the key on my leg (didn't work). So first I thought it had to do with the height of the transmitter. But I am out of range if I use the key at the same height as my head but not right next to my head. The same applies when my key is at the same height as my chest. So it seems to have nothing to do with height. Then I thought my body is acting like an antenna, but how is that possible if I am holding the key? Why would it only amplify the signal if I hold it against my head and not if I simply hold it in my hand? Here's a vid of Top Gear demonstrating it . | This is a really interesting question. It turns out that your body is reasonably conductive (think salt water, more on that in the answer to this question ), and that it can couple to RF sources capacitively. Referring to the Wikipedia article on keyless entry systems, they typically operate at an RF frequency of $315\text{ MHz}$, the wavelength of which is about $1\text{ m}$. Effective antennas (ignoring fractal antennas ) typically have a length of $\frac{\lambda}{2}=\frac{1}{2}\text{m}\approx1.5\text{ ft}$. So, the effect is probably caused by one or more of the cavities in your body (maybe your head or chest cavity) acting as a resonance chamber for the RF signal from your wireless remote. For another example of how a resonance chamber can amplify waves, think about the hollow area below the strings of a guitar. Without the hollow cavity the sound from the guitar would be almost imperceptible. Edit: As elucidated in the comments, a cavity doesn't necessarily need to be an empty space; just a bounded area which partially reflects electromagnetic waves at the boundaries.
The area occupied by your brain satisfies these conditions. Edit 2: As pointed out in the comments, a string instrument is significantly louder with just a sounding board behind the strings, so my analogy, though true, is a bit misleading. Edit 3: As promised in the comments, I made some more careful measurements of the effect in question, using a number of different orientations of remote position and pointing. I've posted these as a separate answer to this question. | {
"source": [
"https://physics.stackexchange.com/questions/101913",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/38873/"
]
} |
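The wavelength figure in the answer above follows directly from $\lambda = c/f$; a two-line check (editorial addition):

```python
c = 299_792_458.0  # speed of light, m/s
f = 315e6          # keyless-entry carrier frequency quoted in the answer, Hz

wavelength = c / f
half_wave = wavelength / 2
print(round(wavelength, 2), "m")   # 0.95 m
print(round(half_wave, 2), "m")    # 0.48 m, roughly the scale of a torso or head
```

The half-wave length landing near human body dimensions is what makes the resonance explanation plausible.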
102,222 | Carbon-14 has a half-life of 5,730 years. That means that after 5,730 years, half of that sample decays. After another 5,730 years, a quarter of the original sample decays (and the cycle goes on and on, and one could use virtually any radioactive isotope). Why is this so? Logically, shouldn't it take 2,865 years for the quarter to decay, rather than 5,730? | The right way to think about this is that, over 5,730 years, each single carbon-14 atom has a 50% chance of decaying . Since a typical sample has a huge number of atoms¹, and since they decay more or less independently², we can statistically say, with a very high accuracy, that after 5,730 years half of all the original carbon-14 atoms will have decayed, while the rest still remain. To answer your next natural question, no, this does not mean that the remaining carbon-14 atoms would be "just about to decay". Generally speaking, atomic nuclei do not have a memory³: as long as it has not decayed, a carbon-14 nucleus created yesterday is exactly identical to one created a year ago or 10,000 years ago or even a million years ago. All those nuclei, if they're still around today, have the same 50% probability of decaying within the next 5,730 years. If you like, you could imagine each carbon-14 nucleus repeatedly tossing a very biased imaginary coin very fast (faster than we could possibly measure): on each toss, with a very, very tiny chance, the coin comes up heads and the nucleus decays; otherwise, it comes up tails, and the nucleus stays together for now. Over a period of, say, a second or a day, the odds of any of the coin tosses coming up heads are still tiny — but, over 5,730 years, the many, many tiny odds gradually add up to a cumulative decay probability of about 50%. ¹ A gram of carbon contains about 0.08 moles, or about 5 × 10²² atoms.
In a typical natural sample, about one in a trillion (1/10¹²) of these will be carbon-14, giving us about 50 billion (5 × 10¹⁰) carbon-14 atoms in each gram of carbon. ² Induced radioactive decay does occur, most notably in fission chain reactions . Carbon-14, however, undergoes spontaneous β⁻ decay , whose rate is not normally affected by external influences to any significant degree. ³ Nuclear isomers and other excited nuclear states do exist, so it's not quite right to say that all nuclei of a given isotope are always identical. Still, even these can, in practice, be effectively modeled as discrete states, with spontaneous transitions between different states occurring randomly with a fixed rate over time, just as nuclear decay events do. | {
"source": [
"https://physics.stackexchange.com/questions/102222",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/37390/"
]
} |
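The "biased coin" picture above, made quantitative: the surviving fraction after time $t$ is $(1/2)^{t/T}$, which directly addresses the question's 2,865-year intuition (after half a half-life, about 71% survives, not 75%). A minimal editorial sketch:

```python
half_life = 5730.0  # years, for carbon-14

def surviving_fraction(t):
    # Each atom independently survives a half-life with probability 1/2,
    # so the expected surviving fraction is (1/2)**(t / half_life).
    return 0.5 ** (t / half_life)

print(surviving_fraction(5730))            # 0.5  : half the sample remains
print(surviving_fraction(11460))           # 0.25 : half of the remainder again
print(round(surviving_fraction(2865), 3))  # 0.707: after 2,865 years, NOT 0.75
```

Decay is exponential rather than linear in time, which is exactly why equal half-life intervals each remove half of whatever remains.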
102,527 | I suspect I'm missing something obvious, but I'm coming up blank. I've gotten pretty comfortable with so-called natural units over the years: in doing quantum mechanics/QFT, it's common to set $c = \hbar = 1$ and in GR, it's common to set $c = G = 1$. I'm told that quantum gravity researchers are fond of Planck units , where (according to that Wiki page) we set $c = \hbar = G = 1$, along with a couple other fundamental constants. I don't know what to make of this, though: it's dimensionally inconsistent in 3+1 dimensions. That same article gives the dimensions of each constant: $$\begin{align}
[c] &= L T^{-1} \\
[G] &= L^3 M^{−1} T^{−2} \\
[\hbar] &= L^2 M T^{−1}
\end{align}$$ Setting $c$ and $G$ to 1 implies $L=T=M$. Setting $c$ and $\hbar$ to 1 implies $L=T=M^{-1}$. Hence, making all three unity is internally inconsistent: mass cannot simultaneously have dimensions of length and reciprocal length. So, how do Planck units work? | The short answer is that it can, if $M = 1 = M^{-1}$. In this way of looking at it, all quantities in Planck units are pure numbers. The longer answer is that there are two different ways of thinking about natural unit systems. Natural unit systems in terms of standard units One of them, and perhaps the easier one to understand, is that you're still working in a "traditional" unit system in which distinct units for all quantities exist, but the units are chosen such that the numerical values of certain constants are equal to 1. For example, if you want to set $c = 1$, you're not literally setting $c = 1$, you're actually setting $c = 1\,\frac{\text{length unit}}{\text{time unit}}$. Length and time don't actually have the same units in this interpretation; they're equivalent up to a multiplication by factors of $c$. In other words, it's understood that to convert from, say, a time unit to a length unit you multiply by $c$, and so that is left implicit. In order to do this, of course, you have to choose a length unit and time unit which are compatible with this equation. So you couldn't use meters as your length unit and seconds as your time unit, but you could use light-seconds and seconds, respectively. If you want to set multiple constants to have numerical values of 1, that constrains your possible choices of units even further. For example, suppose you're setting $c$ and $G$ to have numerical values of 1. That means your units have to satisfy both the constraints $$\begin{align}
c &= \frac{\text{length unit}}{\text{time unit}} = \frac{\ell_G}{t_G} &
G &= \frac{(\text{length unit})^3}{(\text{mass unit})(\text{time unit})^2} = \frac{\ell_G^3}{m_Gt_G^2}
\end{align}$$ where I've introduced $\ell_G$, $t_G$, and $m_G$ to stand for the length, time, and mass units in this system, respectively. You can then invert these equations to solve for $\ell_G$, $t_G$, and $m_G$ in terms of $c$ and $G$ - but as you can probably tell, the system of equations is underdetermined. It still gives you the freedom to choose one unit to be part of your unit system, such as $$\text{kilogram} = \text{mass unit} = m_G$$ Having made that choice, you can now solve for $m_G$, $\ell_G$, and $t_G$ in terms of $c$, $G$, and $\text{kilogram}$ (or whatever other choice you might have made; each choice gives you a different unit system). Running through the math for this gets you $$\begin{align}
m_G &= 1\text{ kg} &
\ell_G &= \frac{G (1\text{ kg})}{c^2} &
t_G &= \frac{\ell_G}{c} = \frac{G (1\text{ kg})}{c^3}
\end{align}$$ Now you can plug in values of $G$ and $c$ in, say, SI units, and get conversions from SI (or whatever) to this unit system. Note that, as I said, length does not literally have the same units as time or mass, but you can convert between the length unit, time unit, and mass unit by multiplying by factors of $G$ and $c$, constants which have numerical values of 1. In a sense, you can consider this multiplication by $G^ic^j$ as analogous to a gauge transformation, i.e. a transformation that has no effect on the numerical value of a quantity, and the units of length, time, and mass are mapped on to each other by this transformation just as gauge-equivalent states are mapped on to each other by a gauge transformation in QFT. So it's more proper to say $L \sim T \sim M$; the dimensions are not equal , just equivalent under some transformation. If you do the same thing but setting $c = \hbar = 1$ instead, remember what you're really doing is specifying that your units must satisfy the constraints $$\begin{align}
c &= \frac{\text{length unit}}{\text{time unit}} = \frac{\ell_Q}{t_Q} &
\hbar &= \frac{(\text{length unit})^2(\text{mass unit})}{(\text{time unit})} = \frac{\ell_Q^2m_Q}{t_Q}
\end{align}$$ ($Q$ is for "quantum" because these are typical QFT units), and then running through the math, again with $m_Q = 1\text{ kg}$, you get $$\begin{align}
m_Q &= 1\text{ kg} &
\ell_Q &= \frac{\hbar}{(1\text{ kg})c} &
t_Q &= \frac{\ell_Q}{c} = \frac{\hbar}{(1\text{ kg})c^2}
\end{align}$$ Again, the units are not literally identical, but $\ell_Q \sim t_Q \sim m_Q^{-1}$ under multiplication by factors of $\hbar$ and $c$. Of course, your third constraint doesn't have to be a choice of one of the fundamental units. You can also choose a third physical constant to have a numerical value of 1. To obtain Planck units, for example, you would specify $$\begin{align}
c &= \frac{\text{length unit}}{\text{time unit}} = \frac{\ell_P}{t_P} \\
\hbar &= \frac{(\text{length unit})^2(\text{mass unit})}{(\text{time unit})} = \frac{\ell_P^2m_P}{t_P} \\
G &= \frac{(\text{length unit})^3}{(\text{mass unit})(\text{time unit})^2} = \frac{\ell_P^3}{m_Pt_P^2}
\end{align}$$ You can tell that this is no longer an underdetermined system of equations. Solving it gives you $$\begin{align}
m_P &= \sqrt{\frac{\hbar c}{G}} &
\ell_P &= \sqrt{\frac{\hbar G}{c^3}} &
t_P &= \sqrt{\frac{\hbar G}{c^5}}
\end{align}$$ Here, since you've set three constants to have numerical values of 1, your three fundamental Planck units will be equivalent up to multiplications by factors of those three constants, $G$, $\hbar$, and $c$. In other words, multiplication by any factor of the form $G^i\hbar^jc^k$ is the equivalent to the gauge transformation I mentioned earlier. You can tell that all these units are equivalent under such a transformation, but more than that, all powers of them are equivalent! In particular, you can convert between $M$ and $M^{-1}$ by multiplying by constants whose numerical value in this unit system is equal to 1, and thus it's not a problem that $M \sim M^{-1}$ here. Unit systems as vector spaces Another way of understanding unit systems, which is kind of a logical extension of the previous section, is to think of them as a vector space. Elements of this vector space correspond to dimensions of quantities, and the basis vectors can be chosen to correspond to the fundamental dimensions $L$, $T$, and $M$. (Of course you could just as well choose another basis, but this one suits my purposes.) You might represent $$\begin{align}
L &\leftrightarrow (1,0,0) &
T &\leftrightarrow (0,1,0) &
M &\leftrightarrow (0,0,1)
\end{align}$$ Addition of vectors corresponds to multiplication of the corresponding dimensions. Derived dimensions correspond to other vectors, like $$\begin{align}
[c] = LT^{-1} &\leftrightarrow (1,-1,0) \\
[G] = L^3M^{-1}T^{-2} &\leftrightarrow (3,-2,-1) \\
[\hbar] = L^2MT^{-1} &\leftrightarrow (2,-1,1)
\end{align}$$ In this view, setting a constant to have a numerical value of 1 corresponds to projecting the vector space onto a subspace orthogonal to the vector corresponding to that constant. For example, if you want to set $c = 1$, you project the 3D vector space on to the 2D space orthogonal to $(1,-1,0)$. Any two vectors in the original space which differ by a multiple of $(1,-1,0)$ correspond to the same point in the subspace - just like how, in the previous section, any two dimensions which could be converted into each other by multiplying by factors of $c$ could be considered equivalent. But in this view, you can actually think of the two dimensions as becoming the same , so that e.g. length and time are actually measured in the same unit. Since in Planck units you set three constants to have a numerical value of one, in the dimensions-as-vector-space picture, you need to perform three projections to get to Planck units. Performing three projections on a 3D vector space leaves you with a 0D vector space - the entire space has been reduced to just a point. All the units are mapped to that one point, and are the same. So again, $M$ and $M^{-1}$ are identical, and there's no conflict. | {
"source": [
"https://physics.stackexchange.com/questions/102527",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
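The Planck-unit algebra in the answer above is easy to check numerically. A minimal Python sketch of my own, using rounded SI values for $G$, $\hbar$, and $c$, solves the three constraints and verifies that each constant's numerical value is 1 in the resulting units:

```python
import math

# Rounded SI values (illustrative; CODATA gives more digits)
G = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.0546e-34  # kg m^2 s^-1
c = 2.9979e8       # m s^-1

# Solve the three constraints c = l/t, hbar = l^2 m / t, G = l^3 / (m t^2)
m_P = math.sqrt(hbar * c / G)     # Planck mass
l_P = math.sqrt(hbar * G / c**3)  # Planck length
t_P = math.sqrt(hbar * G / c**5)  # Planck time

# Each defining constant has numerical value 1 in these units:
assert math.isclose(l_P / t_P, c)
assert math.isclose(l_P**2 * m_P / t_P, hbar)
assert math.isclose(l_P**3 / (m_P * t_P**2), G)

print(f"m_P ≈ {m_P:.3e} kg, l_P ≈ {l_P:.3e} m, t_P ≈ {t_P:.3e} s")
```

The three asserts are exactly the "gauge transformation" statement from the answer: multiplying by $G^i\hbar^jc^k$ factors whose numerical value is 1 maps the units onto each other.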
102,910 | It is fine to say that for an object flying past a massive object, the spacetime is curved by the massive object, and so the object flying past follows the curved path of the geodesic, so it "appears" to be experiencing gravitational acceleration. Do we also say along with it, that the object flying past in reality experiences NO attraction force towards the massive object? Is it just following the spacetime geodesic curve while experiencing NO attractive force? Now come to the other issue: Supposing two objects are at rest relative to each other, i.e. they are not following any spacetime geodesic. Then why will they experience gravitational attraction towards each other? E.g. why will an apple fall to earth? Why won't it sit there in its original position high above the earth? How does the curvature of spacetime cause it to experience an attraction force towards the earth, and why would we need to exert a force in the reverse direction to prevent it from falling? How does the curvature of spacetime cause this? When the apple was detached from the branch of the tree, it was stationary, so it did not have to follow any geodesic curve. So we cannot just say that it fell to earth because its geodesic curve passed through the earth. Why did the spacetime curvature cause it to start moving in the first place? | To really understand this you should study the differential geometry of geodesics in curved spacetimes. I'll try to provide a simplified explanation. Even objects "at rest" (in a given reference frame) are actually moving through spacetime, because spacetime is not just space, but also time: the apple is "getting older" - moving through time. The "velocity" through spacetime is called a four-velocity and it is always equal to the speed of light. Spacetime in a gravitational field is curved, so the time axis (in simple terms) is no longer orthogonal to the space axes. The apple moving first only in the time direction (i.e.
at rest in space) starts accelerating in space thanks to the curvature (the "mixing" of the space and time axes) - the velocity in time becomes velocity in space. The acceleration happens because time flows slower where the gravitational potential is lower. The apple is moving deeper into the gravitational field, thus its velocity in the "time direction" is changing (as time gets slower and slower). The four-velocity is conserved (always equal to the speed of light), so the object must accelerate in space. This acceleration is in the direction of the decreasing gravitational gradient. Edit - based on the comments I decided to clarify what the four-velocity is: 4-velocity is a four-vector, i.e. a vector with 4 components. The first component is the "speed through time" (how much of the coordinate time elapses per 1 unit of proper time). The remaining 3 components are the classical velocity vector (speed in the 3 spatial directions). $$ U=\left(c\frac{dt}{d\tau},\frac{dx}{d\tau},\frac{dy}{d\tau},\frac{dz}{d\tau}\right) $$ When you observe the apple in its rest frame (the apple is at rest - zero spatial velocity), the whole 4-velocity is in the "speed through time". This is because in the rest frame the coordinate time equals the proper time, so $\frac{dt}{d\tau} = 1$. When you observe the apple from some other reference frame, where the apple is moving at some speed, the coordinate time is no longer equal to the proper time. Time dilation means that less proper time is measured by the apple than the elapsed coordinate time (the time of the apple is slower than the time in the reference frame from which we are observing the apple). So in this frame, the "speed through time" of the apple is more than the speed of light ($\frac{dt}{d\tau} > 1$), but the speed through space is also increasing. The magnitude of the 4-velocity always equals c, because it is an invariant (it does not depend on the choice of the reference frame).
It is defined as: $$ \left\|U\right\| =\sqrt{c^2\left(\frac{dt}{d\tau}\right)^2-\left(\frac{dx}{d\tau}\right)^2-\left(\frac{dy}{d\tau}\right)^2-\left(\frac{dz}{d\tau}\right)^2} $$ Notice the minus signs in the expression - these come from the Minkowski metric. The components of the 4-velocity can change when you switch from one reference frame to another, but the magnitude stays unchanged (all the changes in components "cancel out" in the magnitude). | {
"source": [
"https://physics.stackexchange.com/questions/102910",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/42007/"
]
} |
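The invariance claimed in the answer above — that the Minkowski norm of the 4-velocity equals $c$ in every frame — can be verified numerically. A small sketch of my own, in units where $c = 1$:

```python
import math

c = 1.0  # work in units where c = 1

def four_velocity(v):
    """U = (c*dt/dτ, dx/dτ, 0, 0) for motion at ordinary speed v along x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma * c, gamma * v, 0.0, 0.0)

def minkowski_norm(U):
    """sqrt(c²(dt/dτ)² - (dx/dτ)² - (dy/dτ)² - (dz/dτ)²), signature (+,-,-,-)."""
    t, x, y, z = U
    return math.sqrt(t**2 - x**2 - y**2 - z**2)

# At rest, the whole 4-velocity is "speed through time": (c, 0, 0, 0).
# At any ordinary speed, the components grow but the norm stays exactly c.
for v in (0.0, 0.5, 0.9, 0.999):
    assert math.isclose(minkowski_norm(four_velocity(v)), c, rel_tol=1e-9)
print("|U| = c in every frame")
```

Note how at $v = 0.999$ the "speed through time" $\gamma c$ is far greater than $c$, yet the subtraction in the norm brings the magnitude back to exactly $c$, as the answer states.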
102,930 | We studied electric fields due to point charges. The magnitude of these fields decreases with the square of the distance from the point charge. It seems to me that we could treat the positive terminal of a battery as a point charge. So, I would conclude that the magnitude of the electric field set up by the positive terminal inside a wire in a circuit should fall off with the square of the distance from the end of the battery. However, this does not happen. Instead, the electric field inside a wire in a circuit is constant. Why is this? Is it that the positive terminal can't be modeled as a point charge? Or is it perhaps some special property of the wire, or the fact that there is moving charge in the system? | You are absolutely correct, the electric field does fall off with distance from the battery. However, this is only true during the transient state (the state of the field when the battery is first connected). In fact not only are the magnitudes inconsistent, but so is the direction of the field. The field doesn't always point in the direction of the wire. The entire field is inconsistent in both magnitude and direction. The image below illustrates this: Ignore the green arrows. The yellow arrows indicate the direction and the magnitude of the field at that point. See how they're totally wrong? Let's see what happens next. Note the field right before and after the "right bend". The field "going in" is greater in magnitude than the field "going out", and the field going out is pointing in the wrong direction! Because of this, electrons start building up at the "right bend" (since more electrons are going in than going out). The build-up of electrons creates a new field, which results in the fields before and after the right bend changing. In fact this happens everywhere the field isn't consistent in magnitude and direction.
Electrons start building up, which generates a new field that alters the original field until everything points in the right direction and is of equal magnitude, preventing further electron build-up. So during the transient state, electrons build up at certain places along the wire, generating new fields, until the field is consistent in magnitude and direction. You end up with something like this: All images were taken from Matter and Interactions. Great question; unfortunately most physics books choose to completely skip over this quite fundamental concept. Hope that helps! | {
"source": [
"https://physics.stackexchange.com/questions/102930",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/30632/"
]
} |
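The starting point of the answer above — that the bare terminal charges alone would give a very non-uniform field along the wire — can be illustrated with a toy calculation of my own: two Coulomb point charges standing in for the terminals, with the field sampled along a straight path between them (arbitrary units, $k = q = 1$):

```python
# Toy model: terminals as bare point charges +1 at x = 0 and -1 at x = 1,
# field sampled along a straight "wire" between them (Coulomb's law only).
def field_along_wire(x, x_plus=0.0, x_minus=1.0):
    """x-component of E from the two terminal charges alone."""
    E_plus = 1.0 / (x - x_plus) ** 2    # repulsion from + terminal, points in +x
    E_minus = 1.0 / (x_minus - x) ** 2  # attraction toward - terminal, also +x
    return E_plus + E_minus

for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"x = {x}: E = {field_along_wire(x):7.2f}")

# Far from uniform: huge near the terminals, roughly 10x smaller mid-wire -
# which is why surface charge must redistribute before a steady, uniform
# field can exist inside the wire.
assert field_along_wire(0.1) > 10 * field_along_wire(0.5)
```

This is exactly the "transient" picture: the initial field of the terminals alone cannot drive a steady current, so charge piles up until the combined field is uniform along the wire.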
103,024 | Consider a box of mass $m$ at rest on the floor. Most books give an example that we need to do work of $mgh$ to lift the box a height $h$ upward. If we analyze this work done, the external force acting on the box by us should be equal to the weight of the box. Therefore the net force is zero, which in turn means there is no acceleration. If there is no acceleration and the initial velocity of the box is also zero, how can the box move upward? | In introductory problems about work you're normally taught that it's force times distance: $$ W = F \times x $$ and you treat the force as constant. If you look at the problem this way then you're quite correct that if the force is $F = mg$ then the box can't accelerate so it can't move. However a more complete way to define the work is: $$ W = \int^{x_f}_{x_i} F(x) dx $$ The force $F(x)$ can be a function of $x$, and to get the work we integrate this force from the starting point $x_i$ to the final point $x_f$. Because $F(x)$ can vary we can make $F > mg$ at the beginning to accelerate the box then make $F < mg$ towards the end so the box slows to a halt again. DavePhD comments that work is not a state function, and in general this is true. However in this case the work done is equal to the change in potential energy, so as long as the box starts at $x_i$ at rest and ends at $x_f$ at rest we'll get the same work done regardless of the exact form of $F(x)$. If you're really determined to have $F$ constant then start with $F > mg$ at the beginning and $F < mg$ at the end, then gradually reduce the initial value of $F$ and increase the final value to make the force more constant. This will cause the time taken to move the box from $x_i$ to $x_f$ to increase. The limit of this process is a completely constant value for $F$, in which case it takes an infinite time to move the box. | {
"source": [
"https://physics.stackexchange.com/questions/103024",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10048/"
]
} |
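The claim in the answer above — that any force profile starting and ending the lift at rest does the same work $mgh$ — can be checked numerically. A sketch of my own, integrating $W=\int F\,dx$ by the trapezoid rule for two different profiles with $F > mg$ early and $F < mg$ late:

```python
import numpy as np

m, g, h = 2.0, 9.81, 3.0   # illustrative numbers: kg, m/s^2, m

def work(F, n=100_001):
    """W = integral of F(x) from 0 to h, by the composite trapezoid rule."""
    x = np.linspace(0.0, h, n)
    y = F(x)
    return (x[1] - x[0]) * (y.sum() - 0.5 * (y[0] + y[-1]))

# Two profiles with F > mg early (accelerate) and F < mg late (decelerate),
# so the box starts and ends at rest; the extra work integrates to zero.
F1 = lambda x: m * g * (1 + 0.3 * np.sin(2 * np.pi * x / h))
F2 = lambda x: m * g * (1 + 0.05 * np.sin(6 * np.pi * x / h))

for F in (F1, F2):
    assert np.isclose(work(F), m * g * h, rtol=1e-6)   # both equal mgh
print("W = mgh ≈", round(work(F1), 2), "J")            # ≈ 58.86 J
```

Both sinusoidal corrections integrate to zero over $[0, h]$, so the work is $mgh$ either way — exactly the state-function point made in the answer.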
103,447 | Don't say that a layer of carbon dioxide covers the flame, because our breath has more oxygen than carbon dioxide.
Also, our breath does not extinguish the flame by cooling it, since exhaled air is itself warm - far warmer than the temperature to which the flame would need to be cooled.
So what is happening here? | You blow the flame away from its fuel source. If you blew less hard, the flame might actually burn harder, because more air would be supplied to it (similar to a Bunsen burner); normally the flame of a candle gets its oxygen through a convective airflow generated by the heat of the flame itself. The flame is blown away from the candle because the air you blow towards it moves faster than the speed of the flame front. So the air you blow at it moves the flame away from its fuel source, where the flame goes out due to the lack of fuel. | {
"source": [
"https://physics.stackexchange.com/questions/103447",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/42420/"
]
} |
103,503 | Consider the time-dependent Schrödinger equation (or some equation in Schrödinger form) written down as $$
\tag 1 i\hbar \partial_{t} \Psi ~=~ \hat{H} \Psi .
$$ Usually, one likes to write that it has a formal solution of the form $$
\tag 2 \Psi (t) ~=~ \exp\left[-\frac{i}{\hbar} \int \limits_{0}^{t} \hat{ H}(t^{\prime}) ~\mathrm dt^{\prime}\right]\Psi (0).
$$ However, this form for the solution of $(1)$ is actually built by the method of successive approximations which actually returns a solution of the form $$
\tag 3 \Psi (t) ~=~ \color{red}{\hat{\mathrm T}} \exp\left[-\frac{i}{\hbar} \int \limits_{0}^{t} \hat{H}(t^{\prime})~\mathrm dt^{\prime}\right]\Psi (0), \qquad t>0,
$$ where $\color{red}{\hat{\mathrm T}}$ is the time-ordering operator. It seems that $(3)$ doesn't coincide with $(2)$ , but formally $(2)$ seems to be perfectly fine: it satisfies $(1)$ and the initial conditions. So where is the mistake? | I) The solution to the time-dependent Schrödinger equation (TDSE) is $$ \Psi(t_2) ~=~ U(t_2,t_1) \Psi(t_1),\tag{A}$$ where the (anti)time-ordered exponentiated Hamiltonian $$\begin{align} U(t_2,t_1)~&=~\left\{\begin{array}{rcl}
T\exp\left[-\frac{i}{\hbar}\int_{t_1}^{t_2}\! dt~H(t)\right]
&\text{for}& t_1 ~<~t_2 \cr\cr
AT\exp\left[-\frac{i}{\hbar}\int_{t_1}^{t_2}\! dt~H(t)\right]
&\text{for}& t_2 ~<~t_1 \end{array}\right.\cr\cr
~&=~
\underset{N\to\infty}{\lim}
\exp\left[-\frac{i}{\hbar}H(t_2)\frac{t_2-t_1}{N}\right] \cdots\exp\left[-\frac{i}{\hbar}H(t_1)\frac{t_2-t_1}{N}\right]\end{align}\tag{B} $$ is formally the unitary evolution operator, which satisfies its own two TDSEs $$ i\hbar \frac{\partial }{\partial t_2}U(t_2,t_1)
~=~H(t_2)U(t_2,t_1),\tag{C} $$ $$i\hbar \frac{\partial }{\partial t_1}U(t_2,t_1)
~=~-U(t_2,t_1)H(t_1),\tag{D} $$ along with the boundary condition $$ U(t,t)~=~{\bf 1}.\tag{E}$$ II) The evolution operator $U(t_2,t_1)$ has the group-property $$ U(t_3,t_1)~=~U(t_3,t_2)U(t_2,t_1). \tag{F}$$ The (anti)time-ordering in formula (B) is instrumental for the (anti)time-ordered expontial (B) to factorize according to the group-property (F). III) The group property (F) plays an important role in the proof that formula (B) is a solution to the TDSE (C): $$\begin{array}{ccc} \frac{U(t_2+\delta t,t_1) - U(t_2,t_1)}{\delta t} &\stackrel{(F)}{=}&
\frac{U(t_2+\delta t,t_2) - {\bf 1} }{\delta t}U(t_2,t_1)\cr\cr
\downarrow & &\downarrow\cr\cr
\frac{\partial }{\partial t_2}U(t_2,t_1)
&& -\frac{i}{\hbar}H(t_2)U(t_2,t_1).\end{array}\tag{G}$$ Remark: Often the (anti)time-ordered exponential formula (B) does not make mathematical sense directly. In such cases, the TDSEs (C) and (D) along with boundary condition (E) should be viewed as the indirect/descriptive defining properties of the (anti)time-ordered exponential (B). IV) If we define the unitary operator without the (anti)time-ordering in formula (B) as $$ V(t_2,t_1)~=~\exp\left[-\frac{i}{\hbar}\int_{t_1}^{t_2}\! dt~H(t)\right],\tag{H}$$ then the factorization (F) will in general not take place, $$ V(t_3,t_1)~\neq~V(t_3,t_2)V(t_2,t_1). \tag{I}$$ There will in general appear extra contributions, cf. the BCH formula . Moreover, the unitary operator $V(t_2,t_1)$ will in general not satisfy the TDSEs (C) and (D). See also the example in section VII. V) In the special (but common) case where the Hamiltonian $H$ does not depend explicitly on time, the time-ordering may be dropped. Then formulas (B) and (H) reduce to the same expression $$ U(t_2,t_1)~=~\exp\left[-\frac{i}{\hbar}\Delta t~H\right]~=~V(t_2,t_1), \qquad \Delta t ~:=~t_2-t_1.\tag{J}$$ VI) Emilio Pisanty advocates in a comment that it is interesting to differentiate eq. (H) w.r.t. $t_2$ directly. If we Taylor expand the exponential (H) to second order, we get $$ \frac{\partial V(t_2,t_1)}{\partial t_2}
~=~-\frac{i}{\hbar}H(t_2) -\frac{1}{2\hbar^2} \left\{ H(t_2), \int_{t_1}^{t_2}\! dt~H(t) \right\}_{+} +\ldots,\tag{K} $$ where $\{ \cdot, \cdot\}_{+}$ denotes the anti-commutator. The problem is that we would like to have the operator $H(t_2)$ ordered to the left [in order to compare with the TDSE (C)]. But resolving the anti-commutator may in general produce un-wanted terms. Intuitively without the (anti)time-ordering in the exponential (H), the $t_2$ -dependence is scattered all over the place, so when we differentiate w.r.t. $t_2$ , we need afterwards to rearrange all the various contributions to the left, and that process generate non-zero terms that spoil the possibility to satisfy the TDSE (C). See also the example in section VII. VII) Example. Let the Hamiltonian be just an external time-dependent source term $$ H(t) ~=~ \overline{f(t)}a+f(t)a^{\dagger}, \qquad [a,a^{\dagger}]~=~\hbar{\bf 1},\tag{L}$$ where $f:\mathbb{R}\to\mathbb{C}$ is a function. Then according to Wick's Theorem $$ T[H(t)H(t^{\prime})] ~=~ : H(t) H(t^{\prime}): ~+ ~C(t,t^{\prime}), \tag{M}$$ where the so-called contraction $$ C(t,t^{\prime})~=~ \hbar\left(\theta(t-t^{\prime})\overline{f(t)}f(t^{\prime})
+\theta(t^{\prime}-t)\overline{f(t^{\prime})}f(t)\right) ~{\bf 1}\tag{N}$$ is a central element proportional to the identity operator. For more on Wick-type theorems, see also e.g. this , this , and this Phys.SE posts. (Let us for notational convenience assume that $t_1<t_2$ in the remainder of this answer.) Let $$ A(t_2,t_1)~=~-\frac{i}{\hbar}\int_{t_1}^{t_2}\! dt~H(t)
~=~-\frac{i}{\hbar}\overline{F(t_2,t_1)} a -\frac{i}{\hbar}F(t_2,t_1) a^{\dagger} ,\tag{O}$$ where $$ F(t_2,t_1)~=~\int_{t_1}^{t_2}\! dt ~f(t). \tag{P}$$ Note that $$
\frac{\partial }{\partial t_2}A(t_2,t_1)~=~-\frac{i}{\hbar}H(t_2), \qquad
\frac{\partial }{\partial t_1}A(t_2,t_1)~=~\frac{i}{\hbar}H(t_1).\tag{Q} $$ Then the unitary operator (H) without (anti)time-order reads $$\begin{align}
V(t_2,t_1)~&=~e^{A(t_2,t_1)}
\\
~&=~\exp\left[-\frac{i}{\hbar}F(t_2,t_1) a^{\dagger}\right]\exp\left[\frac{-1}{2\hbar}|F(t_2,t_1)|^2\right]\exp\left[-\frac{i}{\hbar}\overline{F(t_2,t_1)} a\right].\tag{R}
\end{align}$$ Here the last expression in (R) displays the normal-ordered form of $V(t_2,t_1)$. It is a straightforward exercise to show that formula (R) does not satisfy TDSEs (C) and (D). Instead the correct unitary evolution operator is $$\begin{align}
U(t_2,t_1)~&\stackrel{(B)}{=}~T\exp\left[-\frac{i}{\hbar}\int_{t_1}^{t_2}\! dt~H(t)\right]
\\~&\stackrel{(M)}{=}~:\exp\left[-\frac{i}{\hbar}\int_{t_1}^{t_2}\! dt~H(t)\right]:~ \exp\left[\frac{-1}{2\hbar^2}\iint_{[t_1,t_2]^2}\! dt~dt^{\prime}~C(t,t^{\prime})\right]
\\ ~&=~ e^{A(t_2,t_1)+D(t_2,t_1)}~=~V(t_2,t_1)e^{D(t_2,t_1)}\tag{S},
\end{align}$$ where $$ D(t_2,t_1)~=~\frac{{\bf 1}}{2\hbar}\iint_{[t_1,t_2]^2}\! dt~dt^{\prime}~{\rm sgn}(t^{\prime}-t)\overline{f(t)}f(t^{\prime})\tag{T}$$ is a central element proportional to the identity operator. Note that $$\begin{align}
\frac{\partial }{\partial t_2}D(t_2,t_1)~&=~\frac{{\bf 1}}{2\hbar}\left(\overline{F(t_2,t_1)}f(t_2)-\overline{f(t_2)}F(t_2,t_1)\right)
\\ ~&=~\frac{1}{2}\left[ A(t_2,t_1), \frac{i}{\hbar}H(t_2)\right]~=~\frac{1}{2}\left[\frac{\partial }{\partial t_2}A(t_2,t_1), A(t_2,t_1)\right].\tag{U}
\end{align}$$ One may use identity (U) to check directly that the operator (S) satisfy the TDSE (C). References: Sidney Coleman, QFT lecture notes, arXiv:1110.5013 ; p. 77. | {
"source": [
"https://physics.stackexchange.com/questions/103503",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/31496/"
]
} |
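The difference between the time-ordered exponential (B) and the naive exponential (H) is easy to see numerically. A toy illustration of my own (not from the answer), with $\hbar = 1$ and $H(t) = \sigma_x + t\,\sigma_z$, which does not commute with itself at different times; the product formula approximating (B) and the closed form of (H) give visibly different unitaries:

```python
import numpy as np

# Toy model (hbar = 1): H(t) = sigma_x + t*sigma_z, so [H(t), H(t')] != 0
# and time-ordering matters.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = lambda t: sx + t * sz

def exp_minus_i(M):
    """exp(-i M) for a Hermitian matrix M, via eigendecomposition."""
    w, v = np.linalg.eigh(M)
    return (v * np.exp(-1j * w)) @ v.conj().T

def U_time_ordered(t, N=2000):
    """Product formula (B): exp[-i H(t_N) dt] ... exp[-i H(t_1) dt], latest factor leftmost."""
    dt = t / N
    U = np.eye(2, dtype=complex)
    for k in range(N):
        U = exp_minus_i(H((k + 0.5) * dt) * dt) @ U
    return U

def V_naive(t):
    """Formula (H): exp[-i (t sigma_x + (t^2/2) sigma_z)], no time-ordering."""
    return exp_minus_i(t * sx + 0.5 * t**2 * sz)

t = 2.0
U, V = U_time_ordered(t), V_naive(t)
print("max |U - V| =", np.abs(U - V).max())    # clearly nonzero: (B) and (H) disagree
assert np.abs(U - V).max() > 1e-3
assert np.allclose(U.conj().T @ U, np.eye(2))  # the time-ordered product is still unitary
```

Both operators are unitary, but only the time-ordered one satisfies the TDSE (C) and the group property (F); the discrepancy is the BCH/extra-commutator contribution mentioned in section IV.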
103,766 | For example, take a water bottle. Fill it with water and then turn it upside down.
Instead of flowing steadily downward, it gulps down in bursts. Why? | The gulping you describe is due to air being sucked into the bottle and temporarily halting the flow through the nozzle. When the bottle is filled with water, it is at a particular pressure. When you turn it over and some water leaves, the pressure inside the bottle is now lower. Once the pressure in the bottle is lower than atmospheric pressure, air forces its way back into the bottle. This equalizes the pressure and water flows again. Then the pressure drops, air gets sucked in, and so on. Eventually all the water is gone and the bottle is filled with air at the same pressure as the atmosphere. | {
"source": [
"https://physics.stackexchange.com/questions/103766",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/35595/"
]
} |
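The pressure bookkeeping in the answer above can be sketched with illustrative numbers of my own (a 0.1 L air pocket above 0.4 L of water, bottle cross-section 50 cm²): the trapped air expands isothermally as water drains, and air is pushed back in once the gas pressure plus the water column's $\rho g h$ falls below atmospheric — even a few mL of outflow is enough, which is the gulping cycle:

```python
# Illustrative numbers (my own): inverted bottle with a trapped air pocket.
P_atm = 101_325.0        # Pa
rho, g = 1000.0, 9.81    # water density, gravity
V_air0 = 1.0e-4          # initial trapped-air volume, m^3 (0.1 L)
V_water0 = 4.0e-4        # initial water volume, m^3 (0.4 L)
A = 5.0e-3               # bottle cross-section, m^2

for dV in (0.0, 5e-6, 1e-5, 2e-5):              # water drained so far, m^3
    P_air = P_atm * V_air0 / (V_air0 + dV)       # Boyle's law: P*V = const
    h = (V_water0 - dV) / A                      # remaining water column height
    P_nozzle = P_air + rho * g * h               # pressure just inside the opening
    state = "water flows" if P_nozzle > P_atm else "air gulps back in"
    print(f"drained {dV * 1e6:4.0f} mL: P_nozzle = {P_nozzle:8.0f} Pa -> {state}")
```

Once a bubble of air gets in, the trapped-gas pressure jumps back up and the flow resumes, so the cycle repeats until the bottle is empty.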
103,898 | The Harvard-Smithsonian Centre for Astrophysics held a press conference today to announce a major discovery relating to gravitational waves. What was their announcement, and what are the implications? Would this discovery be confirmation of gravitational waves as predicted by general relativity (even though Sean Carroll links to the Nobel website implying that G-waves were detected decades ago, while my book on GR (B. Schutz) says they're still looking...I'm confused) Also, regarding inflation theory, would this discovery confirm inflation or refute it? Or something else, a la string theory? | The actual paper (pdf) is very heavy in error quantification - and rightly so. They presented an experimental result that is statistically extremely difficult to obtain. But for the rest of us, the conclusion is the most important part. The abstract says: The observed B-mode power spectrum is well fit by a lensed-$\lambda$CDM + tensor theoretical model with tensor/scalar ratio r = 0.20 As far as I can tell, this is their conclusion. To start to understand, we have to get a grasp on the component theories. The Lambda-CDM model is the underlying cosmology model. The Cosmic Microwave Background (CMB) is the light from the last scattering, from when the early universe was still opaque. The scalar density perturbations (fluctuations) of the CMB are from quantum fluctuations in the early universe, and these were the "seeds" that allowed galaxies to form, giving rise to the structure we live among. The tensor/scalar ratio (variable r) in the paper is a measure of the magnitude of tensor fluctuations of the CMB relative to the already-measured scalar fluctuations. I satisfy myself by saying the tensor fluctuations are "vector" fluctuations. B-mode is the type of polarization signal. Apparently another type of this signal was discovered earlier, but this is beyond my understanding. The paper is also very clear that the r value is not 0. Their experiment proves this fact to $5.9 \sigma$.
By my standards, that makes the proposition true. The physicist who predicted this was visited by someone who worked on it, and there's a video of it online. The first words said were that "it's 5 sigma at .2". The 5 sigma just means it's right. The .2 refers to the r value above. That was the shock that the media is referencing. The fact that r=.2 is new information to science here. Another user's blog post is also very informative, and much heavier on the implications of the discovery. For instance, the discovery gives us much better information on the energy at which the inflation epoch took place. However, the impacts of the discovery are extraordinarily far reaching. So, I would say that the specific discovery at hand here is that r does not equal zero, and is close to 0.2. Here are some quotes from the first part of the paper which hint at the motivations for the experiment in the first place. Emphasis mine: Inflation predicts that the quantization of the gravitational field coupled to exponential expansion produces a primordial background of stochastic gravitational waves with a characteristic spectral shape (Grishchuk 1975; Starobinsky 1979; Rubakov et al. 1982; Fabbri & Pollock 1983; Abbott & Wise 1984; also see Krauss
& Wilczek 2013). The wording here is crucial, note "quantization of the gravitational field". This is quantum gravity, and a theory-based prediction that led to a measured result. To me, this fact is even more incredible than getting direct evidence for gravitational waves. In fact, from my reading, this seems to be from treating the graviton's properties within the context of quantum fields. For more detail: Though unlikely to be directly detectable in modern instruments, these gravitational waves would have imprinted a unique signature upon the CMB. Gravitational waves induce local quadrupole anisotropies in the radiation field within the last-scattering surface, inducing polarization in the scattered light (Polnarev 1985). This polarization pattern will include a "curl" or B-mode component at degree angular scales that cannot be generated primordially by density perturbations. This is going over how to get from gravitational waves to the polarization. I'm still a little iffy on exactly what property of gravitational waves leads to this. However, their visuals page gives a helpful hint for me. See their depiction of polarization. The "density wave" is what I had typically associated with a gravity wave. However, I recognize that a more complicated alternative is also possible. This is trivially true because general relativity uses tensors. It's the difference between pushing a slinky forward-and-back versus side-to-side. If we're talking about those side-to-side modes, then I would expect that to polarize things passing through... as opposed to just redshifting them and blueshifting them back. For more on that: Gravitational lensing of the CMB's light by large scale structure at relatively late times produces small deflections of the primordial pattern, converting a small portion of E-mode power into B-modes.
This looks like it covers some of the finer detail, but also explains why this work is set apart from previous experiments that are said to have results regarding both the E-mode and B-modes. The polarization effect, as long as it's sufficiently ancient, would seem to necessarily have come from quantum gravity effects. I have 3 even more detailed points that I have found from various writeups of this event. These relate to what was measured, what makes BICEP2 different from other experiments, and why the experiment is so important. These specific details are: (1) the pattern sought was a 45$^{\circ}$ polarization relative to the temperature (?) gradient; (2) setting a lower bound on the value of r was the novel contribution of BICEP2; (3) a relationship called the Lyth bound calculates the time/energy of inflation from this r value. The first bullet comes from a YouTube video by Minute Physics. They state that the density, motion, and temperature of matter at the genesis of the CMB impact its polarization. Making no reference to gravity waves, we expect polarization at 0$^{\circ}$ and 90$^{\circ}$ relative to the temperature gradient (note: there is some confusion in the video whether this is the density or temperature gradient; they say one thing, but write another). They go on to say that the BICEP2 result is that about 15% of the polarization comes from the 45$^{\circ}$ "jiggles", which are tell-tale signs of gravity waves. It's much more difficult to explain why this should be true. Next bullet - let's clarify why this matters when the same thing has been measured by previous experiments. Other experiments have estimated the r value, but those estimates are inherently clustered around r=0. This still constrains the value, but it is not effective for determining whether it is non-zero, which has value for theorists.
Without a doubt, this is related to my first bullet - that the critical measurement involves measuring polarization angle relative to the density gradient of a scalar field. Third bullet: something called the Lyth bound/relationship is oft-quoted in discussions of this subject. For more reading, there is discussion on Quora. The equation is: $$ \Delta \phi = m_p N_e \sqrt{ \frac{ r}{8} } $$ The variable $N_e$ has been measured by previous experiments. It is being cited in various places, including follow-up academic articles, that the BICEP2 result narrows down the above equation to $\Delta \phi \approx 9.6 M_{pl}$. The remaining variable is just the Planck mass. I believe that this number is interpreted as the energy range that inflation "traversed". In more practical terms, this gives us the energy/time at which inflation happened. This is where people are coming from when they mention how this result allows us to look further into the past. | {
"source": [
"https://physics.stackexchange.com/questions/103898",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/31965/"
]
} |
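The Lyth-bound arithmetic quoted in the answer above is easy to reproduce. Assuming a typical $N_e \approx 60$ e-folds (my assumption — the answer does not fix $N_e$):

```python
import math

r = 0.20    # tensor-to-scalar ratio reported by BICEP2
N_e = 60    # e-folds of inflation (assumed typical value, not fixed in the answer)

# Lyth bound quoted in the answer: delta_phi = m_p * N_e * sqrt(r / 8)
delta_phi = N_e * math.sqrt(r / 8)       # in units of the Planck mass
print(f"Δφ ≈ {delta_phi:.1f} M_pl")      # ≈ 9.5, close to the quoted ~9.6
```

With $N_e$ slightly above 60 the result lands on the quoted $\Delta\phi \approx 9.6\,M_{pl}$ exactly, so the cited figure is consistent with this back-of-the-envelope estimate.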
104,031 | Is the flow of time regular? How would we come to know if our galaxy along with everything in it stops for a while (maybe a century) w.r.t. the galaxies far beyond our reach. Is there a way to know if the flow of time is smooth, or irregular? PS I would describe myself as an illiterate physics enthusiast, so I hope you'll forgive me if my ignorance is borderline offensive. | Note: This answer addresses the question in its original form: Is the flow of time regular? How would we come to know if the whole
universe along with everything in it stops for a while(may be a
century). Is there a way to know if flow of time is smooth,or
irregular? Flow with respect to what? Regular with respect to what? How would we come to know if the whole universe along with everything
in it stops for a while(may be a century) A century as measured by what? If the "whole universe stops", what would "a while" mean? "a while" according to what? What would "stops for a century" mean? If you think carefully about the premises of your question, you'll find that you're imagining a 'meta clock' that doesn't stop when you stipulate that everything 'stops' (stops according to what?) and by which one can judge the 'flow' of ordinary time. Closely examine that premise. | {
"source": [
"https://physics.stackexchange.com/questions/104031",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/31078/"
]
} |
104,152 | I'm reading the book Quantum computation and quantum information by Mike & Ike and I'm stuck at 2.60/2.61. There, the author says that, given the operator $A|ψ⟩⟨ψ|$, its trace is: $${\rm tr}(A|\psi\rangle\langle\psi|) = \sum\limits_i\langle i|A|\psi\rangle\langle\psi|i\rangle$$ Why would that be true? Why can we rearrange the bras and kets like that? | Let $\{|i\rangle\}$ be an orthonormal basis for the Hilbert space of the system. Then the trace of an operator $O$ is given by (See the Addendum below)
\begin{align}
\mathrm {tr}(O) = \sum_i \langle i|O|i\rangle
\end{align} For a given state $|\psi\rangle$, we define an operator $P_\psi$ by
\begin{align}
P_\psi|\phi\rangle = \langle\psi|\phi\rangle|\psi\rangle.
\end{align}
As a shorthand, we usually write $P_\psi = |\psi\rangle\langle\psi|$. Using steps 1 and 2, we compute:
\begin{align}
\mathrm{tr}(A|\psi\rangle\langle\psi|)
&= \mathrm{tr}(A P_\psi) \\
&= \sum_i \langle i|AP_\psi|i\rangle\\
&= \sum_i \langle i|A (\langle\psi|i\rangle|\psi\rangle)\\
&= \sum_i \langle i|A|\psi\rangle\langle\psi|i\rangle
\end{align}
which is the desired result. Addendum. (Formula for the trace) For simplicity, I'll restrict the discussion to finite-dimensional vector spaces. Recall that if $O$ is a linear operator on a vector space $V$, and if $\{|i\rangle\}$ is a basis for $V$, then the matrix elements $O_{ij}$ of $O$ with respect to this basis are defined by its action on this basis as follows:
\begin{align}
O|i\rangle = \sum_jO_{ji}|j\rangle. \tag{$\star$}
\end{align}
The trace of the linear operator with respect to this basis is then defined as the sum of its diagonal entries;
\begin{align}
\mathrm{tr}(O) = \sum_i O_{ii}. \tag{$\star\star$}
\end{align}
Now it turns out that the trace is a basis-independent number, so we can simply refer to the trace of the linear operator; it's just the trace with respect to any chosen basis. Now, suppose that $V$ is equipped with an inner product, as in the case of Hilbert spaces, and let $\{|i\rangle\}$ be an orthonormal basis for $V$; then we can take the inner product of both sides of $(\star)$ with respect to an element $|k\rangle$ of the basis to obtain
\begin{align}
\langle k|O|i\rangle = \sum_j \langle k|O_{ji}|j\rangle = \sum_j O_{ji}\langle k|j\rangle = \sum_jO_{ji}\delta_{jk} = O_{ki}
\end{align}
In other words, $\langle k|O|j\rangle$ gives precisely the matrix element $O_{kj}$ of $O$ in the given basis. In particular, the diagonal entries are given by $\langle i|O|i\rangle$. Plugging this into $(\star\star)$, we get
\begin{align}
\mathrm{tr} (O) = \sum_i \langle i|O|i\rangle
\end{align}
as desired. | {
"source": [
"https://physics.stackexchange.com/questions/104152",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/42802/"
]
} |
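The chain of equalities in the answer above is easy to sanity-check numerically. A minimal sketch (assuming NumPy is available; the operator $A$ and state $|\psi\rangle$ below are random stand-ins, and $\{|i\rangle\}$ is the standard basis):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random operator A and normalized state |psi> (illustrative stand-ins).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi /= np.linalg.norm(psi)

# P_psi = |psi><psi| as an outer product.
P = np.outer(psi, psi.conj())

# Left-hand side: tr(A P_psi).
lhs = np.trace(A @ P)

# Right-hand side: sum_i <i|A|psi><psi|i> in the standard basis |i>.
rhs = sum(A[i, :] @ psi * psi.conj()[i] for i in range(n))

assert np.isclose(lhs, rhs)
assert np.isclose(lhs, psi.conj() @ A @ psi)  # equals <psi|A|psi>
```

The final assertion also confirms the completeness shortcut $\sum_i\langle\psi|i\rangle\langle i|A|\psi\rangle=\langle\psi|A|\psi\rangle$.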
104,153 | Many times I have read statements like "the age of the universe is 14 billion years", for example on this Wikipedia page: Big Bang . Now, my question is: whose time intervals are these? According to which observer is it 14 billion years? | An observer with zero comoving velocity (i.e. zero peculiar velocity). Such an observer can be defined at every point in space. They will all see the same Universe, and the Universe will look the same in all directions ("isotropic"). Note that here I'm talking about an "idealized" Universe described by the FLRW metric: $$\mathrm{d}s^2 = a^2(\tau)\left[\mathrm{d}\tau^2-\mathrm{d}\chi^2-f_K^2(\chi)(\mathrm{d}\theta^2 + \sin^2\theta\;\mathrm{d}\phi^2)\right]$$ where $a(\tau)$ is the "scale factor" and: $$f_K(\chi) = \sin\chi\;\mathrm{if}\;(K=+1)$$
$$f_K(\chi) = \chi\;\mathrm{if}\;(K=0)$$
$$f_K(\chi) = \sinh\chi\;\mathrm{if}\;(K=-1)$$ and $\tau$ is the conformal time: $$\tau(t)=\int_0^t \frac{cdt'}{a(t')}$$ The peculiar velocity is defined: $$v_\mathrm{pec} = a(t)\dot{\chi}(t)$$ so the condition of zero peculiar velocity can be expressed: $$\dot{\chi}(t) = 0\;\forall\; t$$ The "age of the Universe" of about $14\;\mathrm{Gyr}$ you frequently hear about is a good approximation for any observer whose peculiar velocity is non-relativistic at all times. In practice these are the only observers we're interested in, since peculiar velocities for any bulk object (like galaxies) tend to be non-relativistic. If you happened to be interested in the time experienced by a relativistic particle since the beginning of the Universe, it wouldn't be terribly hard to calculate. | {
"source": [
"https://physics.stackexchange.com/questions/104153",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/27384/"
]
} |
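As a worked illustration of the answer above: for a comoving (zero-peculiar-velocity) observer in a flat matter-plus-$\Lambda$ universe, the age is $t_0=\int_0^1 \mathrm{d}a/\big(a\,H(a)\big)$ with $H(a)=H_0\sqrt{\Omega_m a^{-3}+\Omega_\Lambda}$. A rough numerical sketch — the parameter values $H_0\approx67.7\;\mathrm{km\,s^{-1}\,Mpc^{-1}}$ and $\Omega_m\approx0.31$ are assumed here for illustration and are not part of the answer:

```python
import math

# Assumed flat LCDM parameters (illustrative, not taken from the answer above).
H0_km_s_Mpc = 67.7
Omega_m, Omega_L = 0.31, 0.69

# Convert H0 to 1/Gyr: 1 Mpc = 3.0857e19 km, 1 Gyr = 3.156e16 s.
H0 = H0_km_s_Mpc / 3.0857e19 * 3.156e16   # in 1/Gyr

def H(a):
    return H0 * math.sqrt(Omega_m / a**3 + Omega_L)

# Age t0 = integral_0^1 da / (a H(a)), midpoint rule.
N = 100_000
age_gyr = sum(1.0 / (a * H(a)) for a in ((k + 0.5) / N for k in range(N))) / N

assert 13.0 < age_gyr < 15.0   # close to the oft-quoted ~13.8 Gyr
```

The integral converges despite $H(a)\to\infty$ as $a\to 0$, since the integrand behaves like $a^{1/2}$ there.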
104,281 | Why can't you hear music well over a telephone line? I was asked this question in an interview for a university study placement and I unfortunately had no idea. I was given the hint that the telephone sampling rate is 8000 samples per second. | The hint given by the interviewer is a red herring. The limitation you're hearing has been part of the phone network since long before digital sampling had any part in the telephone system. And it applies even in a local phone call where the signal is never digitized. It is related to the fact that the connection from a land-line phone in your house or office back to the "central office" of the phone company is essentially a continuous connection through a pair of wires. There are typically no active circuits such as amplifiers, repeaters, digitizers, or other electronics involved. Given the technology of 100 years ago when the phone network was first designed, a connection of this length could really only carry a very limited bandwidth. The engineers who designed the network did numerous experiments to determine just what frequencies needed to be conveyed for people to understand each other's regular speech, and designed the network only to be sure those frequencies were transmitted. They didn't add any costly components to the system if they weren't needed to achieve this goal. For example, they might have used passive filters to "emphasize" high frequencies in circuits that were a bit longer (and so naturally tend to cut out the high frequencies) than average, or to cut off high frequencies in circuits that were shorter than average, to ensure all users get as much as possible the same quality of connections.
Later, when they started using multiplexing to connect multiple voice circuits through a single wire (for inter-city connections, for example), the limited bandwidth allowed them to carry more connections on a single wire, and at that point the bandwidth limitation would have been deliberately enforced by filtering to ensure that conversations didn't cross-talk between each other. Finally, when digital sampling and digital transmission were introduced into the network, the sampling theorem limitations discussed in the other answers came into play. Fortuitously, the bandwidth limitations introduced in the early days of analog telephone networks allowed digitization to be done at really low bitrates without degrading the signal quality below what it had been all along, and again this allows more conversations to be carried on a given wire in the network. Edit I want to summarize with a key point that I previously posted in a comment on another answer: The digital sampling rate (and later, compression methods) used in digital telephony was chosen to match the characteristics of the analog phone network, not the other way around.
"source": [
"https://physics.stackexchange.com/questions/104281",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/42873/"
]
} |
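The sampling-theorem point mentioned at the end of the answer is easy to demonstrate: an 8000-sample-per-second channel cannot represent anything above the 4 kHz Nyquist limit, so a 5 kHz musical overtone reappears as a spurious 3 kHz tone. A sketch (assuming NumPy):

```python
import numpy as np

fs = 8000                      # telephone sampling rate, samples per second
N = fs                         # one second of samples -> 1 Hz frequency bins
t = np.arange(N) / fs

# A 5 kHz tone, above the Nyquist limit fs/2 = 4 kHz.
x = np.sin(2 * np.pi * 5000 * t)

# After sampling, the energy appears at the alias frequency |fs - 5000| = 3 kHz.
spectrum = np.abs(np.fft.rfft(x))
peak_bin = int(np.argmax(spectrum))    # bin index equals frequency in Hz here

assert peak_bin == 3000
```

This is why real telephone codecs low-pass filter the signal before sampling: anything above 4 kHz would not vanish but fold back as audible distortion.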
104,523 | Which is the smallest particle that has actually been seen by scientists? When I say "actually seen" (maybe using some ultra-advanced microscope or any other man-made eye, using any wavelength or phenomenon), I really mean it; just like we have seen red blood cells. Davidmh's answer is pretty much in line with what I am asking. | Taking your question literally, you can see a single barium ion : The TRIµP group has achieved capturing a single barium ion in a Paul trap. The images show Coulomb crystals formed by a decreasing number of laser-cooled ions as detected with an EMCCD camera. This forms an important step towards the planned experiments on single radium ions to measure atomic parity violation and build an ultra-stable optical clock. They are in traps like this one: Also, Warren Nagourney from the University of Washington took a picture of a single barium ion scattering light from a laser : Single trapped atom, glowing blue Photo credit: Warren Nagourney at the University of Washington, c. 2000 What is this? Believe it or not, this is a color photograph of a single trapped barium ion held in a radio-frequency Paul trap. Resonant blue and red lasers enter from the left and are focused to the center of the trap, where the single ion is constrained to orbit a region of space about 1 millionth of a meter in size. What's the red/blue mess on the sides? Low-level out-of-focus laser scatter off of metal trap electrodes and accessories (atom ovens, electron filaments, etc.) as seen in this photo. How do we know the dot really is an atom? When one turns off the red laser, the blue dot vanishes. This is because the scattering process requires both laser colors due to a metastable state in the barium ion. If the blue dot stayed around with the red laser off, we might excuse it as being additional laser scatter off some surface. How was the photo taken? This is a scanned photo; the camera was a 35mm Nikon (I believe) with a wide-open 50mm f/1.8 lens.
The exposure time was two minutes. Several shots were taken at different camera positions and this one caught the ion in the very narrow depth of field. Is this how you normally "view" the ion? No, we use a 50 mm f/1.8 camera lens to image the blue dot onto a photomultiplier tube. We don't require the focus to be so good when using the PMT. Where can I see more? Lots of CCD images of one and several trapped ions are found on the Monroe group site. Only two minutes' exposure time, so in a dark enough room, someone with good sensitivity could probably see it directly.
"source": [
"https://physics.stackexchange.com/questions/104523",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/10454/"
]
} |
105,347 | A vector space is a set whose elements satisfy certain axioms. Now there are physical entities that satisfy these properties, which may not be arrows. A co-ordinate transformation is a linear map from a vector to itself with a change of basis. Now the transformation is an abstract concept, it is just a mapping. To calculate it we need bases and matrices, and how a transformation ends up looking depends only on the basis we choose: a transformation can look like a diagonal matrix if an eigenbasis is used, and so on. It has nothing to do with the vectors it is mapping; only the dimension of the vector spaces is important. So it is foolish to distinguish vectors by the way their components change under a co-ordinate transformation, since that depends on the basis you used. So there is actually no difference between a contravariant and covariant vector; there is a difference between a contravariant and covariant basis, as is shown in arXiv:1002.3217 . An inner product is between elements of the same vector space and not between two vector spaces; that is not how it is defined. Is this approach correct? Along with this approach mentioned, we can view covectors as members of the dual space of the contra-vector space. What advantage does this approach have over the former mentioned in my post? Addendum: So now there are contravariant vectors and their duals called covariant vectors. But the duals are defined only once the contravectors are set up, because they are the maps from the space of contravectors to $R$, and thus it won't make sense to talk of covectors alone. Then what does it mean that the gradient is a covector? Now saying "because it transforms in a certain way" makes no sense. | This is not really an answer to your question, essentially because there isn't (currently) a question in your post, but it is too long for a comment. Your statement that A co-ordinate transformation is a linear map from a vector to itself with a change of basis.
is muddled and ultimately incorrect. Take some vector space $V$ and two bases $\beta$ and $\gamma$ for $V$. Each of these bases can be used to establish a representation map $r_\beta:\mathbb R^n\to V$, given by
$$r_\beta(v)=\sum_{j=1}^nv_j e_j$$
if $v=(v_1,\ldots,v_n)$ and $\beta=\{e_1,\ldots,e_n\}$. The coordinate transformation is not a linear map from $V$ to itself. Instead, it is the map
$$r_\gamma^{-1}\circ r_\beta:\mathbb R^n\to\mathbb R^n,\tag 1$$
and takes coordinates to coordinates. Now, to go to the heart of your confusion, it should be stressed that covectors are not members of $V$ ; as such, the representation maps do not apply to them directly in any way. Instead, they belong to the dual space $V^\ast$, which I'm hoping you're familiar with. (In general, I would strongly discourage you from reading texts that pretend to lay down the law on the distinction between vectors and covectors without talking at length about the dual space.) The dual space is the vector space of all linear functionals from $V$ into its scalar field:
$$V^\ast=\{\varphi:V\to\mathbb R:\varphi\text{ is linear}\}.$$
This has the same dimension as $V$, and any basis $\beta$ has a unique dual basis $\beta^*=\{\varphi_1,\ldots,\varphi_n\}$ characterized by $\varphi_i(e_j)=\delta_{ij}$. Since it is a different basis to $\beta$, it is not surprising that the corresponding representation map is different. To lift the representation map to the dual vector space, one needs the notion of the adjoint of a linear map . As it happens, there is in general no way to lift a linear map $L:V\to W$ to a map from $V^*$ to $W^*$; instead, one needs to reverse the arrow. Given such a map, a functional $f\in W^*$ and a vector $v\in V$, there is only one combination which makes sense, which is $f(L(v))$. The mapping $$v\mapsto f(L(v))$$ is a linear mapping from $V$ into $\mathbb R$, and it's therefore in $V^*$. It is denoted by $L^*(f)$, and defines the action of the adjoint $$L^*:W^*\to V^*.$$ If you apply this to the representation maps on $V$, you get the adjoints $r_\beta^*:V^*\to\mathbb R^{n,*}$, where the latter is canonically equivalent to $\mathbb R^n$ because it has a canonical basis. The inverse of this map, $(r_\beta^*)^{-1}$, is the representation map $r_{\beta^*}:\mathbb R^n\cong\mathbb R^{n,*}\to V^*$. This is the origin of the 'inverse transpose' rule for transforming covectors. To get the transformation rule for covectors between two bases, you need to string two of these together:
$$
\left((r_\gamma^*)^{-1}\right)^{-1}\circ(r_\beta^*)^{-1}=r_\gamma^*\circ (r_\beta^*)^{-1}:\mathbb R^n\to \mathbb R^n,
$$
which is very different to the one for vectors, (1). Still think that vectors and covectors are the same thing? Addendum Let me, finally, address another misconception in your question: An inner product is between elements of the same vector space and not between two vector spaces, it is not how it is defined. Inner products are indeed defined by taking both inputs from the same vector space. Nevertheless, it is still perfectly possible to define a bilinear form $\langle \cdot,\cdot\rangle:V^*\times V\to\mathbb R$ which takes one covector and one vector to give a scalar; it is simply the action of the former on the latter:
$$\langle\varphi,v\rangle=\varphi(v).$$
This bilinear form is always guaranteed and presupposes strictly less structure than an inner product. This is the 'inner product' which reads $\varphi_j v^j$ in Einstein notation. Of course, this does relate to the inner product structure $ \langle \cdot,\cdot\rangle_\text{I.P.}$ on $V$ when there is one. Having such a structure enables one to identify vectors and covectors in a canonical way: given a vector $v$ in $V$, its corresponding covector is the linear functional
$$
\begin{align}
i(v)=\langle v,\cdot\rangle_\text{I.P.} : V&\longrightarrow\mathbb R \\
w&\mapsto \langle v,w\rangle_\text{I.P.}.
\end{align}
$$
By construction, both bilinear forms are canonically related, so that the 'inner product' $\langle\cdot,\cdot\rangle$ between $i(v)\in V^*$ and $w\in V$ is exactly the same as the inner product $\langle\cdot,\cdot\rangle_\text{I.P.}$ between $v\in V$ and $w\in V$. That use of language is perfectly justified. Addendum 2, on your question about the gradient. I should really try and convince you at this point that the transformation laws are in fact enough to show something is a covector. (The way the argument goes is that one can define a linear functional on $V$ via the form in $\mathbb R^{n*}$ given by the components, and the transformation laws ensure that this form in $V^*$ is independent of the basis; alternatively, given the components $f_\beta,f_\gamma\in\mathbb R^n$ with respect to two bases, the representation maps give the forms $r_{\beta^*}(f_\beta)=r_{\gamma^*}(f_\gamma)\in V^*$, and the two are equal because of the transformation laws.) However, there is indeed a deeper reason for the fact that the gradient is a covector. Essentially, it is to do with the fact that the equation
$$df=\nabla f\cdot dx$$
does not actually need a dot product; instead, it relies on the simpler structure of the dual-primal bilinear form $\langle \cdot,\cdot\rangle$. To make this precise, consider an arbitrary function $T:\mathbb R^n\to\mathbb R^m$. The derivative of $T$ at $x_0$ is defined to be the (unique) linear map $dT_{x_0}:\mathbb R^n\to\mathbb R^m$ such that
$$
T(x)=T(x_0)+dT_{x_0}(x-x_0)+O(|x-x_0|^2),
$$
if it exists. The gradient is exactly this map; it was born as a linear functional, whose coordinates over any basis are $\frac{\partial f}{\partial x_j}$ to ensure that the multi-dimensional chain rule,
$$
df=\sum_j \frac{\partial f}{\partial x_j}d x_j,
$$
is satisfied. To make things easier to understand for undergraduates who are fresh out of 1D calculus, this linear map is most often 'dressed up' as the corresponding vector, which is uniquely obtainable through the Euclidean structure, and whose action must therefore go back through that Euclidean structure to get to the original $df$. Addendum 3. OK, it is now sort of clear what the main question is (unless that changes again), though it is still not particularly clear in the question text. The thing that needs addressing is stated in the OP's answer in this thread: the dual vector space is itself a vector space and the fact that it needs to be cast off as a row matrix is based on how we calculate linear maps and not on what linear maps actually are. If I had defined matrix multiplication differently, this wouldn't have happened. I will also, then, address this question: given that the dual (/cotangent) space is also a vector space, what forces us to consider it 'distinct' enough from the primal that we display it as row vectors instead of columns, and say its transformation laws are different? The main reason for this is well addressed by Christoph in his answer , but I'll expand on it. The notion that something is co- or contra-variant is not well defined 'in vacuum'. Literally, the terms mean "varies with" and "varies against", and they are meaningless unless one says what the object in question varies with or against. In the case of linear algebra, one starts with a given vector space, $V$. The unstated reference is always, by convention, the basis of $V$: covariant objects transform exactly like the basis, and contravariant objects use the transpose-inverse of the basis transformation's coefficient matrix. One can, of course, turn the tables, and change one's focus to the dual, $W=V^*$, in which case the primal $V$ now becomes the dual, $W^*=V^{**}\cong V$.
In this case, quantities that used to transform with the primal basis now transform against the dual basis, and vice versa. This is exactly why we call it the dual: there exists a full duality between the two spaces. However, as is the case anywhere in mathematics where two fully dual spaces are considered ( example , example , example , example , example ), one needs to break this symmetry to get anywhere. There are two classes of objects which behave differently, and a transformation that swaps the two. This has two distinct, related advantages: Anything one proves for one set of objects has a dual fact which is automatically proved. Therefore, one need only ever prove one version of the statement. When considering vector transformation laws, one always has (or can have, or should have), in the back of one's mind, the fact that one can rephrase the language in terms of the duality-transformed objects. However, since the content of the statements is not altered by the transformation, it is not typically useful to perform the transformation: one needs to state some version, and there's not really any point in stating both. Thus, one (arbitrarily, -ish) breaks the symmetry, rolls with that version, and is aware that a dual version of all the development is also possible. However, this dual version is not the same. Covectors can indeed be expressed as row vectors with respect to some basis of covectors, and the coefficients of vectors in $V$ would then vary with the new basis instead of against, but then for each actual implementation, the matrices you would use would of course be duality-transformed. You would have changed the language but not the content. Finally, it's important to note that even though the dual objects are equivalent, it does not mean they are the same. This is why we call them dual, instead of simply saying that they're the same!
As regards vector spaces, then, one still has to prove that $V$ and $V^*$ are not only dually-related, but also different. This is made precise in the statement that there is no natural isomorphism between a vector space and its dual , which is phrased, and proved in, the language of category theory . The notion of 'natural' isomorphism is tricky, but it would imply the following: For each vector space $V$, you would have an isomorphism $\sigma_V:V\to V^*$. You would want this isomorphism to play nicely with the duality structure, and in particular with the duals of linear transformations, i.e. their adjoints . That means that for any vector spaces $V,W\in\mathrm{Vect}$ and any linear transformation $T:V\to W$, you would want the diagram to commute. That is, you would want $T^* \circ \sigma_W \circ T$ to equal $\sigma_V$. This is provably not possible to do consistently. The reason for it is that if $V=W$ and $T$ is an isomorphism, then $T$ and $T^*$ are different, but for a simple counter-example you can just take any real multiple of the identity as $T$. This is precisely the formal statement of the intuition in garyp's great answer . In apples-and-pears language, what this means is that a general vector space $V$ and its dual $V^*$ are not only dual (in the sense that there exists a transformation that switches them and puts them back when applied twice), but they are also different (in the sense that there is no consistent way of identifying them), which is why the duality language is justified. I've been rambling for quite a bit, and hopefully at least some of it is helpful. In summary, though, what I think you need to take away is the fact that Just because dual objects are equivalent it doesn't mean they are the same. This is also, incidentally, a direct answer to the question title: no, it is not foolish. They are equivalent, but they are still different.
"source": [
"https://physics.stackexchange.com/questions/105347",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/25736/"
]
} |
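The two transformation rules derived in the answer above can be checked numerically. If $P$ is the change-of-basis matrix (columns are the new basis vectors written in the old coordinates), vector components transform with $P^{-1}$ while covector components transform with $P^T$, and the basis-independent pairing $\varphi_j v^j$ is unchanged. A sketch (assuming NumPy; $P$, $v$ and $\varphi$ are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Change-of-basis matrix P: columns = new basis vectors in old coordinates.
P = rng.standard_normal((n, n))
assert abs(np.linalg.det(P)) > 1e-8     # invertible, hence a valid basis

v_old = rng.standard_normal(n)          # vector components, old basis
phi_old = rng.standard_normal(n)        # covector components, old dual basis

# Vectors transform contravariantly, covectors covariantly:
v_new = np.linalg.solve(P, v_old)       # P^{-1} v
phi_new = P.T @ phi_old                 # P^T phi

# The pairing <phi, v> = phi_j v^j is the same in either basis.
assert np.isclose(phi_old @ v_old, phi_new @ v_new)
```

The invariance of the pairing is exactly why the two sets of components must transform oppositely: the $P$ and $P^{-1}$ factors cancel.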
105,375 | I was reading through the first chapter of Polyakov's book "Gauge-fields and Strings" and couldn't understand a hand-wavy argument he makes to explain why in systems with discrete gauge-symmetry only gauge-invariant quantities can have finite expectation value. This is known as Elitzur's theorem (which holds for continuous gauge-symmetry). Polyakov says : [...] there could be no order parameter in such systems (in
discrete gauge-invariant system) [...], only gauge invariant quantities are nonzero. This follows from the fact, that by fixing the values of $\sigma_{\mathbf{x},\mathbf{\alpha}}$ at the boundary of our system we do not
spoil the gauge invariance inside it. Here $\sigma_{x,\alpha}$ are the "spin" variables that decorate the links of a $\mathbb{Z}_2$ lattice gauge theory. I would like to understand the last sentence of this statement. Could anyone clarify what he means and why this implies no gauge-symmetry breaking ? | 1) Gauge theory is a theory where we use more than one label to label the same
quantum state. 2) Gauge “symmetry” is not a symmetry and can never be broken. This notion of gauge theory is quite unconventional, but true. When two different quantum states $|a\rangle$ and $|b\rangle$ (i.e. $\langle a|b\rangle=0$) have the
same properties, we say that there is a symmetry between $|a\rangle$ and $|b\rangle$. If we
use two different labels “$a$” and “$b$” to label the same state,
$|a\rangle=|b\rangle$, then $|a\rangle$ and $|b\rangle$ obviously have (or has) the same
properties. In this case, we say that there is a gauge “symmetry” between $|a\rangle$
and $|b\rangle$, and the theory about $|a\rangle$ and $|b\rangle$ is a gauge theory (at least
formally). As $|a\rangle$ and $|b\rangle$, being the same state, always have (or
has) the same properties, the gauge “symmetry”, by definition, can never be
broken. Usually, when the same “thing” has the same properties, we do not say that there is a
symmetry. Thus, the terms “gauge symmetry” and “gauge symmetry breaking”
are two of the most misleading terms in theoretical physics.
Ideally, we should not use the above two confusing terms.
We should say that there is a gauge structure (instead of a gauge “symmetry”)
when we use many labels to label the same state. When we change our labeling
scheme, we should say that there is a change of gauge structure (instead of “gauge
symmetry breaking”). | {
"source": [
"https://physics.stackexchange.com/questions/105375",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/23279/"
]
} |
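The "many labels for the same state" point in the answer above can be made concrete with the simplest label redundancy in quantum mechanics, a global phase: $|a\rangle=e^{i\theta}|b\rangle$ is just a different label for the same physical state, so every observable agrees and nothing can "break". A sketch (assuming NumPy; the observable is a random Hermitian stand-in):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# One physical state, two labels differing by a phase: |a> = e^{i theta} |b>.
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b /= np.linalg.norm(b)
a = np.exp(1j * 0.7) * b

# Any observable (random Hermitian stand-in) has the same expectation value.
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
O = (M + M.conj().T) / 2

assert np.isclose(a.conj() @ O @ a, b.conj() @ O @ b)

# The density matrices |a><a| and |b><b| are literally identical.
assert np.allclose(np.outer(a, a.conj()), np.outer(b, b.conj()))
```

This is a toy version of the redundancy, not the lattice gauge theory of the question, but it illustrates why relabelings of the same state can never be "broken".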
105,433 | I'm currently learning special relativity in high school, and we primarily deal with what happens when an object is moving at constant relativistic speeds. But what if the object slowed back down to a stop? I observe length contraction for an object at relativistic speeds; when it returns to being stationary, does it go back to its original length? | Calculating the effect of acceleration in special relativity is straightforward, but I suspect the algebra is a bit much at high school level. See John Baez's article on the Relativistic Rocket for a summary, or see Chapter 6 of Gravitation by Misner, Thorne and Wheeler for a more detailed analysis. When you're first introduced to SR you tend to be told about time dilation and length contraction and given formulae to calculate them. However this is at best an oversimplification and at worst actively misleading. When you're looking at some object moving relative to you, you do indeed measure the object's length to be contracted, but what actually happens is that the two end points in the object's rest frame transform into points at slightly different times in your rest frame. You measure the object to be contracted because you're measuring the end points at slightly different times. There is no sense in which the object is squeezed by its high velocity. Any object has a proper length , which is equal to its length in its rest frame. The proper length is an invariant and all observers will measure the same proper length regardless of their relative velocity. If you consider proper length then the object is not contracted. Anyhow, the answer to your question is that when the object comes to a stop relative to you its length has not changed. This is because it never did change - the change you measured was due to the coordinates you were using not matching the coordinates the object was using.
When the object comes to rest in your frame you and the object are using the same coordinates (at worst differing in the position of the origin) so both of you measure the length to be the proper length. | {
"source": [
"https://physics.stackexchange.com/questions/105433",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36046/"
]
} |
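To put numbers to the answer above: the measured length is $L=L_0\sqrt{1-v^2/c^2}$ while the proper length $L_0$ never changes, so a rod measured at $0.6\,L_0$ while moving at $0.8c$ is measured at its full proper length once it is at rest again. A quick sketch (the rod length and speed are arbitrary stand-ins):

```python
import math

L0 = 1.0    # proper length (arbitrary units); an invariant of the rod

def measured_length(beta):
    """Length measured in a frame where the rod moves at v = beta * c."""
    return L0 * math.sqrt(1.0 - beta**2)

# Measured at 0.6 L0 while moving at v = 0.8c ...
assert math.isclose(measured_length(0.8), 0.6)

# ... and back to the full proper length once the rod is at rest again.
assert measured_length(0.0) == L0
```

Nothing about the rod itself changed between the two measurements; only the frame in which the endpoints were marked off simultaneously did.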
105,707 | Water appears transparent to visible light, yet most other objects are opaque. Why is that? Is there an explanation why water appears transparent? Is water transparent at all wavelengths, or are the visible wavelengths somehow special? If it is not transparent at all wavelengths, is there some evolutionary explanation why we would expect water to have low absorption at the wavelengths that we can see with our eyes? Is there some explanation along the lines of "because we evolved in an environment where water plays (some important) role, therefore it's not surprising that our eyes are sensitive to wavelengths where water has low absorption"? | To answer this question we also need to know why some things are not transparent and why certain things, water for example, don't behave in this way. A substance's interaction with light is all about interactions between photons and atomic/molecular electrons. Sometimes a photon is absorbed, the absorber lingers a fantastically short while in an excited state and then a new photon is re-emitted, leaving the absorber in exactly the same state as it was before the process. Thus the absorber's momentum, energy and angular momentum are the same as before, so the new photon has the same energy, same momentum (i.e. same direction) and same angular momentum (i.e. polarisation) as before. This process we call propagation through a dielectric, and, by all the conservations I name, you can easily see that such a material will be transparent. Sometimes, however, the fleetingly excited absorber couples its excess energy, momentum and so forth to absorbers around it. The photon may feed into molecular (i.e. covalent bond) resonances - linear, rotational and all the other microscopic degrees of mechanical freedom that a bunch of absorbers has. The photon may not get re-emitted, but instead its energy is transferred to the absorbing matter. When this happens, the material is attenuating or opaque.
So, BarsMonster's excellent graph shows us where in the spectrum water's internal mechanics tends to absorb photons for good (thus where it is opaque) and where it behaves as a dielectric, simply delaying the light through absorption and re-emission. In a short answer, it is impossible to explain the whys and wherefores of the graph, as its peaks and troughs are owing to molecular resonances of very high complexity. The graph is really as good a simple summary as one is going to get. However, there is one last piece to the water transparency (in visible light) jigsaw that I don't believe has been talked about, and that is that water is a liquid . This means it can't be riven with internal cracks and flaws. Sometimes opaqueness is caused by scattering and aberration rather than the absorption I speak of above. This is why snow is not transparent, for example. For light to propagate through a medium with low enough aberration that we perceive the medium to be transparent, the medium must be optically highly homogeneous. This homogeneity generally arises only in near-perfect crystals and in liquids; the latter tend to smooth out any flaws by flow and diffusion and thus tend to be self-homogenising. Inhomogeneity is a powerful block to light: the simplified models of Mie and Rayleigh scattering show this decisively. So in summary, water is transparent at visible wavelengths because (1) molecular resonances and other mechanical absorbing phenomena don't tend to be excited in water at visible wavelengths and (2) it is optically homogeneous, which property is greatly helped by its being a liquid.
"source": [
"https://physics.stackexchange.com/questions/105707",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/24498/"
]
} |
105,802 | The (one-dimensional) wave equation is the second-order linear partial differential equation
$$\frac{\partial^2 f}{\partial x^2}=\frac{1}{v^2}\frac{\partial^2 f}{\partial t^2}\tag{second order PDE}$$
that admits as its solutions functions $f$ of the form
$$f=f(x\pm vt),\tag{solution}$$
as can be verified in a straight-forward manner. These solutions have a convenient interpretation that justifies the phrase wave equation. I noticed there are first-order partial differential equations which have as solutions functions of the form $f(x\pm vt)$:
$$\frac{\partial f}{\partial x}=\pm\frac{1}{v}\frac{\partial f}{\partial t}\tag{first order PDE}.$$ A quick Google search shows this is indeed called the first-order wave equation, but it usually shows up in the context of math classes. So now the question: Why is the usual second-order PDE favored over these first-order ones if both admit the same solutions? Is there a physical reason? Are these first-order equations useful in their own right? Perhaps there are other solutions that are admitted but aren't desired, or maybe it just looks cleaner since one doesn't have to carry around the $\pm$ symbol in the differential equation. | There's nothing wrong with the first order wave equation mathematically, but it's just a little boring. If you want to use this equation to describe waves, it basically amounts to having a 1d solid with speed of sound $v$ for left moving waves (say) and speed of sound $0$ for right moving waves. It wouldn't surprise me if such a thing could be constructed (you would have to introduce some external fields to break time reversal invariance) but it is a very special system that we are not generically interested in. Let's take the Fourier transforms of both equations to get the dispersion relationships. The normal second order equation gives
\begin{equation}
\omega^2=v^2 k^2
\end{equation}
So for each frequency $\omega$ there are two allowed values of $k$, corresponding to right and left moving waves. Note that if we generalize the second order equation to include more spatial directions, there would be an infinite number of allowed $k$ values. The first order equation meanwhile always has one allowed solution for a given frequency
\begin{equation}
\omega=v k
\end{equation}
So we get either right moving or left moving waves but not both. This restricts the allowed behavior: you can't have standing waves, for example. If I try to generalize to higher dimensions, this equation picks out a single allowed $k$ for each frequency, so waves will only propagate along one very special direction. Physically this is not what we would normally call a wave because I only need one initial condition, not two. Usually dynamical systems can only be evolved given their initial position and velocity, but the first order equation needs only the initial position. (Or if you like, your equation is not a Hamiltonian system because the phase space is odd dimensional). Last but not least the first order equation necessarily picks out a preferred frame. By doing a boost I can change the sign of $v$. Thus the equation is not a good starting point for dealing with relativistic waves, which is one major application for the wave equation. (Of course you can have waves in materials that do pick out a preferred frame, and that is fine, but there you run into the problems above that you are looking at something with a preferred direction of motion as well). (The Dirac equation gets around this by using spinor reps of the Lorentz group, but from your question I am supposing $f$ is a scalar). Edit: rereading your question I see you want to have the $\pm$. Then you aren't looking at solutions of one single equation; you are looking at solutions to two equations and saying both are allowed. This is a little ugly for a few reasons. First, philosophically there should be one single equation for any system. Second, superpositions don't solve either first order equation separately but do solve the second order equation. Third, the analogue of your idea for more spatial dimensions is to have an infinite set of first order equations, one for each direction.
On the other hand there is a way to rewrite the second order equation as two first order equations in a way which generalizes to any dimension, this is the way of the Hamiltonian and it is indeed a very useful thing to do in many situations. | {
"source": [
"https://physics.stackexchange.com/questions/105802",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/29216/"
]
} |
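The travelling-wave claims in the answer above are easy to check numerically. The sketch below (plain Python with finite differences; the Gaussian profile and the sample point are arbitrary choices) confirms that $g(x \pm vt)$ solves the second-order equation for both signs, while one fixed branch of the first-order equation, $f_x + f_t/v = 0$, is solved only by the right-moving profile $g(x - vt)$:

```python
import math

V = 2.0  # wave speed; arbitrary choice for the check

def g(u):
    """Arbitrary smooth profile; a Gaussian is convenient."""
    return math.exp(-u * u)

def f(x, t, sign):
    """Travelling wave f = g(x + sign*V*t); sign = -1 moves to the right."""
    return g(x + sign * V * t)

def d1(fun, a, h=1e-6):
    return (fun(a + h) - fun(a - h)) / (2 * h)

def d2(fun, a, h=1e-4):
    return (fun(a + h) - 2 * fun(a) + fun(a - h)) / h**2

def second_order_residual(x, t, sign):
    """f_xx - f_tt / V**2: vanishes for BOTH signs."""
    return d2(lambda q: f(q, t, sign), x) - d2(lambda q: f(x, q, sign), t) / V**2

def first_order_residual(x, t, sign):
    """f_x + f_t / V (one fixed branch of the first-order PDE):
    vanishes only for the right-moving wave, sign = -1."""
    return d1(lambda q: f(q, t, sign), x) + d1(lambda q: f(x, q, sign), t) / V
```

Evaluating the residuals at any sample point shows the asymmetry: the second-order residual is at noise level for both directions, but the first-order residual is only zero for one of them, which is the "boring" one-way propagation described in the answer.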
105,912 | I'm reading a proof about Lagrangian => Hamiltonian and one part of it just doesn't make sense to me. The Lagrangian is written $L(q, \dot q, t)$ , and is convex in $\dot q$ , and then the Hamiltonian is defined via the Legendre transform $$H(p,q,t) = \max_{\dot q} [p \cdot \dot q - L(q, \dot q, t)]$$ Under the right conditions there exists a function $\dot Q (p,q,t)$ such that $$H(p,q,t) = p \cdot \dot Q(p,q,t) - L(q, \dot Q(p,q,t), t)$$ i.e. when some $\dot Q(p,q,t)$ satisfies $$p = \frac{\partial L}{\partial \dot q}\rvert_{(q, \dot Q(p,q,t), t)} = \frac{\partial L}{\partial \dot q}(q, \dot Q(p,q,t), t).$$ (Finding this function is usually called "inverting $p$ ".) By taking partials in the $p$ variable and using this relationship, we can obtain the relationship $$\dot Q = \frac{\partial H}{\partial p}.$$ Because of the notation I chose, I get the strong urge to say $$\dot q = \frac{\partial H}{\partial p} ,$$ and in fact this is what the textbook does. But have we proved this? In other words, how can we deduce that $$q'(t) = \frac{\partial H}{\partial p}(p(t), q'(t), t)$$ for any differentiable vector valued function $q$ ? (or maybe there are more conditions we need on $q$ ?) Here $$p(t) = \frac{\partial L}{\partial \dot q}(q(t), q'(t), t)$$ according to Lagrange's equations. | Ok, let us start from scratch. A function $f: \mathbb R^n \to \mathbb R$ with $f \in C^2(\mathbb R^n)$ is said to be convex if its Hessian matrix (i.e. the one with coefficients $\partial^2 f/\partial x_i \partial x_j$ ) is everywhere (strictly) positive definite. Let $\Omega \subset \mathbb R \times \mathbb R^n$ be an open set and focus on a jointly $C^2$ Lagrangian function $\Omega \times \mathbb R^n \ni (t,q,\dot{q}) \mapsto L(t, q, \dot{q}) \in \mathbb R$ . For fixed $(t,q) \in \Omega$ , $L$ is assumed to be convex as a function of $\dot{q}$ . In other words $\mathbb R^n \ni \dot{q} \mapsto L(t, q, \dot{q}) \in \mathbb R$ is supposed to be convex.
Referring to either systems made of material points or solid bodies, convexity arises from the structure of the kinetic energy part of Lagrangians, which are always of the form $T(t, q, \dot{q}) - V(t, q)$ , even considering generalized potentials $V(t,q, \dot{q})$ with linear dependence on $\dot{q}$ , as is the case for inertial or electromagnetic forces, also in the presence of holonomic ideal constraints. The associated Hamiltonian function is defined as the Legendre transformation of $L$ with respect to the variables $\dot{q}$ . In other words: $$H(t,q,p) := \max_{\dot{q} \in \mathbb R^n}\left[p\cdot \dot{q} - L(t, q, \dot{q})\right]\qquad (1)$$ Within our hypotheses on $L$ , from the general theory of Legendre transformation, it arises that, for fixed $(t,q) \in \Omega$ , a given $p \in \mathbb R^n$ is associated with exactly one $\dot{q}(p)_{t,q} \in \mathbb R^n$ where the maximum of the RHS in (1) is attained (for $n=1$ the proof is quite evident, it is not for $n>1$ ). Since $\dot{q}(p)_{t,q} $ trivially belongs to the interior of the domain of the function $\mathbb R^n \ni \dot{q} \mapsto p\cdot \dot{q} - L(t, q, \dot{q})$ , it must be: $$\left.\nabla_{\dot{q}} \right|_{\dot{q}= \dot{q}(p)_{t,q}} \left( p\cdot \dot{q} - L(t, q, \dot{q})\right) =0\:.$$ In other words (always for fixed $t,q$ ): $$p = \left.\nabla_{\dot{q}} \right|_{\dot{q}(p)_{t,q}} L(t, q, \dot{q})\:, \quad \forall \dot{q} \in \mathbb R^n\qquad (2)$$ As a consequence, (always for fixed $(t,q)\in \Omega$ ) the map $\mathbb R^n \ni p \mapsto \dot{q}(p)_{t,q} \in \mathbb R^n$ is injective , because it admits a right inverse given by the map $\mathbb R^n \ni \dot{q} \mapsto \nabla_{\dot{q}} L(t, q, \dot{q})$ which, in turn, is surjective . However the latter map is also injective , as one easily proves
using the convexity condition and the fact that the domain $\mathbb R^n$ is trivially convex too. The fact that the $\dot{q}$ -Hessian matrix of $L$ is non-singular also implies that the map (2) is $C^1$ with its inverse. Summing up, the map (2) is a $C^1$ diffeomorphism from $\mathbb R^n$ onto $\mathbb R^n$ and, from (1), we have the popular identity describing the interplay of the Hamiltonian and Lagrangian functions as: $$H(t,q,p) = p\cdot \dot{q} - L(t, q, \dot{q})\qquad (3)$$ which holds true when $p \in \mathbb R^n$ and $\dot{q} \in \mathbb R^n$ are related by means of the $C^1$ diffeomorphism from $\mathbb R^n$ onto $\mathbb R^n$ (for fixed $(t,q)\in \Omega$ ): $$p = \nabla_{\dot{q}} L(t, q, \dot{q})\:, \quad \forall \dot{q} \in \mathbb R^n\qquad (4)\:.$$ By construction, $H= H(t,q,p)$ is a jointly $C^1$ function defined on $\Gamma := \Omega \times \mathbb R^n$ . I stress that $L$ is defined on the same domain $\Gamma$ in $\mathbb R^{2n+1}$ . The open set $\Gamma$ is equipped by the diffeomorphism: $$\psi: \Gamma \ni (t,q, \dot{q}) \mapsto (t,q, p) \in \Gamma \qquad (4)'$$ where (4) holds. Let us study the relationship between the various derivatives of $H$ and $L$ . I remark that I will not make use of Euler-Lagrange or Hamilton equations anywhere in the following. Consider a $C^1$ curve $\gamma: (a,b) \ni t \mapsto (t, q(t), \dot{q}(t)) \in \Gamma$ , where $t$ has no particular meaning and $\dot{q}(t)\neq \frac{dq}{dt}$ generally. The diffeomorphism $\psi$ transform that curve into a similar $C^1$ curve $t \mapsto \psi(\gamma(t)) = \gamma'(t)$ I will also indicate by $\gamma': (a,b) \ni t \mapsto (t, q(t), p(t)) \in \Gamma$ . We can now evaluate $H$ over $\gamma'$ and $L$ over $\gamma$ and compute the total temporal derivative taking (3) and (4) into account, i.e. 
we compute: $$\frac{d}{dt} H(t, q(t),p(t)) = \frac{d}{dt}\left(p(t) \dot{q}(t) - L(t,q(t),p(t)) \right)\:.$$ Computations gives rise almost immediately to the identity, where both sides are evaluated on the respective curve: $$\frac{\partial H}{\partial t} + \frac{dq}{dt}\cdot \nabla_q H
+ \frac{dp}{dt}\cdot \nabla_p H = \frac{dp}{dt}\dot{q} + p \frac{d\dot{q}}{dt} -\frac{\partial L}{\partial t} - \frac{dq}{dt}\cdot \nabla_q L
- \frac{d\dot{q}}{dt}\cdot \nabla_{\dot{q}} L \:.$$ In the RHS, the second and the last term cancel each other in view of (4), so that: $$\frac{\partial H}{\partial t} + \frac{dq}{dt}\cdot \nabla_q H
+ \frac{dp}{dt}\cdot \nabla_p H = \frac{dp}{dt}\dot{q} -\frac{\partial L}{\partial t} - \frac{dq}{dt}\cdot \nabla_q L \:.$$ Rearranging the various terms into a more useful structure: $$\left(\frac{\partial H}{\partial t}|_{\gamma'(t)} + \frac{\partial L}{\partial t}|_{\gamma(t)}\right) +
\frac{dq}{dt}\cdot \left( \nabla_q H|_{\gamma'(t)} + \nabla_q L|_{\gamma(t)}\right) +
\frac{dp}{dt}\cdot \left(\nabla_p H|_{\gamma'(t)} - \dot{q}|_{\gamma(t)}\right) =0\:.\qquad (5)$$ Now observe that actually, since $\gamma$ is generic, $\gamma(t)$ and $\gamma'(t)= \psi(\gamma(t))$ are generic points in $\Gamma$ (however connected by the transformation (4)). Moreover, given the point $(t,q, \dot{q}) = \gamma(t) \in \Gamma$ , we are free to choose the derivatives $\frac{dq}{dt}$ and (using the diffeomorphism) $\frac{dp}{dt}$ as we want, fixing $\gamma$ suitably. If we fix to zero all these derivatives, (5) proves that, if $(t,q, \dot{q})$ and $(t,q,p)$ are related by means of (4): $$\left(\frac{\partial H}{\partial t}|_{(t,q,p)} + \frac{\partial L}{\partial t}|_{(t,q, \dot{q})}\right) =0\:.$$ This result does not depend on derivatives $dq/dt$ and $dp/dt$ since they do not appear as arguments of the involved functions. So this result holds everywhere in $\Gamma$ because $(t,q, \dot{q})$ is a generic point therein.
We conclude that (5) can be re-written as: $$\frac{dq}{dt}\cdot \left( \nabla_q H|_{\gamma'(t)} + \nabla_q L|_{\gamma(t)}\right) +
\frac{dp}{dt}\cdot \left(\nabla_p H|_{\gamma'(t)} - \dot{q}|_{\gamma(t)}\right) =0\:.\qquad (5)'$$ where again, we are considering a generic curve $\gamma$ as before.
Fixing such curve such that all components of $\frac{dq}{dt}$ and $\frac{dp}{dt}$ vanish except for one of them, for instance $\frac{dq^1}{dt}$ , we find: $$\left(\frac{\partial H}{\partial q^1}|_{(t,q,p)} + \frac{\partial L}{\partial q^1}|_{(t,q, \dot{q})}\right) =0\:,$$ if $(t,q, \dot{q})$ and $(t,q,p)$ are related by means of (4), and so on. Eventually we end up with the following identities, valid when $(t,q, \dot{q})$ and $(t,q,p)$ are related by means of (4) $$\frac{\partial H}{\partial t}|_{(t,q,p)} =- \frac{\partial L}{\partial t}|_{(t,q, \dot{q})}\:, \quad \frac{\partial H}{\partial q^k}|_{(t,q,p)} =- \frac{\partial L}{\partial q^k}|_{(t,q, \dot{q})}\:, \quad
\frac{\partial H}{\partial p_k}|_{(t,q,p)} = \dot{q}^k\:.
\quad (6)$$ The last identity is the one you asked for. As you see, the found identities rely upon the Legendre transformation only and they do not consider Euler-Lagrangian equations or Hamilton ones. However, exploiting these identities, it immediately arises that $\gamma$ verifies EL equations: $$\frac{d}{dt} \frac{\partial L}{\partial \dot{q}^k} - \frac{\partial L}{\partial q^k}=0\:,\quad \frac{dq^k}{dt} = \dot{q}^k\quad k=1,\ldots, n$$ if and only if the transformed curve $\gamma'(t) := \psi(\gamma(t))$ verifies Hamilton equations. $$\frac{d p_k}{dt} = -\frac{\partial H}{\partial q^k} \:, \quad \frac{dq^k}{dt} = \frac{\partial H}{\partial p_k}\quad k=1,\ldots, n\:.$$ Indeed, starting from a curve $\gamma(t) = (t, q(t), \dot{q}(t))$ , the first EL equation, exploiting (4) (which is part of the definition of $\psi$ ) and the second identity in (6), becomes the first Hamilton equation for the transformed curve $\psi (\gamma(t))$ . Moreover, the second EL equation, making use of the last identity in (6), becomes the second Hamilton equation for the transformed curve. This procedure is trivially reversible, so that, starting from Hamilton equations, you can go back to EL equations. The first identity in (6) it not used here. However it implies that the system is or is not invariant under time translations simultaneously in Lagrangian and Hamiltonian formulation (in both cases, that invariance property implies the existence of a constant of motion which is nothing but $H$ represented with the corresponding variables either Lagrangian or Hamiltonian). As a final comment notice that (3) and the last identity in (6) (which is nothing but the inverse function of (2) at fixed $(t,q)$ ) imply $$L(t, q, \dot{q}) = \nabla_p H(t,q,p) \cdot p - H(t,q,p)\:,$$ where (2) is assumed to connect Lagrangian and Hamiltonian variables. | {
"source": [
"https://physics.stackexchange.com/questions/105912",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/24460/"
]
} |
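A concrete low-tech illustration of the identities (6) above, for the harmonic-oscillator Lagrangian $L = m\dot q^2/2 - kq^2/2$ (my choice of example; $m$, $k$ and the sample point are arbitrary). Here the Legendre map (4) is simply $p = m\dot q$, and central differences confirm $\partial H/\partial p = \dot q$ and $\partial H/\partial q = -\partial L/\partial q$ without invoking any equations of motion:

```python
m, k = 2.0, 3.0  # arbitrary mass and spring constant for the example

def L(q, qdot):
    """Convex Lagrangian L = m*qdot**2/2 - k*q**2/2."""
    return 0.5 * m * qdot**2 - 0.5 * k * q**2

def p_of(qdot):
    """The Legendre map (4): p = dL/d(qdot) = m*qdot."""
    return m * qdot

def H(q, p):
    """Legendre transform (3), with qdot = p/m inverting (4)."""
    qdot = p / m
    return p * qdot - L(q, qdot)

def d(fun, a, h=1e-6):
    """Central finite difference."""
    return (fun(a + h) - fun(a - h)) / (2 * h)

q0, qdot0 = 0.7, 1.3   # an arbitrary point (q, qdot)
p0 = p_of(qdot0)

dH_dp = d(lambda p: H(q0, p), p0)   # last identity in (6): equals qdot0
dH_dq = d(lambda q: H(q, p0), q0)   # second identity: equals -dL/dq
dL_dq = d(lambda q: L(q, qdot0), q0)
```

As the answer emphasizes, these are identities of the Legendre transformation itself, checked here pointwise with no reference to Euler-Lagrange or Hamilton equations.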
105,935 | Here's the picture: In picture 1 : Assume that we have 2 objects, namely A , and B . Object A with mass $2M$, and object B with mass $m$; and their distance is $r$. Then the gravitational force of A acting on B ($\vec{v}$)'s magnitude will be: $$G\times\frac{2Mm}{r^2}.$$ In picture 2 : Now divide object A into 2 equal parts of mass $M$ each. The distance from the centroid of each part of A to B is $\dfrac{r}{\cos \alpha}$. The gravitational forces of 2 parts of A acting on B ($\vec{v}_1; \vec{v}_2$)'s magnitudes are: $$G\times \frac{Mm\cos^2\alpha}{r^2}.$$ Now, I'm pretty sure that if I take the sum $\vec{v}_1 + \vec{v}_2$, I wouldn't get $\vec{v}$. The direction is the same, but the magnitude isn't. They are off a factor of $\cos ^ 3 \alpha$. :( What's going on here? | The problem is in your assumption that the force is $F = 2GMm/r^2$. This is true for the force on a point mass from a sphere or another point mass, but not otherwise. What you need to do is sum
the force on each particle from every other particle. For a continuum object, $$\vec F = \int \rho \vec g\, dV$$ where $\rho$ is the density and $\vec g$ the acceleration due to gravity. Both can vary over the volume. To find $\vec g$ you sum up the contribution from all points,
$$\vec g(\mathbf x) = G\int\rho \frac{\mathbf y -\mathbf x}{|\mathbf x -\mathbf y|^3}\, dV$$
so as you can see it is significantly more difficult for bodies other than point masses (and spheres, it turns out). But from these expressions you can see that forces on parts of rigid bodies add, and so do the forces from parts of bodies. Often you can pretend that you are dealing with point masses because from far away, anything looks like a point mass. But as you have discovered, this is an approximation and not exact. | {
"source": [
"https://physics.stackexchange.com/questions/105935",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/43575/"
]
} |
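The asker's arithmetic is actually correct, and the $\cos^3\alpha$ mismatch is a few lines of code to confirm (units chosen so $G = M = m = r = 1$; only the ratio matters):

```python
import math

# One lump 2M on the axis at distance r pulls with magnitude 2 in these units.
# After splitting into two lumps M at distance r/cos(alpha), each pulls with
# magnitude cos(alpha)**2, but only the axial component cos(alpha) of each
# survives the vector sum, hence the cos(alpha)**3 discrepancy.

def force_single():
    return 2.0                                  # G*(2M)*m / r**2

def force_split(alpha):
    magnitude_each = math.cos(alpha) ** 2        # G*M*m*cos(alpha)**2 / r**2
    axial_each = magnitude_each * math.cos(alpha)
    return 2.0 * axial_each

alpha = 0.4  # arbitrary half-angle
ratio = force_split(alpha) / force_single()      # comes out to cos(alpha)**3
```

The split configuration always pulls less than the point-mass formula predicts, which is exactly the answer's point: $F = 2GMm/r^2$ only holds for point masses and spheres, not for an extended or split body.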
106,009 | I know how rainbows are formed, and why. Usually it is said that the Sun must be behind the observer, in order for its light to be totally reflected inside the droplet and then reach the observer. But surely not all the light is totally reflected? There must be some radiation refracting from inside the droplet to the air: since different wavelengths refract through different angles, shouldn't we also see a rainbow when the Sun is in front of us? | You are right. Rainbows can occur all over the sky. However the traditional one and two internal reflections of the primary and secondary bows send light back towards the sun and hence their bows appear opposite the sun and centered on the antisolar point. The reflection of the main light makes these bows stand out. And only the light that enters a droplet is reflected in some manner. Sometimes it is a single reflection and you get a primary rainbow which everyone is familiar with. Depending on how dense the droplets are, some light will pass through all the drops and will not be reflected, creating a hazy or dim rainbow. If some of the light bounces inside the droplet (enters but does not exit and bounces inside twice more) you'll see a weaker double rainbow caused by the light that has bounced twice. So there are many orders of rainbows: A zero order is when there are no internal droplet reflections and the droplet is sunward. This creates an orange-shifted glow. There must be some radiation refracting from inside the droplet to the air: since different wavelengths refract through different angles, shouldn't we also see a rainbow when the Sun is in front of us? The width you see of the rainbow represents the spreading of the frequencies and their slightly different refraction angles. Light can be refracted in many different ways from the droplets with the sun in front of you, but they don't form the traditional bow shape which has to do with the arc of the sun being behind you and refracted back.
This is mostly because the sun's brightness makes it too hard to see. And here are some other refraction examples that are not really considered rainbows. Take a look at this site for more about atmospheric optics . | {
"source": [
"https://physics.stackexchange.com/questions/106009",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/37677/"
]
} |
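As a supplement to the answer above: the reason the primary bow sits at a definite angle from the antisolar point falls out of Descartes' minimum-deviation construction. The sketch below assumes a refractive index of $n = 1.333$ for water and finds the minimum by a brute-force scan rather than the closed-form solution:

```python
import math

N_WATER = 1.333  # refractive index of water at visible wavelengths (approximate)

def deviation_primary(theta_i):
    """Total deviation (radians) of a ray after one internal reflection:
    D = 2*theta_i - 4*theta_r + pi, with Snell's law sin(theta_i) = n*sin(theta_r)."""
    theta_r = math.asin(math.sin(theta_i) / N_WATER)
    return 2 * theta_i - 4 * theta_r + math.pi

# Scan incidence angles; rays pile up at the minimum deviation (the caustic
# that forms the visible bow).
angles = [i * (math.pi / 2) / 10000 for i in range(1, 10000)]
d_min = min(deviation_primary(a) for a in angles)
bow_angle_deg = 180.0 - math.degrees(d_min)  # angle from the antisolar point
```

The scan lands near the familiar 42 degrees, and repeating it with the slightly different $n$ for red versus violet gives the angular spread of the colours that the answer describes.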
106,098 | Sausages universally split parallel to the length of the sausage. Why is that? | This behaviour is well explained by Barlow's formula , even though the English Wikipedia article is incomplete in this context. The German version , on the other hand, gives the full picture (which I will quote in the following). The walls of a pipe (or a similar cylindric container, say, a sausage) experience two types of stresses: Tangential ($\sigma_{\rm{t}}$) and axial ($\sigma_{\rm{a}}$). For given pressure $p$, diameter $d$ and wall-thickness $s$, the individual stresses can approximately be calculated from
$$\sigma_{\rm{t}} = \frac { p \cdot d } { 2 \cdot s }$$
and
$$\sigma_{\rm{a}} = \frac { p \cdot d } { 4 \cdot s }.$$
Here, you can directly see that the tangential stress will always be larger, which is why it is likely that cracks in the container/sausage will first form in this direction. In fact, this is why the first formula is often stated on its own, just as it is the case in the English Wikipedia article. Fun fact: The sausage example is used by many German students as a mnemonic helping to remember which of the stresses is larger. As a result, the formulas are often called "Bockwurstformeln" (sausage formulas). Edit: In response to the comments below, I will try to summarize some details about the above formulas The formulas do not directly indicate how and where the container will split. Assuming that the tensile strength is identical in all directions, we can see that there will be a greater release of tension when the crack propagates length-wise (See the video posted by JoeHobbit and the comment by LDC3) A real sausage will of course have various imperfections, which is why the crack path will not be straight in practice | {
"source": [
"https://physics.stackexchange.com/questions/106098",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/2843/"
]
} |
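The two formulas above can be wrapped in a few lines of Python. The pressure and geometry below are invented numbers for illustration; the point is that the hoop-to-axial ratio is exactly 2 regardless of them, which is why lengthwise cracks win:

```python
def hoop_stress(p, d, s):
    """Tangential (hoop) stress: sigma_t = p*d / (2*s)."""
    return p * d / (2 * s)

def axial_stress(p, d, s):
    """Axial stress: sigma_a = p*d / (4*s)."""
    return p * d / (4 * s)

p = 2.0e5    # internal pressure, Pa (assumed)
d = 0.02     # sausage diameter, m (assumed)
s = 0.5e-3   # casing thickness, m (assumed)

ratio = hoop_stress(p, d, s) / axial_stress(p, d, s)  # always 2
```

Since $\sigma_t = 2\sigma_a$ for any thin-walled cylinder, the casing reaches its tensile strength in the tangential direction first, and the crack then propagates along the length.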
106,605 | Is charge of something for (e.g.) an electron related to electromagnetic space if it exists due to energy, due to which it may have mass? I don't know about quantum mechanics or advanced particle models. Can anyone just simply give an intuitive idea? EDIT
What I mean is: what actually gives the electron its charge, if charge is not assumed to be fundamental but is the result of some other physical phenomenon? Or is it just a quantity defined to explain physical interactions? I think it is clear now | Charge is a fundamental conserved property of particles. It is, if you like, a measure of how much a particle interacts with electromagnetic fields. A particle with charge can produce and be affected by electromagnetic fields. This is what we mean when we say a particle has electric charge. It might help to think of it as a simple quantised way to measure the coupling strength of particles with the appropriate force, as the concept of charge extends to other forces as well. e.g. electric charge for electromagnetic force,
colour charge for strong force, etc. Please also see @JamalS's answer which is thicker on the abstraction and shows the quantum field theoretic origins of electric charge | {
"source": [
"https://physics.stackexchange.com/questions/106605",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/40253/"
]
} |
106,754 | Boiling point of water is 100 degrees Celsius. The temperature at which water in liquid form is converted into gaseous form. Then how is it possible for water to evaporate at room temperature? | Think of temperature as average kinetic energy of the water molecules. While the average molecule doesn't have enough energy to break the inter-molecular bonds, a non-average molecule does. Water is a liquid because the dipole attraction between polar water molecules makes them stick together. At standard atmospheric pressure (acting somewhat like a vice), you need a comparatively large temperature of 100°C (translating to a high average energy distributed among the microscopic degrees of freedom , most relevantly the kinetic ones) for water molecules to break free in bulk, creating bubbles of water vapour within the liquid. However, at the surface of the liquid, lone molecules may end up getting enough kinetic energy to break free due to the random nature of molecular motion at basically any temperature. On the flip side, water molecules in the atmosphere may enter the liquid at the surface as well, which is measured by equilibrium vapour pressure . | {
"source": [
"https://physics.stackexchange.com/questions/106754",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41462/"
]
} |
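A crude way to put numbers on "a non-average molecule does": a single Boltzmann factor with an assumed per-molecule escape energy of roughly 0.42 eV (the latent heat of vaporisation divided by Avogadro's number). This is an order-of-magnitude sketch, not the full Maxwell-Boltzmann treatment:

```python
import math

K_B_EV = 8.617e-5   # Boltzmann constant in eV/K
E_ESCAPE_EV = 0.42  # assumed per-molecule binding energy of liquid water

def escape_fraction(T_kelvin):
    """Boltzmann-factor estimate of the fraction of molecules energetic
    enough to leave the surface at temperature T."""
    return math.exp(-E_ESCAPE_EV / (K_B_EV * T_kelvin))

room = escape_fraction(298.0)  # tiny but nonzero: evaporation is slow, not absent
boil = escape_fraction(373.0)  # much larger near the boiling point
```

The fraction at room temperature is minuscule but strictly positive, which is exactly why a puddle dries out over hours rather than never, and why warming the water speeds the process up so dramatically.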
It is unclear whether the work is defined considering opposite force on the same body or in ideal case where there is a single force acting on a body. If we consider ideal case where only a single force is present on the body, the displacement will not attain a specific value. It will go on increasing and we cannot fix the value of displacement, I mean when do we say a force has worked on a body? Do we say this when a certain force has to 'work' against another opposite force to cause displacement? I think the work is defined considering 2nd case since only then we can have a fix value of displacement because when one force is removed the other force will eventually stop the body. | | {
"source": [
"https://physics.stackexchange.com/questions/106767",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/43830/"
]
} |
106,808 | If I jump from an airplane positioned straight upright into the ocean, why is it the same as jumping straight on the ground? Water is a liquid as opposed to the ground, so I would expect that by plunging straight in the water, I would enter it aerodynamically and then be slowed in the water. | When you enter the water, you need to "get the water out of the way". Say you need to get 50 liters of water out of the way. In a very short time you need to move this water by a few centimeters. That means the water needs to be accelerated in this short time first, and accelerating 50 kg of matter with your own body in this very short time will deform your body, no matter whether the matter is solid, liquid, or gas. The interesting part is that it does not matter how you enter the water: it is not really relevant (regarding being fatal) in which position you enter the water at a high velocity. And you will be slowed in the water, but too quickly for your body to keep up with the forces from different parts of your body being decelerated at different times. Basically I'm making a very rough estimate whether it would kill, only taking into account one factor, that the water needs to be moved away. And I conclude it will still kill, so I do not even try to find all the other ways it would kill. Update - revised :
"source": [
"https://physics.stackexchange.com/questions/106808",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/43905/"
]
} |
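A complementary back-of-envelope number for the answer above, assuming uniform braking over about a metre of water at a typical human terminal velocity (both numbers are assumptions for illustration, not measurements):

```python
G = 9.81  # standard gravity, m/s^2

def deceleration_in_g(v, stopping_distance):
    """Average deceleration, in multiples of g, for uniform braking from
    speed v (m/s) over the given distance (m): a = v**2 / (2*d)."""
    return v**2 / (2 * stopping_distance) / G

# ~55 m/s is a commonly quoted belly-down human terminal velocity.
impact = deceleration_in_g(v=55.0, stopping_distance=1.0)
```

The result comes out well above the roughly 100 g level usually quoted as unsurvivable, consistent with the answer's conclusion that the entry itself is what kills.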
107,061 | Imagine we have cup A with 50 g of water and cup B (smaller in width than A) with 100 g of water. Now put cup B into cup A. If the widths of both cups are comparable then the cup with 100 g of water floats. It does not touch the bottom of cup A. Now think about Archimedes' law of flotation. It says that the weight of displaced liquid = weight of the floating object. However in this case the bottom cup has only 50 g of water. How can an object float without displacing water equal to its own weight? Am I not applying Archimedes' principle correctly or, because of both things being of comparable size, does Archimedes' principle not apply? | As best I can tell, what you're confused about is the fact that Cup B (weighing 100 g) is floating in 50 g of water, while Archimedes' principle states that Cup B ought to be displacing 100 g of water, which seems to contradict the fact that there's only 50 g of water available to displace. How can that be possible? There is a subtle reason; just because you have 50 g of water doesn't mean you can't effectively displace more than 50 g of water. This is probably best illustrated with a picture. Here's what the system looks like before Cup B is dropped in:
"source": [
"https://physics.stackexchange.com/questions/107061",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/29735/"
]
} |
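The two-cup picture can be checked with a few lines of arithmetic. The radii below are invented for the illustration; the point is that cup B sinks just deep enough to "displace" 100 cm³ even though only 50 cm³ of real water is present, and its bottom still clears cup A's bottom:

```python
import math

R_A, R_B = 4.0, 3.5               # cup A inner radius, cup B outer radius, cm (assumed)
WATER_IN_A, MASS_B = 50.0, 100.0  # grams; with density 1 g/cm^3, also cm^3

# Flotation: B sinks to a depth d below the water surface such that the
# submerged volume pi*R_B**2*d equals 100 cm^3 (it "displaces" 100 g).
d = MASS_B / (math.pi * R_B**2)

# Water surface height s above A's bottom: the 50 cm^3 of real water fills
# cup A's cross-section minus B's submerged part: pi*R_A**2*s - pi*R_B**2*d = 50.
s = (WATER_IN_A + MASS_B) / (math.pi * R_A**2)

gap = s - d  # height of B's bottom above A's bottom; positive means it floats
```

With these dimensions the gap comes out positive, so the thin annulus of water really does hold up a cup twice its own weight, exactly as in the diagrams.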
107,171 | The title is self-explanatory, I guess. Why can two (or more) electric field lines never cross? | Electric field lines are a visualization of the electrical vector field. At each point, the direction (tangent) of the field line is in the direction of the electric field. At each point in space (in the absence of any charge), the electric field has a single direction, whereas crossing field lines would somehow indicate the electric field pointing in two directions at once in the same location. Field lines do cross, or at least intersect, in the sense that they converge on charge. If there is a location with charge, the field lines will converge on that point. However we typically say the field lines terminate on the charge rather than crossing there. | {
"source": [
"https://physics.stackexchange.com/questions/107171",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/38873/"
]
} |
107,191 | Diagrams of rocket engines like this one, ( source ) always seem to show a combustion chamber with a throat, followed by a nozzle. Why is there a throat? Wouldn't the thrust be the same if the whole engine was a U-shaped combustion chamber with a nozzle? | The whole point to the throat is to increase the exhaust velocity. But not just increase it a little bit -- a rocket nozzle is designed so that the nozzle chokes . This is another way of saying that the flow accelerates so much that it reaches sonic conditions at the throat. This choking is important. Because it means the flow is sonic at the throat, no information can travel upstream from the throat into the chamber. So the outside pressure no longer has an effect on the combustion chamber properties. Once it is sonic at the throat, and assuming the nozzle is properly designed, some interesting things happen. When we look at subsonic flow, the gas speeds up as the area decreases and slows down as the area increases. This is the traditional Venturi effect. However, when the flow is supersonic, the opposite happens. The flow accelerates as the area increases and slows as it decreases. So, once the flow is sonic at the throat, the flow then continues to accelerate through the expanding nozzle. This all works together to increase the exhaust velocity to very high values. From a nomenclature standpoint, the throat of a nozzle is the location where the area is the smallest. So a "U-shaped chamber with a nozzle" will still have a throat -- it's defined as wherever the area is the smallest. If the nozzle is a straight pipe then there is no throat to speak of. | {
"source": [
"https://physics.stackexchange.com/questions/107191",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/9167/"
]
} |
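The subsonic/supersonic area behavior described in the entry above can be sketched with the standard isentropic area-Mach relation for a perfect gas (my own illustration; the value γ = 1.4 is assumed, not taken from the answer):

```python
import math

def area_ratio(M, gamma=1.4):
    """A/A*: duct area relative to the (sonic) throat area at Mach number M."""
    return (1.0 / M) * ((2.0 / (gamma + 1)) * (1 + (gamma - 1) / 2 * M**2)) \
        ** ((gamma + 1) / (2 * (gamma - 1)))

for M in (0.5, 1.0, 2.0):
    print(M, round(area_ratio(M), 4))
# A/A* = 1 exactly at Mach 1 and grows on both sides: the flow needs a
# converging section to reach the throat and a diverging one to go supersonic.
```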
107,397 | I have read about the uncertainty principle. As it applies to electrons, how is it that we can get exact tracks of electrons in cloud chambers? That is to say, how is it that the position is fixed? | In this article electrons seen in a bubble chamber are shown. The spiral is an electron knocked off from an atom of hydrogen; a bubble chamber is filled with superheated liquid hydrogen in this case. The accuracy of measuring the tracks is on the order of microns. The momentum of the electron can be found if one knows the magnetic field and the curvature. The little dots on the straight tracks are electrons that have just managed to be kicked off from the hydrogen, this would give them a minimum momentum of a few keV. The total system, picture and measurements give a space resolution of 10 to 50 microns. $$\Delta x \sim 10^{-5}\, {\rm m}$$ $$\Delta p \sim 1\, {\rm keV}/c = 5.344286\times10^{-25}\, {\rm kg\cdot m/s}$$ $$\Delta x \cdot \Delta p > \hbar/2$$ with $\hbar=1.054571726(47)\times10^{-34}\, {\rm kg\cdot m^2/s}$ is satisfied macroscopically, since the product is about $10^{-30}$, four orders of magnitude larger than $\hbar$. With nanotechnology, one is getting into dimensions commensurate with the size of $\hbar$, but not with bubble chambers or cloud chambers or most particle detectors up to now. | {
"source": [
"https://physics.stackexchange.com/questions/107397",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/3938/"
]
} |
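Plugging the answer's numbers in directly (a quick sanity check using the same rounded values as above):

```python
hbar = 1.054571726e-34   # kg m^2 / s
dx = 1e-5                # m, spatial resolution of the track measurement
dp = 5.344286e-25        # kg m / s, roughly 1 keV/c

product = dx * dp        # ~5e-30 kg m^2 / s
assert product > hbar / 2   # uncertainty bound satisfied by a huge margin
print(product / hbar)       # ~5e4: the tracks are far from the quantum limit
```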
107,409 | After seeing this answer claiming that displacing matter "In a very short time", "no matter whether the matter is solid, liquid, or gas" (even though he concludes that falling from a high altitude is fatal, independent of this), I wondered why the jump itself is not fatal, considering that there is a significant amount of "gas" that does need to be displaced before even hitting water. Is it because there isn't enough mass per square inch to be fatal? And if so, at what speed would it be fatal? Or is there something else I or the guy who answered that question is missing? | It's not the falling that's fatal, it's the deceleration at the end that kills you. Something like water or concrete does this over a sub-meter distance (which requires extremely high forces). On the other hand, a gas is much less dense, so it cannot decelerate a falling object nearly as quickly. Sometimes inflatable cushions are used as safety nets (think: stunts/someone jumping off a building scenario). If it is too inflated then the deceleration distance won't be great enough and it can still cause injury or even death. It seems that a sudden deceleration of ~100g is fatal; that's about 80kN for an average male (80kg). We need the drag formula: $F_d = \frac{1}{2}\rho v^2C_dA$. Plugging in typical values: $F_d = 80*10^3N$ as asserted above; the density of air humans experience is typically $\rho = 1 \frac{kg}{m^3}$; $A$, the frontal surface of a human, seems to be hidden behind paywalls, so let's go with $A = 0.5 m^2$; $C_d$, the drag coefficient, is not so straightforward, but we'll go with $1.3$ (man, ski jumper example given on the Wikipedia drag coefficient page). $$80\times10^3\,N=\frac{1}{2}\times1\times v^2\times1.3\times0.5$$ ...results in a speed of about $500 m/s$, or 1800 km/h. This does not mean that falling at that speed is lethal. This scenario assumes you suddenly transition from no resistance into dense air. | {
"source": [
"https://physics.stackexchange.com/questions/107409",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/42491/"
]
} |
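Solving the answer's drag equation for $v$ with the same rough inputs (a sketch reproducing the arithmetic above):

```python
import math

F = 80e3    # N, ~100 g on an 80 kg person (the fatal-deceleration estimate)
rho = 1.0   # kg/m^3, air density
Cd = 1.3    # drag coefficient, ski-jumper-like posture
A = 0.5     # m^2, assumed frontal area

# F = 1/2 * rho * v^2 * Cd * A, solved for v
v = math.sqrt(2 * F / (rho * Cd * A))
print(round(v))  # ~496 m/s, i.e. roughly 500 m/s or 1800 km/h
```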
107,443 | When I learned about the Minkowski Space and its coordinates, it was explained such that the metric turns out to be $$ ds^{2} = -(c\,dx^{0})^{2} +(dx^{1})^{2} + (dx^{2})^{2} + (dx^{3})^{2} $$ where $ x^0,x^1,x^2,x^3 $ come from $ x^{\mu} : \mu = 0,1,2,3 $, and $ c $ is the speed of light. The first resource I had access to--I have to do a bit of digging for the exact paper--of course addressed that this invariant takes the tensor form: $$ ds^2 = g_{\mu \nu}dx^{\mu}dx^{\nu} $$ as well. These two things I've seen in all of my texts and online resources on the subject. The element in question that varies between authors is the time coordinate, $ x^0 $. When it was first explained to me, it was using the standard Cartesian representation for the spatial portion of the coordinates, and the time coordinate was labeled as $ x^0 = ict $. Squaring gives $ (x^0)^2 = -c^2t^2 $, and applying differential calculus, we get $ (dx^0)^2 = -c^2dt^2 $. Sensible and expected to have the tensor formula spit out the Minkowski Metric. The author then later explicitly states the coordinates are $ x^0 = ict, x^1=x,x^2=y,$ and $x^3 = z$. My question is then: why do most authors on the subject omit the imaginary unit on the time coordinate? For example, here . The only reason I can fathom for the omission is if the author is using metric signature $ [+,-,-,-] $, where I started off learning the theory with signature $ [-,+,+,+] $, which may be the reason seeing the time coordinate with no imaginary unit seems dissonant to me. All help appreciated! Edit: After reading the other answers, my question is now one of why and how (mathematically) we obtain the Minkowski Metric Signature. More specifically the one element with a different sign. | As you wrote, the spacetime invariant can be expressed as:
$$ds^2=g_{\mu\nu}dx^{\mu}dx^{\nu}$$
and from that we normally get:
$$ds^2=-c^2dt^2+dx^2+dy^2+dz^2$$
This is not because of some arbitrary imaginary time unit, this is because the metric ($g_{\mu\nu}$) is a diagonal matrix with the coefficients of each term of the $ds^2$ equation:
$$g_{\mu\nu}=\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}$$
and the coordinates are listed as you would assume:
$$dx^{\mu}=\begin{pmatrix}c\,dt\\dx\\dy\\dz\end{pmatrix}$$
Then, you should note that $$g_{\mu\nu}dx^{\mu}=dx_{\nu}=\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}\begin{pmatrix}c\,dt\\dx\\dy\\dz\end{pmatrix}=\begin{pmatrix}-c\,dt&dx&dy&dz\end{pmatrix}$$
Also, $v_{\mu}v^{\mu}$ is the inner product, meaning:
$$dx_{\nu}dx^{\nu}=\begin{pmatrix}-c\,dt&dx&dy&dz\end{pmatrix}\begin{pmatrix}c\,dt\\dx\\dy\\dz\end{pmatrix}=-c^2dt^2+dx^2+dy^2+dz^2$$
This is the equation you want without any imaginary unit omission. The reason for the $-1$ in the $g_{\mu\nu}$ is that it makes the system Lorentz invariant; it maintains $ds^2$ as a spacetime invariant quantity. Let me be historical. In Euclidean 3-D coordinates, you find the interval between positions as $$\Delta d_{Eucl}^2=(X_2-X_1)^2+(Y_2-Y_1)^2+(Z_2-Z_1)^2$$
When incorporating relativity and time, the interval becomes a spacetime quantity. Because relativity sets the maximum speed of information as $c$, we make the interval $$\Delta s^2=\Delta d_{Eucl}^2-c^2(t_2-t_1)^2$$ This represents the original interval - the distance between the two events - minus the maximum distance the information could travel in the time between the two events. That difference lets us determine if the events happened in a definite chronological order ($\Delta s^2<0$) or if they occurred in two distinctly separate positions ($\Delta s^2>0$), since in relativity we can't always be sure. It is from this that the $-1$ in the metric originates. Space and time coordinates are given opposite signs here. We keep the metric in terms of $s^2$ because we simply can't be sure if $s$ is positive or negative. There was no original imaginary time coordinate, that was simply someone's poor interpretation and it has been (thankfully) dropped for the most part. I should probably also point out that the imaginary time coordinate can not come out of Euclidean 4-D either. If one ignores relativity, then there is no maximum velocity. If there is no maximum velocity, there is no natural way of equating spatial and temporal coordinates. Therefore, not only would it not be right to use $c$ in the $ict$ coordinate, it also would not make sense to add time to space because there would be no agreeable conversion between them. However, if you don't ignore relativity, then you must subtract the time term from the 3-D interval in order to comply with the notion of a maximum velocity. So the Euclidean signature, $(1,1,1,1)$ can not be used to describe 4-D spacetime! So you never define the time coordinate as imaginary. | {
"source": [
"https://physics.stackexchange.com/questions/107443",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/20907/"
]
} |
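The index contraction in the entry above can be checked numerically (a sketch with arbitrary displacement values of my own choosing; only the metric signature matters):

```python
import numpy as np

# Lowering with the (-,+,+,+) metric and contracting reproduces
# ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

c = 3.0e8
dt, dx, dy, dz = 2.0e-9, 0.3, 0.4, 0.5      # arbitrary small displacements
dx_up = np.array([c * dt, dx, dy, dz])       # contravariant components

dx_down = eta @ dx_up                        # lowered index: (-c dt, dx, dy, dz)
ds2 = dx_down @ dx_up                        # ds^2 = dx_mu dx^mu

assert np.isclose(ds2, -(c * dt)**2 + dx**2 + dy**2 + dz**2)
print(ds2)
```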
107,709 | I tried to calculate earth's orbital period using Kepler's third law , but I found 365.2075 days for the orbital period instead of 365.256363004 which is the correct value. I checked everything, and I couldn't find what's the problem.
I used these values for my calculation: Semi-major axis, a: 149,598,261 km; Gravitational constant, G: 6.67×10^-11 N·(m/kg)^2; Solar mass, M: 1.9891×10^30 kg | You've used the gravitational constant with only three significant digits. So it's no surprise that your answer isn't accurate to five significant digits. Instead of $G$ and $M_\odot$ separately, you should use the product $GM_\odot$, known as the standard gravitational parameter. Its value is known very accurately: in the link, you'll find
$$
GM_\odot = 132\,712\,440\,018\;\text{km}^3\text{s}^{-2}
$$
We could even include the value for the Earth:
$$
GM_\oplus = 398\,600\;\text{km}^3\text{s}^{-2}
$$
so we get
$$
T = 2\pi\sqrt{\frac{a^3}{G(M_\odot+M_\oplus)}} =
2\pi\sqrt{\frac{(149\,598\,261)^3}{132\,712\,838\,618}} = 31\,558\,272\;\text{s}=365.2578\;\text{d},
$$
which is very close to the actual value. As remarked in the other answers, the remaining small difference is mainly due to planetary perturbations. | {
"source": [
"https://physics.stackexchange.com/questions/107709",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26416/"
]
} |
107,824 | What is the difference between stress and pressure ? Are there any intuitive examples that explain the difference between the two? How about an example of when pressure and stress are not equal? | Pressure is defined as force per unit area applied to an object in a direction perpendicular to the surface. And naturally pressure can cause stress inside an object. Whereas stress is the property of the body under load and is related to the internal forces. It is defined as a reaction produced by the molecules of the body under some action which may produce some deformation. The intensity of these additional forces produced per unit area is known as stress (pretty picture from wikipedia): EDIT PER COMMENTS Overburden Pressure or lithostatic pressure is a case where the gravity force of the object's own mass creates pressure and results in stress on the soil or rock column. This stress increases as the mass (or depth) increases. This type of stress is uniform because the gravity force is uniform. http://commons.wvc.edu/rdawes/G101OCL/Basics/earthquakes.html Included in lithostatic pressure are the weight of the atmosphere and,
if beneath an ocean or lake, the weight of the column of water above that point in the earth. However, compared to the pressure caused by the weight of rocks above, the amount of pressure due to the weight of water and air above a rock is negligible, except at the earth's surface. The only way for lithostatic pressure on a rock to change is for the rock's depth within the earth to change. Since this is a uniform force applied throughout the substance, due mostly to the substance itself, the terms pressure and stress are somewhat interchangeable because pressure can be viewed as both an external and internal force. For a case where they are not equal, just look at the image of the ruler. If pressure is applied at the far end (top of image) it creates unequal stress inside the ruler, especially where the internal stress is high at the corners. | {
"source": [
"https://physics.stackexchange.com/questions/107824",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26219/"
]
} |
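The lithostatic-pressure point in the entry above can be made quantitative with $P = \rho g h$ (my own sketch; the rock density is an assumed typical value, not a figure from the answer):

```python
g = 9.81            # m/s^2
rho_rock = 2700.0   # kg/m^3, assumed typical crustal rock density
depth = 1000.0      # m

P_rock = rho_rock * g * depth   # pressure from 1 km of overlying rock, Pa
P_atm = 101_325.0               # Pa, the entire atmosphere's contribution

print(P_rock / 1e6, P_rock / P_atm)  # ~26.5 MPa, ~260x the atmospheric load
```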
107,904 | There are claims like this one that you can improve the cooling speed of beverages when you put them wrapped in a wet paper towel inside the refrigerator/freezer. I've just tried it by myself and it seems it does not work as expected, although I might have confused freezer with refrigerator. My question: Is this kind of an urban legend or does it actually help? | I actually went ahead and spent some hours experimenting. Used two 500ml aluminum beer cans filled with water at room temperature, 21.4°C. One can wrapped in a paper towel soaked with an additional 20ml of water, one left bare as control. Shoved both in my small, non-ventilated house freezer at -14°C and measured temperature and weight every twenty minutes until water in both cans started forming ice. These are the results. Allowing for some error from my cheap digital food thermometer, the towel-wrapped can cooled quite a bit faster than the control one. In fact, it reached the 4°C serving temperature in about 50 mins, more than an hour earlier than the control can. Notably, by that time it had already lost some 6ml of water, I suppose through evaporation/minor dripping, and ended up losing a total of 10ml by the end of the experiment (the control only lost 2ml). So yes, the wet paper towel trick does seem to work quite nicely. I'd expect it to work even better if one were to use a ventilated freezer (faster heat exchange) and smaller containers (greater surface/volume ratio). I also had a properly sealed beer can in there with a wet towel, same starting temperature, and it cooled to serving temp at about the same rate as the other can. A little quicker, because it wasn't taken out of the freezer for measurements until the 1-hour mark, when it was ready to be in my tummy. Not very academic maybe, but I hope this provides some useful info! | {
"source": [
"https://physics.stackexchange.com/questions/107904",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/933/"
]
} |
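One way to interpret the experiment above is a toy Newton's-law-of-cooling model (entirely my own sketch, not the experimenter's analysis; the time constants are assumed, and the wet towel is modeled simply as a smaller $\tau$ from better thermal contact plus evaporative loss):

```python
import math

def time_to(T_target, T0, T_env, tau):
    """Minutes to reach T_target if T(t) = T_env + (T0 - T_env) * exp(-t/tau)."""
    return -tau * math.log((T_target - T_env) / (T0 - T_env))

T0, T_env, T_target = 21.4, -14.0, 4.0   # deg C, values from the experiment
for tau in (40.0, 80.0):                 # minutes, assumed time constants
    print(tau, round(time_to(T_target, T0, T_env, tau), 1))
# halving the time constant halves the time to reach serving temperature
```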
107,963 | Einstein originally thought that special relativity was about light and how it always travelled at the same speed. Nowadays, we think that special relativity is about the idea that there is some universal speed limit on the transfer of information (and experiments tell us that photons, the quanta of light, move with the largest speed, $c$). But what if tomorrow we happen to observe a particle $X$ that travels with a speed $v>c$? What changes would have to be made to special relativity? | If (and that's a big if) tomorrow we had a $70\sigma$ detection in a repeatable experiment of a particle that travelled faster than $c$, then one of several things would be true. 1) We would be forced to conclude that $c$ is not, in fact, the limiting speed of information transfer; everything based on this assumption would have to be scrapped (pretty much all of research-level physics); and we would have to start over in developing even the mathematics that allows us to start re-describing the universe. 2) We would be forced to conclude that $c$ is not the limiting speed of information transfer; we would assume that special relativity and everything based on it is the special-case effective theory of much broader physical laws and behaviours; and we would have to find a way of modifying relativity (and basically everything that relies on it) so that it can causally allow for this particle to exist and yet have everything else we see still basically operate under the idea that $c$ is the max speed. 3) We find a way to use this particle to communicate with the past and future, travel faster than light, and then we go home every night and laugh at Einstein. 4) We perform the experiment thousands of times in different laboratories, find the same result, then go back and discover that there was a fundamental flaw with the theory. Once the flaw is corrected, we see that we are not actually observing a superluminal particle. 
5) We also discover flying pigs, perpetual motion, and that we really can believe it's not butter. Then I wake up from my nightmare. My money is on (4) with (2) being a close second (although (5) has happened before). Note: This answer assumes that the speed $c$ referenced is the assumed maximum speed; the speed of a massless particle in a vacuum. This is why I did not include a "we determine the photon is not massless and then have to change EM" option. | {
"source": [
"https://physics.stackexchange.com/questions/107963",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/43897/"
]
} |
108,212 | I'm a beginner of QFT. Ref. 1 states that [...] The Lorentz group $SO(1,3)$ is then essentially $SU(2)\times SU(2)$. But how is it possible, because $SU(2)\times SU(2)$ is a compact Lie group while $SO(1,3)$ is non-compact? And after some operation, he says that the Lorentz transformation on spinor is complex $2\times2$ matrices with unit determinant, so Lorentz group becomes $SL(2,\mathbb{C})$. I'm confused about these, and I think there must be something missing. References: L.H. Ryder, QFT, chapter 2, p. 38. | Here's my two cents worth. Why Lie Algebras? First I'm just going to talk about Lie algebras . These capture almost all information about the underlying group. The only information omitted is the discrete symmetries of the theory. But in quantum mechanics we usually deal with these separately, so that's fine. The Lorentz Lie Algebra It turns out that the Lie algebra of the Lorentz group is isomorphic to that of $SL(2,\mathbb{C})$. Mathematically we write this (using Fraktur font for Lie algebras) $$\mathfrak{so}(3,1)\cong \mathfrak{sl}(2,\mathbb{C})$$ This makes sense since $\mathfrak{sl}(2,\mathbb{C})$ is non-compact, just like the Lorentz group. Representing the Situation When we do quantum mechanics, we want our states to live in a vector space that forms a representation for our symmetry group. We live in a real world, so we should consider real representations of $\mathfrak{sl}(2,\mathbb{C})$. A bit of thought will convince you of the following. Fact : real representations of a Lie algebra are in one-to-one correspondence (bijection) with complex representations of its complexification . That sounds quite technical, but it's actually simple. It just says that we can have complex vector spaces for our quantum mechanical states! That is, provided we use complex coefficients for our Lie algebra $\mathfrak{sl}(2,\mathbb{C})$. When we complexify $\mathfrak{sl}(2,\mathbb{C})$ we get a direct sum of two copies of it. 
Mathematically we write $$\mathfrak{sl}(2,\mathbb{C})_{\mathbb{C}} = \mathfrak{sl}(2,\mathbb{C}) \oplus \mathfrak{sl}(2,\mathbb{C})$$ So Where Does $SU(2)$ Come In? So we're looking for complex representations of $\mathfrak{sl}(2,\mathbb{C}) \oplus \mathfrak{sl}(2,\mathbb{C})$. But these just come from a tensor product of two representations of $\mathfrak{sl}(2,\mathbb{C})$. These are usually labelled by a pair of numbers, like so $$|\psi \rangle \textrm{ lives in the } (i,j) \textrm{ representation of } \mathfrak{sl}(2,\mathbb{C}) \oplus \mathfrak{sl}(2,\mathbb{C})$$ So what are the possible representations of $\mathfrak{sl}(2,\mathbb{C})$? Here we can use our fact again. It turns out that $\mathfrak{sl}(2,\mathbb{C})$ is the complexification of $\mathfrak{su}(2)$. But we know that the real representations of $\mathfrak{su}(2)$ are the spin representations! So really the numbers $i$ and $j$ label the angular momentum and spin of particles. From this perspective you can see that spin is a consequence of special relativity! What about Compactness? This tortuous journey shows you that things aren't really as simple as Ryder makes out. You are absolutely right that $$\mathfrak{su}(2)\oplus \mathfrak{su}(2) \neq \mathfrak{so}(3,1)$$ since the LHS is compact but the RHS isn't! But my arguments above show that compactness is not a property that survives the complexification procedure. It's my "fact" above that ties everything together. Interestingly in Euclidean signature one does have that $$\mathfrak{su}(2)\oplus \mathfrak{su}(2) = \mathfrak{so}(4)$$ You may know that QFT is closely related to statistical physics via Wick rotation. So this observation demonstrates that Ryder's intuitive story is good, even if his mathematical claim is imprecise. Let me know if you need any more help! | {
"source": [
"https://physics.stackexchange.com/questions/108212",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/34669/"
]
} |
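The algebraic claim in the answer above, that complexifying the Lorentz algebra yields two commuting copies of $\mathfrak{su}(2)$, can be verified directly with the vector-representation generators (a numerical sketch; the conventions $[J_i,J_j]=i\epsilon_{ijk}J_k$ and $[K_i,K_j]=-i\epsilon_{ijk}J_k$ are assumed):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# Rotations: (J_i)_{jk} = -i eps_{ijk} on the spatial block; boosts mix t and x_i
J = np.zeros((3, 4, 4), dtype=complex)
K = np.zeros((3, 4, 4), dtype=complex)
for a in range(3):
    for b in range(3):
        for c in range(3):
            J[a, b + 1, c + 1] = -1j * eps[a, b, c]
    K[a, 0, a + 1] = K[a, a + 1, 0] = 1j

def comm(X, Y):
    return X @ Y - Y @ X

# the so(3,1) relations
assert np.allclose(comm(J[0], J[1]), 1j * J[2])
assert np.allclose(comm(K[0], K[1]), -1j * J[2])   # boosts close into rotations

# complexified combinations split the algebra into two commuting pieces
A = (J + 1j * K) / 2
B = (J - 1j * K) / 2
assert np.allclose(comm(A[0], A[1]), 1j * A[2])    # one su(2)
assert np.allclose(comm(B[0], B[1]), 1j * B[2])    # another su(2)
assert all(np.allclose(comm(A[a], B[b]), 0) for a in range(3) for b in range(3))
print("complexified so(3,1) = su(2) + su(2)")
```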
108,224 | We know due to Maxwell's equations that: $$\vec{\nabla} \cdot \vec{B}=0$$ But if we get far from the magnetic field, shouldn't it be weaker?
Shouldn't the divergence of the field be positive? If we define the vector field as a function of distance, then if the distance increases then the magnitude of the vector applied to a distant point of the "source" should be weaker. Is my reasoning correct? | Your intuition about the meaning of the divergence operator is wrong. In physics it's easiest to think intuitively about divergence by using the divergence theorem which states $$\int_V dV \ \nabla \cdot \mathbf{B} = \int_{\partial V} \mathbf{B} \cdot d\mathbf{S}$$ where $\partial V$ is the surface area surrounding the volume $V$. The magnetic field has zero divergence, which means that $$\int_{\partial V} \mathbf{B} \cdot d\mathbf{S}= 0$$ We can interpret this by saying there's no net flow of magnetic field across any closed surface. This makes sense because magnetic field lines always come in complete loops, rather than starting or ending at a point. Put another way, the divergence-free condition is just saying that we don't have magnetic monopoles in Maxwell electromagnetism. Let me know if you need any more help! | {
"source": [
"https://physics.stackexchange.com/questions/108224",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41008/"
]
} |
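As a concrete check that a physically reasonable field is divergence-free, one can take the point-dipole field (valid away from the origin) and compute $\nabla\cdot\mathbf B$ symbolically (a sketch; the overall constant $\mu_0/4\pi$ is dropped since it does not affect the divergence):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
mx, my, mz = sp.symbols('m_x m_y m_z', real=True)  # dipole moment components

rvec = sp.Matrix([x, y, z])
m = sp.Matrix([mx, my, mz])
r = sp.sqrt(x**2 + y**2 + z**2)

# dipole field B ~ (3 (m . r_hat) r_hat - m) / r^3, written with plain r powers
B = 3 * m.dot(rvec) * rvec / r**5 - m / r**3

div_B = sum(sp.diff(B[i], v) for i, v in enumerate((x, y, z)))
assert sp.simplify(div_B) == 0   # divergence-free everywhere off the origin
```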
108,230 | I have often heard that the gravitational field has spin $2$. How can I read the spin of the field from the Einstein-Hilbert action $$S=\int \! \mathrm{d}^4x \,\sqrt{|g|} \, \mathcal{R} \, \, \, ?$$ | A common procedure to determine the spin of the excitations of a quantum field is to first determine the conserved currents arising from quasi-symmetries via Noether's theorem. For example , in the case of the Dirac field, described by the Lagrangian, $$\mathcal{L}=\bar{\psi}(i\gamma^\mu \partial_\mu -m)\psi $$ the associated conserved currents under a translation are, $$T^{\mu \nu} = i \bar{\psi}\gamma^\mu \partial^\nu \psi - \eta^{\mu \nu} \mathcal{L}$$ and the currents corresponding to Lorentz symmetries are given by, $$(\mathcal{J}^\mu)^{\rho \sigma} = x^\rho T^{\mu \sigma} - x^\sigma T^{\mu \rho}-i\bar{\psi}\gamma^\mu S^{\rho \sigma} \psi$$ where the matrices $S^{\mu \nu}$ form the appropriate representation of the Lorentz algebra. After canonical quantization, the currents $\mathcal{J}$ become operators, and acting on the states will confirm that, in this case, the excitations carry spin $1/2$. In gravity, we proceed similarly. The metric can be expanded as, $$g_{\mu \nu} = \eta_{\mu \nu} + f_{\mu \nu}$$ and we expand the field $f_{\mu \nu}$ as a plane wave with operator-valued Fourier coefficients, i.e. $$f_{\mu \nu} \sim \int \frac{\mathrm{d}^3 p}{(2\pi)^3} \frac{1}{\sqrt{\dots}} \left\{ \epsilon_{\mu \nu} a_p e^{ipx} + \dots\right\}$$ We only keep terms of linear order $\mathcal{O}(f_{\mu \nu})$, compute the conserved currents analogously to other quantum field theories, and once promoted to operators as well act on the states to determine the excitations indeed have spin $2$. Counting physical degrees of freedom The graviton has spin $2$, and as it is massless only two degrees of freedom. We can verify this in gravitational perturbation theory. We know $h^{ab}$ is a symmetric matrix, and only $d(d+1)/2$ distinct components. 
In de Donder gauge, $$\nabla^{a}\bar{h}^{ab} = \nabla^a\left(h^{ab}-\frac{1}{2}h g^{ab}\right) = 0$$ which provides us with $d$ gauge constraints. There is also a residual gauge freedom: infinitesimally, we may shift by a vector field, i.e. $$X^\mu \to X^\mu + \xi^\mu$$ provided $\square \xi^\mu + R^\mu_\nu \xi^\nu = 0$, which restricts us by $d$ as well. Therefore the total physical degrees of freedom are $$\frac{d(d+1)}{2}-2d = \frac{d(d-3)}{2}$$ If $d=4$, the graviton indeed has only two degrees of freedom. Important Caveat Although we often find that a field with a single vector index has spin one, with two indices spin two, and so forth, it is not always the case, and determining the spin should be done systematically. Consider, for example, the Dirac matrices, which satisfy the Clifford algebra, $$\{ \Gamma^a, \Gamma^b\} = 2g^{ab}$$ On an $N$-dimensional Kahler manifold $K$, if we work in local coordinates $z^a$, with $a = 1,\dots,N$, and the metric satisfies $g^{ab} = g^{\bar{a} \bar{b}} = 0$, the anticommutators simplify: $$\{ \Gamma^a, \Gamma^b\} = \{ \Gamma^{\bar{a}}, \Gamma^{\bar{b}}\} = 0$$
$$\{ \Gamma^a, \Gamma^{\bar{b}}\} = 2g^{a\bar{b}}$$
"source": [
"https://physics.stackexchange.com/questions/108230",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/34669/"
]
} |
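The degree-of-freedom counting in the answer above is a one-liner (sketch):

```python
def graviton_dof(d):
    """Massless graviton polarizations in d spacetime dimensions:
    symmetric-tensor components minus d gauge conditions minus d
    residual gauge transformations (the counting in the answer above)."""
    return d * (d + 1) // 2 - 2 * d

print({d: graviton_dof(d) for d in (3, 4, 5, 10)})  # {3: 0, 4: 2, 5: 5, 10: 35}
```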
108,523 | Why do we need to introduce the spin connection coefficients $\omega_{\mu \space \space b}^{\space \space a} $ in General Relativity? To me, they just look (mathematically) like the Christoffel symbols and so I'm assuming they aren't tensors? If I'm right in saying that, then do these $\omega_{\mu \space \space b}^{\space \space a} $ transform the same way under general coordinate transformations as the Christoffel symbols, and if so, what's the need in them then? Finally, what's the relation between the Vierbein $e_{\mu}^a$ and the spin connection (non-mathematically)? | There are both physical and formal reasons to introduce the spin connection. Physically, we know that there are spin 1/2 particles. A spin 1/2 field cannot be described by anything built from 4-vector fields. You can see this, for example, from the fact that 4-vector fields (and so anything built from them) return to their original value after a $2\pi$ rotation, whereas a spin 1/2 field does not (it changes sign). In GR, you will want to take covariant derivatives of this spinor field; this is exactly what the spin connection is for. If all you want to do is vacuum GR then you can do without the spin connection, but if you want to put interesting matter in your spacetime, you need it. Formally, it turns out that some calculations are actually simpler using spinors. The canonical example is the Newman-Penrose formalism [1]; a somewhat neater formalism is the Geroch-Held-Penrose formalism [2]. Then there are, of course, two whole volumes by Penrose and Rindler [3]. One can perhaps understand the utility of spinor formalisms from one arithmetic fact: you deal with 12 (in the GHP formalism, 8) complex quantities instead of 24 real ones; this is few enough that you can have separate names for each quantity, which helps the notation immensely. (Imagine having to write $\omega^1{}_{03}$ or similar for every single quantity in your calculation.)
The GHP formalism notation is almost as compact as you can hope to achieve given the complexity of GR. Now since the product of two spinors is a vector, and a null vector at that, the spinor formalisms are extremely well suited to problems with radiation, both gravitational and other. In the GHP formalism every quantity has a direct geometrical interpretation and it is easy to make Ansätze that are both geometrically meaningful and useful for simplifying the equations. An example of this is the integration method of Edgar and Ludwig [4]. The Cartan-Karlhede algorithm for classifying spacetimes is also simpler in spinor form. Part of this is because of the compactness of notation, but part is also that a step in the algorithm is to put tensors in a standard form (for example for Petrov type III you can take the Weyl tensor to have components $\Psi_i = \delta_{i3}$). There are algorithms for doing this with computer algebra for spinors; I do not know about algorithms for doing it with world-vectors. Now for your more practical questions , yes, the spin connection coefficients are exactly like the Christoffel symbols. Since a world-vector is the product of 2 spinors, you can recover the latter from the former. They do not form a proper tensor and have a transformation law like that of the Christoffel symbols. The precise relation between the tetrad and the spin connection is, I think, impossible to explain non-mathematically because the proper understanding of it requires thinking about fiber bundles and covering groups. However, vaguely, you can say that just like it is locally possible to take a tetrad, it is always locally possible to find two spinor fields that are everywhere orthonormal (in a certain sense), say $o^A$ and $\iota^A$; these are two-spinors, so $ A = 0,1$. This is called a dyad . They have complex conjugates $\overline{o}^\dot{A}$ and $\overline{\iota}^\dot{A}$. 
The product of a spinor and a conjugate spinor is a world-vector, so with these you can form four world-vectors, $o^A\overline{o}^\dot{A}$ and so on. It is not too hard to realize that those four form a (null) tetrad. You could take $-o^A$ and $-\iota^A$ instead and obtain the tetrad. So to each tetrad there are exactly two dyads. You could hand-wavingly say that a dyad is the square root of a tetrad, but the proper, more formal statement is that the spin group is a double cover of the Lorentz group. References Newman, E., & Penrose, R. (2004). An approach to gravitational radiation by a method of spin coefficients. Journal of Mathematical Physics, 3(3), 566-578. Geroch, R., Held, A., & Penrose, R. (2003). A space‐time calculus based on pairs of null directions. Journal of Mathematical Physics, 14(7), 874-881. Penrose, R. & Rindler, W. Spinors and Space-time. 2 vols (Cambridge University press, 1984). Edgar, S. B., & Ludwig, G. (1997). Integration in the GHP formalism III: Finding conformally flat radiation metrics as an example of an “optimal situation”. General Relativity and Gravitation, 29(10), 1309-1328. | {
"source": [
"https://physics.stackexchange.com/questions/108523",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/38122/"
]
} |
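The statement in the answer above that the product of a spinor and its conjugate is a null world-vector can be checked numerically using the Pauli matrices as the spinor-to-vector map (a sketch; the random spinor and the particular $\sigma^\mu$ convention are my own choices):

```python
import numpy as np

# sigma^mu = (identity, sigma_x, sigma_y, sigma_z)
sigma = np.array([
    [[1, 0], [0, 1]],
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
])

rng = np.random.default_rng(0)
o = rng.normal(size=2) + 1j * rng.normal(size=2)   # an arbitrary 2-spinor

# world-vector l^mu = o-bar sigma^mu o (real because the sigma^mu are Hermitian)
l = np.array([(np.conj(o) @ sigma[mu] @ o).real for mu in range(4)])

# Minkowski norm, signature (+,-,-,-): "spinor squared" vectors are null
norm = l[0]**2 - l[1]**2 - l[2]**2 - l[3]**2
assert abs(norm) < 1e-10
print(l)   # a future-pointing null vector (l[0] > 0)
```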
108,925 | Everyone calls the centrifugal force a pseudo force and claims that it is not really present, even though there are so many machines listed as taking advantage of it (e.g. centrifuge, washing machine, drier). It also has the same magnitude as the centripetal force, so that they cancel each other out, but I can definitely feel it every time I take a sharp turn in a car. So can you explain whether the centrifugal force is real? | Suppose you are at a red light in your car. You apply Newton's second law on the street light. $$F=ma$$ $$F=0\,N,\ a=0\,ms^{-2}$$ $$0\,N=0\,N$$ It works!! Now the light turns green and you start accelerating. Suppose your acceleration is $1\,ms^{-2}$. According to you, you are at rest. Do you see your nose moving? Apparently not. It means your body is at rest wrt you. So the street light has acceleration $-1\,ms^{-2}$ wrt you. Let's apply Newton's second law. $$F=ma$$ Clearly, there is no force acting on it. And the light, say, has mass $=50\,kg$: $$0\,N=-50\,N$$ NOOOOOOOOOOOOO..... Your mind just blew, right? You see that you are unable to apply Newton's second law in an accelerating frame. Let's see how we can fix it. If we add $-50\,N$ on the $LHS$ we will get the correct answer. Hence, we define a pseudo force as a correction term which enables us to apply Newton's second law in accelerating frames. It has no real existence; it is just a mathematical force. Similarly, a centripetal force is needed to make you go in a circle. If you analyze the motion while sitting in the rotating frame, you have to add an outward force, which we call the centrifugal force, to use Newton's laws. Centripetal force is a force which provides acceleration towards the centre, say, the tension while whirling an object around on a string. So if you apply $F=ma$ from the revolving object's frame, you have to add the centrifugal force, as the object is at rest wrt itself. You can explain what you experience while turning by your inertia, which resists a change in your motion. | {
"source": [
"https://physics.stackexchange.com/questions/108925",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/40945/"
]
} |
108,928 | An exercise from Goldstein (9.31-3rd Ed) asks to show that for a one-dimensional harmonic oscillator $u(q,p,t)$ is a constant of motion where
$$
u(q,p,t)=\ln(p+im\omega q)-i\omega t
$$
and $\omega=(k/m)^{1/2}$. The demonstration is easy but the physical significance of the constant of motion is not so clear to me. Indeed I can show that $u$ can be rewritten like:
$$
u(q,p,t)=i\phi+\ln(m\omega A)
$$
where $\phi$ is the phase and $A$ the amplitude of the vibration of the oscillator. I can also demonstrate that $m\omega A=\sqrt{2mE}$, where $E$ is the total energy of the oscillator. But is there any further significance of $u$ that I'm missing? | Suppose you are at a red light in your car. You apply Newton's second law to the street light. $$F=ma$$ $$F=0\,N,\ a=0\,ms^{-2}$$ $$0\,N=0\,N$$ It works!! Now the light turns green and you start accelerating. Suppose your acceleration is $1\,ms^{-2}$. According to you, you are at rest. Do you see your nose moving? Apparently not. It means your body is at rest wrt you. So the street light has acceleration $-1\,ms^{-2}$ wrt you. Let's apply Newton's second law. $$F=ma$$ Clearly, there is no force acting on it. And the light, say, has mass $50\,kg$: $$0\,N=-50\,N$$ NOOOOOOOOOOOOO..... Your mind just blew, right? You see that you are unable to apply Newton's second law in an accelerating frame. Let's see how we can fix it. If we add $-50\,N$ to the $LHS$ we will get the correct answer. Hence, we define the pseudo force as a correction term which enables us to apply Newton's second law in accelerating frames. It has no real existence; it is just a mathematical force. Similarly, a centripetal force is needed to make you go in a circle. If you sit in the rotating frame, you have to add an outward force, which we call the centrifugal force, to use Newton's laws. Centripetal force is a force which provides acceleration towards the centre, say, the tension while whirling an object around on a string. So if you apply $F=ma$ from the revolving object, you have to add the centrifugal force, as the object is at rest wrt itself. What you experience while turning is explained by your inertia, which resists your change in motion. | {
"source": [
"https://physics.stackexchange.com/questions/108928",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/44759/"
]
} |
108,971 | It just occurred to me that almost all images I've seen of the (in)famous mushroom cloud show a vertical column rising perpendicular to the ground and a horizontal planar ring parallel to the ground. Not that I'm an expert (that's why this question) but I have rarely seen anything go in the 45 degree angle. Or for that matter anything other than the 'special' 0 degree horizontal plane and 90 degree vertical column. Shouldn't there be radial vectors at all angles between 0 and 90 degrees giving rise to a hemispherical explosion envelope? Why is it a vertical cylinder? PS: I understand the top expands eventually on cooling and lowered air pressure giving the mushroom look but my questions is for the previous stage - the vertical column. | The explosion certainly is hemispherical, see, for instance, this explosion caused by the Trinity bomb : The gas cloud that you posted, and what many would consider is synonymous to the nuclear weapons, comes after the explosion. Nuclear bombs are actually usually ignited above ground for "maximum destruction." Since the nuclear reaction is immensely hot (about 4000 K whereas the surface of earth is sitting pretty around 300 K), the gas rises much the same way a hot-air balloon rises. At some point, the cold air from around the explosion gets sucked under the mushroom cap and causes the thin column you see: Thus, for the most part, it is the extreme temperatures that cause the explosion "bubble" to rise in the first place. And it is the convective air currents under the bubble that cause the column to form. | {
"source": [
"https://physics.stackexchange.com/questions/108971",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/7484/"
]
} |
109,031 | What I am really asking is: are there other functions that, like $\sin()$ and $\cos()$, are bounded from above and below, and periodic? If there are, why are they never used to describe oscillations in Physics? EDIT: Actually I have just thought of a cycloid, which indeed is both bounded and periodic.
Any particular reason as to why it doesn't pop up in science as much as sines / cosines? | Part of it is that Newtonian mechanics is described in terms of calculus. When we consider vibrational motions, we're talking about some particle that tends not to be displaced from some equilibrium position. That is, the force on the particle, at displacement $x$, $F(x)$, is equal to some function of displacement $x$, $g(x)$. There are two ways calculus gets involved here. Firstly, $F=ma$, and $a$, acceleration, is a "rate of change" and therefore a calculus concept. So we have $ma(x)=g(x)$. Now, dealing with a general function $g$ is too difficult - we won't get anywhere with it. So how can we proceed in the most general way? One fruitful method is to do a Taylor expansion. $g(x)=g(0)+g'(0) x+\frac{1}{2} g''(0) x^2+\frac{1}{3!} g^{(3)}(0)x^3+\cdots$, where $g^{(n)}(x)$ is the $n$th derivative of $g$ at the point $x$. If we want $x=0$ to be an equilibrium position, we must have $g(0)=0$ - there isn't any force on the particle at equilibrium. If we want it to be a stable equilibrium that will tend to turn back to its original position, we must have $g'(0)<0$. All other derivatives are fair game. Writing $-k=g'(0)$: $$m a(x)=-k x+\frac{1}{2} g''(0) x^2+\frac{1}{3!} g^{(3)}(0)x^3+\cdots$$
As is so useful in physics, we now suppose that $x$ is small, so that $x^2$ is very small, and $x^3$ is even smaller. That is, we ignore all powers of $x$ greater than one. We wind up with:
$$m a(x)=-k x$$
Hooke's law. The solution to this equation is sinusoidal, always. (that is, it can be written in the form $x=a \cos(\omega t-\varphi)$) So it is inevitable that, with these definitions of "stable equilibrium", the resulting vibrational pattern at small amplitudes will be sinusoidal. Always. That's what makes $\cos$ and $\sin$ special from a physical point of view. (of course, we've also tacitly assumed that $g$ is a nice function that is nice and smooth and differentiable, but one generally does that when working on Newtonian style problems) | {
"source": [
"https://physics.stackexchange.com/questions/109031",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/37677/"
]
} |
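The claim in the answer above, that a small displacement from a stable equilibrium oscillates sinusoidally even when higher-order terms are present, can be checked numerically. This is an illustrative sketch, not part of the original answer; the mass, spring constant, anharmonic coefficient, and amplitude are made-up values:

```python
import numpy as np

# Integrate m*a = -k*x + (1/2)*g2*x^2 from a small starting displacement
# and compare against the pure cosine predicted by the linearised equation.
m, k, g2 = 1.0, 4.0, 0.5         # illustrative choices only
omega = np.sqrt(k / m)
amp = 0.01                       # small amplitude, so the x^2 term is tiny
x, v = amp, 0.0
dt, steps = 1e-4, 200_000
ts, xs = [], []
for n in range(steps):
    a = (-k * x + 0.5 * g2 * x**2) / m
    v += a * dt                  # semi-implicit (symplectic) Euler step
    x += v * dt
    ts.append((n + 1) * dt)
    xs.append(x)

ts, xs = np.array(ts), np.array(xs)
max_dev = np.max(np.abs(xs - amp * np.cos(omega * ts)))
print(f"max deviation from a cosine: {max_dev:.2e}")
```

For this small amplitude the trajectory is indistinguishable from $amp\cdot\cos(\omega t)$ to well under a percent of the amplitude; increasing `amp` makes the anharmonic term visible.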
109,142 | We have seen birds sitting on uninsulated electric wires of high voltage transmission lines overhead without getting harmed, because sitting on only one wire doesn't complete any circuit. But what about the potential difference between their legs? Is this not a small complete circuit? Because the wire has a potential gradient, there should be a potential difference between the bird's feet. Is this potential difference so very small that we can say the bird is sitting at a single point on the wire? If a bird of a sufficiently large size, with a wide gap between its feet, sits on a single wire, shouldn't the bird receive a shock if the potential difference is sufficient? | The potential difference between two points on a wire carrying a current is given by Ohm's Law, $V = R\cdot I$. Since wires used for long-distance power transmission have, by design, a very low resistance per unit length, and the distance between the two extremities of your hands is very small (~10cm), even for large currents the potential difference is not dangerous at all. For example, 10 cm of gauge 4/0 aluminum wire (cross-sectional area 1.07 $\mathrm{cm^2}$) has a total resistance of $2.63714\cdot 10^{-5}\ \Omega$, so if a large current of 300 A flows through (the maximum rating for this gauge of cable, actually), then the potential difference between the ends will be $V=2.63714\cdot 10^{-5}\ \Omega \cdot 300\,\mathrm{A}=0.0079\,\mathrm{V}$, which is not noticeable by human skin. The same goes for the feet of a bird, which are separated by an even smaller distance. | {
"source": [
"https://physics.stackexchange.com/questions/109142",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/41066/"
]
} |
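The figures quoted in the answer above can be reproduced in a few lines. This is a sketch; the aluminium resistivity is a standard handbook value, not taken from the answer:

```python
# Potential difference across a short length of transmission wire, V = R*I.
rho_al = 2.82e-8       # ohm*m, resistivity of aluminium (handbook value)
length = 0.10          # m, roughly the spacing of a bird's feet
area = 1.07e-4         # m^2, cross-section of 4/0 gauge cable (1.07 cm^2)
current = 300.0        # A, the maximum rating quoted in the answer

resistance = rho_al * length / area
voltage = resistance * current
print(f"R = {resistance:.5e} ohm, V = {voltage:.4f} V")
```

This recovers the answer's ~$2.64\cdot10^{-5}\ \Omega$ and ~0.0079 V.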
109,368 | I'm new to physics and am just going through some of the free online classes at World Science U, and after watching this video on the nature of the speed of light and its constancy, a question came to mind about photons. (Video: YouTube Video , World Science U course ) I know that photons don't have mass, but what happens when photons — even the photons from distant stars — reach us? Are we merely observing the occurrence of photons moving through space relative to us, or are we really being "bathed" in photons? I know that when I observe rain, I can both observe it from a distance but could also be immersed in it as well if in the path of that rain. But with distant starlight, are we just observing it or are the photons actually reaching and penetrating the earth around us? If they are penetrating, does science tell us what is actually happening on an atomic or sub-atomic level? | Yes, the photons actually reach you, like rain falling on you, not like watching rain from a distance. When you see a star, photons from the star actually enter your eye. In for example rods of your eye, the photon causes a molecule of retinal to react by change from cis to trans isomer. | {
"source": [
"https://physics.stackexchange.com/questions/109368",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/44952/"
]
} |
109,500 | I am currently in my last year of high school, and I have always been told by my physics teachers that centrifugal force does not exist. Today my girlfriend in the year below asked me what centrifugal force was. I told her it didn't exist, and then she told me her textbook said it did, and defined it as "The apparent force experienced towards the outside of a circle is the centrifugal force and is due to the mass of the object resisting the inward centripetal acceleration that the object is experiencing". I was pretty shocked to hear this after a few years of being told that it does not exist. I did some reading and found out all sorts of things about pseudo forces and reference frames. I was wondering if someone could please explain to me what is going on? Is it wrong to say that centrifugal force does not exist? This has always nagged me a bit, as I often wonder that if every force has a reaction force then a centripetal force must have a reaction centrifugal force, but when I asked my teachers about this they told me that centrifugal force does not exist. | Summary Centrifugal force and Coriolis force exist only within a rotating frame of reference, and their purpose is to "make Newtonian mechanics work" in such a reference. So your teacher is correct; according to Newtonian mechanics, centrifugal force truly doesn't exist. There is a reason why you can still define and use it, though. For this reason, your girlfriend's book might also be considered correct. Details As you know, Newton's laws work in so-called "inertial frames of reference". However, a point on the surface of the Earth is not really an inertial frame of reference because it is spinning around the center of the Earth. (So you can think of it as a rotating coordinate system.) So Newton's mechanics don't apply if you want to describe motion and use a reference point on the Earth. This is quite inconvenient, because we mostly want to engineer things that work on the Earth.
Fortunately, there is a trick: you can use a point on the surface of the Earth as your reference and pretend that it's an inertial frame of reference, if you also pretend that some external "imaginary" (fictitious) forces exist in addition to the real ones. These are the centrifugal force and the Coriolis force. Further reading If you are interested in more, see: http://en.wikipedia.org/wiki/Inertial_frame_of_reference http://en.wikipedia.org/wiki/Centrifugal_force_%28rotating_reference_frame%29 http://en.wikipedia.org/wiki/Coriolis_effect | {
"source": [
"https://physics.stackexchange.com/questions/109500",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45013/"
]
} |
109,535 | I know this isn't the right place for asking this question, but in other places the answers are so awfull.. I'm studying eletricity, so, I start seeing things like "charges", "electrons has negative charges",etc. But I didn't quite understand what charge is. I search a little bit on the internet and found it related to electromagnetic fields, then I thought "negative and positive may be associeted with the behaviour of the particle in the field, great!", but the articles about e.m. fields already presuppose "negative" and "positive" charges. In other places, I see answers relating charges to the amount of electrons/protons in an atom, but if that's right, the "negative" electron is an atom without any protons? What about the neutron? So, my questions are (1) What are charges; and (2) How a particle can "be" electrically charged. What does that really mean?
Thanks for your time. | I would say that charge is a theoretical prescription describing how a particle interacts with the electromagnetic field. Since we are talking about a theory that should describe and predict various phenomena, we need to start with a definition of the fundamental objects. If we are talking about Newtonian mechanics, we face phenomena related to interactions of particles with each other by direct mechanical contact. We characterize these interactions by force, momentum, etc. The fundamental characteristic of a body will be its mass. Theoretically, you may consider objects of positive, negative or zero mass in mechanics. However, from experiment we know that there are no objects with negative mass. The same is true for electrodynamics, where we see objects interacting through a field. Now, to describe the ability of an object to generate or to feel this field, we introduce the charge. So, as was already said by zeal, charge is just a property of an object, the same as mass. Concerning your second question: firstly, one should note that any object may have any charge, irrespective of electrons. However, we know that an atom is a complex object composed of electrons, protons and neutrons. Hence, in order to figure out the charge of an atom we should assign some charges to its fundamental constituents. From experiments we know that electrons, neutrons and protons interact with each other and with the electromagnetic field in such a way that we may define $q_e=-1$, $q_p=+1$, $q_n=0$. Now, just by summing the charges of the constituents of a complex object we can derive its charge. Hence, in brief: charge is a theoretical prescription in electrodynamics that allows us to predict electromagnetic phenomena. | {
"source": [
"https://physics.stackexchange.com/questions/109535",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45031/"
]
} |
109,736 | As you know, impedance is defined as a complex number. Ideal capacitors:
$$
\frac {1} {j \omega C} \hspace{0.5 pc} \mathrm{or} \hspace{0.5 pc} \frac {1} {sC}
$$ Ideal inductors:
$$
j \omega L \hspace{0.5 pc} \mathrm{or} \hspace{0.5 pc} sL
$$ I know that the reason why they 'invented' the concept of impedance is that it makes it easy to work with circuits in the frequency domain (or complex frequency domain). However, since in real-life circuits both voltages and currents are real numbers, I'm wondering if there is any actual physical meaning behind the imaginary component of impedance. | The physical 'meaning' of the imaginary part of the impedance is that it represents the energy storage part of the circuit element. To see this, let the sinusoidal current $i = I\cos(\omega t)$ be the current through a series RL circuit. The voltage across the combination is $$v = Ri + L\frac{di}{dt} = RI\cos(\omega t) - \omega LI\sin(\omega t)$$ The instantaneous power is the product of the voltage and current $$p(t) = v \cdot i = RI^2\cos^2(\omega t) - \omega LI^2\sin(\omega t)\cos(\omega t) $$ Using the well-known trigonometric formulas, the power is $$p(t) = \frac{RI^2}{2}[1 + \cos(2\omega t)] - \frac{\omega LI^2}{2}\sin(2\omega t) $$ Note that the first term on the RHS is never less than zero - power is always delivered to the resistor. However, the power for the second term has zero average value and alternates symmetrically positive and negative - the inductor stores energy half the time and releases the energy the other half. But note that $\omega L$ is the imaginary part of the impedance of the series RL circuit: $$Z = R + j\omega L$$ Indeed, via the complex power S, we see that the imaginary part of the impedance is related to the reactive power Q $$S = P + jQ = \tilde I^2Z = \frac{I^2}{2}Z = \frac{RI^2}{2} + j\frac{\omega L I^2}{2} $$ Thus, as promised, the imaginary part of the impedance is the energy storage part while the real part of the impedance is the dissipative part. | {
"source": [
"https://physics.stackexchange.com/questions/109736",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45023/"
]
} |
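The time-averaging claim in the answer above is easy to verify numerically: the resistive term of $p(t)$ averages to $RI^2/2$ while the inductive term averages to zero. The component values below are arbitrary illustration choices:

```python
import numpy as np

# For i = I*cos(wt) through a series RL branch, the average of p = v*i over
# one full period should be R*I^2/2: the inductive term contributes nothing.
R, L, I = 10.0, 0.05, 2.0          # ohms, henries, amps (arbitrary)
omega = 2 * np.pi * 60.0           # rad/s, 60 Hz
T = 2 * np.pi / omega

t = np.linspace(0.0, T, 100_000, endpoint=False)   # one full period
i = I * np.cos(omega * t)
v = R * i - omega * L * I * np.sin(omega * t)      # v = R*i + L*di/dt
p = v * i

avg_power = p.mean()
print(avg_power, R * I**2 / 2)
```

The reactive term $\frac{\omega L I^2}{2}\sin(2\omega t)$ swings symmetrically positive and negative, so only the dissipative term survives the average.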
109,739 | Is photosynthesis more efficient than solar panels? If so, by how much? | Rather than considering quantum efficiencies or such details, it's instructive to step back and take a broader view. One of the main fuel crops grown in the UK is miscanthus. There are various figures around for the yield produced by miscanthus, but these people estimate it as about 14 tonnes per hectare per year. The energy content is 19 GJ/tonne, so that's 266 GJ per hectare per year, or about 8.5 kW per hectare. Commercial PV panel installations typically produce 500 kW per hectare (NB the link is a PDF), though this is peak power and would be a lot less averaged over the year. However, even if averaging over the year reduces the yield by a factor of 6, this still leaves the PV panels producing ten times as much power as miscanthus per hectare. For comparison, the intensity of sunlight at midday is around 10 MW per hectare. Incidentally, I don't mean to belittle miscanthus. PV panels are vastly more expensive to make than miscanthus is to grow, and we have yet to persuade PV panels to reproduce themselves. Both have their role to play in supplying energy. | {
"source": [
"https://physics.stackexchange.com/questions/109739",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/30522/"
]
} |
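The unit conversion in the answer above checks out. A quick sketch, taking one year as $3.156\times10^{7}$ s:

```python
# Miscanthus: 14 t/ha/yr at 19 GJ/t, expressed as a mean power per hectare.
tonnes_per_ha_yr = 14.0
GJ_per_tonne = 19.0
seconds_per_year = 3.156e7

energy_GJ = tonnes_per_ha_yr * GJ_per_tonne          # GJ per hectare per year
mean_power_kW = energy_GJ * 1e9 / seconds_per_year / 1e3

print(f"{energy_GJ:.0f} GJ/ha/yr ~ {mean_power_kW:.1f} kW/ha")
```

This gives 266 GJ/ha/yr and about 8.4 kW/ha, matching the answer's ~8.5 kW figure to rounding.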
109,776 | Suppose I wanted to travel to one of the recently discovered potentially Earth-like planets such as Kepler 186f that is 490 light years away. Assuming I had a powerful rocket and enough fuel, how long would it take me? | Start by considering what is seen by the people watching you from the Earth. Nothing can travel faster than the speed of light, $c$, so the quickest you could get to Kepler 186f would be if you were travelling at $c$ in which case it would take 490 years. In practice it would take longer than this because you have to accelerate from rest when you leave the Earth and decelerate to a halt again when you get to your destination. So far this isn’t very interesting. What makes the problem interesting is that clocks on fast moving objects run slow due to time dilation . If you could travel near to the speed of light the time that passes for you will be less than 490 years, and in fact can be a lot less, as we’ll see below. First let’s take the simple case where you travel at some constant velocity $v$, and we won’t worry about how you accelerated to $v$ or how you’re going to slow down again. We’ll call the distance to the star $d$. For the people watching from Earth the time taken is just the distance you travel divided by your velocity: $$ t = \frac{d}{v} $$ So if the distance is 490 light years and you’re travelling at the speed of light the time taken is just 490 years. But how much time would you measure on your wristwatch? To do the calculation properly you need to use the Lorentz transformations , but in fact the answer turns out to be very simple. 
The time you measure, $\tau$, is given by: $$ \tau = \frac{t}{\gamma} $$ where $t$ is the time measured on Earth and $\gamma$ is the Lorentz factor and is given by: $$ \gamma = \frac{1}{\sqrt{1 - \tfrac{v^2}{c^2}}} $$ Or if you want the whole expression written out in full, the time you measure is: $$ \tau = \frac{d}{v} \sqrt{1 - \frac{v^2}{c^2}} $$ To give you a feel for this I’ve done the calculation for the 490 light year trip to Kepler 186f and I’ve drawn a graph of the time you measure as a function of your speed: The blue line is the travel time as measured on Earth, so it goes to 490 years as $v \rightarrow c$. The red line is the time measured on your wristwatch, which goes to zero as $v \rightarrow c$. But this isn’t very realistic since it ignores acceleration and deceleration. Suppose instead you travel halfway to the star at constant acceleration, then you flip over and travel halfway at constant deceleration. This allows you to start from rest and end at rest, and you also get a nice artificial gravity during the trip. But how can you calculate the time dilation for a trip that involves acceleration? The details of the calculation are given in Chapter 6 of Gravitation by Misner, Thorne and Wheeler . I won’t reproduce the calculation here because it’s surprisingly boring. You solve a couple of simultaneous equations to get differential equations for the time, $t$, and distance, $x$, and you solve these two differential equations to get: $$ t = \frac{c}{a} \sinh\left(\frac{a\tau}{c}\right) \tag{1} $$ $$ x = \frac{c^2}{a} \left(\cosh\left(\frac{a\tau}{c} \right) – 1 \right) \tag{2} $$ In these equations $\tau$ is the time measured on your wristwatch, $t$ is the time measured by the observers on Earth and $x$ is the distance travelled as measured by the observers on Earth. The times $t$ and $\tau$ start at zero at the moment you begin accelerating and leave the Earth. Finally $a$ is your constant acceleration. 
Note that $a$ is the acceleration you measure i.e. it’s the acceleration shown by an accelerometer you hold while you’re sat in the rocket. To do the calculation, for example for the trip to Kepler 186f, you take the first half of the journey while the rocket is accelerating and set $x$ to this distance. So for Kepler 186f $x = 245$ light years. Then you solve equation (2) to get the elapsed time on the rocket $\tau$, and finally plug this into equation (1) to get the elapsed time on Earth. This is the time for half the trip, so just double it to get the time for the whole trip. I’ve done this for a range of accelerations to get this graph: Again the blue line is the time measured on Earth and the red line is your time. At an acceleration of only 0.1g the travel time is already down to 76 years (just doable in a single lifetime) and at a more comfortable 1g the travel time is a shade over 12 years. Since the values aren't that easy to read off the graph here are some representative values: $$\begin{matrix}
a (/g) & \tau (/\text{years}) & t (/\text{years}) \\
0.01 & 374.9 & 655.9 \\
0.1 & 76.8 & 509.0 \\
1 & 12.1 & 491.9 \\
10 & 1.7 & 490.2
\end{matrix}$$ Footnotes for non-non-nerds Assuming you have more than a casual interest in Physics (why else would you be reading this!) there is lots more interesting stuff about accelerated motion. For example you might wonder how the spaceship accelerating at 1g can travel 490 light years in 12.1 years if nothing can travel faster than light. The answer is that the spaceship doesn’t travel 490 light years - the Lorentz contraction caused by its high speed means it travels a much shorter distance. We’ve got the equations for distance and time above, and you can combine them to work out the velocity as a function of spaceship time $\tau$. I won’t do this since it’s just algebra; instead I’ll just quote the result: $$ v = c \tanh \left( \frac{a\tau}{c} \right) \tag{3} $$ If the spaceship is travelling at velocity $v$ relative to the Earth and destination star then the Earth and star are travelling at velocity $v$ relative to the spaceship, and the crew of the spaceship see distances contracted by the Lorentz factor: $$ d’ = \frac{d}{\gamma} = d\sqrt{1 - \frac{v^2}{c^2}} $$ When the spaceship sets off its distance to the star is 490 light years, but as it accelerates this distance decreases for two reasons. Firstly (obviously) the ship moves towards the star, but secondly Lorentz contraction makes the remaining distance smaller. To calculate this effect you work out $x(\tau)$ using equation (2) for the first half of the trip. Since the trip is symmetrical you can reflect about the halfway point to get $x(\tau)$ for the second half of the journey. Then the distance left is just (for Kepler 186f) 490 light years - $x$. Calculate the velocity using equation (3) (again for the first half then reflect about the halfway point). Calculate the Lorentz factor from the velocity and multiply to get the contracted distance left. 
The results for 1g acceleration look like this: To make the data clearer I’ve plotted the remaining distance for the last half of the trip on an expanded scale to the right. The discontinuity is where the spaceship switches from acceleration to deceleration. The graph shows that the occupants of the ship see the distance they have left to travel shrink rapidly as their speed increases. Conversely, as they start decelerating the Lorentz contraction decreases and the distance left to travel decreases only slowly until they are close to the destination. | {
"source": [
"https://physics.stackexchange.com/questions/109776",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1325/"
]
} |
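Equations (1) and (2) from the answer above are easy to invert numerically. The sketch below reproduces the table of travel times, working in units where $c = 1$ ly/yr and $g \approx 1.032$ ly/yr²:

```python
import math

# Half the 490 ly trip is spent accelerating at constant proper acceleration
# a, half decelerating. Invert x = (c^2/a)(cosh(a*tau/c) - 1) for the ship
# time tau, then use t = (c/a)*sinh(a*tau/c) for the Earth time.
c = 1.0                    # ly/yr
g = 1.032                  # 9.81 m/s^2 expressed in ly/yr^2
half_x = 245.0             # ly, distance covered while accelerating

results = {}
for a_g in (0.01, 0.1, 1.0, 10.0):
    a = a_g * g
    tau = 2 * (c / a) * math.acosh(a * half_x / c**2 + 1)   # ship time
    t = 2 * (c / a) * math.sinh(a * (tau / 2) / c)          # Earth time
    results[a_g] = (tau, t)
    print(f"a = {a_g:5.2f} g: tau = {tau:6.1f} yr, t = {t:6.1f} yr")
```

The 1 g row comes out at roughly 12.1 years of ship time against 491.9 years of Earth time, in agreement with the table.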
109,897 | I was having a conversation with my father and father-in-law, both of whom are in electric-related work, and we came to a point where none of us knew how to proceed. I was under the impression that electricity travels on the surface while they thought it traveled through the interior. I said that traveling over the surface would make the fact that they regularly use stranded wire instead of a single large wire to transport electricity make sense. If anyone could please explain this for some non-physics but electrically inclined people, it would be very appreciated. | It depends on the frequency. DC electricity travels through the bulk cross section of the wire. A changing electrical current (AC) experiences the skin effect, where the electricity flows more easily in the surface layers. The higher the frequency, the thinner the surface layer that is usable in a wire. At normal household AC (50/60 Hz) the skin depth is about 8-10 mm, but at microwave frequencies the depth of the metal that the current flows in is about the same as a wavelength of visible light. Edit: Interesting point from Navin - the individual strands have to be insulated from each other for the skin effect to apply to each individually. That is the reason for the widely separated pairs of wires in this question: What are all the lines on a double circuit tower? | {
"source": [
"https://physics.stackexchange.com/questions/109897",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45221/"
]
} |
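The depths quoted in the answer above follow from the standard skin-depth formula $\delta=\sqrt{2\rho/(\omega\mu)}$, which the answer does not spell out. Here it is evaluated for copper (resistivity and permeability are textbook values, not from the answer):

```python
import math

# Skin depth delta = sqrt(2*rho / (omega * mu)) for copper.
rho_cu = 1.68e-8                 # ohm*m, copper resistivity
mu = 4 * math.pi * 1e-7          # H/m, vacuum permeability (~ copper's)

depths = {}
for f in (50.0, 60.0, 10e9):     # mains frequencies and a 10 GHz microwave
    omega = 2 * math.pi * f
    depths[f] = math.sqrt(2 * rho_cu / (omega * mu))
    print(f"f = {f:12.0f} Hz: delta = {depths[f]:.3e} m")
```

This gives about 9.2 mm at 50 Hz and 8.4 mm at 60 Hz, consistent with the answer's 8-10 mm, and about 0.65 µm at 10 GHz, consistent with the "wavelength of visible light" remark.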
109,995 | I have a unit measure, say, seconds, $s$. Furthermore let's say I have a dimensionful quantity $r$ that is measured in seconds, $s$. What is the unit measure of $e^r$? ($1/r$ is in $Hz$.) My question is general: how to find the unit measure of a transformation function $y=f(x)$ where $x$ takes some known unit measure. I give above two functions $f(\cdot)=e^\cdot$ and $f(\cdot)=1/\cdot$. | The only sensible rule when working with units is that you can only add together terms which carry the same unit.
Say $[x]=[y]$; then $x+y$ is, unit-wise, a valid statement. You may also multiply arbitrary units together. Whether that is physically sensible is another question. Obviously you cannot add, e.g., meters and seconds, but multiplying to form $m/s$ as a unit for velocity is a valid operation. From that it follows that the argument of the exponential must not carry a unit, because the exponential is defined as a power series. $$ e^x =\sum_{n=0}^\infty \frac{x^n}{n!}$$ If $x$ were to carry a unit, say meters, one would add (schematically) $m+m^2+m^3+\cdots$, which is nonsensical. If you encounter an exponential, a sine/cosine, a logarithm, ... in physics, you will almost always find that its argument, which must be dimensionless, is a product of two conjugate variables. Examples are time and frequency, or distance and momentum. | {
"source": [
"https://physics.stackexchange.com/questions/109995",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45267/"
]
} |
110,645 | I have read the explanation for this in several textbooks, but I am struggling to understand it via Archimedes' principle. If someone can clarify with a diagram or something so I can understand or a clear equation explanation that would be great. | Good question. Assume we have one cube of ice in a glass of water. The ice displaces some of that water, raising the height of the water by an amount we will call $h$ . Archimedes' principle states that the weight of water displaced will equal the upward buoyancy force provided by that water. In this case, $$\text{Weight of water displaced} = m_\text{water displaced}g = \rho Vg = \rho Ahg$$ where $V$ is volume of water displaced, $\rho$ is density of water, $A$ is the area of the ice cube base and $g$ is acceleration due to gravity. Therefore the upward buoyancy force acting on the ice is $\rho Ahg$ . Now the downward weight of ice is $m_\text{ice}g$ . Now because the ice is neither sinking nor floating, these must balance. That is: $$\rho Ahg = m_\text{ice}g$$ Therefore, $$h = \frac{m_\text{ice}}{\rho A}$$ Now when the ice melts, this height difference due to buoyancy goes to 0. But now an additional mass $m_\text{ice}$ of water has been added to the cup in the form of water. Since mass is conserved, the mass of ice that has melted has been turned into an equivalent mass of water. The volume of such water added to the cup is thus: $$V = \frac{m_\text{ice}}{\rho}$$ and therefore, $$Ah = \frac{m_\text{ice}}{\rho}$$ So, $$h = \frac{m_\text{ice}}{\rho A}$$ That is, the height the water has increased due to the melted ice is exactly the same as the height increase due to buoyancy before the ice had melted. Edit: For completion, since it is raised as a question in the comments Melting icebergs boost sea level rise, because the water they contain is not salty. 
Although most of the contributions to sea-level rise come from water and ice moving from land into the ocean, it turns out that the melting of floating ice causes a small amount of sea-level rise, too. Fresh water, of which icebergs are made, is less dense than salty sea water. So while the amount of sea water displaced by the iceberg is equal to its weight, the melted fresh water will take up a slightly larger volume than the displaced salt water. This results in a small increase in the water level. Globally, it doesn’t sound like much – just 0.049 millimetres per year – but if all the sea ice currently bobbing on the oceans were to melt, it could raise sea level by 4 to 6 centimeters. | {
"source": [
"https://physics.stackexchange.com/questions/110645",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/45467/"
]
} |
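The bookkeeping in the answer above reduces to $h = m_\text{ice}/(\rho A)$ both before melting (buoyant displacement) and after (added melt water), so the change cancels exactly in fresh water. A short sketch, including the salt-water case from the edit; the cube mass and glass area are made-up illustration values:

```python
# Fresh water: ice displaces its own weight of water, and the melt occupies
# exactly that volume, so the level change is zero. Salt water: the floating
# ice displaces the denser brine, so the fresh melt raises the level a bit.
rho_fresh = 1000.0   # kg/m^3
rho_salt = 1025.0    # kg/m^3, typical seawater
m_ice = 0.10         # kg, illustrative ice cube
A = 0.005            # m^2, illustrative glass cross-section

rise_fresh = m_ice / (rho_fresh * A) - m_ice / (rho_fresh * A)
rise_salt = m_ice / (rho_fresh * A) - m_ice / (rho_salt * A)

print(f"fresh water: {rise_fresh:.2e} m, salt water: {rise_salt:.2e} m")
```

The fresh-water change is identically zero, while the salt-water level rises slightly, which is the small iceberg contribution to sea-level rise mentioned in the edit.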