source_id | question | response | metadata
---|---|---|---|
381,981 | If water vapor is pulled inwards and cooled at a fast enough rate, could it be frozen back into a solid form? I understand that the molecules would have to be frozen together as soon as contact is made, but if this is possible, what would the temperature have to be? And could this be the only thing that can go directly from a gas to a solid? | Changing a substance from the physical state of a gas to the physical state of a solid requires the removal of thermal energy. The particles of a gas have a large amount of kinetic (moving) energy: they are moving about very rapidly. The particles of a solid have much less kinetic energy and vibrate slowly about fixed positions. This change of state from a gas directly to a solid is not a very common phase change; it is referred to as deposition, because the particles in the gas phase deposit directly into the solid phase. Examples of gas to solid: Making dry ice, or solid carbon dioxide, involves removing gaseous carbon dioxide from air; cold temperatures and higher pressure cause the gas particles to skip the liquid phase and deposit into a solid chunk of dry ice. A carbon dioxide fire extinguisher is filled with carbon dioxide under high pressure; when it is discharged, the rapid expansion cools the escaping gas so much that some of it deposits as a white solid "snow" as it puts out a fire. In severely cold weather, frost forms on windows because water vapor in the air comes into contact with the cold glass and immediately forms ice without ever forming liquid water. Deposition has also become a manufacturing technology: solid metal alloys are heated to a gaseous state and then sprayed onto substrates such as semiconductors. When the spray lands on the semiconductor, the heat energy is lost and the gaseous substance becomes a solid metal alloy again. | {
"source": [
"https://physics.stackexchange.com/questions/381981",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/178651/"
]
} |
382,128 | The LHC is much larger than its predecessors, and proposed successors much larger still. Today, particle accelerators seem to be the main source of new discoveries about the fundamental nature of the world. My lay interpretation is that particle accelerators like the LHC are essentially the only viable apparatus for performing experiments in particle physics, passive detectors of naturally energized particles notwithstanding. Experiments vary by configuration, sensors and source material, but the need for an accelerator is constant, and more powerful accelerators are able to perform experiments which are out of reach to less powerful accelerators. For the most powerful accelerators, "more powerful" seems to imply "physically larger". In these ring-shaped accelerators, for a given type of particle, its maximum power appears to be (very) roughly proportionate to circumference. I use the word "power" in a loose sense here, reflecting my loose grasp of its meaning. Technology upgrades can make an accelerator more powerful without making it larger, e.g. the planned High Luminosity upgrade to the LHC. One imagines that an upgrade would be cheaper to build than a colossal new accelerator, yet larger accelerators are still built, so it would seem to follow that the upgrade potential of a given accelerator is limited in some way - that there is, in fact, a relationship between the size of an accelerator and its maximum power. The first part of my question is this: what is the nature of the relationship between the size and power of a modern particle accelerator? Are there diminishing returns to the operating cost of making an accelerator more powerful? Or are there fundamental physical constraints placing a hard limit on how powerful an accelerator of a given size can be? Or is technology the main limiting factor - is it conceivable that orders-of-magnitude power increases could be efficiently achieved in a small accelerator with more advanced technology? 
Is it likely? The basic premise of these experiments seems to be that we observe the collision byproducts of energetic particles, where "energetic" presumably refers to kinetic energy, since we used an "accelerator" to energize them. To create interesting collision byproducts, the kinetic energy in the collision (measured in eV) must be at least as large as the mass of the particle (also measured in eV) we wish to create. Thus, we can observe particles of higher mass with a higher-powered accelerator. The second part of my question is this: are particle accelerators the only way of pushing the boundaries of experimental particle physics? Is it conceivable that there is a way to produce these interesting byproducts in an experimental setting without using high-energy collisions? If not, is it conceivable that there is a way to energize particles other than by accelerating them around a track? If not, is it impossible by definition or for some physical reason? If either of these alternatives is conceivable, then assuming they're not practical replacements for large accelerators today, is it possible that they will be in the future? Is it likely? In a sentence, my question is this: is the future of experimental particle physics now just a matter of building larger and larger particle accelerators? | There are many competing limits on the maximum energy an accelerator like the LHC (i.e. a synchrotron, a type of circular accelerator) can reach. The main two are energy loss due to bremsstrahlung (also called synchrotron radiation in this context, but that's a much less fun name to say) and the bending power of the magnets. The bending power of the magnets isn't that interesting. There's a maximum magnetic field that we can achieve with current technology, and its strength fundamentally limits how small the circle can be. Larger magnetic fields mean the particles curve more, letting you build a collider of higher energy at the same size.
Unfortunately, superconducting magnets are limited in field: a given material has a maximum achievable field strength. You can't just make a larger one to get a larger field - you need to develop a whole new material to make them from. Bremsstrahlung Bremsstrahlung is German for "braking radiation." Whenever a charged particle is accelerated, it emits some radiation. For acceleration perpendicular to the path (for instance, if it's traveling in a circle), the power loss is given by: $$P=\frac{q^2 a^2\gamma^4}{6\pi\epsilon_0c^3}$$ $q$ is the charge, $a$ is the acceleration, $\gamma$ is the Lorentz factor, $\epsilon_0$ is the permittivity of free space, and $c$ is the speed of light. In high-energy physics, we usually simplify things by setting various constants equal to one. In those units, this is $$ P=\frac{2\alpha a^2\gamma^4}{3}$$ This is the instantaneous power loss. We're usually more interested in the energy lost over a whole trip around the ring. The particles are going essentially at the speed of light, so the time to go around once is just $\frac{2\pi r}{c}$. We can simplify some more: $\gamma=\frac{E}{m}$, and $a=\frac{v^2}{r}$. All together, this gives: $$ E_{\rm loop} = \frac{4\pi\alpha E^4}{3m^4r}$$ The main things to note from this are: as we increase the energy, the power loss increases very quickly; increasing the mass of the particles is very effective at decreasing the power loss; and increasing the radius of the accelerator helps, but not as much as increasing the energy hurts. To put these numbers in perspective, if the LHC were running with electrons and positrons instead of protons, at the same energy and everything, each $6.5~\rm TeV$ electron would need to have $37\,000~\rm TeV$ of energy added per loop. All told, assuming perfect efficiency in the accelerator part, the LHC would consume about $20~\rm PW$, or about 1000 times the world's energy usage, just to keep the particles in a circle (and this doesn't even include actually accelerating them).
Needless to say, this is not practical. (And of course, even if we had the energy, we don't have the technology.) Anyway, this is the main reason particle colliders need to be large: the smaller we make them, the more energy they burn just to stay on. Naturally, the cost of a collider goes up with size. So this becomes a relatively simple optimization problem: larger means higher up-front costs but lower operating costs. For any target energy, there is an optimal size that costs the least over the long run. This is also why the LHC is a hadron collider. Protons are much heavier than electrons, and so the loss is much less. Electrons are so light that circular colliders are out of the question entirely on the energy frontier. If the next collider were to be another synchrotron, it would probably collide either protons or muons. The problem with using protons is that they're composite particles, which makes the collisions much messier than using a lepton collider. It also makes the effective energy available less than it would be for an equivalent lepton collider. The next collider There are several different proposals for future colliders floating around in the high-energy physics community. A sample of them follows. One is a linear electron-positron collider. This would allow us to make very high-precision measurements of Higgs physics, like previous experiments did for electroweak physics, and open up other precision physics as well. This collider would need to be a linear accelerator for the reasons described above. A linear accelerator has some significant downsides to it: in particular, you only have one chance to accelerate the particles, as they don't come around again. So they tend to need to be pretty long. And once you accelerate them, most of them miss each other and are lost. You don't get many chances to collide them like you do at the LHC. Another proposal is basically "the LHC, but bigger."
A $100~\rm TeV$ or so proton collider synchrotron. One very interesting proposal is a muon collider. Muons have the advantage of being leptons, so they have clean collisions, but they are much heavier than electrons, so you can reasonably put them in a synchrotron. As an added bonus, muon collisions have a much higher chance of producing Higgs bosons than electrons do. The main difficulty here is that muons are fairly short-lived (around $2.2~\rm\mu s$), so they would need to be accelerated very quickly before they decay. But very cool, if it can be done! The Future If we want to explore the highest energies, there's really no way around bigger colliders: For a fixed "strongest magnet," synchrotrons fundamentally need to be bigger to get to higher energy. And even assuming we could get magnets of unlimited strength, as we increase the energy there's a point where it's cheaper to just scrap the whole thing and build a bigger one. Linear accelerators are limited in the energy they can reach by their size and available accelerator technology. There is research into better acceleration techniques (such as plasma wakefield accelerators), but getting them much better will require a fundamental change in the technology. There is interesting research that can be done into precision measurements of particle physics at low energy, but for discovering new particles higher energy accelerators will probably always be desirable. | {
"source": [
"https://physics.stackexchange.com/questions/382128",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/137324/"
]
} |
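The $37\,000~\rm TeV$ figure in the answer above can be reproduced directly from the $E_{\rm loop}$ formula. A sketch in Python; the LHC circumference of 26.659 km and the particle masses are assumed standard values, not numbers stated in the answer:

```python
import math

# E_loop = 4*pi*alpha * E**4 / (3 * m**4 * r)   in natural units (hbar = c = 1)
ALPHA = 1 / 137.035999   # fine-structure constant
HBARC = 1.97327e-16      # GeV * m, converts a radius in metres to GeV^-1

def loss_per_loop_gev(energy_gev, mass_gev, radius_m):
    """Synchrotron energy lost per revolution, in GeV."""
    r_natural = radius_m / HBARC  # radius expressed in GeV^-1
    return 4 * math.pi * ALPHA * energy_gev**4 / (3 * mass_gev**4 * r_natural)

R_LHC = 26659 / (2 * math.pi)    # assumed ring radius ~ 4.24 km
E_BEAM = 6500.0                  # 6.5 TeV per beam, in GeV

e_loss = loss_per_loop_gev(E_BEAM, 0.000511, R_LHC)   # electron
p_loss = loss_per_loop_gev(E_BEAM, 0.938272, R_LHC)   # proton

print(f"electron: {e_loss/1000:.0f} TeV per loop")    # ~37,000 TeV, as in the answer
print(f"proton:   {p_loss*1e6:.1f} keV per loop")     # a few keV
```

The electron/proton ratio is just $(m_p/m_e)^4 \approx 10^{13}$, which is the whole argument for hadron synchrotrons in one number. (Real machines bend only in their dipole sections, so using the average ring radius here understates the true per-turn loss somewhat.)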
382,667 | This might be a stupid question but I could not find the answer in my textbook or on the internet with a few searches. So I believe when an atomic electron moves down to a lower energy level it emits radiation in the process. However, since the energy levels are discrete, the photons released have specific energies and hence wavelengths, which results in the line spectra. However, apparently this is only true for hot gases and not liquids or solids, which have a continuous emission spectrum. Why is this? | In liquids and solids the difference in energy between energy levels becomes very small, due to the electron clouds of several atoms being in very close proximity to one another. These closely spaced energy levels form 'bands', and the corresponding spectral lines become indistinguishable. In gases, however, atoms are spaced loosely enough that the interaction between atoms is minimal. This allows the energy levels to differ sufficiently in energy for distinct lines to form. | {
"source": [
"https://physics.stackexchange.com/questions/382667",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/182004/"
]
} |
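The band-formation argument can be illustrated with a toy nearest-neighbour coupling model (the on-site energy and coupling strength below are arbitrary illustrative numbers, not values from the answer): $N$ identical atomic levels coupled to their neighbours split into $N$ eigenvalues, and the gaps between adjacent levels shrink toward zero as $N$ grows, merging the would-be spectral lines into a band.

```python
import numpy as np

def coupled_levels(n_atoms, e0=1.0, hopping=0.1):
    """Eigen-energies of n identical levels with nearest-neighbour coupling."""
    h = (np.diag(np.full(n_atoms, e0))
         + np.diag(np.full(n_atoms - 1, hopping), 1)
         + np.diag(np.full(n_atoms - 1, hopping), -1))
    return np.linalg.eigvalsh(h)  # sorted eigenvalues of the symmetric matrix

for n in (2, 10, 100):
    levels = coupled_levels(n)
    print(f"N={n:>3}: band width {levels.max() - levels.min():.3f}, "
          f"largest gap {np.diff(levels).max():.4f}")
```

The band width saturates near $4t$ (here 0.4) while the level spacing collapses, which is the sense in which many close-packed atoms turn sharp lines into a continuum.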
383,248 | Suppose we have two twins travelling away from each other, each twin moving at some speed $v$: Twin $A$ observes twin $B$’s time to be dilated so his clock runs faster than twin $B$’s clock. But twin $B$ observes twin $A$’s time to be dilated so his clock runs faster than twin $A$’s clock. Each twin thinks their clock is running faster. How can this be? Isn’t this a paradox? | The answer to this is that our twins, $A$ and $B$, are not measuring the same thing on their clocks. Since they are not measuring the same thing there is no paradox in the fact that each twin thinks their clock is running faster. I’m going to try and give an intuitive feel for what is going on, and to do this I’ll use an analogy. This is going to seem a bit odd at first but bear with me and I hope everything will become clear. Suppose I, Albert, and my two friends Bill and Charlie are all in cars driving at $1$ metre per second. I am driving due North, Bill is driving at an angle $\theta$ to my right and Charlie is driving at an angle $\theta$ to my left: Consider how fast we are travelling North, i.e. the component of our velocity in the North direction. I am travelling North at $1$ m/s while my friends are travelling North at $\cos\theta$ m/s, so my friends are travelling North more slowly than I am. Now it turns out that our compasses have the odd feature that they show North as the direction in which our cars are travelling. That means both Bill and Charlie also consider themselves to be travelling North. Let’s have a look at the situation from Bill’s perspective: Bill considers himself to be travelling North at $1$ m/s while from his perspective I am travelling North more slowly, at $\cos(\theta)$ m/s, and Charlie is travelling North even more slowly, at $\cos(2\theta)$ m/s.
And for completeness let’s show Charlie’s view: Like Bill, Charlie considers himself to be travelling North at $1$ m/s while he considers me to be travelling North more slowly, at $\cos(\theta)$ m/s, and Bill to be travelling North even more slowly, at $\cos(2\theta)$ m/s. So each of us thinks they are travelling North faster than the other two. Let me emphasise this because this is the key point in my argument: Everyone thinks they are travelling North faster than everyone else. Now this isn’t rocket science. The reason we all think we are travelling North fastest is because we have different ideas of what direction North is in. But this is exactly what happens in special relativity if we replace the direction North in our diagrams by the time direction. And the reason everyone thinks everyone else’s time is dilated is because we all disagree about the direction of the time axis. In special relativity we typically use spacetime diagrams with the time axis vertical and the $x$ axis horizontal (we omit the $y$ and $z$ axes because it’s hard to draw 4D graphs). I’ll get out of my car, so I’m not moving, then if I draw my spacetime diagram it looks like this: Although I’m no longer in the car I am still moving up the time axis because of course I’m moving through time at one second per second. So we have a diagram much like the one I started with except now the vertical direction is time, not North, and I’m moving in the time direction not the North direction. Bill and Charlie are moving away from me along the $x$ axis at speeds $+v$ and $-v$ just like the twins in the question: But, and this is the key point, what special relativity tells us is that for a moving observer the time and $x$ axes are rotated relative to mine.
Specifically, if the other observer is moving relative to me at a speed $v$ then their time axis is rotated by an angle $\theta$ given by: $$ \tan\theta = \frac{v}{c} $$ So if I draw Bill and Charlie’s time axes on my graph I get: Hopefully you can now see the point of my analogy. In Bill and Charlie’s rest frames they are stationary, so as far as they are concerned they are moving up the time axis at $1$ second per second just like me. But because their time axes are rotated relative to me I observe them to be moving in the time direction at less than $1$ second per second i.e. their time is dilated relative to mine. Bearing in mind my analogy, to find out what Bill observes we rotate everything to the left to make Bill’s time axis vertical, and now Bill considers himself to be moving up the time axis fastest. Likewise we rotate to the right to make Charlie’s time axis vertical, and we find that Charlie considers himself to be moving up the time axis fastest. And this answers our question. All three of us think we are moving through time fastest, and the other two people’s time is dilated, because when we measure time we are all measuring time in a different direction. Our clocks differ because we are measuring different things. | {
"source": [
"https://physics.stackexchange.com/questions/383248",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/1325/"
]
} |
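The mutual time dilation described above is easy to make quantitative. A sketch with speeds in units of $c$; the value $0.6c$ is an arbitrary example, and the twins' relative speed comes from relativistic velocity addition rather than simple doubling:

```python
import math

def gamma(v):
    """Lorentz factor for speed v, with v expressed as a fraction of c."""
    return 1 / math.sqrt(1 - v * v)

def relative_speed(v):
    """Speed of one twin as seen by the other (relativistic addition, not 2v)."""
    return 2 * v / (1 + v * v)

v = 0.6
w = relative_speed(v)          # 0.8824 c, not 1.2 c
rate_A_sees_B = 1 / gamma(w)   # B's clock rate as judged by A
rate_B_sees_A = 1 / gamma(w)   # A's clock rate as judged by B -- identical

print(f"relative speed: {w:.4f} c")
print(f"each twin sees the other's clock tick at {rate_A_sees_B:.4f} of their own")
```

The two rates are identical by construction, which is the point: the situation is perfectly symmetric, and the "paradox" dissolves once you accept that the two twins are projecting onto different time directions.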
383,704 | General Relativity predicts the bending of light due to gravity. But, does this explanation require light to be corpuscular? Can the EM waves of classical electromagnetism be bent in Einstein's gravity? Or does the fact that light bends due to gravity alone prove that it is photons (corpuscles) in General Relativity? | In relativity (both flavours) light rays follow null geodesics. That is, when you calculate the proper length of any part of the light's trajectory it comes out zero. More precisely, the trajectory of a light ray is described by the null geodesic equation. So to calculate the bending of light you simply have to solve the null geodesic equation. No resort to quantum mechanics or the particulate nature of light is required. Light is not unique in this respect. In the weak field limit the trajectory of gravitational waves is also described by the null geodesic equation. In fact massless particles also follow the same trajectory, though the key property here is not that they are particles but that they are massless. | {
"source": [
"https://physics.stackexchange.com/questions/383704",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/148764/"
]
} |
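As a concrete example of what solving the null geodesic equation gives: in the Sun's weak field, a ray with impact parameter $b$ is deflected by the standard result $\delta = 4GM/(c^2 b)$. That formula is not derived in the answer above, but it is the textbook weak-field solution, and evaluating it for a ray grazing the solar limb reproduces the famous Eddington number:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
C = 2.998e8        # m/s, speed of light
M_SUN = 1.989e30   # kg
R_SUN = 6.957e8    # m, impact parameter for a grazing ray

deflection_rad = 4 * G * M_SUN / (C**2 * R_SUN)
deflection_arcsec = math.degrees(deflection_rad) * 3600

print(f"deflection at the solar limb: {deflection_arcsec:.2f} arcsec")  # ~1.75
```

Note that nothing in this calculation cares whether the light is a classical wave or a stream of photons, which is exactly the point of the answer.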
384,187 | I was in a conversation with my senior engineer where he kept insisting that we can use plurals when we write down any unit. I argued that it is not common practice; throughout my whole academic career (unfortunately) I haven't found any instance of a plural unit being used in textbooks. He argued that if I claimed it was not correct then there should be a good reason for that. When I searched this topic I couldn't come to any conclusive decision, e.g. this thread and the other links referred to there (some lead to English.SE). Those answers gave me the impression that it is grammatically acceptable in the right circumstances. But I felt that it would be rather ambiguous to accept plurals in scientific and engineering notation. For example, we were talking about the output rate of a boiler, which is measured in $\mathrm{kg/hr}$. My senior said that it is okay if anyone writes $\mathrm{kgs/hr}$. To me it looks ambiguous: if anyone writes $\mathrm{s}$ after $\mathrm{kg}$ it may give a plural sense, but it may equally refer to seconds. Moreover, if anyone argues that this is acceptable in some cases (like $\mathrm{kgs/hr}$), what would be the yardstick for finding the accepted cases? For instance, can we add $\mathrm{s}$ in $\mathrm{m/s}$ or $\mathrm{km/hr}$, as in $\mathrm{ms/s}$ or $\mathrm{kms/hr}$? There is The NIST Guide for the Use of the International System of Units, which has this example: "the length of the laser is $5\ \mathrm{m}$" but not "the length of the laser is five meters". But I want a more conclusive answer as to which one is acceptable, i.e. $\mathrm{kg/hr}$ or $\mathrm{kgs/hr}$ (or other similar instances). | According to the International System of Units (SI): Unit symbols are mathematical entities and not abbreviations.
Therefore, they are not followed by a period except at the end of a sentence, and one must neither use the plural nor mix unit symbols and unit names within one expression, since names are not mathematical entities. as well as to the international standard ISO/IEC 80000 Quantities and units Symbols for units are always written in roman (upright) type, irrespective of the type used in the rest of the text. The unit symbol shall remain unaltered in the plural and is not followed by a full stop except for normal punctuation, e.g. at the end of a sentence. it is not acceptable to use the plural of unit symbols. By the way, it is also not permissible to use abbreviations such as “hr” for unit symbols (“h”) or unit names (“hour”). | {
"source": [
"https://physics.stackexchange.com/questions/384187",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/182297/"
]
} |
385,298 | My kitchen clock has a pendulum, which is just for decoration and is not powering the clock. The pendulum's arm has a magnet that is repelled by a second magnet that is fixed to the clock's body. The repelling magnets are at their closest when the pendulum is at its lowest point. We all (hopefully) agree that a regular pendulum would eventually slow down due to friction. But I honestly cannot recall ever seeing the clock's pendulum at rest. By my calculations the magnet would slow the pendulum as it falls but accelerate it as it swings up the other side. So how would a magnet actually create any net benefit to the pendulum? Will the pendulum eventually stop, or if not, how is it not violating the laws of thermodynamics? | The pendulum is being driven by the magnet: the fixed magnet in the clock is actually the pole of an electromagnet, which the clock uses to drive the pendulum by putting energy into it. Almost certainly the clock 'listens' for the pendulum by watching the induced current in the electromagnet, and then gives it a kick just after it has passed (or alternatively pulls it as it approaches). People have used techniques like this to actually drive a time-keeping pendulum (I presume this pendulum is not keeping time but just decorative) but I believe they are not as good as you would expect them to be, because the pendulum is effectively not very 'free'. 'Free' is a term of art in pendulum clock design which refers to, essentially, how much the pendulum is perturbed by the mechanism which drives it and/or counts swings, the aim being to make pendulums which are perturbed as little as possible. The ultimate limit of this is clocks where there are two pendulums: one which keeps time and the other which counts seconds to decide when to kick the good pendulum (and the kicking mechanism also synchronises the secondary pendulum), which are called 'free pendulum' clocks. | {
"source": [
"https://physics.stackexchange.com/questions/385298",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/133736/"
]
} |
385,810 | I cook frequently with aluminum foil as a cover in the oven. When it's time to remove the foil and cook uncovered, I find I can handle it with my bare hands, and it's barely warm. What is the physics behind this? Does it have something to do with the thickness and storing energy? | You get burned because energy is transferred from the hot object to your hand until they are both at the same temperature. The more energy transferred, the more damage done to you. Aluminium, like most metals, has a lower heat capacity than water (i.e. you), so transferring a small amount of energy lowers the temperature of aluminium more than it heats you (about 5x as much). Next, the mass of the aluminium foil is very low - there isn't much metal to hold the heat. Finally, the foil is probably crinkled, so although it is a good conductor of heat, you are only touching a very small part of the surface area and the heat flow to you is low. If you put your hand flat on an aluminium engine block at the same temperature you would get burned. The same thing applies to the sparks from a grinder or firework "sparkler": the sparks are hot enough to be molten iron - but are so small they contain very little energy. | {
"source": [
"https://physics.stackexchange.com/questions/385810",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/85056/"
]
} |
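The "not much metal to hold the heat" point can be put in numbers with $Q = mc\,\Delta T$. The masses and temperatures below are illustrative assumptions, not values from the answer:

```python
def heat_released_j(mass_g, c_j_per_g_k, t_hot_c, t_final_c):
    """Energy given up in cooling from t_hot to t_final (Q = m c dT)."""
    return mass_g * c_j_per_g_k * (t_hot_c - t_final_c)

C_AL = 0.897  # J/(g K), specific heat of aluminium

# A sheet of foil (~2 g) versus an engine block region (~2 kg), both
# cooling from oven temperature to skin-safe temperature:
foil = heat_released_j(mass_g=2.0, c_j_per_g_k=C_AL, t_hot_c=200, t_final_c=40)
block = heat_released_j(mass_g=2000.0, c_j_per_g_k=C_AL, t_hot_c=200, t_final_c=40)

print(f"2 g of foil:    {foil:.0f} J")         # ~287 J
print(f"2 kg of block:  {block/1000:.0f} kJ")  # a thousand times more
```

And because the crinkled foil only touches your skin at a few points, even that small reservoir of heat drains out slowly enough for the foil's surface to cool before it can burn you.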
386,346 | I've just seen this on the news - Single Trapped Atom Captures Science Photography Competition's top prize. Credit: David Nadlinger via EPSRC I am not a Physics major but I believe I do know the basics. I have always believed that we can't really see single atoms with the naked eye. What allows that picture to make us see a single atom? If that single atom is being held there by a field, why are the atoms of that very field not visible? | The questions of whether you can detect light emitted from an (isolated) atom and whether you can resolve an atom from its neighbours are completely independent. The spacing between different atoms in a regular material remains impossible to resolve using visible light, whose wavelength is several thousand times larger. You can "see" individual atoms by using other microscopy techniques (see e.g. this short film for a nice example), but those use rather elaborate instrumentation and post-processing, and they do not reflect what is visible to the naked human eye. The picture you're quoting, however, does not image one atom out of many in a material. Instead, it really is a single isolated atom, held in a vacuum by a set of electric "tweezers" called an ion trap (itself produced by the metal electrodes that surround the atom, which will be a couple of centimetres across), and which is emitting light via fluorescence (i.e. it is being excited by a laser and re-emitting that light). The size of the atom as it appears in the picture has nothing to do with its actual size: as far as the camera is concerned, the atom is a point source, and the nonzero spread in the image is caused by the finite resolution of the camera. Thus, assuming that the trapped atom is bright enough, it could in principle be seen with the naked eye, in which case it would look much like a star on a clear, still night (which are also point sources as far as our eyes are concerned, though their appearance then gets changed by twinkling).
Whether the experimental configurations in actual use are enough to produce atoms that are bright enough to see with the naked eye is a good question; my understanding is that this isn't quite possible, but that with a completely dark background it isn't that far out of reach. That does mean that a human wouldn't be able to see both the atom itself and the trap electrodes simultaneously, since you require a completely dark background to begin to have a chance at seeing the atom. As for the camera, the author has clarified in a comment that it's a single thirty-second exposure, with the electrodes illuminated by a camera flash halfway through the exposure. Finally, to address your expanded question, "If that single atom is being held there by a field, why are the atoms of that very field not visible?", the answer is that the field that is holding it up is not made of atoms at all. The atom in the picture is being held in place by electrostatic forces, which are the same forces that you use to pull up bits of paper with a balloon that you've rubbed against your hair. Electrostatic forces, like magnetic forces and gravity, are said to form a field, but it's a force field that's all force and no atoms. The effect here is analogous to magnetic levitation, except that you use electric fields (carefully engineered ones, produced by the metal electrodes that surround the atom in the picture) instead of magnets. | {
"source": [
"https://physics.stackexchange.com/questions/386346",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/87457/"
]
} |
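The claim that the atom's apparent size is set entirely by the camera's resolution can be sketched with the Rayleigh criterion. The wavelength (422 nm, typical of the fluorescence of a trapped strontium ion), lens aperture, and working distance below are assumptions for illustration, not values from the article:

```python
wavelength = 422e-9   # m, assumed fluorescence wavelength
aperture = 0.05       # m, assumed lens diameter
distance = 0.30       # m, assumed lens-to-atom distance
atom_radius = 1e-10   # m, order of magnitude for an atomic radius

# Rayleigh criterion: smallest resolvable feature at the object plane
spot = 1.22 * wavelength * distance / aperture

print(f"diffraction-limited spot: {spot * 1e6:.1f} micrometres")
print(f"that is ~{spot / atom_radius:,.0f} times the atom's actual size")
```

So the bright dot in the photograph is tens of thousands of times larger than the atom itself: it is an image of the optics' point-spread function, lit by the atom's fluorescence.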
386,370 | I know that the escape velocity of the Moon is 2.38 km/s. That is the velocity required to put any object of any size or mass out of the gravitational effect of the Moon (correct me if anything I am saying is wrong). My question is: what acceleration would be required to put any object in orbit around the Moon, not out of the Moon's gravitational influence? Also: what would be the required height and momentum of an object to orbit around the Moon for a long period of time? | | {
"source": [
"https://physics.stackexchange.com/questions/386370",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/154678/"
]
} |
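The scale mismatch described in the answer above is easy to put a number on. The figures here (roughly 500 nm for visible light, roughly 0.2 nm for a typical interatomic spacing in a solid) are assumed typical values, not taken from the original:

```python
# How many times larger is a visible wavelength than an atomic spacing?
wavelength_nm = 500.0   # green light (assumed typical value)
spacing_nm = 0.2        # typical atom-atom distance in a solid (assumed)

ratio = wavelength_nm / spacing_nm
print(f"{ratio:.0f}")   # 2500 -- "several thousand times larger"
```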
386,971 | I know that the full Moon appears when Sun, Moon and Earth are in a straight line, but if we consider that they are in a straight line, why is the Moon illuminated? I mean to say that Earth should block all the rays of the Sun and shouldn't allow any light ray to reach the Moon. In this case the moon should not get illuminated as no light has reached it which it can reflect back. Then why do we see a fully illuminated hemisphere of the Moon? | One of the reasons people often have bad intuitions like yours about the relationship between the Earth and the Moon is because they've never seen an accurate picture. The distance from the Earth to the Moon is often pictured something like this: The relative sizes of the Earth and the Moon are accurate but the distance is not. Given this picture it looks like the Moon ought to be almost always in the shadow of the Earth. A picture that accurately shows the relative sizes and distances is more like this: And now it should be pretty clear that it would be really hard to get the Moon exactly in the shadow of the Earth from that far away. And if that's not clear, try it. Get a light bulb, a big grapefruit, a small orange, and a dark room and see if you can get the orange in the shadow of the grapefruit from twenty grapefruit-diameters away. A fact that is missing from this diagram is: where exactly is the shadow of the Earth, and how large is it compared to the size and position of the moon? I've edited the diagram above to give a rough idea of it. The white lines on the left of the Earth, when extended, go to the "north" and "south" poles of the Sun, 150 million km away. The Sun is about 1.4 million km in diameter. The white lines that continue on the right of the Earth indicate where the shadow of the Earth is; inside this region you can see neither the top nor the bottom of the sun. That region is about 1.5 million km long, or about four times the distance from the Earth to the Moon. 
Imagine those lines meet three or four screen widths to the right of your screen. The Moon's orbit takes it both "north" and "south" of that shadow region; I've marked the approximate maximum positions of the Moon on the diagram. So you can see, there's a pretty small region that the Moon has to hit in order to be in shadow on a full Moon. Most of the time the full Moon will be too far north or south of the shadowed region. | {
"source": [
"https://physics.stackexchange.com/questions/386971",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/182781/"
]
} |
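The length of Earth's shadow cone can be checked with similar triangles. A quick sketch, using standard values (not stated in the answer) for the radii and distances:

```python
# Similar triangles: the umbra ends where Earth's disc exactly covers the Sun's,
# so umbra_length / R_earth = d_sun / (R_sun - R_earth).
R_sun, R_earth = 6.96e5, 6371.0     # km (assumed standard values)
d_sun, d_moon = 1.496e8, 3.84e5     # km, Sun-Earth and Earth-Moon distances

umbra_km = d_sun * R_earth / (R_sun - R_earth)
print(umbra_km)            # ~1.4e6 km
print(umbra_km / d_moon)   # ~3.6, i.e. roughly four times the Moon's distance
```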
386,990 | I know that the balancing toys have their center of mass under the axis on which they are balancing. That's why they stay still. But when we give a little tap on it, it re-balances itself. But how does it happen? Does the center of mass act like a pendulum? | One of the reasons people often have bad intuitions like yours about the relationship between the Earth and the Moon is because they've never seen an accurate picture. The distance from the Earth to the Moon is often pictured something like this: The relative sizes of the Earth and the Moon are accurate but the distance is not. Given this picture it looks like the Moon ought to be almost always in the shadow of the Earth. A picture that accurately shows the relative sizes and distances is more like this: And now it should be pretty clear that it would be really hard to get the Moon exactly in the shadow of the Earth from that far away. And if that's not clear, try it. Get a light bulb, a big grapefruit, a small orange, and a dark room and see if you can get the orange in the shadow of the grapefruit from twenty grapefruit-diameters away. A fact that is missing from this diagram is: where exactly is the shadow of the Earth, and how large is it compared to the size and position of the moon? I've edited the diagram above to give a rough idea of it. The white lines on the left of the Earth, when extended, go to the "north" and "south" poles of the Sun, 150 million km away. The Sun is about 1.4 million km in diameter. The white lines that continue on the right of the Earth indicate where the shadow of the Earth is; inside this region you can see neither the top nor the bottom of the sun. That region is about 1.5 million km long, or about four times the distance from the Earth to the Moon. Imagine those lines meet three or four screen widths to the right of your screen. 
The Moon's orbit takes it both "north" and "south" of that shadow region; I've marked the approximate maximum positions of the Moon on the diagram. So you can see, there's a pretty small region that the Moon has to hit in order to be in shadow on a full Moon. Most of the time the full Moon will be too far north or south of the shadowed region. | {
"source": [
"https://physics.stackexchange.com/questions/386990",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/184599/"
]
} |
387,663 | The Hamiltonian of the Earth in the gravity field of the Sun is the same as that of the electron in the hydrogen atom (besides some constants), so why are the energy levels of the Earth not quantized? (of course the question is valid for every mass in a gravity field). | The orbital energy of the Earth around the Sun is quantized. Measuring this quantization directly is infeasible, as I'll show below, but other experiments with bouncing neutrons ( Nature paper ) show that motion in a classical gravity field is subject to energy quantization. We can estimate the quantized energy levels of the Earth's orbit by analogy with the hydrogen atom since both are inverse square forces--just with different constants. For hydrogen :
$$E_n = -\frac{m_e}{2}\left(\frac{e^2}{4\pi\epsilon_0}\right)^2\frac{1}{n^2\hbar^2}$$
Replacing $m_e$ with the mass of Earth ($m$) and the parenthesized expression with the corresponding expression from the gravitational force ($GMm$, where $M$ is the mass of the sun and $G$ is the gravitational constant) gives
$$E_n = -\frac{m}{2}\left(GMm\right)^2\frac{1}{n^2\hbar^2}$$
Setting this equal to the total orbital energy $$E_n = -\frac{m}{2}\left(GMm\right)^2\frac{1}{n^2\hbar^2} = -\frac{GMm}{2r}$$
Solving for $n$ and plugging in values gives:
$$n = \frac{m}{\hbar}\sqrt{GMr} = 2.5\cdot 10^{74}$$
The fact that Earth's energy level is at such a large quantum number means that any energy transition (whose energy is proportional to $1/n^3$) will be undetectably small. In fact, to transition to the next energy level, Earth would have to absorb:
$$\Delta E_{n \to n+1} = m\left(GMm\right)^2\frac{1}{n^3\hbar^2} = 2\cdot 10^{-41}\ \textrm{J} = 1\cdot 10^{-22}\ \textrm{eV}$$
For a sense of how little this energy is, a photon of this energy has a wavelength of $10^{16}$ meters--or, one light-year. Solving for $r$:
$$r = n^2\left(\frac{\hbar}{m}\right)^2\frac{1}{GM}$$
An increase in the principal quantum number ($n$) by one results in a change in orbital distance of
\begin{align}
\Delta r &= \left[(n+1)^2 - n^2\right]\left(\frac{\hbar}{m}\right)^2\frac{1}{GM} \\
&= \left[2n + 1\right]\left(\frac{\hbar}{m}\right)^2\frac{1}{GM} \\
&= 1.2\cdot 10^{-63}\ \textrm{meters}
\end{align}
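The three estimates above can be reproduced in a few lines. This is a sketch using standard values for the constants (assumed, not stated in the answer):

```python
import math

G    = 6.674e-11    # m^3 kg^-1 s^-2, gravitational constant
M    = 1.989e30     # kg, mass of the Sun
m    = 5.972e24     # kg, mass of the Earth
r    = 1.496e11     # m, Earth's orbital radius
hbar = 1.055e-34    # J s, reduced Planck constant

n  = (m / hbar) * math.sqrt(G * M * r)             # principal quantum number
dE = m * (G * M * m) ** 2 / (n ** 3 * hbar ** 2)   # E_{n+1} - E_n, joules
dr = (2 * n + 1) * (hbar / m) ** 2 / (G * M)       # change in orbital radius, m

print(f"{n:.1e}")    # ~2.5e74
print(f"{dE:.0e}")   # ~2e-41 J
print(f"{dr:.0e}")   # ~1e-63 m
```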
Again, way too small to measure. | {
"source": [
"https://physics.stackexchange.com/questions/387663",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/139691/"
]
} |
387,734 | Recently my physics teacher during a rant said something that piqued my interest. Heres what he said "There are more problems visiting other stars if you wanted a rocket to go 99% the speed of light you'd need an extreme amount of energy and the same amount of energy to also stop that rocket once you were at your destination and wanted to land safely." But, this seems odd to me because in this rant he was talking about the earth to Alpha Centauri. So he's saying the amount of energy to both starts it going .99 c and stop it is the same. But, a vacuum is frictionless and the rocket doesn't have to fight the 1g here on earth so how would they have the same energy total? One thought I had is that using the gravity of earth as a slingshot might make up for having to fight earth gravity and air resistance. My specific question is is he right? And, if he is why? (Math up to pre-calc is fine.) | Consider the energy $E_1$ required to remove $1$kg from Earth's gravity. This is given by: $$ E = \frac{GM}{r} $$ where $r$ is the radius of the Earth, and this works out to be about: $$E_1 = 6.3 \times 10^7 \,\text{J} $$ Now consider the energy $E_2$ required to accelerate that $1$kg to $0.99c$. The total energy is give by the relativistic equation for the energy: $$ E^2 = p^2c^2 + m^2 c^4 $$ If we calculate this for $1$kg at $0.99c$ then subtract off the rest energy $mc^2$ we get about: $$ E_2 = 5.4 \times 10^{17} \,\text{J} $$ So the energy needed to get away from Earth's gravity, $E_1$, is roughly $0.000000012\%$ of the energy, $E_2$, needed to reach the final speed. That's why the difference it makes to the acceleration and deceleration energies is negligible. A similar argument applies to air resistance. To get to $0.99c$ at a survivable acceleration, i.e. of order $g$, the vast majority of the acceleration would be done after you had left the atmosphere. So the effect of the air resistance would also be negligible. | {
"source": [
"https://physics.stackexchange.com/questions/387734",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/182869/"
]
} |
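The comparison in the answer above works out per kilogram of payload. A back-of-envelope check, with standard values for Earth's mass, Earth's radius, and the speed of light assumed:

```python
import math

G, M_e, R_e = 6.674e-11, 5.972e24, 6.371e6   # SI units; Earth mass and radius
c = 2.998e8                                   # m/s

E1 = G * M_e / R_e                        # escape energy per kg, ~6.3e7 J
gamma = 1.0 / math.sqrt(1.0 - 0.99 ** 2)  # Lorentz factor at 0.99c, ~7.09
E2 = (gamma - 1.0) * c ** 2               # relativistic kinetic energy per kg, ~5.4e17 J

print(E1 / E2)   # ~1e-10: escaping Earth's gravity is a negligible extra cost
```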
387,741 | If you had a super powerful gamma-ray creating gun what would it be like? Would it be similar to on sci-fi shows where they have laser guns that can pierce a hole through an enemy more effective than a bullet? Or would it be relatively lame and have no immediate effects? Or how about the gamma rays convert their energy into thermal energy and you explode due to the water in your body. Would any of these happen or am I just having some super cool childish fantasies and if it is effective how hard would it be to make? | Consider the energy $E_1$ required to remove $1$kg from Earth's gravity. This is given by: $$ E = \frac{GM}{r} $$ where $r$ is the radius of the Earth, and this works out to be about: $$E_1 = 6.3 \times 10^7 \,\text{J} $$ Now consider the energy $E_2$ required to accelerate that $1$kg to $0.99c$. The total energy is give by the relativistic equation for the energy: $$ E^2 = p^2c^2 + m^2 c^4 $$ If we calculate this for $1$kg at $0.99c$ then subtract off the rest energy $mc^2$ we get about: $$ E_2 = 5.4 \times 10^{17} \,\text{J} $$ So the energy needed to get away from Earth's gravity, $E_1$, is roughly $0.000000012\%$ of the energy, $E_2$, needed to reach the final speed. That's why the difference it makes to the acceleration and deceleration energies is negligible. A similar argument applies to air resistance. To get to $0.99c$ at a survivable acceleration, i.e. of order $g$, the vast majority of the acceleration would be done after you had left the atmosphere. So the effect of the air resistance would also be negligible. | {
"source": [
"https://physics.stackexchange.com/questions/387741",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/182869/"
]
} |
388,164 | In atomic bombs, nuclear reactions provide the energy of the explosion. In every reaction, a thermal neutron reaches a plutonium or a uranium nucleus, a fission reaction takes place, and two or three neutrons and $\gamma$ radiation are produced. I know that it happens in a very short time, and an extreme amount of energy is released which can be calculated from the mass difference between $m_\mathrm{starting}$ and $m_\mathrm{reaction\ products}$. So my question is: Why exactly does it explode? What causes the shockwave and why is it so powerful? (Here I mean the pure shockwave which is not reflected from a surface yet) I understand the reactions which are taking place in nuclear bombs but I don't understand why exactly it leads to a powerful explosion instead of just a burst of ionising radiation. | I don't understand why exactly it leads to a powerful explosion instead of just a burst of ionising radiation. This radiation, representing most of the initial energy output by a nuclear weapon, is swiftly absorbed by the surrounding matter. The latter in turn heats almost instantly to extremely high temperature, so you have the almost instantaneous creation of a ball of extremely high kinetic energy plasma. This in turn means a prodigious rise in pressure, and it is this pressure that gives rise to the blast wave. The same argument applies to the neutrons and other fission fragments / fusion products immediately produced by the reaction. But it is the initial burst of radiation that overwhelmingly creates the fireball in an atmospheric detonation, and the fireball that expands to produce most of the blast wave. | {
"source": [
"https://physics.stackexchange.com/questions/388164",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/134733/"
]
} |
388,700 | I am just starting to wrap my head around analytical mechanics, so this question might sound weird or trivial to some of you. In class I have been introduced to Noether's theorem, which states that if the Lagrangian function is invariant under a continuous group of transformations then it's possible to find a conservation law. But a Lagrangian system with $n$ degrees of freedom obeys the Euler-Lagrange equations, which are:
$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q_i}}-\frac{\partial L}{\partial q_i} = 0$$ for $i = 1, ..., n$. This equation represents a system of $n$ second order differential equations that already has $2n$ arbitrary constants in its general solution. These constants are obviously preserved in time, so they actually represent conservation laws. So my question is, what is the utility of a theorem that tells you under what conditions it is possible to find a conservation law, if we already know from the Euler-Lagrange equations that a Lagrangian system has $2n$ conservation laws? | We usually call equations like $$\frac{d}{dt} \frac{\partial L}{\partial \dot{q_i}} - \frac{\partial L}{\partial q_i} = 0$$ "equations of motion," because they are equations that tell us how the variables of our system (here $q_i$ ) evolve in time. Indeed, in general, the solution to $n$ second order differential equations involves $2n$ integration constants (or initial conditions) in the solution. However, most people would not call these integration constants "conservation laws." In general usage, a "conserved quantity" $Q$ is a function of the configuration variables (here $q_i$ and $\dot q_i$ ) that does not change in time when the configuration variables evolve according to the equations of motion: $$\frac{d}{dt} Q(q_i, \dot q_i) = 0.$$ Note that $Q(q_i, \dot q_i)$ does not depend on $t$ explicitly; it only depends on $t$ insofar as $q_i$ and $\dot q_i$ do. However, an initial condition depends on $q_i$ , $\dot q_i$ , and $t$ . You need to know $t$ in order to know "how far to turn back the clock" to find the initial position and velocity. A slick "proof" of Noether's theorem goes as follows. Say you have some differentiable group of transformations that leave your Lagrangian invariant. Imagine changing a path in configuration space by an infinitesimal group action, using a tiny number $\varepsilon$. 
For example, an infinitesimal translation in the $x$ -direction in 3D space ( $i = 1, 2, 3$ ) would be given by $$q_1 \to q_1 + \varepsilon$$ $$q_2 \to q_2$$ $$q_3 \to q_3$$ $$\dot q_i \to \dot q_i$$ and an infinitesimal rotation in the $xy$ -plane would be given by $$q_1 \to q_1 + \varepsilon q_2$$ $$q_2 \to q_2 - \varepsilon q_1$$ $$\dot q_1 \to \dot q_1 + \varepsilon \dot q_2$$ $$\dot q_2 \to \dot q_2 - \varepsilon \dot q_1$$ $$q_3 \to q_3$$ $$\dot q_3 \to \dot q_3$$ Under these transformations, the Lagrangian $L(q_i, \dot q_i)$ will not change its value. In other words, the change in the Lagrangian can be expressed as $$\delta L(q_i, \dot q_i) = \varepsilon A(q_i, \dot q_i)$$ where $A = 0$ if the group action is a symmetry. Here is the slick part: now imagine that the parameter $\varepsilon$ is time-dependent, i.e. $\varepsilon(t)$ . For our above two actions, the transformations would then become $$q_1 \to q_1 + \varepsilon$$ $$\dot q_1 \to \dot q_1 + \dot \varepsilon$$ $$q_{2} \to q_{2}$$ $$q_{3} \to q_{3}$$ $$\dot q_{2} \to \dot q_{2}$$ $$\dot q_{3} \to \dot q_{3}$$ and $$q_1 \to q_1 + \varepsilon q_2$$ $$q_2 \to q_2 - \varepsilon q_1$$ $$\dot q_1 \to \dot q_1 + \varepsilon \dot q_2 + \dot \varepsilon q_2$$ $$\dot q_2 \to \dot q_2 - \varepsilon \dot q_1 - \dot \varepsilon q_1$$ $$q_3 \to q_3$$ $$\dot q_3 \to \dot q_3$$ (where the extra term above comes from the product rule when differentiating by $t$ ). Now, $\varepsilon(t)$ and $\dot \varepsilon(t)$ are both tiny numbers that change paths in configuration space. That means that, just doing a first order Taylor expansion, the change in $L$ under these transformations can be expressed as $$\delta L = \varepsilon A + \dot \varepsilon B$$ where the $A$ is the same $A$ as before, meaning $A = 0$ if the transformation is a symmetry. Now, on actual paths, $\delta S = 0$ for any tiny variation we make to our path. (That is just the principle of least action.) That includes our tiny group action variation. 
Therefore, on actual paths, $$0 = \delta S = \int \delta L dt = \int \dot \varepsilon B dt = - \int \varepsilon \dot B dt.$$ (In the last step we integrated by parts and imposed boundary conditions $\varepsilon = 0$ on the boundary of integration.) Therefore, if $\delta S$ is to be $0$ for any $\varepsilon$ , we must have $$\dot B = 0$$ so $B$ is a conserved quantity. Note that if our transformation wasn't a symmetry, then $A \neq 0$ and $$\dot B = A$$ meaning that $B$ would change in time and not be a conserved quantity. This concludes the proof that symmetries give conservation laws, and also instructs you how to find said conserved quantities. Now this is all nice and interesting. Symmetries imply conservation laws. In a sense, we have understood where "conserved quantities" come from (symmetries). Conserved quantities are very useful in physics because they usually make analyzing the system much easier. For example, even in intro physics, the conservation of momentum and energy are always used to make solving for the motion of a particle much easier. In more complicated examples, like for example a gas of many particles, the evolution of the system is far too complicated to ever hope to describe. However, if you know a few conserved quantities (like energy, for example) you can still get a pretty good idea of how the system behaves. In quantum field theory, quantum fields are also governed by Lagrangians. However, it is often difficult to figure out exactly what the Lagrangian of quantum fields should be based off of experimental data. Something that is straightforward to ascertain from experimental data, however, are conserved quantities , like charge, lepton number, baryon number, weak hyper change, and many others. Experimentalists can figure out what these conserved quantities are, and then theorists will cook up Lagrangians with symmetries that have the right conserved quantities. This greatly aids theorists in figuring out the fundamental laws of physics. 
Considerations of symmetries and conserved quantities historically played a large role in piecing together the standard model, and continue to play a crucial role in theorists trying to figure out what lies beyond it. EDIT: So, to answer your question proper, any system of differential equations will have integration constants (A.K.A. initial conditions). However, from equations of motion derived from a Lagrangian (and all known physical laws can be written with Lagrangians) we have extra symmetries that have important physical meaning. Furthermore, the exact solutions to differential equations are usually impossible to solve for any moderately complex system. Therefore, finding initial conditions is usually a waste of time, while Noether's theorem is easy to use. | {
"source": [
"https://physics.stackexchange.com/questions/388700",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/148676/"
]
} |
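The rotation example in the answer above can be illustrated numerically. For $L = \frac{m}{2}(\dot q_1^2 + \dot q_2^2) + k/\sqrt{q_1^2 + q_2^2}$, rotations in the $q_1 q_2$-plane are a symmetry, so the Noether charge $B = m(q_1 \dot q_2 - q_2 \dot q_1)$ (the angular momentum) should stay constant along solutions of the equations of motion. A minimal sketch (the potential, units, and initial conditions are arbitrary choices, not from the original):

```python
m, k = 1.0, 1.0                 # arbitrary units
x, y = 1.0, 0.0                 # q_1, q_2
vx, vy = 0.0, 0.8               # arbitrary bound-orbit initial velocities
dt, steps = 1e-3, 100_000       # ~25 orbital periods for these values

def accel(x, y):
    # Equations of motion for L = m/2 v^2 + k/r: a central attractive force.
    r3 = (x * x + y * y) ** 1.5
    return -k * x / (m * r3), -k * y / (m * r3)

B0 = m * (x * vy - y * vx)      # Noether charge at t = 0

ax, ay = accel(x, y)
for _ in range(steps):          # kick-drift-kick (leapfrog) integration
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    x += dt * vx; y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay

B1 = m * (x * vy - y * vx)      # Noether charge after many orbits
print(abs(B1 - B0))             # stays at the floating-point roundoff level
```

Repeating the run with a potential that breaks rotation symmetry (e.g. one depending on $q_1$ alone) makes $B$ drift, matching the $\dot B = A \neq 0$ case in the answer.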
388,802 | This question is based on a discussion with a 10-year old. So if it is not clear how to interpret certain details, imagine how a 10-year old would interpret them. This 10-year old does not know about relativistic issues, so assume that we are living in a Newtonian universe. In this model, our universe is homogenous and isotropic, with properties such as we see around us. Specifically, the density and size distribution of stars is what the current models say they are. This universe has the same size as our observable universe, around 45 billion light years. If we froze time, and took a plane through this universe, would this plane go through a star? I cannot figure out if the chance of this happening is close to zero or close to one. I know that distances between stars are very big, so the plane is much more likely to be outside a star than inside a star, so my intuition wants to say that the chance is very small. But on the other hand, this plane will be very big... So based on that, my intuition says that the chance is close to one. I expect the chance to be one of these extremes, I would be very surprised if the chance were close to 50%... Clearly, my intuition fails here. And I don't know how to approach this problem better (generating entire universes of stars and calculating if a plane intersects one of the stars takes too much time...). Rough estimates are perfectly acceptable, I only want to know if the chance is close to zero or close to one! Edit: Reading the comments/answers, I noticed that my reference to the 10-year old did not have the intended effect. Some of the answers/comments focussed on how an answer to the title question could be explained to a 10-year old. That was not my question, and I was a bit surprised to see several people interpreting it that way. My question is the one summarized in the title. 
And some of the comments were about the definition of observable universe, and that it necessarily would slice through earth because earth is in the center of our observable universe. I added the reference of the 10-year old to avoid such loopholes... Rob Jeffries' and Accumulation's interpretation of the question was exactly what I meant, so their answers satisfied me. | There are about $10^{23}$ stars in the observable universe . Thanks to the expansion of the universe, those stars are currently spread over a sphere that is about $d=2.8\times 10^{10}$ parsecs across. Of course some stars will have died whilst their light has been travelling towards us, but others will have been born, so I am going to ignore that complication. If we imagine the stars uniformly spread through this volume$^{*}$, they have a number density of $n=3 \times 10^{-58}$ m$^{-3}$ (or $\sim 10^{-8}$ pc$^{-3}$). If we then define an average radius for a star $R$ we can ask how many stars lie within $R$ of a plane that goes through the Earth. The volume occupied by this slice is $2\pi d^2 R/4$ and the number of stars within that volume is
$$N = \pi d^2 R n/2.$$ If $R \sim 1 R_{\odot}$ (many stars are much bigger, most stars are a bit smaller), then $N \sim 2\times 10^5$. So my surprising conclusion (to me anyway) is that many stars would be "sliced" by a plane going through the entire observable universe. $*$ NB: Stars are not distributed uniformly - they are concentrated in galaxies and those galaxies are organised into groups, clusters and filamentary superstructures. However, on the largest scales the universe is rather homogeneous (see the cosmic microwave background) and so to first order the smaller-scale non-uniformity will not affect an estimate of the average total number of "sliced" stars across the observable universe, but may mean there is a larger variance in the answer than simple Poissonian statistics would suggest. Could the clustering of stars affect the conclusion? It could if the clustering is strong enough that the median number of stars within $R$ of the plane becomes $<1$, but with the mean number unchanged. As an example consider an extreme bimodal model where all stars are found in galaxies of $N_*$ stars, where the average density is $n_*$. The "structure" of the universe could then be characterised by uniformly distributed galactic "cubes" of side $L = (N_*/n_*)^{1/3}$ and of voids with side $(n_*/n)^{1/3} L = (N_g/n)^{1/3}$. The number density of galaxies is the number of galaxies divided by the volume of the observable universe $n_g = (10^{23}/N_*)/(\pi d^3/6)$ The number of galaxies intersected by the plane will be
$$ N_g \sim \left(\frac{6\times 10^{23}}{\pi d^3 N_*}\right)\left(\frac{\pi d^2}{4}\right) L = 1.5 \times 10^{23} \left(\frac{L}{N_* d}\right)$$
and in each of those galaxies there will be $\sim L^2 R n_* = R N_*/L$ intersections with a star. If we let $n_*= 0.1$ pc$^{-3}$ (the local stellar density in our Galaxy) and $N_* =10^{11}$ (the size of our Galaxy), then $L= 10^4$ pc, $N_g = 5\times 10^{5}$ and the number of stellar intersections per galaxy will be about 0.25. Thus the average number of intersections will be about the same (by design) but the variance won't be much different either. I think the only way density contrasts could give an appreciable chance of no intersection is if $N_g<1$, and thus $L/N_* < 2 \times 10^{-13}$ - i.e. if galaxies/structures contain lots more stars and are very dense so that there is a good chance that the plane will not intersect a single "galaxy". For example if $N_* = 10^{21}$ and $n_* = 10^3$ pc$^{-3}$, then $L= 10^6$ pc and $N_g \sim 0.05$. In this circumstance (which looks nothing like our universe) there is a high chance that the plane would not intersect one of the 100 big "galaxies", but if it did there would be about $10^7$ stellar intersections. | {
"source": [
"https://physics.stackexchange.com/questions/388802",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
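The headline estimate in the answer above is easy to reproduce. A sketch using the answer's own figures ($10^{23}$ stars, a diameter of $2.8\times 10^{10}$ pc, and one solar radius for $R$):

```python
import math

pc = 3.086e16                       # metres per parsec
d  = 2.8e10 * pc                    # diameter of the observable universe, m
R  = 6.957e8                        # solar radius, m
n  = 1e23 / (math.pi * d**3 / 6)    # mean stellar number density, m^-3

N = math.pi * d**2 * R * n / 2      # stars within R of a plane through it all
print(f"{n:.0e}")   # ~3e-58 per cubic metre
print(f"{N:.0e}")   # ~2e5 "sliced" stars
```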
388,808 | I was solving a problem with this diagram: All the surfaces are friction-less. When I saw the solution it was considering the normal forces between $M$ and $m2$, but they were heading towards each other. Why is this happening? I think it is because of the pulley, making pendulum when $M$ moves infinitesimal amount. Do we call this a normal force? I've never seen this before but I've seen many examples with opposite direction for example, the ground reaction force. | There are about $10^{23}$ stars in the observable universe . Thanks to the expansion of the universe, those stars are currently spread over a sphere that is about $d=2.8\times 10^{10}$ parsecs across. Of course some stars will have died whilst their light has been travelling towards us, but others will have been born, so I am going to ignore that complication. If we imagine the stars uniformly spread through this volume$^{*}$, they have a number density of $n=3 \times 10^{-58}$ m$^{-3}$ (or $\sim 10^{-8}$ pc$^{-3}$). If we then define an average radius for a star $R$ we can ask how many stars lie within $R$ of a plane that goes through the Earth. The volume occupied by this slice is $2\pi d^2 R/4$ and the number of stars within that volume is
$$N = \pi d^2 R n/2.$$ If $R \sim 1 R_{\odot}$ (many stars are much bigger, most stars are a bit smaller), then $N \sim 2\times 10^5$. So my surprising conclusion (to me anyway) is that many stars would be "sliced" by a plane going through the entire observable universe. $*$ NB: Stars are not distributed uniformly - they are concentrated in galaxies and those galaxies are organised into groups, clusters and filamentary superstructures. However, on the largest scales the universe is rather homogeneous (see the cosmic microwave background) and so to first order the smaller-scale non-uniformity will not affect an estimate of the average total number of "sliced" stars across the observable universe, but may mean there is a larger variance in the answer than simple Poissonian statistics would suggest. Could the clustering of stars affect the conclusion? It could if the clustering is strong enough that the median number of stars within $R$ of the plane becomes $<1$, but with the mean number unchanged. As an example consider an extreme bimodal model where all stars are found in galaxies of $N_*$ stars, where the average density is $n_*$. The "structure" of the universe could then be characterised by uniformly distributed galactic "cubes" of side $L = (N_*/n_*)^{1/3}$ and of voids with side $(n_*/n)^{1/3} L = (N_g/n)^{1/3}$. The number density of galaxies is the number of galaxies divided by the volume of the observable universe $n_g = (10^{23}/N_*)/(\pi d^3/6)$ The number of galaxies intersected by the plane will be
$$ N_g \sim \left(\frac{6\times 10^{23}}{\pi d^3 N_*}\right)\left(\frac{\pi d^2}{4}\right) L = 1.5 \times 10^{23} \left(\frac{L}{N_* d}\right)$$
and in each of those galaxies there will be $\sim L^2 R n_* = R N_*/L$ intersections with a star. If we let $n_*= 0.1$ pc$^{-3}$ (the local stellar density in our Galaxy) and $N_* =10^{11}$ (the size of our Galaxy), then $L= 10^4$ pc, $N_g = 5\times 10^{5}$ and the number of stellar intersections per galaxy will be about 0.25. Thus the average number of intersections will be about the same (by design) but the variance won't be much different either. I think the only way density contrasts could give an appreciable chance of no intersection is if $N_g<1$, and thus $L/N_* < 2 \times 10^{-13}$ - i.e. if galaxies/structures contain lots more stars and are very dense so that there is a good chance that the plane will not intersect a single "galaxy". For example if $N_* = 10^{21}$ and $n_* = 10^3$ pc$^{-3}$, then $L= 10^6$ pc and $N_g \sim 0.05$. In this circumstance (which looks nothing like our universe) there is a high chance that the plane would not intersect one of the 100 big "galaxies", but if it did there would be about $10^7$ stellar intersections. | {
"source": [
"https://physics.stackexchange.com/questions/388808",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/186184/"
]
} |
389,168 | Like many people, I have tried burning stuff with magnifying glass. Where I live, the power of sun is some 600 watts per square meter at most. If my magnifying glass is 10cm in diameter I have only 4,7 watts on my focus point. It can light paper instantly. This makes me wonder why do we have laser cutters. A few hundred watts of ordinary light could do the same. It would seem much easier to have just ordinary light source. No expensive CO2 or fiber lasers. Just a simple high power led for example. I guess that the reason is that laser light is coherent. Another thing that comes to my mind is that sunligt is practically collimated as the sun is so far. Maybe achieving same with artificial light is not so easy. What is the real reason? | This has nothing to do with coherence or the fundamental physics distinction between "normal" and laser light. It's simply a question of finding a source that is intense enough to induce a fine cutting edge. That is, the source must be both powerful and one must be able to concentrate it into a very small spot. Cutting happens when there is highly intense local heating in a very small area of the sample. The mechanism of stimulated emission allows the generation of huge amounts of light all in exactly the same momentum state. What this means is that the output is high power and very nearly a plane wave, with a low aberration wavefront. Such a wave can be focussed to near to a diffraction limited spot. Thus stimulated emission enables both the fundamental requirements of power and low wavefront aberration, equating in this case to high ability for concentration. At $10{\rm \mu m}$ wavelength, that of a ${\rm CO}_2$ industrial machining laser, that implies a spot size of about $20{\rm \mu m}$ focused through a $0.3NA$ optical system. With thousands of watts continuously output, this equates to an intensity of terawatts per square meter at the "cutting edge". 
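Rough numbers for this comparison can be checked directly. Assumed values, not all stated in the source: a 2 kW beam into the 20 μm spot for the laser, versus 600 W of focused sunlight into the roughly 1 cm image discussed in the solar-lens example later in this answer:

```python
import math

I_laser = 2000.0 / (math.pi * (10e-6) ** 2)   # W/m^2, 20 um diameter spot
I_sun   = 600.0  / (math.pi * (5e-3) ** 2)    # W/m^2, 1 cm diameter Sun image

print(f"{I_laser:.0e}")          # ~6e12 W/m^2: "terawatts per square meter"
print(f"{I_laser / I_sun:.0e}")  # ~8e5: five or six orders of magnitude apart
```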
In contrast, the Sun is not a collimated source - it is an extended one. The best you can do is focus it down to a tiny image of the Sun. Let's do our calculation for a 0.3NA lens one meter across. The distance to the sample is then about 3 meters. The image of the Sun is then $\frac{3}{1.5\times10^{11}} \times 5\times 10^8$ meters across, or about one centimeter across. Through our one meter lens, we get about $600{\rm W}$. So we get about the same power as in our laser example (somewhat less) through an area that is $\left(\frac{0.01}{2\times 10^{-5}}\right)^2 =2.5\times 10^5$ times as large. Our intensity is thus five or six orders of magnitude less than in the laser example. There is limited ability to improve this situation with a bigger lens; as the lens gets wider, you need to set it back further from the target, with the result that the area of the Sun image grows at the same rate as the area of the lens, and thus the input power. The intensity stays roughly the same. LEDs The OP also asks about LEDs. Although modern LEDs can output amazing powers, they, like the Sun, are also an extended source, comprising a significant area of highly divergent point sources, so the light output has a high étendue and cannot be concentrated into a tight spot. The highest-power LEDs necessarily have a large-area semiconductor chip whence the emission comes. In a laser cavity, it is also true that the first seed emissions are highly divergent, and the first pass through the gain medium produces an amplified spherical wavefront. However, the design of the resonant cavity means that only a small, on-axis section of that spherical wave bounces back into the cavity; most of it is lost. On the second pass, we have an amplified, lower curvature wavefront; most of this is lost at the other end of the cavity too.
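The solar-lens numbers above can be reproduced the same way; everything here just restates the figures from the text (1 m lens, 3 m working distance, ~600 W collected), with the laser power again assumed to be 2 kW:

```python
import math

# Size of the Sun's image formed by a 0.3 NA, one-meter lens ~3 m from the target.
image_size = (3.0 / 1.5e11) * 5e8                           # m, ~0.01 m as in the text
sun_intensity = 600.0 / (math.pi * (image_size / 2) ** 2)   # W/m^2

# Laser from the earlier example: ~2 kW (assumed) into a ~20 um diameter spot.
laser_intensity = 2e3 / (math.pi * (10e-6) ** 2)            # W/m^2

ratio = laser_intensity / sun_intensity
print(f"image {image_size*100:.0f} cm, ratio {ratio:.1e}")  # five to six orders of magnitude
```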
During the first few passes, therefore, the process is quite inefficient, but on each bounce the wavefront gets flatter and flatter as only light components directed accurately along the cavity axis can stay in the cavity, and the efficiency of recirculation swiftly increases. Through this mechanism of resonance, therefore, the stimulated emission process is restricted to only the most on-axis components of the light. Thus, the combined mechanisms of resonance and stimulated emission co-ordinate the whole wave so that ultimately it is a plane wave, propagating back and forth in a cavity, spread over a relatively wide cross section so that heat loading from any losses is not damaging to the cavity. This near-zero étendue, low-aberration field is easily focused to a diffraction-limited spot. Solar Furnace User Martin Beckett gives the example of the Odeillo solar furnace: You can however use lots of lenses (or mirrors) This is in keeping with my solar lens example above. A solar furnace is great for furnace applications, such as mass energy production or smelting. But the focused light lacks the intensity needed for cutting. The intensity in this example is about the same as for our one meter lens. The furnace focuses several megawatts through a 40cm diameter focus, and a few megawatts through a 40cm focus is about the same intensity as one kilowatt through a 1cm wide focus, which is what we had for our solar lens example. | {
"source": [
"https://physics.stackexchange.com/questions/389168",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/77733/"
]
} |
389,212 | Household bulbs get alternating current, which means that the voltage of source and current in circuit keep changing with time, which implies that the power supply isn't constant. However, we don't see any changes in brightness of the bulb. Why is that ? | Two reasons: An incandescent bulb glows not (directly) because it has electricity going through it, but because it is hot . Even when the power going through the bulb decreases, it takes some time for the filament to cool down. Even once the bulb is turned off, it takes some time (a fraction of a second) for the light to fade. What variation there is in the light is too fast for our eyes to see. You can see the AC flicker in slow motion videos if the camera has a sufficient frame rate, for instance this one . | {
"source": [
"https://physics.stackexchange.com/questions/389212",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
389,431 | What is the minimal velocity to throw an object (material point) to the Sun from Earth, with no specific restrictions? | The limitation to hit the sun is that the object has to have very little angular momentum. The reason for this is that as the distance to the sun gets smaller, the velocity in a direction perpendicular to the sun gets larger, thanks to conservation of angular momentum: $$ L = mv_\perp r\rightarrow v_\perp={L\over mr}$$ A good first-order approximation can be found just by assuming you throw the object so that it has zero angular momentum. To do this, you have to throw the object as fast as the earth is traveling around the sun, just in the opposite direction. So, roughly $30~\rm{km\over s}$. There are two effects that change this a little and one that changes it a lot: Earth's gravity will slow the ball down, so you have to throw it a bit faster at the start. This requires you to throw the ball about $7\%$ faster, as when the object leaves the earth's gravity well it loses $\frac12mv_{\rm escape}^2$ of its kinetic energy. This means the initial kinetic energy must be $\frac12mv^2_{\rm required\ speed\ after\ escape}+\frac12mv^2_{\rm escape}$, so $v_{\rm throw}^2=v_{\rm ignoring\ escape}^2+v_{\rm escape}^2$ The sun has a finite extent, so the ball can have a small angular velocity and still hit the sun. This lets you throw the ball a little slower (but not much: the sun is a small target as far as orbits are concerned) Air resistance is enormous at $30\ \rm{km\over s}$, so you're going to have to throw it a lot faster if you're not ignoring air resistance (So throw it from orbit, not from the ground) | {
"source": [
"https://physics.stackexchange.com/questions/389431",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/186521/"
]
} |
389,442 | I am unable to understand Q value for positron emission . The general reaction is as follows: $$p \to n + e^+ + \nu$$
$$ ^A_ZX \to ^A_{Z-1}Y+ e^+ + \nu \tag{1a}$$ This reaction $(1a)$ was given in my text. The first question is: where did one electron go? We began with $Z$ electrons but on the right side it seems only $Z-1$ electrons are present. Maybe this is the cause of confusion. So I instead took this reaction $(2a)$ for positron emission: $$^A_ZX \to ^A_{Z-1}Y+ e^+ + e^-+ \nu \tag{2a}$$ Now writing the Q value, we find the mass defect
$$[m_n(^A_ZX)-(m_n(^A_{Z-1}Y)+m_{e^+}+m_{e^-} +m_\nu)]$$ here $m_n(^A_{Z}X)$ denotes the mass of the nucleus. Now we can rewrite it in terms of atomic masses $m_a(^A_{Z}X)$ as $$\Delta m=[(m_a(^A_ZX)-Zm_e)-((m_a(^A_{Z-1}Y)-(Z-1)m_e)+m_{e^+} + m_{e^-}+m_\nu)] $$
$$\Delta m= (m_a(^A_ZX)-m_a(^A_{Z-1}Y) - 3m_e -m_\nu) \tag{2b} $$ but as you can see, if we use reaction $(1a)$ then we will get $$\Delta m= (m_a(^A_ZX)-m_a(^A_{Z-1}Y) - 2m_e -m_\nu) \tag{1b} $$ Am I wrong somewhere? I have highlighted key points (mistakes) in italic. | Reaction $(1a)$ is a nuclear reaction, so the orbital electrons never enter it: the parent atom $X$ carries $Z$ electrons, the daughter atom $Y$ only needs $Z-1$, and the one left-over orbital electron is simply shed by the atom - it is not a decay product, which is why it does not appear in $(1a)$. Writing $(2a)$ adds that electron to the nuclear products and so double counts it. Sticking with $(1a)$ and converting nuclear masses to atomic masses gives $$\Delta m = (m_a(^A_ZX)-Zm_e)-(m_a(^A_{Z-1}Y)-(Z-1)m_e)-m_e-m_\nu = m_a(^A_ZX)-m_a(^A_{Z-1}Y)-2m_e-m_\nu,$$ which is exactly your expression $(1b)$. So $(1b)$ is the correct one: with $m_\nu\approx 0$, positron emission is energetically allowed only if $m_a(^A_ZX)-m_a(^A_{Z-1}Y)>2m_e$, and $Q=[m_a(^A_ZX)-m_a(^A_{Z-1}Y)-2m_e]c^2$. | {
"source": [
"https://physics.stackexchange.com/questions/389442",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/184479/"
]
} |
389,590 | It is obvious that the motion of an object is resisted by air resistance both in the horizontal and vertical components. However, what I fail to understand is that the trajectory of an object with air resistance taken into account is not parabolic and instead is steeper on its way down than it is on its way up. Why is this? | A projectile's trajectory is only parabolic in the first place because the force is constant in magnitude and direction. Air resistance is not constant in magnitude or direction, so once you include air resistance trajectories can't be parabolic any more. As for why it's steeper on the way down, a good way to visualize this is to imagine something where air resistance completely dominates: a feather, for instance. If you throw a feather at a high speed, it very quickly loses virtually all of its momentum to air resistance, after which it begins to fall at terminal velocity. As a result, it falls straight down, whatever its initial trajectory was. You can imagine making a projectile smaller and smaller. For a large projectile, it has a parabolic arc. A very small projectile has effectively a linear rise and a fall straight downwards. A projectile like a baseball hit off a bat is somewhere in the middle: the fall is steeper than the rise, but not straight down. | {
"source": [
"https://physics.stackexchange.com/questions/389590",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/182004/"
]
} |
389,947 | My friend's 3-year-old daughter asked "Why are there circles there?" It had either rained the night before or frost had thawed. What explains the circles? Follow-up question: Ideally, are these really circles or some kind of superellipse ? | Both thawing and evaporation involve heat exchange between the stone tile, the water sitting atop the stone tile, any water that's been absorbed by the stone tile, and the air around. The basic reason that the center and the edges of the tile evaporate differently is that the gaps between the tiles change the way that heat is exchanged there. However the details of how that works are a little more involved than I can get into at the moment, and would be lost on a three-year-old anyway. A good way to explain this phenomenon to a three-year-old would be to bake a batch of brownies in a square pan, and watch how the brownies get done from the outside of the pan inwards. Even after you have finished them you can still tell the difference between the super-crispy corner brownies, the medium-crispy edge brownies, and the gooey middle-of-the-pan brownies. The three-year-old would probably ask you to repeat this explanation many times. I think the shapes are not exactly circles, superellipses, or any other simple mathematical object --- there's too much real life in the way --- but they do become more circular as the remaining puddle gets further from the edges. A related explanation . | {
"source": [
"https://physics.stackexchange.com/questions/389947",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/130765/"
]
} |
390,230 | I imagine electrons being accelerated by passing gravitational waves, say from a nearby kilonova, so I would expect the electrons to emit light. Am I right? | A gravitational wave does not exert a force on a point particle. By this I mean if you were that point particle and you were holding an accelerometer then you would measure no acceleration as the wave passed through you. More precisely your proper acceleration remains zero at all times. This may seem a bit odd, but it happens because the gravitational wave changes the separation of objects by changing the geometry of the spacetime around them, not by exerting a force on the objects to move them. The situation you describe is essentially the same as whether an electron falling in a gravitational field radiates. If you watch a freely falling electron then you see it accelerate so from your perspective it should radiate. However the freely falling electron is weightless, like all freely falling objects, and therefore experiences no acceleration. So from the electron's perspective it shouldn't radiate. This is a longstanding paradox and has been addressed several times on the site - the definitive question appears to be: Does a charged particle accelerating in a gravitational field radiate? Having said this, I'm unsure to what extent the paradox has been satisfactorily resolved. I believe the solution is that it does radiate as observed from a stationary frame but does not radiate as observed from a comoving frame. The difference is because observers in different frames disagree about the QED ground state. This is the same argument as used for Unruh radiation. So, arguing by comparison with the freely falling charge I think the gravitational wave does cause electrons to radiate as observed by an observer outside the area affected by the gravitational wave. However an observer comoving with the electron would not measure any radiation. | {
"source": [
"https://physics.stackexchange.com/questions/390230",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/75502/"
]
} |
390,738 | My question may be pretty basic, but I feel it is important to ask this as I've gone through several texts and none offer me the clarity I seek. The question is: What is a fluid? What is flow? If we say that a fluid is something that flows, the next right question to ask would be what flow is. To my surprise and disappointment, there is no clear distinction between various definitions, which I present in the form of questions - Is a fluid simply something that can flow ? Is a fluid, an object that can be continuously deformed, as a result of shear forces? (fluids can't sustain tangential stress) What is flow? Does it refer to the motion of fluid elements relative to one another, or does it refer to the motion of the fluid as a whole with respect to the container it is contained in? or, is it just the continuous sliding/deformation of fluid layers, which texts refer to as flow? So, what properties really define a fluid? (Something that brings up a clear distinction between fluids and non-fluids) A detailed explanation would be great. Thanks a lot. | There is no standard definition of the word fluid . It is a somewhat imprecise term used in various ways by different people. Indeed, in real life there is no simple example of a fluid. There is a spectrum from superfluids at one end, through non-Newtonian fluids all the way to crystalline solids. I speak as an (ex) industrial colloid scientist who has spent many happy hours studying the flow properties of many vaguely fluid systems. The practical definition widely used by colloid scientists is that a fluid is something that has a measurable viscosity. That is, if subject to a constant shear stress (typically in a rheometer) it has a constant strain rate (note that non-Newtonian fluids may take a long time to equilibrate to a constant strain rate). The problem with this is that if you carry out your measurement for long enough even apparently solid materials like pitch will flow . 
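That operational definition is a one-line calculation: apply a constant shear stress, wait for the strain rate to settle, and take the ratio. The numbers below are made up purely for illustration:

```python
# Newtonian viscosity: eta = shear stress / shear (strain) rate,
# defined only once the strain rate has equilibrated to a constant value.
stress = 10.0        # Pa, constant shear stress applied by the rheometer (illustrative)
strain_rate = 0.01   # 1/s, equilibrium strain rate (illustrative)
eta = stress / strain_rate   # Pa*s
print(eta)           # 1000.0 Pa*s; for comparison, water is about 1e-3 Pa*s
```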
I have heard rheologists claim that on a long enough timescale everything is fluid, though these claims tend to be reserved for the bar rather than in refereed publications. Where you draw the line between a fluid and a solid depends on the application and to an extent personal preference. | {
"source": [
"https://physics.stackexchange.com/questions/390738",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/131341/"
]
} |
392,508 | Reading through papers and online sources about radio galaxies, I kept stumbling across a term--a "decade" of the electromagnetic spectrum. Radio galaxy emission encompasses "11 decades of the EM spectrum". Or this quote from NASA: Astronomers have made observations of electromagnetic radiation from cosmic sources that cover a range of more than 21 decades in wavelength (or, equivalently in frequency or energy)! Source. What exactly does this term correspond to? Note: I used the electromagnetism tag because of the context, but I am not sure if the unit can be used outside of the field. Feel free to edit away! | A decade is a factor of ten. From 10Hz to 100Hz is a decade (on a logarithmic axis this is $10^1$ to $10^2$), so a range of 21 decades spans a factor of $10^{21}$. | {
"source": [
"https://physics.stackexchange.com/questions/392508",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/96592/"
]
} |
392,863 | Consider a simple situation like this- an object is sitting on a table. In classical mechanics, we say that the net force on the object is zero because gravity (treated as a force) and normal reaction force are equal and opposite to each other, and hence, it's acceleration is zero. But according to Einstein's General Theory of Relativity, gravity isn't a force at all, but instead curvature created in spacetime by a massive object, and objects near it tend to move towards it because they are just moving along the geodesic paths in that curved spacetime. So if an object kept on a table gets acted only by the normal reaction force (as gravity ain't a force), how is the net force on it zero? | So if an object kept on a table gets acted only by the normal reaction force (as gravity ain't a force), how is the net force on it zero? I've quoted what I think is the key part of your question, and it's key because the net force is not zero. The object on the table experiences a net force of $mg$ and as a result it is experiencing an upwards acceleration of $g$ . The way you can tell if no force is acting on you is by whether you are weightless or not. If you were floating in space far from any other objects then there would be no forces acting upon you and you'd be weightless. If we fixed a rocket to you and turned it on then you'd no longer be weightless because now the rocket is exerting a force on you. Technically you have a non-zero proper acceleration . In general relativity your acceleration (your four-acceleration ) has two components. We write it as: $$ a^{\mu}= \frac{\mathrm du^\mu}{\mathrm d\tau}+\Gamma^\mu_{\alpha \beta}u^{\alpha}u^{\beta} $$ The first term $\mathrm du^\mu/\mathrm d\tau$ is the rate of change of your (coordinate) velocity with time, so it is what Newton meant by acceleration, and the second term is the gravitational acceleration. 
The key thing about general relativity is that we don't distinguish between the two - they both contribute to your acceleration. If you're falling freely then the two terms are equal and opposite so they cancel out and you're left with an acceleration of zero: $$ a^{\mu}= 0 $$ This is when the net force on you is zero. For the object on the table the coordinate bit of the acceleration is zero but the second term is not and the acceleration is: $$ a^{\mu}= \Gamma^{\mu}_{\alpha \beta}u^{\alpha}u^{\beta} $$ So the object sitting on the table has a non-zero acceleration and the net force on it is not zero. Maybe this sounds like I'm playing with words a bit, by defining what I do and don't mean by acceleration. But this is absolutely key to understanding how general relativity describes the motion of bodies. The key point is that gravitational and coordinate acceleration are treated on an equal footing, and if you are stationary in a gravitational field that means you are accelerating. If you're interested in pursuing this further there is a fuller description in How can you accelerate without moving? There is more on why spacetime curvature makes you accelerate in How does "curved space" explain gravitational attraction? A footnote Given the attention this answer has received I think it is worth elaborating on exactly how relativists view this situation. The question gives an example of an object sitting stationary on a table, but let's start with an object a few metres above the table and falling freely towards it. It seems obvious that the apple is accelerating down towards the table. It seems obvious because we are used to taking the surface of the Earth as stationary because that's our rest frame (even though the surface of the Earth is most certainly not at rest :-). But if you were the apple then it would seem natural to take your rest frame as stationary, and in that case the apple is not accelerating downwards - the table
is accelerating upwards to meet it. So which view is correct? The answer is that both are correct. Whether it's the apple or the table that is stationary is just a choice of rest frame, i.e. a choice of coordinates, and it is a fundamental principle in general relativity that all coordinates are equally good when it comes to describing physics. But if we can randomly choose our coordinates it seems hard to say anything concrete. We could choose frames accelerating at any rate, or rotating, or expanding or all sorts of bizarre frames. Isn't there something concrete we can say about the situation? Well there is. In relativity there are quantities called invariants that do not depend on the coordinates used. For example the speed of light is an invariant - all observers measuring the speed of light find it has the same value of $c$ . And in our example of the apple and table there is an important invariant called the proper acceleration. While the apple and the table disagree about which of them is accelerating towards the other, if they compute their respective proper accelerations they will both agree what those values are. In Newtonian mechanics acceleration is a vector $(a_x, a_y, a_z)$ , but in relativity spacetime is four dimensional so vectors have four components. The four-acceleration is the relativistic equivalent of the three dimensional Newtonian acceleration that we are all used to. While it's a bit more complicated, the four acceleration is just a vector in 4D spacetime, and like all vectors it has a magnitude – in relativity we call this quantity the norm . And the norm of the four-acceleration is just the proper acceleration that I talk about above. The proper acceleration can be complicated to calculate. There's a nice explanation of how to calculate it for an object like our table in What is the weight equation through general relativity? 
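Numerically, that calculation gives back the familiar $g$ at the Earth's surface; here is a quick sketch with approximate standard constants (the relativistic correction factor turns out to be tiny):

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24    # kg, mass of the Earth
r = 6.371e6     # m, mean radius of the Earth
c = 2.998e8     # m/s, speed of light

newtonian = G * M / r**2                                      # GM/r^2
correction = 1.0 / math.sqrt(1.0 - 2.0 * G * M / (c**2 * r))  # GR factor, ~1 + 7e-10
proper_acceleration = newtonian * correction
print(f"{proper_acceleration:.2f} m/s^2")                     # ~9.8, the familiar g
```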
It turns out that the proper acceleration of the table is: $$ A = \frac{GM}{r^2}\frac{1}{\sqrt{1-\frac{2GM}{c^2r}}} $$ where $M$ is the mass of the Earth and $r$ is the radius of the Earth. But hang on – that tells me the proper acceleration of the table is non-zero. But ... but ... isn't the table stationary? Well, this takes us back to where we started. The table and the apple disagree about who is accelerating, but they both agree that the table has a non-zero proper acceleration. And in fact if we calculate the proper acceleration of the apple it turns out to be zero so both the apple and the table agree the apple has a proper acceleration of zero. There is a simple physical interpretation of the proper acceleration. To measure your proper acceleration you just need to hold an accelerometer. Suppose you're floating around weightless in outer space, then your accelerometer will read zero, and that means your proper acceleration is zero. If you're standing on the surface of the Earth (alongside the table perhaps) then your accelerometer will read $9.81\ \mathrm{m/s^2}$ , and indeed your proper acceleration is approximately $9.81\ \mathrm{m/s^2}$ not zero. To summarise, a comment asks me: So, let's just get this straight. The book sitting on the table in front of me is accelerating upwards all the time? But when I push it off the table and it falls down, then as it falls down it is not accelerating? Is that what you're saying? What I'm saying, and what all relativist would say, is that: the book on the table has a non-zero proper acceleration the falling book has a zero proper acceleration And this is all we can say. The question of which has a non-zero three-acceleration (Newtonian acceleration) is meaningless because that quantity is not frame invariant. The question of which has a non-zero proper acceleration is meaningful – even if the answer isn't what you expected. | {
"source": [
"https://physics.stackexchange.com/questions/392863",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/181230/"
]
} |
394,171 | Einstein's equivalence principle says that you cannot distinguish between an accelerating frame or a gravitational field. However, in an gravitational field, if I drop a tennis ball, it will bounce, but I don't think that it will in the accelerated rocket. Will it bounce? If so, how? | This is one of those things that should become clear once you see it, so I made an animation: As you can see, the ball simply bounces off the back of the rocket once the rocket catches up with it, just like a tennis ball bouncing off the racket during a serve. In the comoving frame (i.e. if we are accelerating along with the rocket), this amounts to the ball bouncing off the floor. Since the rocket is still accelerating but the ball is not, the rocket will eventually catch up with the ball again and it will bounce a second time. Here is a bonus animation showing multiple bounces: In this version the ball bounces elastically, and it starts at a lower height, so that several bounces can be observed before the rocket reaches the side of the image. It's a little hard for the eye to see, but in between collisions the ball moves at a constant speed, while the rocket accelerates to catch up with it. Finally, here's another bonus animation to show that if the ball doesn't bounce elastically then it will stop bouncing and start just moving along with the rocket: | {
"source": [
"https://physics.stackexchange.com/questions/394171",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
394,942 | Here's a gif showing how the balls move when I move the rattle. The circular tube hangs vertically, with the balls on the bottom. There are more images in the bottom. The balls roll freely inside the tube The inner diameter of the tube is larger than the diameter of the balls I have tried taking an external magnet close to it, but the balls aren't affected by it. What makes these balls repel each other? As you can see from the image below, the diameter of the balls is smaller than the inner diameter of the tube (it is identical if I flip it). Edit: Here's a gif showing what happens if I leave it be for a while, then shake it. Looks like knzhou is right . | As you said, it's probably not magnetism if the balls are free to rotate; there is no reason they wouldn't just flip over and stick together, north to south. You can test this by buying some of those toy magnetic balls . The repulsive configurations are highly unstable and turn attractive with the slightest touch. I'm going to go out on a limb and say it's static electricity; the balls are picking up charge by rubbing against the plastic. The electrons will always go to whichever material 'pulls' them harder (according to the triboelectric series ), so the balls all get the same charge and hence repel. I don't have a baby rattle with me but here are some ways you could test it: If you don't move the rattle for a while, the balls should come together as the static dissipates The effect should come back once you shake the rattle a couple times The effect should be smaller on humid days where static dissipates faster Another statically charged object should attract/repel the balls, e.g. a balloon rubbed on hair. (The plastic ring will not block this effect, since it's an insulator.) | {
"source": [
"https://physics.stackexchange.com/questions/394942",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/52195/"
]
} |
394,943 | Pretend you have an indestructible tube that cannot leak, inside which is water. Imagine that in each side of the tube, you have very powerful pistons What would happen if you compress the water inside? Would it turn into heat and escape the tube? Would the water turn into solid because the water molecules are so close to each other? Would the water turn into a black hole?
What would happen? | What you're asking about is usually shown in a phase diagram. The diagram shows how the "phase", i.e. liquid, gas, or one of various solid phases, exists at different temperatures and pressures: If your cylinder starts at say $20{}^{\circ}\mathrm{C}$ and atmospheric pressure, it'll be in $\color{green}{\textbf{Liquid}}$ right near the center of the diagram. If you raise the pressure keeping the temperature constant, it'll switch to $\color{blue}{\textbf{Ice VI}}$ at about 1GPa, or about 10,000 atmospheres of pressure: it's hard to turn water to ice by compressing it; the water at the bottom of the ocean is still water. As you keep raising the pressure further, keeping the temperature constant, it'll go through more and more compact forms of solid ice (the diagram doesn't show "black hole", as that would be many, many orders of magnitude off the top, and can't be physically reached). I stress "keeping the temperature constant" because (a) that's something your experiment will have to choose to do or not do and (b) because it makes it much easier to read the diagram. The compression is adding energy to the water, from the work done by the pistons. If you go slow, and the cylinder isn't insulated, etc, that energy will conduct away as the cylinder naturally stays the temperature of its environment. If you go fast, or the cylinder is insulated, the temperature will rise and the water will tend to go up-and-right in the diagram: You'll hit the transitions at different points. | {
"source": [
"https://physics.stackexchange.com/questions/394943",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/189957/"
]
} |
394,975 | In this answer on the cycling SE, the claim is made that adding more mass to a bicycle increases the stopping distance. I was under the impression that mass should not affect the stopping distance so long as all the other factors remain the same (balance, coefficient of friction, etc.). What factors in this scenario contribute to increasing stopping distance on a bicycle? If the bicycle is balanced the same but weighs more, will the stopping distance be equal? | The answer is a little more nuanced than a simple yes or no, but for most cyclists stopping distance will increase with mass. Allow me to explain how: We can use the work-energy theorem to write down the distance $x$ an object traveling at velocity $v$ will require if a force $F$ is applied opposite to $v$:
$$\begin{align}
W &= \Delta K \\
-Fx &= 0 - {1 \over 2}mv^2 \\
x &= \frac{m v^2}{2F} \tag{general stopping distance}\\
\end{align}$$ So you can see in general stopping distance is proportional to mass. However, for objects that use friction (like cars and cycles) between the object and ground to stop, the maximum force you can get from friction is also proportional to the object's mass: $F_{max} = \mu m g$ where $\mu$ is the coefficient of friction and $g$ is the gravitational acceleration. Putting the maximum force into the stopping distance yields the minimum stopping distance: $$x_{min} = \frac{v^2}{2\mu g} \tag{minimum distance}\label{x_min} $$ This minimum stopping distance is mass independent. When you apply your brakes, (usually) a caliper applies a force to the wheel. This force depends on how hard you brake, and the location of the caliper, and lots of other engineering specifics. What it doesn't depend on is the total mass of the object, so $m$ will not cancel out of the stopping distance. summary All other things being equal (including how hard you apply brakes), stopping distance is proportional to mass. There is a minimum attainable stopping distance, which is independent of mass. Edit: An example might clear up some confusion in the comments. Imagine two cyclists, "tiny Tim" and "big Bob". Both are riding identical bikes but Bob has more mass than Tim. They approach a stop sign and wish to come to a complete stop with the same initial velocity: Since Bob has more mass he will have to apply his brakes harder than Tim does, i.e. generate a larger force, if he wishes to stop in the same distance. However , Bob's extra weight generates more available friction with the ground, and so his maximum available stopping force is larger than Tim's. If both need to stop in the minimum amount of distance, they should apply a braking force up to the maximum allowable by friction between the ground and wheels. Any more and they risk wheel slippage which will raise their braking distance. 
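The distinction between braking with a fixed force and braking at the friction limit can be sketched numerically; the masses, speed and $\mu$ below are illustrative values, not figures from the answer:

```python
def stopping_distance(mass, speed, force):
    # x = m v^2 / (2F): distance when the braking force F is fixed.
    return mass * speed**2 / (2.0 * force)

def min_stopping_distance(speed, mu, g=9.81):
    # x_min = v^2 / (2 mu g): braking at the friction limit; mass cancels.
    return speed**2 / (2.0 * mu * g)

v = 10.0    # m/s, illustrative approach speed
# Fixed 400 N braking force: doubling the mass doubles the distance.
print(stopping_distance(60.0, v, 400.0))    # 7.5 m
print(stopping_distance(120.0, v, 400.0))   # 15.0 m
# At the friction limit (mu = 0.7 assumed) the distance is mass-independent.
print(min_stopping_distance(v, 0.7))        # ~7.3 m, the same for Tim and Bob
```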
Thus Bob's minimum stopping distance is the same as Tim's because his max available force is proportionally larger (e.g. if he's twice as massive, he has twice the maximum braking force before his wheels slip). | {
"source": [
"https://physics.stackexchange.com/questions/394975",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/178487/"
]
} |
395,829 | The Bohr model of the atom is essentially that the nucleus is a ball and the electrons are balls orbiting the nucleus in a rigid orbit. This allowed for chemists to find a model of chemical bonding where the electrons in the outer orbits could be exchanged. And it works pretty well as seen in the Lewis structures: However, electron orbitals were found to be less rigid and instead be fuzzy fields which, instead of being discrete/rigid orbits, look more like: However, in chemistry education like organic chemistry you still learn about chemical reactions using essentially diagrams that are modified Lewis structures that take into account information about electron orbitals: What I'm wondering is, if the Bohr model is used essentially throughout college education in the form of these diagrams, it seems like it must be a pretty accurate model, even though it turns out atoms are more fuzzy structures than discrete billiard balls. So I'm wondering what the inaccuracies are, and if there is a better way to understand them than the Bohr model. If you build a computer simulation of atoms with the Bohr model, I'm wondering if it would be "accurate" in the sense of modeling atomic phenomena, or is it not a good model to perform simulations on. If not, I'm wondering what an alternative model is that is better for simulation. Essentially, how good the Bohr model is as a diagram, as a tool for learning, and as a tool for simulation. | In hydrogen: It incorrectly predicts the number of states with given energy. This number can be seen through Zeeman splitting. In particular, it doesn't have the right angular momentum quantum numbers for each energy level. Most obvious is the ground state, which has $\ell=0$ in Schrödinger's theory but $\ell=1$ in Bohr's theory. It doesn't hold well under perturbation theory. In particular, because of angular momentum degeneracies, the spin-orbit interaction is incorrect.
It predicts a single "radius" for the electron rather than a probability density for the position of the electron. What it does do well: a. Correct energy spectrum for hydrogen (although completely wrong even for helium). In particular, one deduces the right value of the Rydberg constant. b. The Bohr radii for various energy levels turn out to be the most probable values predicted by the Schrödinger solutions. c. Also does a lot of chemistry stuff quite well (as suggested in the original question) but I'm not a chemist so can't praise the model for that. | {
"source": [
"https://physics.stackexchange.com/questions/395829",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/16731/"
]
} |
396,588 | When I hold a glass of water, $\hspace{1.5cm}$ , I am applying a force horizontally, but its weight acts downwards. Should it not fall? How do you describe the equilibrium? | I'm not sure exactly what you're asking, but you've used the friction tag, so you realize friction is involved. Think about how the usual "block on a plane" friction problems work: Gravity exerts a force downward (normal to the friction surface), but if you try to move the block across the plane, friction exerts a force parallel to the plane surface to oppose you. In this case, instead of gravity providing the normal force, the grip of your fingers does. And instead of "you" trying to push the block, gravity is trying to move the glass parallel to the surface of your fingers. And friction causes an opposing force parallel to the surface of your fingers against the glass. | {
"source": [
"https://physics.stackexchange.com/questions/396588",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/187604/"
]
} |
396,818 | Part of the reason that relativistic QFT is so hard to learn is that there are piles of 'no-go theorems' that rule out simple physical examples and physical intuition. A very common answer to the question "why can't we do X simpler, or think about it this way" is "because of this no-go theorem". To give a few examples, we have: the Reeh-Schlieder theorem, which I'm told forbids position operators in relativistic QFT; the Coleman-Mandula theorem, which forbids mixing internal and spacetime symmetries; Haag's theorem, which states that naive interaction picture perturbation theory cannot work; the Weinberg-Witten theorem, which among other things rules out a conserved current for Yang-Mills; the spin-statistics theorem, which among other things rules out fermionic scalars; the CPT theorem, which rules out CPT violation; the Coleman-Gross theorem, which states the only asymptotically free theory is Yang-Mills. Of course all these theorems have additional assumptions I'm leaving out for brevity, but the point is that Lorentz invariance is a crucial assumption for every one. On the other hand, nonrelativistic QFT, as practiced in condensed matter physics, doesn't have nearly as many restrictions, resulting in much nicer examples. But the only difference appears to be that they work with a rotational symmetry group of $SO(d)$ while particle physicists use the Lorentz group $SO(d-1, 1)$, hardly a big change. Is there a fundamental, intuitive reason that relativistic QFT is so much more restricted? | One of the reasons relativistic theories are so restrictive is the rigidity of the symmetry group. Indeed, the (homogeneous part of the) symmetry group is simple, as opposed to that of non-relativistic systems, which is not. The isometry group of Minkowski spacetime is
\begin{equation}
\mathrm{Poincar\acute{e}}=\mathrm{ISO}(\mathbb R^{1,d-1})=\mathrm O(1,d-1)\ltimes\mathbb R^d
\end{equation}
whose homogeneous part is $\mathrm O(1,d-1)$, the so-called Lorentz Group 1 . This group is simple. On the other hand, the isometry group of Galilean space+time is 2 \begin{equation}
\text{Bargmann}=\mathrm{ISO}(\mathbb R^1\times\mathbb R^{d-1})\times\mathrm U(1)=(\mathrm O(d-1)\ltimes\mathbb R^{d-1})\ltimes(\mathrm U(1)\times\mathbb R^1\times\mathbb R^{d-1})
\end{equation}
whose homogeneous part is $\mathrm O(d-1)\ltimes\mathbb R^{d-1}$, the so-called (homogeneous) Galilei Group . This group is not semi-simple (it contains a non-trivial normal subgroup, that of boosts). There is in fact a classification of all physically admissible kinematical symmetry groups (due to Lévy-Leblond ), which pretty much singles out Poincaré as the only group with the above properties. There is a single family of such groups, which contains two parameters: the AdS radius $\ell$ and the speed of light $c$ (and all the rotation invariant İnönü-Wigner contractions thereof). As long as $\ell$ is finite, the group is simple. If you take $\ell\to\infty$ you get Poincaré which has a non-trivial normal subgroup, the group of translations (and if you quotient out this group, you get a simple group, Lorentz). If you also take $c\to\infty$ you get Bargmann (or Galilei), which also has a non-trivial normal subgroup (and if you quotient out this group, you do not get a simple group; rather, you get Galilei, which has a non-trivial normal subgroup, that of boosts). Another reason is that the postulate of causality is trivial in non-relativistic systems (because there is an absolute notion of time), but it imposes strong restrictions on relativistic systems (because there is no absolute notion of time). This postulate is translated into the quantum theory through the axiom of locality ,
$$
[\phi(x),\phi(y)]=0\quad\forall x,y\quad \text{s.t.}\quad (x-y)^2<0
$$
where $[\cdot,\cdot]$ denotes a supercommutator. In other words, any two operators whose supports are causally disconnected must (super)commute. In non-relativistic systems this axiom is vacuous because all spacetime intervals are timelike, $(x-y)^2>0$, that is, all spacetime points are causally connected. In relativistic systems, this axiom is very strong. These two remarks can be applied to the theorems you quote: Reeh-Schlieder depends on the locality axiom, so it is no surprise it no longer applies to non-relativistic systems. Coleman-Mandula (see here for a proof). The rotation group is compact and therefore it admits finite-dimensional unitary representations. On the other hand, the Lorentz group is non-compact and therefore the only finite-dimensional unitary representation is the trivial one. Note that this is used in step 4 of the proof above; it is here where the proof breaks down. Haag also applies to non-relativistic systems, so it is not a good example of OP's point. See this PSE post for more details. Weinberg-Witten. To begin with, this theorem is about massless particles, so it is not clear what such particles even mean in non-relativistic systems. From the point of view of irreducible representations they may be meaningful, at least in principle. But they need not correspond to helicity representations (precisely because the little group of the reference momentum is not simple). Therefore, the theorem breaks down (as it depends crucially on helicity representations). Spin-statistics. As in Reeh-Schlieder, in non-relativistic systems the locality axiom is vacuous, so it implies no restriction on operators. CPT. Idem. Coleman-Gross. I'm not familiar with this result so I cannot comment. I don't even know whether it is violated in non-relativistic systems.
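The normal-subgroup structure of the homogeneous Galilei group mentioned above can be made concrete with matrices: acting on $(t,\mathbf x)$, a boost sends $\mathbf x\to\mathbf x+\mathbf vt$, and conjugating a boost by a rotation just rotates the boost velocity. A minimal numpy sketch of my own (three spatial dimensions assumed):

```python
import numpy as np

def boost(v):
    # Galilei boost on (t, x): t -> t, x -> x + v t
    B = np.eye(4)
    B[1:, 0] = v
    return B

def rot_z(theta):
    # spatial rotation about the z axis, leaving t untouched
    R = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    R[1:3, 1:3] = [[c, -s], [s, c]]
    return R

v1, v2 = np.array([1.0, 2.0, 3.0]), np.array([-0.5, 0.4, 1.1])
R = rot_z(0.7)

# Boosts commute (abelian subgroup): B(v1) B(v2) = B(v1 + v2)
assert np.allclose(boost(v1) @ boost(v2), boost(v1 + v2))

# Conjugating a boost by a rotation yields another boost (normal subgroup):
# R B(v) R^{-1} = B(R v)
assert np.allclose(R @ boost(v1) @ R.T, boost(R[1:, 1:] @ v1))
```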
1: More generally, the indefinite orthogonal (or pseudo-orthogonal) group $\mathrm O(p,q)$ is defined as the set of $(p+q)$-dimensional matrices, with real coefficients, that leave invariant the metric with signature $(p,q)$:
$$
\mathrm O(p,q):=\{M\in \mathrm{M}_{p+q}(\mathbb R)\ \mid\ M\eta M^T\equiv \eta\},\qquad \eta:=\mathrm{diag}(\overbrace{-1,\dots,-1}^p,\overbrace{+1,\dots,+1}^q)
$$ The special indefinite orthogonal group $\mathrm{SO}(p,q)$ is the subset of $\mathrm O(p,q)$ with unit determinant. If $pq\neq0$, the group $\mathrm{SO}(p,q)$ has two disconnected components. In this answer, "Lorentz group" may refer to the orthogonal group with signature $(1,d-1)$; to its $\det(M)\equiv+1$ component; or to its orthochronous subgroup $M^0{}_0\ge+1$. Only the latter is connected. The topology of the group is mostly irrelevant for this answer, so we shall make no distinction between the three different possible notions of "Lorentz group". 2: One can prove that the inhomogeneous Galilei algebra, unlike the Poincaré algebra, has a non-trivial second cohomology group. In other words, it admits a non-trivial central extension. The Bargmann group is defined precisely as the centrally extended inhomogeneous Galilei group. Strictly speaking, all we know is that the central extension has the algebra $\mathbb R$; at the group level, it could lead to a factor of $\mathrm U(1)$ as above, or to a factor of $\mathbb R$. In quantum mechanics the first option is more natural, because we may identify this phase with the $\mathrm U(1)$ symmetry of the Schrödinger equation (which has a larger symmetry group, the so-called Schrödinger group). Again, the details of the topology of the group are mostly irrelevant for this answer. | {
"source": [
"https://physics.stackexchange.com/questions/396818",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/83398/"
]
} |
396,820 | I am just reading book "University physics with modern physics 14-th edition (Young & Fredman)" . And on page 702 there is an example 21.9 which says: Charge $Q$ is uniformly distributed around a conducting ring of radius
$a$. Find the electric field at point $P$ on the ring axis at a
distance $x$ from center. So author first states that linear charge density $\lambda = Q/2\pi a \rightarrow \lambda = \text{d}Q/\text{d}s$ where $\text{d}s$ is a diferential of the ring length. It is also immediately clear that there won't be any net electric field in the $y$ direction as $\text{d}{E}_y$ cancel out while on the other hand all $\text{d}{E}_x$ sum up. Therefore we can write this in scalar form: \begin{equation*}
\begin{split}
\text{d}E_x &= \text{d}E \cdot cos(\alpha)\\
\text{d}E_x &= \frac{1}{4\pi\varepsilon_0}\frac{\text{d}Q}{r^2} \cdot cos(\alpha)\\
\text{d}E_x &= \frac{1}{4\pi\varepsilon_0}\frac{\text{d}Q}{r^2} \cdot \frac{x}{r}\\
\text{d}E_x &= \frac{1}{4\pi\varepsilon_0}\frac{\text{d}Q}{x^2+a^2} \cdot \frac{x}{\sqrt{x^2 + a^2}}\\
\text{d}E_x &= \frac{1}{4\pi\varepsilon_0}\frac{x}{(x^2 + a^2)^{3/2}}\cdot \text{d}Q\\
\text{d}E_x &= \frac{1}{4\pi\varepsilon_0}\frac{x\cdot \lambda}{(x^2 + a^2)^{3/2}}\cdot \text{d}s\\
\end{split}
\end{equation*} This is all fine, but then he integrates over all the ring's length. But if we have equation we have to integrate in a same way on both sides right? So I think integration should look like this: \begin{equation*}
\begin{split}
\int_0^{2\pi a}\text{d}E_x\, \text{d}s &= \int_0^{2\pi a}\frac{1}{4\pi\varepsilon_0}\frac{x\cdot \lambda}{(x^2 + a^2)^{3/2}}\cdot \text{d}s
\, \text{d}s\\
\int_0^{2\pi a}\text{d}E_x\, \text{d}s &= \frac{1}{4\pi\varepsilon_0}\frac{x\cdot \lambda}{(x^2 + a^2)^{3/2}}\cdot \int_0^{2\pi a} \text{d}s
\, \text{d}s
\end{split}
\end{equation*} What is weird to me is integral on the right. Well author of the book doesn't even integrate in a same way on both sides of equation. What he writes down is: \begin{equation*}
\begin{split}
\int \text{d}E_x &= \frac{1}{4\pi\varepsilon_0}\frac{x\cdot \lambda}{(x^2 + a^2)^{3/2}}\cdot \int_0^{2\pi a} \text{d}s\\
E_x &= \frac{1}{4\pi\varepsilon_0}\frac{x\cdot \lambda}{(x^2 + a^2)^{3/2}}\cdot 2\pi a\\
\end{split}
\end{equation*} Is he allowed to do that? Why? | {
"source": [
"https://physics.stackexchange.com/questions/396820",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/6764/"
]
} |
397,576 | Yes, I know it's steel. It's everywhere on the web and I did google. But I seek enlightenment. My physics textbook defines elasticity as: Property by virtue of which a material regains its shape. Or, the ability of material to resist change in its shape or size. While I get what my textbook intends to say, I strongly think that there is a subtle difference between the 2 definitions. I mean according to the first definition, certainly rubber is more elastic than steel as rubber has tendency to regain its shape even when stretched several times its natural length. On the other hand, a steel bar would become permanently set and even fracture if the strain increases ever so slightly (let's keep "but it requires tremendous force" out of the way here, that's not the main point here) . In this sense, obviously rubber is more elastic. But the second definition makes clear that steel is the winner. Steel has greater tendency to resist its shape change and hence it should be more elastic. So, it is very clear that we can define elasticity 2 ways, either by a picture of strain tolerance (winner = rubber) or by stress tolerance (winner = steel). Most of the physicists (but definitely not all) seem to prefer the stress tolerance definition (mostly without clarification). What I seek here is a logical(and maybe philosophical) answer to why? Why prefer one definition over other, especially the one which defies common sense of general public? When everyone seems to agree with rubber as winner, why change the rules? | There are two separate concepts here: the Young's modulus , which determines the force needed to stretch the material the elastic limit, aka yield strain , which determines how far the material can be stretched As you say, the term elastic tends to be used in a vague way that conflates these two properties. Generally a high Young's modulus means the material is stiff so I would say steel is stiffer than rubber not more elastic than rubber. 
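To attach rough numbers to the two concepts, here is a small sketch; the values below are order-of-magnitude textbook figures I'm assuming for illustration, not figures from the answer itself:

```python
# Order-of-magnitude material properties (assumed illustrative values)
E_steel, E_rubber = 200e9, 0.05e9                      # Young's modulus, Pa
yield_strain_steel, yield_strain_rubber = 0.002, 5.0   # dimensionless strain

# Stress needed to produce the same small strain (0.1%) in each material:
strain = 1e-3
print("steel:", E_steel * strain, "Pa")    # steel is ~4000x stiffer here
print("rubber:", E_rubber * strain, "Pa")

# ...but rubber tolerates a far larger strain before permanent deformation:
print("strain ratio:", yield_strain_rubber / yield_strain_steel)
```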
Steel also has a much smaller yield strain than rubber because you can't stretch steel far before it starts to deform, while rubber can be stretched a long distance. So if you're going to use the vaguely defined term elastic then steel is certainly less elastic than rubber in both meanings. However, in a physics or engineering context you would use the precisely defined terms Young's modulus and yield strain instead. Finally: There is another meaning for elastic, which is what Rod has covered in his answer. I'm going to summarise it here for completeness but please upvote Rod's answer as he thought of it first! If we say a collision is elastic it means no energy is lost in the collision. In this sense the collision between steel balls is highly elastic. That's why a Newton's cradle with steel balls will swing for ages once you set it going. By contrast collisions between rubber balls tend to be squidgier and lose more energy, so in this sense they are less elastic than steel. It might be that this is why you have seen steel described as more elastic than rubber. The term elastic applies to the collision rather than the material. | {
"source": [
"https://physics.stackexchange.com/questions/397576",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/102349/"
]
} |
397,694 | Say you cook up a model about a physical system. Such a model consists of, say, a system of differential equations. What criterion decides whether the model is classical or quantum-mechanical? None of the following criteria are valid: Partial differential equations: Both the Maxwell equations and the Schrödinger equation are PDE's, but the first model is clearly classical and the second one is not. Conversely, finite-dimensional quantum systems have as equations of motion ordinary differential equations, so the latter are not restricted to classical systems only. Complex numbers: You can use those to analyse electric circuits, so that's not enough. Conversely, you don't need complex numbers to formulate standard QM (cf. this PSE post ). Operators and Hilbert spaces: You can formulate classical mechanics à la Koopman-von Neumann . In the same vein: Dirac-von Neumann axioms: These are too restrictive (e.g., they do not accommodate topological quantum field theories). Also, a certain model may be formulated in such a way that it's very hard to tell whether it satisfies these axioms or not. For example, the Schrödinger equation corresponds to a model that does not explicitly satisfy these axioms; and only when formulated in abstract terms this becomes obvious. It's not clear whether the same thing could be done with e.g. the Maxwell equations. In fact, one can formulate these equations as a Dirac-like equation $(\Gamma^\mu\partial_\mu+\Gamma^0)\Psi=0$ (see e.g. 1804.00556 ), which can be recast in abstract terms as $i\dot\Psi=H\Psi$ for a certain $H$. Probabilities: Classical statistical mechanics does also deal with probabilistic concepts. Also, one could argue that standard QM is not inherently probabilistic, but that probabilities are an emergent property due to the measurement process and our choice of observable degrees of freedom. Planck's constant: It's just a matter of units. You can eliminate this constant by means of the redefinition $t\to \hbar t$. 
One could even argue that this would be a natural definition from an experimental point of view, if we agree to measure frequencies instead of energies. Conversely, you may introduce this constant in classical mechanics by a similar change of variables (say, $F=\hbar\tilde F$ in the Newton equation). Needless to say, such a change of variables would be unnatural, but naturalness is not a well-defined criterion for classical vs. quantum. Realism/determinism: This seems to depend on interpretations. But whether a theory is classical or quantum mechanical should not depend on how we interpret the theory; it should be intrinsic to the formalism. People are after a quantum theory of gravity. What prevents me from saying that General Relativity is already quantum mechanical? It seems intuitively obvious that it is a classical theory, but I'm not sure how to put that intuition into words. None of the criteria above is conclusive. | As far as I know, the commutator relations make a theory quantum. If all observables commute, the theory is classical. If some observables have non-zero commutators (no matter if they are proportional to $\hbar$ or not), the theory is quantum. Intuitively, what makes a theory quantum is the fact that observations affect the state of the system. In some sense, this is encoded in the commutator relations: The order of the measurements affects their outcome, the first measurement affects the result of the second one. | {
"source": [
"https://physics.stackexchange.com/questions/397694",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/84967/"
]
} |
397,711 | Is it possible for a molecule or atom to orbit a star (e.g. the Sun)? Or is there always too much outward force imparted by solar radiation compared to the inward force of gravitational attraction? | Cute idea! Thanks for posting this question. I've enjoyed thinking about it. Geometrical absorption Suppose we start by assuming that we're talking about a particle that just absorbs all light that impinges on its cross-section. The sun's gravitational force on a particle is proportional to its mass, and therefore to the cube $a^3$ of its linear dimension $a$. Radiation pressure is proportional to the cross-sectional area, and therefore to $a^2$. Since the exponents are different, it follows that for small enough objects, the net force will be repulsive, and there can be no closed orbits. For objects that are just a little above that size cut-off, we could have Keplerian orbits, but they would not obey Kepler's law of periods with the same constant of proportionality as for objects such as planets that are large enough to make radiation pressure negligible. Without having to do a numerical estimate, we can tell that atoms are below the cut-off size for closed orbits, since solar sails exist, and a solar sail is considerably thicker than one monolayer of atoms. All of this holds regardless of the distance $r$ from the sun, because both radiation pressure and gravitational forces go like $1/r^2$. This is also why the orbits are still Keplerian: the interaction with the sun acts like gravity, but just with a different gravitational constant. A stable, electrically neutral particle such as a neutrino or a dark matter particle can orbit, because it doesn't interact with electromagnetic radiation. In fact, I think dark matter is basically known to exist only because it's gravitationally bound to bodies such as galaxies. 
Wave model But as pointed out by Rob Jeffries in a comment, this is not right at all for objects that are small compared to the wavelength of the light. In the limit $a \ll \lambda$, we have Rayleigh scattering, with a cross-section $\sigma \sim a^6/\lambda^4$. Let $$R=\frac{F_\text{rad}}{F_\text{grav}}$$ be the ratio of the radiation force to the gravitational force. If we don't worry about factors of order unity, then it doesn't matter if we're talking about absorption, reflection, or scattering. Pretend it's absorption, and let $a$ be the radius of a spherical particle. We then have $$R=\frac{3}{16\pi^2 Gc}\cdot\frac{L}{M}\cdot\frac{1}{\rho a^3}\cdot\sigma,$$ where $\rho$ is the density of the particle, $L$ is the luminosity of the sun, and $M$ is the mass of the sun. For a particle with $a\sim 300\ \text{nm}$, the geometrical absorption approximation $\sigma\sim \pi a^2$ is pretty good, and the result is that $R$ is of order unity. For a particle with $a\sim 50\ \text{nm}$, the Rayleigh scattering approximation is valid, and we have $\sigma\sim a^6/\lambda^4$. The result is $R\sim 10^{-4}$. So it seems that the result is somewhat inconclusive. For a star with the $L/M$ of our sun, there is a pretty broad range of sizes for particles, with $a\sim\lambda$, such that there is fairly even competition between radiation pressure and gravity. Ionization Leftroundabout's answer pointed out the importance of ionization, and he estimated that effect for high-energy electrons. Actually I think UV is more important. For a 25 eV photon, which is at the threshold for ionization of helium, the cross-section is about $7\times 10^{-18}\ \text{cm}^2$. Suppose that $\sim10^{-2}$ of the sun's radiation is above this energy. For an atom at a distance of 1 AU from the sun, the result is that ionization occurs at a rate of $\sim10^{-3}\ \text{s}^{-1}$. This suggests that there is no way an atom is going to complete a full orbit around the sun without being ionized.
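These estimates are easy to reproduce numerically. In the sketch below, the solar and fundamental constants are standard values, and the 25 eV cross-section and 1% UV fraction are the figures quoted above; the grain density $\rho=2000\ \mathrm{kg/m^3}$ and wavelength $\lambda=500\ \mathrm{nm}$ are my own assumptions:

```python
import math

G, c = 6.674e-11, 2.998e8            # gravitational constant, speed of light (SI)
L_sun, M_sun = 3.828e26, 1.989e30    # solar luminosity (W) and mass (kg)
rho = 2000.0                         # assumed grain density, kg/m^3
lam = 500e-9                         # assumed typical solar wavelength, m

def R(a, sigma):
    # R = [3/(16 pi^2 G c)] * (L/M) * sigma / (rho a^3)
    return 3 / (16 * math.pi**2 * G * c) * (L_sun / M_sun) * sigma / (rho * a**3)

a1 = 300e-9
R_geom = R(a1, math.pi * a1**2)      # geometrical absorption: comes out ~1

a2 = 50e-9
R_rayleigh = R(a2, a2**6 / lam**4)   # Rayleigh regime: comes out ~1e-4

# UV ionization rate at 1 AU: ~1% of the solar constant in >25 eV photons
E_ph = 25 * 1.602e-19                # photon energy, J
photon_flux = 0.01 * 1361 / E_ph     # photons per m^2 per s
ion_rate = photon_flux * 7e-18 * 1e-4  # cross-section converted from cm^2 to m^2

print(R_geom, R_rayleigh, ion_rate)
```

The printed values land near $1$, $10^{-4}$, and $10^{-3}\ \mathrm{s}^{-1}$ respectively, matching the three order-of-magnitude claims in the answer.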
If we assume that our atoms(/ions) are all independent of one another, then an atom will basically spiral around the sun's magnetic field lines. One thing I don't know from this analysis is whether it's really valid to assume the atoms are independent. We could also imagine that there are parcels of gas orbiting the sun, and these are electrically neutral in bulk. Summary This analysis seems inconclusive for particles of baryonic matter with sizes less than about 300 nm. It seems like we need more work to understand this -- or someone could find where the subject has been treated in more detail in the astrophysics literature. For stars off the main sequence, I think we can make some definite conclusions. Giant and supergiant stars, which have $L/M$ much higher than that of the sun, will efficiently sweep out all particles from $a\sim\lambda$ up to some upper size limit. For white dwarfs and such, with very small $L/M$, radiation pressure will never be significant. Here is a paper (Mann et al., "Dust in the interplanetary medium," Plasma Phys. Control. Fusion 52 (2010) 124012) on dust in the interplanetary medium. It describes things like the trajectories of charged dust particles. | {
"source": [
"https://physics.stackexchange.com/questions/397711",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/178168/"
]
} |
397,971 | We have the Euler equations for a rotating body as follows $$I_1\dot\omega_1+\omega_2\omega_3(I_3-I_2)=0\\
I_2\dot\omega_2+\omega_1\omega_3(I_1-I_3)=0\\
I_3\dot\omega_3+\omega_2\omega_1(I_2-I_1)=0$$
where $I_i$ are the moments of inertia about the $x_i$ axis, and $\omega_i$ is the angular velocity about this axis. It can be shown (*) that if $I_1>I_2>I_3$, then objects with angular velocity very close to $\vec\omega=(0,1,0)$ are unstable. Why is this and how can I try to picture it? I tried to picture this using a ball, but realised this is probably not a good way to visualise it, since a ball is spherically symmetric, so the moments of inertia are not distinct. Is there any visualisation or animation that could allow me to see this rotation, and possibly understand why it is unstable? (*) In response to @SRS's comment: I am not sure about any references, but I know how to do it: Let $\omega_1=\eta_1,\omega_3=\eta_3$ where $\eta$ is a small perturbation, and suppose $\omega_2=1+\eta_2$. Then the Euler eqns become$$I_1\dot\eta_1=(I_2-I_3)\eta_3+O(\eta^2)\tag1$$$$I_2\dot\eta_2=O(\eta^2)\tag2$$$$I_3\dot\eta_3=(I_1-I_2)\eta_1+O(\eta^2)\tag3$$Differentiate $(1)$ and sub in $(3)$ to the resulting expression$$\ddot\eta_1=\frac{(I_2-I_3)(I_1-I_2)}{I_3I_1}\eta_1$$If $I_1>I_2>I_3$, then the constant on the right hand side is positive, so the solution to this equation is an exponential (if $I_2$ were instead the largest or smallest of the three moments, the constant would be negative and the solution would be a $\sin/\cos$). Therefore it is unstable. Edit: To clarify, I posted this question to see other more visual ways of understanding this effect rather than solving the equations as I did above, and to see how this effect comes into play in real life. So I don't think it is a duplicate of the other questions, since they don't have answers that fit this. | There is another nice way of seeing this mathematically. It is not too hard to show that in the body frame, there are two conserved quantities: the square of the angular momentum vector
$$
L^2 = L_1^2 + L_2^2 + L_3^2
$$
and the rotational kinetic energy, which works out to be
$$
T = \frac{1}{2}\left( \frac{L_1^2}{I_1} + \frac{L_2^2}{I_2} + \frac{L_3^2}{I_3} \right).
$$
(Note that the angular momentum $\vec{L}$ itself is not conserved in the body frame; but its square does happen to be a constant.) We can then ask the question: For given values of $L^2$ and $T$, what are the allowed values of $\vec{L}$? It is easy to see that $L^2$ constraint means that $\vec{L}$ must lie on the surface of a sphere; and it is almost as easy to see that the $T$ constraint means that $\vec{L}$ must also lie on the surface of a given ellipsoid, with principal axes $\sqrt{2TI_1} > \sqrt{2T I_2} > \sqrt{2T I_3}$. Thus, the allowed values of $\vec{L}$ must lie on the intersection of a sphere and an ellipsoid. If we hold $L^2$ fixed and generate a bunch of these curves for various values of $T$, they look like this: Note that for a given value of $L^2$, an object will have its highest possible kinetic energy when rotating around the axis with the lowest moment of inertia, and vice versa. Suppose, then, that an object is rotating around the axis of its highest moment of inertia. If we perturb this object so that we change its energy slightly (assuming for the sake of argument that $L^2$ remains constant), we see that the vector $\vec{L}$ will now lie on a relatively small curve near its original location. Similarly, if the object is rotating around its axis of lowest inertia, $\vec{L}$ will stay relatively close to its original value when perturbed. However, the situation is markedly different when the object is rotating about the intermediate axis initially (the third red point in the diagram above, on the "front side" of the sphere. The contours of slightly perturbed $T$ near this point do not stay near the intermediate axis; they wander all over the sphere. There is therefore nothing keeping $\vec{L}$ from wandering all over this sphere if we perturb the object slightly away from rotating about this axis; which implies that an object rotating about its intermediate axis is unstable. | {
"source": [
"https://physics.stackexchange.com/questions/397971",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/154004/"
]
} |
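The exponential growth derived in the question above, and its saturation once the perturbation stops being small, can be checked by integrating the full (nonlinear) Euler equations. Everything below (moments of inertia, step size, perturbation size) is an arbitrary illustrative choice:

```python
# Integrate Euler's equations with a pure-Python RK4 for I1 > I2 > I3 and
# compare a spin started near the major axis (stable: perturbation oscillates)
# with one started near the intermediate axis (unstable: it grows to O(1)).
I1, I2, I3 = 3.0, 2.0, 1.0  # arbitrary choice with I1 > I2 > I3

def rhs(w):
    """Right-hand side of the Euler equations from the question, solved for the derivatives."""
    w1, w2, w3 = w
    return ((I2 - I3) * w2 * w3 / I1,
            (I3 - I1) * w3 * w1 / I2,
            (I1 - I2) * w1 * w2 / I3)

def rk4_step(w, h):
    axpy = lambda a, b, s: tuple(x + s * y for x, y in zip(a, b))
    k1 = rhs(w)
    k2 = rhs(axpy(w, k1, h / 2))
    k3 = rhs(axpy(w, k2, h / 2))
    k4 = rhs(axpy(w, k3, h))
    return tuple(x + h / 6 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(w, k1, k2, k3, k4))

def max_transverse(w0, axis, steps=4000, h=0.01):
    """Largest excursion of the two angular-velocity components orthogonal to `axis`."""
    w, worst = w0, 0.0
    for _ in range(steps):
        w = rk4_step(w, h)
        worst = max([worst] + [abs(w[i]) for i in range(3) if i != axis])
    return worst

eps = 1e-3
stable = max_transverse((1.0, eps, eps), axis=0)    # spin about I1: stays ~eps
tumbling = max_transverse((eps, 1.0, eps), axis=1)  # spin about I2: grows to O(1)
print(stable, tumbling)
```

The first run stays within a few times the initial perturbation; the second tumbles, with the transverse components periodically reaching order unity, which is the "wandering over the sphere" described in the answer.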
398,476 | Some things don't bounce like rubber balls do. For example, books don't bounce much when dropped. Why is it that some things bounce while others don't? | Because bouncing requires the object to be elastic - shortly after it deforms, its shape should return to the one it had before deforming. In order for an object to bounce 1 , the sequence of events would be the following: 1. The object is about to touch the surface, having a kinetic energy $E$. 2. The object deforms (doesn't shatter, break, explode, catch fire, etc.) and its kinetic energy transforms into internal energy. 3. There's no (or insignificant) loss of the newly gained internal energy, i.e. no (or little) part of it is dissipated as heat, vibration, etc. (there could be other forms of dissipation as well). 4. If all the above steps are passed, the object has to "un-deform" - the internal energy gained by deforming and not lost in step 3 turns back into kinetic energy. Now it has its kinetic energy back, and thus the speed to go up again. In order to bounce, an object must "pass" all the steps above. In other words, the object bounces if there is deformation and it's elastic, not plastic or viscous, and most of the elastic potential energy is released into acceleration of the whole object in the opposite direction. Let's consider three different objects - a rubber ball, a plasticine ball and a book - and see how they behave. Well, any of them can pass the first step since they have the speed. Now they will fall on the ground. The balls pass the second step; they deform to different degrees. A book primarily fails to bounce because its shape favours other modes of energy propagation - dissipation via vibration. Thus, the book is not a contender anymore. What about the third step? The plasticine ball fails it - its gained internal energy was mostly lost by being transformed into thermal energy. So, out of the three objects, only the rubber ball will bounce.
As an additional example, you could consider a third ball, made of steel (not drawing it here :). It would certainly deform less than the rubber ball, but would still bounce pretty well. 2 1 - This answer is considering a system where the surface of the floor doesn't deform itself. If there's a trampoline instead of the floor and an object won't stick to it once it falls on it, it will bounce back. If there's sand instead of a hard surface, any object falling in sand would behave like an object which falls on the hard surface and fails the 3rd step. 2 - See " Clarifying the actual definition of elasticity. Is steel really more elastic than rubber? " | {
"source": [
"https://physics.stackexchange.com/questions/398476",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/191870/"
]
} |
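As a toy version of the energy bookkeeping in the answer above: if a fraction f of the impact kinetic energy survives steps 2-3 and is returned as kinetic energy, a drop from height h rebounds to f*h (since the kinetic energy at impact is mgh). The f values below are illustrative guesses, not measured properties of these materials:

```python
def rebound_height(drop_height_m, f_returned):
    """Rebound height when a fraction f_returned of the impact KE is recovered."""
    return f_returned * drop_height_m

# Hypothetical energy-return fractions for a 1 m drop
for name, f in [("rubber ball", 0.8), ("steel ball", 0.6), ("plasticine ball", 0.02)]:
    print(name, rebound_height(1.0, f), "m")
```

A plasticine ball, which dissipates nearly all the deformation energy as heat, barely leaves the floor, while the elastic balls recover most of the drop height.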
399,281 | When talking about relativity, we always mention Lorentz contraction. If a body is moving with velocity $V$ in the $x$ direction, its length will be contracted in that direction. The length remains the same in the orthogonal directions. This is usually seen as obvious, as the velocity is zero in those directions so the object might as well be at rest, as far as those directions are concerned. However, a student today raised an objection. She said "no, this is not obvious; the moving object is not at rest and I cannot be sure what happens; in principle, there could be contraction in all directions". She wanted a physical reason to convince her that no contraction takes place in the orthogonal directions. She took me off guard and I didn't know what to say. It is sometimes hard to argue the obvious. Any help? | Thought experiment: Two rings are flying toward each other at relativistic speed. The rings are perpendicular to the velocity, flat toward each other, and have very thin paper stretched across like a drum head. Now imagine the high speed makes transverse measures smaller. Ring A sees a tiny ring B coming toward it: B punches a tiny hole in A, in the process being obliterated. But B sees A tiny, and sees A punching just a tiny hole in B. That’s a contradiction: two different results of the same space-time events cannot both happen. Ditto if transverse measures get longer. For longitudinal measures, simultaneity creates consistency between separate points. But here we see inconsistency at the same space-time point. Must be that neither happens, because transverse measures don’t change, and the rings exactly hit each other. | {
"source": [
"https://physics.stackexchange.com/questions/399281",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/101257/"
]
} |
399,297 | In the book The First Three Minutes by Weinberg, on pages 106-107, it is stated that SECOND FRAME. The temperature of the universe is 30,000 million
degrees Kelvin [...] The nuclear particle balance has consequently
shifted to 38 per cent neutrons and 62 per cent protons. [...] THIRD FRAME. The temperature of the universe is 10,000 million degrees
Kelvin. [...] The decreasing temperature has now allowed the
proton-neutron balance to shift to 24 per cent neutrons and 76 per
cent protons. What is the reason for this balance shift between neutrons and protons? And what determines the rate of change of the neutron/proton ratio? | There are two very relevant facts that inform this answer: (1) The rest mass energy of a neutron is 1.29 MeV higher than that of a proton. $(m_n - m_p)c^2 = 1.29$ MeV. (2) The total number of neutrons plus protons (essentially the only baryons present) is a constant. Neutrons and protons can transform into one another via reactions moderated by the weak nuclear force. e.g.
$$ n + e^{+}\rightarrow p + \bar{\nu_e}$$
$$ p + e \rightarrow n + \nu_e$$ Because of the rest mass energy difference, the first of these reactions requires no energy input and the products have kinetic energy even if the neutron were at rest. The second does require energy (at least 1.29 MeV) to proceed, in the form of reactant kinetic energy. In the first second of the universe, with temperatures higher than $kT >10$ MeV ($10^{11}$K) these reactions are rapid, and in balance (occur with almost equal likelihood) and the $n/p$ ratio is 1. i.e. Equal numbers of neutrons and protons. As the universe expands and cools to less than a few MeV (a few $10^{10}$ K) two things happen. The density of reactants and the reaction rates fall; and the first reaction starts to dominate over the second, since there are fewer reactants with enough kinetic energy (recall that the kinetic energies of the particles are proportional to the temperature) to supply the rest mass energy difference between a neutron and proton. As a result, more protons are produced than neutrons and the $n/p$ ratio begins to fall. The $n/p$ ratio varies smoothly as the universe expands. If there is thermal equilibrium between all the particles in the gas then the $n/p$ ratio is given approximately by
$$\frac{n}{p} \simeq \exp\left[-\frac{(m_n-m_p)c^2}{kT}\right],$$
where the exponential term is the Boltzmann factor and $(m_n - m_p)c^2 = 1.29$ MeV is the aforementioned rest-mass energy difference between a neutron and a proton. The rate at which $n/p$ changes is simply determined by how the temperature varies with time, which in a radiation-dominated universe is derived from the Friedmann equations as $T \propto t^{-1/2}$ (since the temperature is inversely related to the scale factor through Wien's law). In practice, the $n/p$ ratio does not quite vary like that because you cannot assume a thermal equilibrium once the reaction rates fall sufficiently that the time between reactions is comparable with the age of the universe. This in turn depends on the density of all the reactants and in particular the density of neutrinos, electrons and positrons, which fall as $T^3$ (and hence as $t^{-3/2}$). At a temperature of $kT \sim 1$ MeV, the average time for a neutron to turn into a proton is about 1.7s, which is roughly the age of the universe at that point, but this timescale grows much faster than $t$. When the temperature reaches $kT = 0.7$ MeV ($8\times 10^9$K) after about 3 seconds, the reaction rates become so slow (compared with the age of the universe) that the $n/p$ ratio is essentially fixed (though see below$^{*}$) at that point. The final ratio is determined by the Boltzmann factor $\sim \exp(-1.29/0.7)= 1/6.3$. i.e. There are six times as many protons as neutrons about three seconds after the big bang. $^{*}$ Over the next few minutes (i.e. after the epoch talked about in our question) there is a further small adjustment as free neutrons decay into protons,
$$ n \rightarrow p + e + \bar{\nu_e}$$
in the window available to them before they are mopped up to form deuterium and then helium. During this window, the temporal behaviour is
$$ \frac{n}{p} \simeq \frac{1}{6} \exp(-t/t_n),$$
where $t_n$ is the decay time for neutrons of 880s. Since the formation of deuterium occurs after about $t \sim 200$s this final readjustment gives a final n/p ratio of about 1/7. | {
"source": [
"https://physics.stackexchange.com/questions/399297",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/99217/"
]
} |
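The Boltzmann-factor estimates in the answer above are easy to check numerically. As the answer itself stresses, the equilibrium formula only holds while the weak reactions are fast, so it reproduces Weinberg's 38 per cent frame but not the out-of-equilibrium 24 per cent frame. The constants below are the ones quoted in the answer:

```python
import math

kB = 8.617e-11   # Boltzmann constant in MeV per kelvin
dm = 1.29        # (m_n - m_p) c^2 in MeV

def np_ratio(T):
    """Equilibrium n/p ratio from the Boltzmann factor in the answer."""
    return math.exp(-dm / (kB * T))

# Weinberg's "second frame", T = 3e10 K: equilibrium still holds
r = np_ratio(3e10)
print(f"neutron fraction at 3e10 K: {100 * r / (1 + r):.0f}%")   # ~38%

# Freeze-out at kT = 0.7 MeV (T ~ 8e9 K)
r_freeze = np_ratio(0.7 / kB)
print(f"n/p at freeze-out: 1/{1 / r_freeze:.1f}")                # ~1/6.3

# Subsequent free-neutron decay until deuterium forms at t ~ 200 s,
# using the answer's formula (1/6) exp(-t / 880 s)
r_final = (1 / 6) * math.exp(-200 / 880)
print(f"final n/p: 1/{1 / r_final:.1f}")                         # ~1/7.5, i.e. "about 1/7"
```

The first and second lines match the answer's quoted 38 per cent and 1/6.3; the last line shows where the rounded "about 1/7" comes from.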
399,463 | While I was watching a popular science lecture on YouTube, I came across this sentence "Sun is giving us a low entropy, not energy" which was said by Prof. Krzysztof Meissner . I am not a physicist, but this sounds to me like a huge leap. I would be pleased if someone could explain to me the difference. | First, some preliminaries: We always wish to have a system that can do useful work, e.g., run a water wheel, raise a weight, or generate electricity. The catches are that energy is conserved (which you probably knew about) and also that entropy is paraconserved (which you might not have known about). Specifically, entropy can't be destroyed, but it is transferred when one object heats another, and it's also created whenever any process occurs, anywhere. The problem with producing work arises because work doesn't transfer entropy, but heat transfer does (while also creating some entropy). Therefore, we can't simply turn thermal energy (such as the energy the Sun provides) into work; we must dump the accompanying entropy somewhere as well. This is why every heat engine requires not just a source of thermal energy (the so-called hot reservoir) but also a sink for entropy (the so-called cold reservoir). 
In the idealized process, when we pull energy $E$ from the hot reservoir at temperature $T_\mathrm{hot}$, the unavoidable entropy transfer is $$S=\frac{E}{T_\mathrm{hot}}.$$ Now we extract some useful work $W$ (by boiling water and running a steam turbine, for example), and we dump all that entropy into the low-temperature reservoir at temperature $T_\mathrm{cold}$ (using a nearby cool river to condense the steam, for example): $$S=\frac{E-W}{T_\mathrm{cold}}.$$ The energy balance works out: $$E-W=(E-W).$$ The entropy balance works out: $$\frac{E}{T_\mathrm{hot}}=\frac{E-W}{T_\mathrm{cold}}.$$ The efficiency is $$\frac{W}{E}=1-\frac{T_\mathrm{cold}}{T_\mathrm{hot}}.$$ And the higher the temperature of the hot reservoir, the more work $W$ we can pull out while satisfying the two conservation laws. Now to the point: The Sun sends a lot of energy our way: around 1000 W/m² at the earth's surface. But is this in fact all that much energy? The heat capacity of soil is about 1000 J/kg-°C, so if we simply extracted 1°C from a kilogram of soil per second, we'd match the Sun in energy per square meter. And there's a lot of soil available, and its absolute temperature is pretty high (about 283 above absolute zero in divisions of °C). And the heat capacity of water is four times as high! Even better, water is self-circulating, so in this scenario, we could cool seawater and let it recirculate. We could operate a party boat: pull out thermal energy from water to make ice for our cocktails and use the extracted energy to cruise around all day. Unfortunately, the restrictions described above tell us that we can't perform this extraction: there's no lower-temperature reservoir to send the entropy to (here, I'm assuming that most of the earth and atmosphere available to us is at around 10°C). In contrast, the Sun's temperature is enormous—around 5500°C, which makes the denominator of the effective entropy term $S=E/T$ relatively small.
Thus, it's not the energy of the sunlight that's particularly useful—it's its low entropy. | {
"source": [
"https://physics.stackexchange.com/questions/399463",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/159132/"
]
} |
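Plugging numbers into the efficiency formula above makes the contrast concrete; 5800 K and 283 K are rough stand-ins for the solar surface and the ambient temperature, not values computed in the answer:

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum work fraction W/E = 1 - Tc/Th from the entropy bookkeeping above."""
    return 1.0 - T_cold / T_hot

T_ambient = 283.0  # ~10 degrees C, the assumed cold reservoir

# Sunlight, delivered at an effective ~5800 K:
print(carnot_efficiency(5800.0, T_ambient))   # ~0.95: almost all of E is extractable

# The "party boat": seawater barely warmer than the surroundings
print(carnot_efficiency(284.0, T_ambient))    # ~0.0035: essentially useless

# Entropy bookkeeping check for the idealized engine: dumping S = E/T_hot
# into the cold reservoir carries away exactly E - W
E, T_hot = 1000.0, 5800.0
W = E * carnot_efficiency(T_hot, T_ambient)
assert abs((E - W) / T_ambient - E / T_hot) < 1e-9
```

The same 1000 J is worth ~950 J of work if it arrives as sunlight, but only a few joules if it arrives as lukewarm seawater, which is the sense in which the Sun supplies low entropy rather than mere energy.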
400,457 | What does general relativity say about the relative velocities of objects that are far away from one another? In particular:-- Can distant galaxies be moving away from us at speeds faster than $c$? Can cosmological redshifts be analyzed as Doppler shifts? Can I apply a Lorentz transformation in general relativity? | What does general relativity say about the relative velocities of objects that are far away from one another? Nothing. General relativity doesn't provide a uniquely defined way of measuring the velocity of objects that are far away from one another. For example, there is no well defined value for the velocity of one galaxy relative to another at cosmological distances. You can say it's some big number, but it's equally valid to say that they're both at rest, and the space between them is expanding. Neither verbal description is preferred over the other in GR. Only local velocities are uniquely defined in GR, not global ones. Confusion on this point is at the root of many other problems in understanding GR: Question: How can distant galaxies be moving away from us at more than the speed of light? Answer: They don't have any well-defined velocity relative to us. The relativistic speed limit of c is a local one, not a global one, precisely because velocity isn't globally well defined. Question: Does the edge of the observable universe occur at the place where the Hubble velocity relative to us equals c, so that the redshift approaches infinity? Answer: No, because that velocity isn't uniquely defined. For one fairly popular definition of the velocity (based on distances measured by rulers at rest with respect to the Hubble flow), we can actually observe galaxies that are moving away from us at >c, and that always have been moving away from us at >c.[Davis 2004] Question: A distant galaxy is moving away from us at 99% of the speed of light. That means it has a huge amount of kinetic energy, which is equivalent to a huge amount of mass. 
Does that mean that its gravitational attraction to our own galaxy is greatly enhanced? Answer: No, because we could equally well describe it as being at rest relative to us. In addition, general relativity doesn't describe gravity as a force, it describes it as curvature of spacetime. Question: How do I apply a Lorentz transformation in general relativity? Answer: General relativity doesn't have global Lorentz transformations, and one way to see that it can't have them is that such a transformation would involve the relative velocities of distant objects. Such velocities are not uniquely defined. Question: How much of a cosmological redshift is kinematic, and how much is gravitational? Answer: The amount of kinematic redshift depends on the distant galaxy's velocity relative to us. That velocity isn't uniquely well defined, so you can say that the redshift is 100% kinematic, 100% gravitational, or anything in between. Let's take a closer look at the final point, about kinematic versus gravitational redshifts. Suppose that a photon is observed after having traveled to earth from a distant galaxy G, and is found to be red-shifted. Alice, who likes expansion, will explain this by saying that while the photon was in flight, the space it occupied expanded, lengthening its wavelength. Betty, who dislikes expansion, wants to interpret it as a kinematic red shift, arising from the motion of galaxy G relative to the Milky Way Galaxy, M. If Alice and Betty's disagreement is to be decided as a matter of absolute truth, then we need some objective method for resolving an observed redshift into two terms, one kinematic and one gravitational. But this is only possible for a stationary spacetime, and cosmological spacetimes are not stationary. As an extreme example, suppose that Betty, in galaxy M, receives a photon without realizing that she lives in a closed universe, and the photon has made a circuit of the cosmos, having been emitted from her own galaxy in the distant past. 
If she insists on interpreting this as a kinematic red shift, then she must conclude that her galaxy M is moving at some extremely high velocity relative to itself. This is in fact not an impossible interpretation, if we say that M's high velocity is relative to itself in the past. An observer who sets up a frame of reference with its origin fixed at galaxy G will happily confirm that M has been accelerating over the eons. What this demonstrates is that we can split up a cosmological red shift into kinematic and gravitational parts in any way we like, depending on our choice of coordinate system. For those with a more technical background in abstract math, the following description may be helpful. (The answer by knzhou does a nice job of explaining this in nontechnical terms.) Spacetime in GR is described as a semi-Riemannian space. A velocity vector is a vector in the tangent space at a particular point. Velocity vectors at different points belong to different tangent spaces, so they aren't directly comparable. To compare them, you need to parallel transport them to the same spot. If the spacetime is (approximately) flat, then you can do this, and you can say, for example, that the sun's velocity vector minus Vega's velocity vector is a certain value. But if the spacetime is not even approximately flat (e.g., at cosmological scales), then parallel transport is path-dependent, so the comparison becomes completely ambiguous. Related: Why is the observable universe so big? References Davis and Lineweaver, Publications of the Astronomical Society of Australia, 21 (2004) 97, msowww.anu.edu.au/~charley/papers/DavisLineweaver04.pdf | {
"source": [
"https://physics.stackexchange.com/questions/400457",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
400,465 | Is there a way to divert (deviate, deflect) light (photons) by another light (such as laser) or wave ? Just like what a prism can do but using a ray of light or wave instead of a prism. Maybe I'm searching all possible ways to divert light and radio waves. Although for me, it's interesting to find ways in which they do not need a solid or liquid object. Such as magnetic field , light and... Specially the ways that we can implement them today in a lab. If there is any ways with above assumptions, please sort theme in your answer and if you have any reference about them please insert it. Note: Some information like cost of implementation , accuracy in the deviation , limitations of the method and something like these about each method can be useful too. Note: Also diverting the radio waves by different methods is questionable
here. | {
"source": [
"https://physics.stackexchange.com/questions/400465",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/192959/"
]
} |
400,824 | I have seen random errors being defined as those which average to 0 as the number of measurements goes to infinity, and that the error is equally likely to be positive or negative. This only requires a symmetric probability distribution about zero. However typing this question into Google, I did not find a single source that suggested random errors could be anything other than gaussian. Why must random errors be gaussian? | Are random errors necessarily gaussian? Errors are very often Gaussian, but not always.
Here are some physical systems where random fluctuations (or "errors", in contexts where the varying quantity constitutes an error) are not Gaussian: The distribution of times between clicks in a photodetector exposed to light is an exponential distribution.$^{[a]}$ The number of times a photodetector clicks in a fixed period of time is a Poisson distribution. The position offset, due to uniformly distributed angle errors, of a light beam hitting a target some distance away is a Cauchy distribution. I have seen random errors being defined as those which average to 0 as the number of measurements goes to infinity, and that the error is equally likely to be positive or negative. This only requires a symmetric probability distribution about zero. There are distributions that have equal weight on the positive and negative side, but are not symmetric.
Example:
$$ P(x) = \left\{ \begin{array}{ll}
1/2 & x=1 \\
1/4 & x=-1 \\
1/4 & x=-2 \, .
\end{array}\right.$$ However typing this question into Google, I did not find a single source that suggested random errors could be anything other than gaussian. Why must random errors be gaussian? The fact that it's not easy to find references to non-Gaussian random errors does not mean that all random errors are Gaussian :-) As mentioned in the other answers, many distributions in Nature are Gaussian because of the central limit theorem.
The central limit theorem says that given a random variable $x$ distributed according to a function $X(x)$, if $X(x)$ has finite second moment, then given another random variable $y$ defined as the average of many instances of $x$, i.e.
$$y \equiv \frac{1}{N} \sum_{i=1}^N x_i \, ,$$
the distribution $Y(y)$ approaches a Gaussian as $N$ becomes large. The thing is, many physical processes are the sums of smaller processes.
For example, the fluctuating voltage across a resistor is the sum of the voltage contributions from many individual electrons.
Therefore, when you measure a voltage, you get the underlying "static" value, plus some random error produced by the noisy electrons, which because of the central limit theorem is Gaussian distributed.
In other words, Gaussian distributions are very common because so many of the random things in Nature come from a sum of many small contributions. However, there are plenty of cases where the constituents of an underlying error mechanism have a distribution that does not have a finite second moment; the Cauchy distribution is the most common example. There are also plenty of cases where an error is simply not the sum of many small underlying contributions. Either of these cases leads to non-Gaussian errors. $[a]$: See this other Stack Exchange post. | {
"source": [
"https://physics.stackexchange.com/questions/400824",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/57218/"
]
} |
401,216 | In Coordinated Universal Time (UTC), leap seconds are added to account for the slowing down of Earth's rotation. But the slowing down is said to be of the order of milliseconds in a century. Then why were there more than 25 leap seconds added to UTC in the last few decades alone? | It's not the rate of change of the rotation speed that's important, it's the current rotation speed (in the rotating reference frame that stays facing the sun) not matching a 24h day. Thus leap seconds accumulate (on average) at a near-constant rate, because (as you point out) the average rate of change is low compared to the existing mismatch between actual day length and what our clocks say. Remember that a leap second is an absolute offset added/subtracted, not a multiplier on the speed of our clocks that fixes the problem for the future until the speed drifts some more. We're correcting the "error" in our time function by adding step offsets, not by changing the slope. The length of an SI second remains fixed, and the length of a day by our clocks remains fixed at 24 hours / 86400 SI seconds (with no leap second). In practice the linear model doesn't work at all in the short term: there's lots of year-to-year variation, and 1.5-2 ms/day/century is only a long-term average. See @David Hammen's answer for a nice graph and more details. He commented: Nine leap seconds were added in the first eight years after implementing the concept of leap seconds while only two were added over the 13 year span starting in 1999. The chaotic short-term variation dominates over any period short enough to ignore the average slowdown. More details from the US Naval Observatory's Leap Second article: The SI second ($9\,192\,631\,770$ cycles of the Cesium atom) was chosen to be $1 / 31\,556\,925.9747$ of the year 1900. The Earth is constantly undergoing a deceleration caused by the braking action of the tides. 
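The key claim of this answer, that the accumulated offset is dominated by the constant ~2 ms/day mismatch rather than by the slow change in that mismatch, can be sketched with a toy quadratic model (illustrative numbers; the 1.75 ms/day/century slowdown is an assumed long-term average):

```python
EXCESS_MS_PER_DAY = 2.0                  # mean solar day minus 86400 SI seconds
SLOWDOWN_MS_PER_DAY_PER_CENTURY = 1.75   # assumed long-term tidal average

def accumulated_offset_s(days):
    """Accumulated UT1-UTC drift after `days`, in seconds (quadratic model)."""
    centuries = days / 36525.0
    mean_excess_ms = EXCESS_MS_PER_DAY + 0.5 * SLOWDOWN_MS_PER_DAY_PER_CENTURY * centuries
    return days * mean_excess_ms / 1000.0

# One leap second (~0.9 s) accumulates in well under 1000 days:
print(accumulated_offset_s(450))                             # ~0.9 s
# On that timescale the slowdown term contributes only a few milliseconds:
print(accumulated_offset_s(450) - 450 * EXCESS_MS_PER_DAY / 1000.0)
```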
Through the use of ancient observations of eclipses, it is possible to determine the deceleration of the Earth to be roughly 1.5-2 milliseconds per day per century . The second that was specified in terms of the mean tropical year of 1900 January 0 was based on the mean solar second computed by Simon Newcomb, using his Tables of the Sun , which used data gathered from 1750 to 1892, and so it corresponds to the mean solar second from around the middle of that period, i.e., ~1820. For further details, see Wikipedia's info on the Ephemeris Second . Note the units of that measurement: it's ms per day per century, or $\Delta s / s / s$ , like an acceleration, not a velocity. And definitely not 1.5 ms per century. Purely coincidentally , a mean solar day is currently on average 2 ms longer than an SI day, so the current error-accumulation rate is 2 ms / day . It's been a little over two centuries since the defining epoch for the SI second. It takes less than 1000 days to need another leap second. (There are various effects which make solar days differ in length, but on average they're longer than 24h and getting even longer.) In another century from now (with constant deceleration of the Earth), we'll need to add leap seconds about twice as often as we do now, to maintain the cumulative difference UT1-UTC at less than 0.9 seconds. | {
"source": [
"https://physics.stackexchange.com/questions/401216",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/103051/"
]
} |
401,715 | So I'm working on a nuclear physics problem and am looking at radioactive decay. The common unit used for very long decays is years within the literature. Is this the sidereal or tropical year? I want to use units of seconds but seeing as how these 20 minutes 24.5 seconds that differential will add up over time... I would guess tropical but that's just a guess. And on the same note, what about days? 24 hours? or 23 hours 56 minutes and 4.1 seconds? Bonus points for a source | A "year" without qualification may refer to a Julian year (of exactly $31\,557\,600~\rm s$), a mean Gregorian year (of exactly $31\,556\,952~\rm s$), an "ordinary" year (of exactly $31\,536\,000~\rm s$), or any number of other things (not all of which are quite so precisely defined). Radioactive decay tables tend to be compiled from multiple different sources, most of which don't clarify which definition of "year" they used, so it is unclear what definition of year is used throughout. It's quite possible that many tables aren't even consistent with the definition of "year" used to calculate the decay times. On the other hand, the standard error is usually overwhelmingly larger than the deviation created by using any common definition of year, so it doesn't really make a difference. A day in physics without qualification pretty universally refers to a period of exactly $86\,400~\rm s$. | {
"source": [
"https://physics.stackexchange.com/questions/401715",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/101916/"
]
} |
402,118 | Does a charged particle, an electron say, travelling with uniform velocity induce a magnetic field? I believe it doesn't. In primary school, we all learned how to induce a magnetic field into an iron nail by wrapping coils of wire around the nail and then hooking it up to a DC battery, but if you do not coil the wire, the nail doesn't become magnetized. What's happening here? My only guess is that the electrons are accelerating; the magnitudes of their speeds aren't changing, but rather their directions. In the coil, a force must be applying itself to the electrons in order for them to make their spiralling paths; thus, they are said to be accelerating, and that is what causes the magnetic field to develop. | A straight wire does have a magnetic field. It circles around the wire instead of going in a straight line like in a coil. Picture source: http://coe.kean.edu/~afonarev/physics/magnetism/magnetism-el.htm On the left is a straight wire with the magnetic field curling around it. The middle shows a single loop of wire. Notice that the magnetic field still curls around the wire, but the fields from opposite ends of the loop add together to make a strong field. The right picture shows a multi-loop wire (a solenoid), which enhances the field compared to the single loop. The right picture is the kind of field you created with the wire and nail. For the same current, the solenoid creates a much stronger field, which is why it is used to magnetize the nail. To answer your original question, a single electron in motion does have a magnetic field that's similar to the straight wire (the field curls around the electron's path of motion) except that it gets weaker as you move farther away along the electron's path. | {
"source": [
"https://physics.stackexchange.com/questions/402118",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26071/"
]
} |
402,135 | I am trying to calculate the centripetal force for an enlongated object like a fan blade. Where do I measure to to get the distance from center. Would I measure from the center to the end of the blade? Also what if I had a mass such as a hammer in circular motion since this is an uneven distribution of mass where would I measure to from the center of rotation to get the distance from center? | | {
"source": [
"https://physics.stackexchange.com/questions/402135",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/168993/"
]
} |
402,140 | Is there any relation between gravity and electrostatic forces like the formulae for forces of gravity and electrostatics are similiar and the charge plays the same role for electrostatics like mass plays for gravity. Are there also any differences also between them? | | {
"source": [
"https://physics.stackexchange.com/questions/402140",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/193317/"
]
} |
402,383 | This question is inspired by a similar one asked on Quora . Let's say a wizard magicked Jupiter into the Sun, with or without high velocity. What happens? The Quora question has two completely opposed answers: one saying "nothing much happens" and the other saying "the Sun goes out for several hundred years". Both answers give reasons and calculations, and I know enough about physics to find both of them plausible. However ... it's plainly impossible that both answers are correct. Which one (or both?) is incorrect? Why is it incorrect? | Both the quora answers are incorrect. The idea that "nothing happens" is incorrect for reasons I explain in great detail below. The idea that somehow Jupiter spreads itself across the surface of the Sun or directly influences the luminosity of the Sun by doing so is wrong on many levels as pointed out by Victor Toth on the quora page and by Rob and Chris as answers here. Instead I put forward a couple of scenarios where the large amount of accreted energy and/or angular momentum certainly do have an effect on the Sun and/or the radiation the Earth receives from the Sun. Scenario 1: The scenario where Jupiter just drops into the Sun from its current position would certainly have short-term effects. But short-term here means compared with the lifetime of the Sun, not hundreds of years. The kinetic energy of Jupiter at the Sun's surface would be of order $GM_{\odot}M_\mathrm{Jup}/R_{\odot} \sim 4\times 10^{38}$ joules. The solar luminosity is $3.83 \times 10^{26}\ \mathrm{J/s}$. The addition of this much energy (if it is allowed to thermalise) would potentially affect the luminosity of the Sun for timescales of tens of thousands of years. The exact effects will depend on where the energy is deposited. Compared with the binding energy of the star, the additional energy is negligible, but if the energy is dissipated in the convection zone then kinetic energy would do work and lift the convective envelope of the Sun. 
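The energy bookkeeping in this estimate is easy to reproduce. A quick sketch (constants rounded to four digits; the Julian year is used to convert seconds to years):

```python
G = 6.674e-11      # m^3 kg^-1 s^-2
M_sun = 1.989e30   # kg
R_sun = 6.957e8    # m
M_jup = 1.898e27   # kg
L_sun = 3.828e26   # W
YEAR = 3.156e7     # s (Julian year)

# Kinetic energy of Jupiter falling from its orbit to the solar surface
# (the initial orbital energy at 5 au is negligible in comparison).
E_infall = G * M_sun * M_jup / R_sun
print(f"E_infall ~ {E_infall:.1e} J")            # of order 4e38 J, as quoted

# Time for the Sun to radiate that extra energy at its present luminosity.
print(f"t ~ {E_infall / L_sun / YEAR:.1e} yr")   # tens of thousands of years
```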
In other words, the Sun would both increase in luminosity and in radius. If the effects were just limited to the convective envelope, then this has a mass of around $0.02 M_{\odot}$ and so could be "lifted" by $\sim 4\times 10^{38} R_{\odot}^2/GM_{\odot}M_{\rm conv} \sim 0.05 R_{\odot}$. So in this scenario, the Sun would both expand and become more luminous. The relevant timescale is the Kelvin-Helmholtz timescale of the convective envelope , which is of order $GM_{\odot}M_{\rm conv}/R_{\odot} L_{\odot} \sim $few $10^5$ years. If the planet somehow survived and punched its way to the centre of the Sun, then much less energy would be deposited in the convection zone and the effects would be lessened. On longer timescales the Sun would settle back down to the main sequence, with a radius and luminosity only slightly bigger than it was before. This all assumes that Jupiter can remain intact as it falls. It certainly wouldn't "evaporate" in this direct infall scenario, but would it get tidally shredded before it can disappear below the surface? The Roche limit is of order $R_{\odot} (\rho_{\odot}/\rho_{\rm Jup})^{1/3}$. But the average densities of the Sun and Jupiter are almost identical. So it seems likely that Jupiter would be starting to be tidally ripped apart, but as it is travelling towards the Sun at a few hundred km/s at this point, tidal breakup could not be achieved before it had disappeared below the surface. So my conclusion is that dropping Jupiter into the Sun in this scenario would be like dropping a depth charge, with a lag of order $10^{5}$ years before the full effects became apparent. Scenario 2: Jupiter arrives at Roche limit (just above the solar surface) having mysteriously lost a large amount of angular momentum. In this case the effects may be experienced on human timescales. In this case what will happen is Jupiter will be (quickly) shredded by the tidal field, possibly leaving a substantial core. 
At an orbital radius of $2 R_{\odot}$, the orbital period will be about 8 hours, the orbital speed about $300\ \mathrm{km/s}$ and the orbital angular momentum about $10^{42}\ \mathrm{kg\ m^2\ s^{-1}}$. Assuming total destruction, much of the material will form an accretion disc around the Sun, since it must lose some of its angular momentum before it can be accreted. How much of the Sun's light is blocked is uncertain. It mainly depends on how the material is distributed in the disk, especially the disk scale height. This in turn depends on the balance of the heating and cooling mechanisms and hence the temperature of the disk. Some sort of minimal estimate could be to assume the disk is planar and spread evenly between the solar surface and $2R_{\odot}$ and that it gets close to the solar photospheric temperature at $\sim 5000\ \mathrm K$. In which case the disk area is $3 \pi R_{\odot}^2$, with an "areal density" of $\sigma \sim M_{\rm Jup}/3\pi R_{\odot}^2$. In hydrostatic equilibrium, the scale height will be $\sim kT/g m_\mathrm H$, where $g$ is the gravitational field and $m_\mathrm H$ the mass of a hydrogen atom. The gravity (of a plane) will be $g \sim 4\pi G \sigma$. Putting in $T \sim 5000\ \mathrm K$, we get a scale height of $\sim 0.1 R_{\odot}$. Given that Earth is in the ecliptic plane and this is where the disk will be, then a large fraction, $\gt 20\ \%$, of the sunlight reaching the Earth may be blocked. To work out if this is the case, we need to work out an optical depth of the material. For a scale height of $0.1 R_{\odot}$ and a planar geometry, the density of the material is $\sim 3\ \mathrm{kg/m^3}$. Looking through the disk edge-on, this corresponds to a column density of $\sim 10^{10}\ \mathrm{kg/m^2}$. For comparison, the solar photospheric density is of order $10^{-4}\ \mathrm{kg/m^3}$, and the photosphere is only the upper $1000\ \mathrm{km}$ of the Sun. 
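These orbital and scale-height estimates can be checked in a few lines (a sketch; the isothermal sheet at the quoted 5000 K is the simplifying assumption, so only the order of magnitude is meaningful):

```python
import math

G, M_sun, R_sun = 6.674e-11, 1.989e30, 6.957e8
M_jup, k_B, m_H = 1.898e27, 1.381e-23, 1.673e-27

r = 2 * R_sun
v = math.sqrt(G * M_sun / r)                # circular orbital speed
P_hr = 2 * math.pi * r / v / 3600.0         # orbital period in hours
L_orb = M_jup * v * r                       # orbital angular momentum

print(f"v ~ {v/1e3:.0f} km/s, P ~ {P_hr:.1f} h, L ~ {L_orb:.1e} kg m^2/s")

# Isothermal scale height of a self-gravitating sheet at T ~ 5000 K.
sigma = M_jup / (3 * math.pi * R_sun**2)    # areal density of the disk
g = 4 * math.pi * G * sigma                 # gravity of an infinite sheet
H = k_B * 5000.0 / (g * m_H)
print(f"H ~ {H / R_sun:.2f} R_sun")         # of order 0.1 R_sun
```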
Given that the definition of the photosphere is where the material becomes optically thick, we can conclude that a tidally shredded Jupiter is optically thick to radiation and indeed the sunlight falling on the Earth would be very significantly reduced – whether or not the amount of radiation impacting the Earth is reduced or increased is a tricky radiative transfer problem, since if the disk were at $5000\ \mathrm K$ and optically thick it would be kicking off a lot of radiation! How long the accretion disk would remain, I am unsure how to calculate. It depends on the assumed viscosity and temperature structure and how much mass is lost through evaporation/winds. The accreted material will have radiated away a large fraction of its gravitational potential energy, so the energetic effects will be much less severe than Scenario 1. However, the Sun will accrete $\sim 10^{42}\ \mathrm{kg\ m^2\ s^{-1}}$ of angular momentum, which is comparable to its current angular momentum.
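The claim that the accreted angular momentum is comparable to the Sun's spin also checks out. A sketch (the moment-of-inertia factor ~0.07 for a centrally condensed star and the ~25.4-day rotation period are assumed values, not from the answer):

```python
import math

G, M_sun, R_sun, M_jup = 6.674e-11, 1.989e30, 6.957e8, 1.898e27

# Orbital angular momentum delivered from a circular orbit at ~2 R_sun.
r = 2 * R_sun
L_acc = M_jup * math.sqrt(G * M_sun / r) * r

# Present solar spin: L = I * omega with I ~ 0.07 M R^2 (assumed factor).
I_sun = 0.07 * M_sun * R_sun**2
omega = 2 * math.pi / (25.4 * 86400.0)
L_spin = I_sun * omega

print(f"L_acc / L_spin ~ {L_acc / L_spin:.1f}")  # a factor of a few
```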
The accretion of Jupiter in this way is therefore sufficient to increase the angular momentum of the Sun by a significant amount . In the long term this will have a drastic effect on the magnetic activity of the Sun – increasing it by a factor of a few to an order of magnitude. | {
"source": [
"https://physics.stackexchange.com/questions/402383",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/177855/"
]
} |
402,395 | trying to select a suitable material for the pinhole aperture of a pinhole camera. The pinhole minimum size is limited by diffraction. Is there a change in the quality/quantity of diffraction if the edge around which light diffracts is an electrical conductor or not ? (for ex in old polarizing filters the wires would interact with the electric field of the EM wave) | | {
"source": [
"https://physics.stackexchange.com/questions/402395",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/26275/"
]
} |
402,407 | Consider two spacetimes with the same manifold $M$ but distinct metric $g,g'$. How do I identify a point $p \in M$ in two spacetimes? Specifically, if I give the coordinates of the point in one spacetime, can I find it in the other spacetime? E.g. Let $M=\mathbf{R}^4$, If a point $p\in \mathbf{R}^4$ in Minkowski spacetime $(\mathbf{R}^4,\eta)$ is given in polar coordinates as $(t_0,r_0,\theta_0,\phi_0)$, does it mean in Schwarzschild spacetime with Schwarzschild coordinates, it is the point with coordinates $(t_0,r_0,\theta_0,\phi_0)$ as well? Do they use the same coordidnate chart? I think maybe the coordinate charts have to agree in order to do that, but I am not sure whether there is a coordinate-independent way of doing that. If not, how do we check whether the two coordinate charts agree in general? | | {
"source": [
"https://physics.stackexchange.com/questions/402407",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/65278/"
]
} |
402,420 | I had a question regarding the addition of electric potentials. Consider two positively charged particles $q_1$ and $q_2$ at distance $R$ apart. Let the charges have magnitudes $q_1$ and $q_2$. For a moment, let me remove $q_2$ and calculate the potential a small distance $r$ from $q_1$. Now, let me put $q_2$ back and calculate the potential at the same point ($q_1$, $q_2$, $r$ lie on the same straight line and $r$ is between $q_1$ and $q_2$). When is the potential greater and why? I did get the explanation that because the potential is scalar you just add individual potentials and hence it's greater when both the charges are present, but I didn't understand this thoroughly enough. When I looked at it, I saw that when both charges are present, the force on a test charge at $r$ is smaller in magnitude (because the test charge experiences two forces in opposite directions) and so the potential (which is integral $F$ . $dr$ ) would be smaller than if only $q_1$ were present. But then again, the question was from where to where do I integrate, from infinity to $r$ or from the point between $q_1$ and $q_2$ where the net force is zero (and hence potential is 0) to $r$. Can someone please help me out? | 
Putting in $T \sim 5000\ \mathrm K$, we get a scale height of $\sim 0.1 R_{\odot}$. Given that Earth is in the ecliptic plane and this is where the disk will be, then a large fraction, $\gt 20\ \%$, of the sunlight reaching the Earth may be blocked. To work out if this is the case, we need to work out an optical depth of the material. For a scale height of $0.1 R_{\odot}$ and a planar geometry, then the density of the material is $\sim 3\ \mathrm{kg/m^3}$. Looking though this corresponds to a column density of $\sim 10^{10}\ \mathrm{kg/m^2}$. For comparison, the solar photospheric density is of order $10^{-12}\ \mathrm{kg/m^3}$ and is only the upper $1000\ \mathrm{km}$ of the Sun. Given that the definition of the photosphere is where the material becomes optically thick, we can conclude that a tidally shredded Jupiter is optically thick to radiation and indeed the sunlight falling on the Earth would be very significantly reduced – whether or not the amount of radiation impacting the Earth is reduced or increased is a tricky radiative transfer problem, since if the disk were at $5000\ \mathrm K$ and optically thick it would be kicking off a lot of radiation! How long the accretion disk would remain, I am unsure how to calculate. It depends on the assumed viscosity and temperature structure and how much mass is lost through evaporation/winds. The accreted material will have radiated away a large fraction of its gravitational potential energy, so the energetic effects will be much less severe than Scenario 1. However, the Sun will accrete $\sim 10^{42}\ \mathrm{kg\ m^2\ s^{-1}}$ of angular momentum, which is comparable to its current angular momentum.
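The order-of-magnitude figures quoted in both scenarios can be checked with a short script. This is a rough sketch using standard solar and Jovian constants; the 5000 K disk temperature and the 0.02 M☉ convective-envelope mass are the answer's own assumptions, and the exact constants are mine.

```python
import math

# Rough numerical check of the order-of-magnitude estimates in both
# scenarios; standard solar/Jovian constants, SI units throughout.
G     = 6.674e-11   # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
R_sun = 6.957e8     # m
L_sun = 3.83e26     # W
M_jup = 1.898e27    # kg
k_B   = 1.381e-23   # J/K
m_H   = 1.673e-27   # kg
yr    = 3.156e7     # s

# Scenario 1: kinetic energy at the solar surface, envelope "lift",
# and the Kelvin-Helmholtz timescale of the convective envelope.
E_infall = G * M_sun * M_jup / R_sun                   # ~4e38 J
M_conv   = 0.02 * M_sun
lift     = E_infall * R_sun**2 / (G * M_sun * M_conv)  # ~0.05 R_sun
t_KH     = G * M_sun * M_conv / (R_sun * L_sun) / yr   # ~few 1e5 yr

# Scenario 2: circular orbit at 2 R_sun, plus a planar 5000 K disk.
r     = 2 * R_sun
T_orb = 2 * math.pi * math.sqrt(r**3 / (G * M_sun)) / 3600  # ~8 h
v_orb = math.sqrt(G * M_sun / r)                            # ~300 km/s
L_orb = M_jup * v_orb * r                                   # ~1e42 kg m^2/s

sigma = M_jup / (3 * math.pi * R_sun**2)   # disk areal density
g     = 4 * math.pi * G * sigma            # gravity of a plane
h     = k_B * 5000 / (g * m_H)             # hydrostatic scale height

print(f"E_infall ~ {E_infall:.1e} J, lift ~ {lift / R_sun:.2f} R_sun, t_KH ~ {t_KH:.1e} yr")
print(f"T_orb ~ {T_orb:.1f} h, v_orb ~ {v_orb / 1e3:.0f} km/s, L_orb ~ {L_orb:.1e} kg m^2/s")
print(f"scale height ~ {h / R_sun:.2f} R_sun")
```

Running it reproduces the ~4×10³⁸ J, ~8 hour, ~300 km/s and ~10⁴² kg m² s⁻¹ figures; the scale height comes out near 0.2 R☉, the same order as the ~0.1 R☉ quoted above.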
The accretion of Jupiter in this way is therefore sufficient to increase the angular momentum of the Sun by a significant amount . In the long term this will have a drastic effect on the magnetic activity of the Sun – increasing it by a factor of a few to an order of magnitude. | {
"source": [
"https://physics.stackexchange.com/questions/402420",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/166132/"
]
} |
403,016 | Is there a physical limit to data transfer rate (e.g. for USB $3.0$, this rate can be a few Gbit per second)? I am wondering if there is a physical law giving a fundamental limit to data transfer rate, similar to how the second law of thermodynamics tells us perpetual motion cannot happen and relativity tells us going faster than light is impossible. | tl;dr - The maximum data rate you're looking for would be called the maximum entropy flux . Realistically speaking, we don't know nearly enough about physics yet to meaningfully predict such a thing. But since it's fun to talk about a data transfer cord that's basically a $1\mathrm{mm}$-tube containing a stream of black holes being fired near the speed of light, the below answer shows an estimate of $1.3{\cdot}{10}^{75}\frac{\mathrm{bit}}{\mathrm{s}}$, which is about $6.5{\cdot}{10}^{64}$ faster than the current upper specification for USB, $20\frac{\mathrm{Gbit}}{\mathrm{s}}=2{\cdot}{10}^{10}\frac{\mathrm{bit}}{\mathrm{s}}$. Intro You're basically looking for an upper bound on entropy flux: entropy : the number of potential states which could, in theory, codify information; flux : rate at which something moves through a given area. So,$$\left[\text{entropy flux}\right]~=~\frac{\left[\text{information}\right]}{\left[\text{area}\right]{\times}\left[\text{time}\right]}\,.$$ Note: If you search for this some more, watch out for "maximum entropy thermodynamics" ; " maximum " means something else in that context. In principle, we can't put an upper bound on stuff like entropy flux because we can't claim to know how physics really works. But, we can speculate at the limits allowed by our current models. Speculative physical limitations Wikipedia has a partial list of computational limits that might be estimated given our current models. In this case, we can consider the limit on maximum data density, e.g. as discussed in this answer . 
Then, naively, let's assume that we basically have a pipeline shipping data at maximum density arbitrarily close to the speed of light. The maximum data density is limited by the Bekenstein bound : In physics, the Bekenstein bound is an upper limit on the entropy $S$, or information $I$, that can be contained within a given finite region of space which has a finite amount of energy—or conversely, the maximum amount of information required to perfectly describe a given physical system down to the quantum level. – "Bekenstein bound", Wikipedia [references omitted] Wikipedia lists it as allowing up to$$
I
~ \leq ~ {\frac {2\pi cRm}{\hbar \ln 2}}
~ \approx ~ 2.5769082\times {10}^{43}mR
\,,$$where $R$ is the radius of the system containing the information and $m$ is the mass. Then for a black hole, apparently this reduces to$$
I
~ \leq ~
\frac{A_{\text{horizon}}}{4\ln{\left(2\right)}\,{{\ell}_{\text{Planck}}^2}}
\,,$$where ${\ell}_{\text{Planck}}$ is the Planck length ; $A_{\text{horizon}}$ is the area of the black hole's event horizon. This is inconvenient, because we wanted to calculate $\left[\text{entropy flux}\right]$ in terms of how fast information could be passed through something like a wire or pipe, i.e. in terms of $\frac{\left[\text{information}\right]}{\left[\text{area}\right]{\times}\left[\text{time}\right]}.$ But, the units here are messed up because this line of reasoning leads to the holographic principle which basically asserts that we can't look at maximum information of space in terms of per-unit-of-volume, but rather per-unit-of-area. So, instead of having a continuous stream of information, let's go with a stream of discrete black holes inside of a data pipe of radius $r_{\text{pipe}}$. The black holes' event horizons have the same radius as the pipe, and they travel at $v_{\text{pipe}} \, {\approx} \, c$ back-to-back. So, information flux might be bound by$$
\frac{\mathrm{d}I}{\mathrm{d}t}
~ \leq ~
\frac{A_{\text{horizon}}}{4\ln{\left(2\right)}\,{{\ell}_{\text{Planck}}^2}}
{\times}
\frac{v_{\text{pipe}}}{2r_{\text{horizon}}}
~{\approx}~
\frac{\pi \, c }{2\ln{\left(2\right)}\,{\ell}_{\text{Planck}}^2} r_{\text{pipe}}
\,,$$where the observation that $
\frac{\mathrm{d}I}{\mathrm{d}t}~{\propto}~r_{\text{pipe}}
$ is basically what the holographic principle refers to. Relatively thick wires are about $1\,\mathrm{mm}$ in diameter, so let's go with $r_{\text{pipe}}=5{\cdot}{10}^{-4}\mathrm{m}$ to mirror that to estimate (WolframAlpha) :$$
\frac{\mathrm{d}I}{\mathrm{d}t}
~ \lesssim ~
1.3{\cdot}{10}^{75}\frac{\mathrm{bit}}{\mathrm{s}}
\,.$$ Wikipedia claims that the maximum USB bitrate is currently $20\frac{\mathrm{Gbit}}{\mathrm{s}}=2{\cdot}{10}^{10}\frac{\mathrm{bit}}{\mathrm{s}}$, so this'd be about $6.5{\cdot}{10}^{64}$ times faster than USB's maximum rate. However , to be very clear, the above was a quick back-of-the-envelope calculation based on the Bekenstein bound and a hypothetical tube that fires black holes near the speed of light back-to-back; it's not a fundamental limitation to regard too seriously yet. | {
"source": [
"https://physics.stackexchange.com/questions/403016",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/28745/"
]
} |
403,261 | Why are rainbows relatively rare? On any given day, there are billions of water drops in the air of varying sizes and dispersions, all of which light is passing through and refracting. What physical phenomenon has to occur so that these drops combine to form a rainbow? Why is it more digital (visible or not) than analog? Is it that the drops have to be consistently dispersed and sized? If that was the case, why don't rainbows appear just in the regions that have a suitable makeup? Or does a single drop cause the refraction we see across the sky? I was trying to explain rainbows to my kids but got totally confused... Here is an example of a rainbow at my work, 7:00am in Edwards, California, facing west, no rain, but it was partly cloudy. The rainbow seemed to have no connection to the clouds. Hopefully the answers will allow for the existence of this rainbow. Thanks! | A number of conditions have to be just right in order to see a rainbow. The Sun has to be visible in the sky. Rainbows don't occur on overcast days. The light hitting the raindrops needs to come from what is close to a point source to have the reflections and refractions in a myriad number of raindrops combine to form a rainbow. Diffuse light (overcast conditions): No rainbow. The Sun has to be fairly low to the horizon. The primary bow forms a cone with your eye as the vertex and the line from the Sun through your head as the axis, with the red light at an angle of about 42° from the axis and the blue, about 40°. The Sun needs to be below 40° above the horizon to see a rainbow, and at that high of an angle, the rainbow won't be very good. Rainbows are best when they form less than an hour or so after sunrise or less than an hour or so before sunset. Rain needs to be falling opposite the Sun. Off to the side: No rainbow. From a very low cloud at the horizon: No rainbow. It has to be rain rather than a fog or a mist. Cloud droplets are far too small to form a rainbow.
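Returning to the cone geometry for a moment: the 42°/40° figures come from Descartes' minimum-deviation construction for one internal reflection in a spherical drop. A small numeric sketch (the refractive indices 1.331 and 1.343 for red and blue light in water are assumed values, not given in the answer):

```python
import math

def rainbow_angle(n):
    """Angle of the primary bow from the antisolar axis for a drop of
    refractive index n, using Descartes' minimum-deviation condition
    cos(i) = sqrt((n^2 - 1) / 3) for one internal reflection."""
    i = math.acos(math.sqrt((n**2 - 1) / 3))   # incidence angle (radians)
    r = math.asin(math.sin(i) / n)             # refraction angle via Snell's law
    deviation = math.degrees(2 * i - 4 * r) + 180
    return 180 - deviation

print(rainbow_angle(1.331))  # red light in water  -> about 42.4 degrees
print(rainbow_angle(1.343))  # blue light in water -> about 40.7 degrees
```

Because blue light bends more (higher n), its cone is slightly narrower, which is why red sits on the outside of the bow.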
Cloud droplets are about the same size as the wavelength of visible light. This means light hitting cloud droplets is diffracted rather than reflected and refracted. Clouds form glories, coronae, and fogbows. The latter are similar to rainbows, but without color. Mists form at best fuzzy rainbows; the drops are so small that diffraction dominates over reflection and refraction. The drops need to be about a millimeter in diameter to form a rainbow. Altogether, this makes rainbows rather rare. | {
"source": [
"https://physics.stackexchange.com/questions/403261",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/194310/"
]
} |
403,646 | While studying the basics of quantum computers, I came across Hadamard gates and learned, that these gates are used to put qubits into superposition meaning that these qubits are both, 0 and 1 at the same time. I've also learned that superposition is very susceptible to external influences and can be "destroyed" quickly. Given that superposition seems to be so fragile: Does it exists naturally? Are there particles that are in superposition for a longer period of time? | One of the common misconceptions that people starting out with QM often have is to think that a system is either in a superposition state or it is not. Actually superposition is only defined relative to a particular basis (such as the eigenstates of some observable). If a system is in a state of superposition relative to one basis it is always possible to define a basis where it is not and vice versa. So for example a particle with a definite position is in a superposition of momentum states or a spin pointing up relative to the $z$ direction is in a superposition of up and down relative to the $x$ or $y$ directions. | {
"source": [
"https://physics.stackexchange.com/questions/403646",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/194505/"
]
} |
403,864 | The instantaneous acceleration $\textbf{a}(t)$ of a particle is defined as the rate of change of its instantaneous velocity $\textbf{v}(t)$: $$\textbf{a}(t)=\frac{\mathrm{d}}{\mathrm{d}t}\textbf{v}(t).\tag{1}$$ If the speed is constant, then $$\textbf{a}(t)=v\frac{\mathrm{d}}{\mathrm{d}t}\hat{\textbf{n}}(t)\tag{2}$$ where $\hat{\textbf{n}}(t)$ is the instantaneous direction of velocity which changes with time. Questions: According to the definition (1) what is a deceleration? In case (2), when will $\textbf{a}(t)$ represent a deceleration? For example, in uniform circular motion, why is it called the centripetal acceleration and not centripetal deceleration? | Acceleration is the general term for a changing velocity. Deceleration is a kind of acceleration in which the magnitude of the velocity is decreasing. The reason this might be confusing is because the word 'acceleration' is sometimes used to mean that the magnitude of the velocity is increasing , to contrast it with deceleration. One cannot go wrong, however, if one always takes acceleration to mean simply 'changing velocity'. In that case, circular motion corresponds to acceleration (because the velocity is changing) but not deceleration (because its magnitude is not decreasing). | {
"source": [
"https://physics.stackexchange.com/questions/403864",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36793/"
]
} |
403,870 | I know that to have a current in a circuit we need a potential difference that creates a gradient that makes electrons move from low potential to high potential. My question is that how, in a circuit, potential difference makes the charges move so perfect ( along the wire because that would need a constantly changing force on the charges along the wire) Thanks | Acceleration is the general term for a changing velocity. Deceleration is a kind of acceleration in which the magnitude of the velocity is decreasing. The reason this might be confusing is because the word 'acceleration' is sometimes used to mean that the magnitude of the velocity is increasing , to contrast it with deceleration. One cannot go wrong, however, if one always takes acceleration to mean simply 'changing velocity'. In that case, circular motion corresponds to acceleration (because the velocity is changing) but not deceleration (because its magnitude is not decreasing). | {
"source": [
"https://physics.stackexchange.com/questions/403870",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/194517/"
]
} |
404,264 | My son asks the above (if not in quite these words), and I am embarrassed to realize that I do not know. Can gravity, for example, or the strong or weak forces ever be repulsive? How/when? | Famously, magnets have North and South poles, and like repels like. You can think of these as positive and negative "magnetic charges". Similarly, electrical charges obey the like-repels-like rule (e.g. two electrons are both negatively charged and repel each other). The strong and weak nuclear forces can be thought of as more complicated cousins of electromagnetism, with multiple charges that still come in positive and negative. (For example, you can think of quarks as having a positive charge called a color , antiquarks as having a negative charge called an anticolor, and gluons as having each, like the poles of a magnet.) The details are more complicated, but yes, like still repels like. When we look at gravity, we find positive "charges" called masses that attract each other. Why is this case different? It turns out to be due to EM and nuclear forces having spin-1 carriers and gravity's hypothetical carriers being spin-2, but that's probably overkill for this question. Could gravity repel? It would require opposite charges, i.e. positive and negative, and where would you get negative mass from? The cosmological constant $\Lambda$ that accelerates the expansion of the universe can be thought of as a negative contribution to the universe's density, resulting in a repulsive effect. | {
"source": [
"https://physics.stackexchange.com/questions/404264",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/194812/"
]
} |
404,411 | I searched and found a lot of questions and answers about red shift here but none with the answer to mine. (sorry if it is there somewhere and I did not find it.) Everyone is saying the light from the far away galaxies is red shifted and I could find a lot of formulas and physics theories about that. My question is: light is red shifted compared with what? Why is it not possible for the source to emit red light? I'm asking this question keeping in mind that first they saw red light and then decided that the Universe expands, not the other way around. | Redshift doesn't actually mean the light is red, or was ever red. That's what is confusing you. "Red" and "blue" in this context are shorthand ways to say "towards longer wavelengths/lower energies" (red) and "towards shorter wavelengths/higher energies" (blue), because in the visible light spectrum, red is at the low energy end of what we can see, and blue at the high energy end. In simple terms, light, radio, gamma waves - any electromagnetic radiation at all - from a source that is "red shifted" is a way to say that it is received (by us) at a lower energy than it "really" was emitted (assuming a suitable reference frame). But it could be any radiation at all. So "redshifted" could describe: gamma rays emitted from a distant galaxy that we detect as x rays, yellowish/white visible light from a star like our sun that an extragalactic observer detects as visible but more orange tinted due to their velocity, ultraviolet light perceived as blue visible light, infrared rays perceived as radio waves, so long as the explanation is based on the relative velocity of the emitter and observer, the effect of gravity, or the expansion of space (which actually "stretch" or "compress" the wavelength of each photon), and not due to factors such as filtering of light, which just bias the photons received.
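For the velocity case described above, the size of the shift follows the standard relativistic Doppler formula for motion along the line of sight; a brief illustrative sketch (the 10%-of-c velocity and the H-alpha line are just examples, not from the answer):

```python
import math

def redshift(beta):
    """Redshift z for a source receding at v = beta * c along the line
    of sight (relativistic Doppler). z > 0: wavelengths stretched
    (redshift); beta < 0 (approaching source) gives z < 0 (blueshift)."""
    return math.sqrt((1 + beta) / (1 - beta)) - 1

# A source receding at 10% of c: wavelengths are stretched by ~10.6%
print(redshift(0.1))                   # about 0.106

# H-alpha emitted at 656.3 nm by such a source arrives at ~726 nm
print(656.3 * (1 + redshift(0.1)))

# The same source approaching instead is blueshifted
print(redshift(-0.1))                  # about -0.095
```

Measuring the received wavelength of a known spectral line and inverting this relation is exactly how a line-of-sight velocity is extracted from a spectrum.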
The same kind of statements, but inverted, are true for blue shifted light (orange light seen as yellow/white, xrays received as gamma rays, etc.) Importance: Many kinds of light we see in the universe are very well defined. For example, we know exactly what frequencies of light excited hydrogen can emit when it loses energy. We also know exactly what frequencies hydrogen gas clouds can absorb as light travels through them (so that specific frequency is "missing" when we see it). [They're the same thing!] The frequencies related to each source commonly show up as a pattern of very specific frequencies, or a distribution of frequencies, not just one frequency. These patterns of frequencies are different for each element and act like a "fingerprint". In simple terms, it's possible to look at a pattern of frequencies (often drawn as "spectral lines" in graphical form), and be sure which lines represent what element. It is so specific that we can often even identify the exact interaction that gave rise to the radiation (specific interactions usually have well known energies for the photons they produce). Knowing this, we can be sure what the original frequencies for that interaction or element would "really" have been. The difference between that and the frequency we actually detected is the red or blue shift that the radiation has experienced. So a cosmologist can tell from the spectra they detect what original frequencies were emitted, and they can also be absolutely sure whether the light or radio or other waves they detect always were that frequency, or were originally emitted at a different frequency but have been red or blue shifted by some amount (=received at a lower or higher frequency), and that this is due to their high relative velocities. (The other possible cause is gravitational redshift; see next section.) Causes: In astronomical terms, the most common cause by far for red/blueshifting is an object's relative velocity towards or away from Earth.
In this case, the red/blueshift is ultimately due to special relativity (the movement of objects relative to an observer in spacetime). Whether in our own galaxy or elsewhere, most objects in space are moving towards or away from us. On a cosmic scale the expansion of the universe means that almost everything outside our own galactic supercluster is moving away from us at high speed - and the further away, the faster it's receding. Light and other radiation received from very old objects, which has been travelling for billions of years, will also be redshifted, because that radiation will have been affected by the expansion of space over time, so on a very large cosmic scale, redshifting is linked to time/age/years ago and distance, as well as velocity - known as Hubble's Law. Whether the velocity is due to space expanding, or the object's own movement within space, a red/blueshift will result. The other known cause for redshifting is the effect of extreme gravity, known as "gravitational redshift". In this case, the ultimate explanation is general relativity (the effect of mass and gravity on spacetime). For example, radiation given off very close to a massive object such as a black hole, or perhaps passing a very massive object on its journey to us, could be redshifted due to gravity. (Theoretically it could work the other way around as well - an observer who could somehow hover right in the vicinity of a black hole might see other objects as blueshifted - but in practice this is a perspective we never see on Earth.) Historically for a time, this "dual cause" led to some confusion, because in the early days of radio astronomy, astronomers weren't always sure if they were seeing a very distant/fast-moving object, or a nearby object affected by gravity. However, these days astronomers are usually very sure which they're looking at. Example: Suppose we try and use this knowledge.
Instead of saying just that we detect radio waves of some frequencies from a source, we can say (for example) that what we detect is a match for emissions from hydrogen, with some carbon, and that the hydrogen lines were redshifted by X amount but the carbon spectrum was blueshifted by Y amount . Therefore we conclude we're actually looking at 2 objects, probably one containing carbon that's traveling towards us at a certain velocity, one containing hydrogen travelling away from us at a certain velocity. Perhaps one source is almost behind the other, or it's a binary star system. From the velocities and amount of red/blue shift, we can decide the distances (are either source in our local galaxy or cluster, or are they billions of light years away), and much more. If they are in a binary system we can expect to see their red/blueshifts change periodically as each of them moves more towards us, then more away. From their emissions we can figure what type of stars they are and therefore their likely/estimated mass (I'm simplifying a lot!). From the time taken to rise and fall in red/blue shift we can work out how long they take to orbit, and their relative masses, distances apart, etc. And so on, and so on. As a (simplified!) second example, we can measure the spectra of stars at the centre of our galaxy. If we plot over time, the star's positions versus the amount of red or blue shifting of their light, we find they periodically undergo changes in shift - redder shifted, then bluer shifted. That says their velocity relative to us gets larger and smaller. Conclusion: the stars in the centre of our galaxy are all orbiting something. The amount of shift and distance, and a bit of computer work, lets us work out how "tight" all their orbits are. So we can work out the mass of whatever they are all orbiting. We can discover the object has a huge mass. But we also know the size of the smallest orbits of these stars. 
Whatever the object is, that they're orbiting, it has to be smaller than the orbit of the stars, otherwise the stars would quickly lose energy and spiral in/merge. When you point a telescope there, you don't see any giant mass object - but we know one is there. It turns out that to fit that amount of mass in that size space, you'd have to have a black hole. Nothing else would do it. And that's one way we can be certain there's a large black hole at the centre of our galaxy (and many others), and calculate its mass. All from stellar red/blueshift measurements! Update: Hubble's law of redshifted light and the expansion of the universe Coming back to the OP, the question specifically refers to redshifted light and "deciding the universe was expanding". So I'll try a quick explanation (this is a whole question of its own!). About a hundred years ago, Hubble formulated his law (more accurately a rule of thumb) which said that light from distant galaxies was redshifted, and the more distant the galaxy, the more redshifted the light. Where galaxies were close enough to be measured directly, they turned out to be receding from Earth. Now, this might have meant they were all travelling outward at extreme velocities from some common centre, but could have many other meanings: one theory suggested matter was being created continually to replace it (the rate would have been very small). So although the Big Bang was conceptualised, there actually wasn't much evidence and it was only several decades later that other overwhelming evidence (radio astronomy, standard model, cosmological modelling, stellar lifetime cycles, expansion of space, fusion processes, and myriad other discoveries) gradually ended up supporting the Big Bang theory. 
We are now extremely sure, from many different kinds of observation and knowledge, that light from distant galaxies is redshifted to lower frequencies because of the expansion of space, and the Cosmic Microwave Background can be detected and identified as an extremely redshifted form of light from excited hydrogen atoms emitted at the dawn of our universe, when it was about 370,000 years old. But it's important to remember that it was not immediately obvious or accepted by many astronomers for several decades, that redshifted light meant that our known universe had to be expanding, or began at a specific point in time. People didn't just jump at that conclusion. They had extreme redshifts, that was undeniable - but what did they mean? It was not even clear how a universe might expand, if it did, or what might lead to an expansion, if that was what was being seen. So there were many unsatisfactory questions and doubts. As with much of science, the actual observations came first. Gaining an understanding of them, and what they meant, and testing theories which might explain a universe with those observations, took many years after that time. However, once modern cosmological ideas of the Big Bang began to be taken seriously as a theory, the detailed evidence gained through redshifted radiation became crucial evidence for both of these ideas and for much of modern cosmology. | {
"source": [
"https://physics.stackexchange.com/questions/404411",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/193562/"
]
} |
404,418 | What is the relation between potential energy & electric field? | Redshifts doesn't actually mean the light is red, or was ever red. That's what is confusing you. "Red" and "blue" in this context are shorthand ways to say "towards longer wavelengths/lower energies" (red) and "towards shorter wavelengths/higher energies" (blue), because in the visible light spectrum, red is at the low energy end of what we can see, and blue at the high energy end. In simple terms, light, radio, gamma waves - any electromagnetic radiation at all - from a source that is "red shifted" is a way to say that it is received (by us) at a lower energy than it "really" was emitted (assuming a suitable reference frame). But it could be any radiation at all. So "redshifted" could describe: gamma rays emitted from a distant galaxy that we detect as x rays, yellowish/white visible light from a star like our sun that an extragalactic observer detects as visible but more orange tinted due to their velocity, ultraviolet light perceived as blue visible light infrared rays perceived as radio waves so long as the explanation is based on the relative velocity of the emitter and observer, the effect of gravity, or the expansion of space (which actually "stretch" or "compress" the wavelength of each photon), and not due to factors such as filtering of light, which just bias the photons received. The same kind of statements, but inverted, are true for blue shifted light (orange light seen as yellow/white, xrays received as gamma rays, etc) Importance Many kinds of light we see in the universe are very well defined. For example, we know exactly what frequencies of light, excited hydrogen can emit when it loses energy. We also know exactly what frequencies hydrogen gas clouds can absorb as light travels through them (so that specific frequency is "missing" when we see it). [They're the same thing!] 
The frequencies related to each source commonly show up as a pattern of very specific frequencies, or a distribution of frequencies, not just one frequency. These patterns of frequencies are different for each element and act like a "fingerprint". In simple terms, it's possible to look at a pattern of frequencies (often drawn as "spectral lines" in graphical form), and be sure which lines represent what element. It is so specific that we can often even identify the exact interaction that gave rise to the radiation (specific interactions usually have well known energies for the photons they produce). Knowing this, we can be sure what the original frequencies for that interaction or element would "really" have been. The difference between that and the frequency we actually detected is the red or blue shift that the radiation has experienced. So a cosmologist can tell from the spectra they detect what original frequencies were emitted, and they can also be absolutely sure whether the light or radio or other waves they detect always were that frequency, or were originally emitted at a different frequency but have been red or blue shifted by some amount (= received at a lower or higher frequency), and that this is due to their high relative velocities. (The other possible cause is gravitational redshift, see next section.) Causes In astronomical terms, the most common cause by far for red/blueshifting is an object's relative velocity towards or away from Earth. In this case, the red/blueshift is ultimately due to special relativity (the movement of objects relative to an observer in spacetime). Whether in our own galaxy or elsewhere, most objects in space are moving towards or away from us. On a cosmic scale the expansion of the universe means that almost everything outside our own galactic supercluster is moving away from us at high speed - and the further away, the faster it's receding.
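As a rough numerical companion to the shift measurement described above: given a known rest wavelength for a spectral line, the redshift $z$ and the implied recession velocity follow directly. The observed wavelength below is an illustrative number of my own, not a value from the answer.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def redshift(lambda_observed, lambda_rest):
    """Dimensionless redshift z from observed vs rest wavelength."""
    return (lambda_observed - lambda_rest) / lambda_rest

def recession_velocity(z):
    """Velocity implied by z, using the relativistic Doppler formula."""
    ratio = (1 + z) ** 2
    return C * (ratio - 1) / (ratio + 1)

# Hydrogen-alpha line: rest wavelength 656.28 nm; suppose we observe it at 662.9 nm.
z = redshift(662.9, 656.28)
v = recession_velocity(z)
print(f"z = {z:.4f}, v = {v / 1000:.0f} km/s")
```

For small shifts like this one, the simpler approximation $v \approx cz$ gives nearly the same number; the relativistic form matters once $z$ is no longer small.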
Light and other radiation received from very old objects, which has been travelling for billions of years, will also be redshifted, because that radiation will have been affected by the expansion of space over time, so on a very large cosmic scale, redshifting is linked to time/age/years ago and distance , as well as velocity - known as Hubble's Law . Whether the velocity is due to space expanding, or the object's own movement within space, a red/blueshift will result. The other known cause for redshifting is the effect of extreme gravity, known as "gravitational redshift". In this case, the ultimate explanation is general relativity (the effect of mass and gravity on spacetime). For example, radiation given off very close to a massive object such as a black hole, or perhaps passing a very massive object on its journey to us, could be redshifted due to gravity. (Theoretically it could work the other way around as well - an observer who could somehow hover right in the vicinity of a black hole might see other objects as blueshifted - but in practice this is a perspective we never see on earth.) Historically for a time, this "dual cause" led to some confusion, because in the early days of radio astronomy, astronomers weren't always sure if they were seeing a very distant/fast-moving object, or a nearby object affected by gravity. However, these days astronomers are usually very sure which they're looking at. Example Suppose we try and use this knowledge. Instead of saying just that we detect radio waves of some frequencies from a source, we can say (for example) that what we detect is a match for emissions from hydrogen, with some carbon, and that the hydrogen lines were redshifted by X amount but the carbon spectrum was blueshifted by Y amount . Therefore we conclude we're actually looking at 2 objects, probably one containing carbon that's traveling towards us at a certain velocity, one containing hydrogen travelling away from us at a certain velocity. 
Perhaps one source is almost behind the other, or it's a binary star system. From the velocities and amount of red/blue shift, we can decide the distances (are either source in our local galaxy or cluster, or are they billions of light years away), and much more. If they are in a binary system we can expect to see their red/blueshifts change periodically as each of them moves more towards us, then more away. From their emissions we can figure what type of stars they are and therefore their likely/estimated mass (I'm simplifying a lot!). From the time taken to rise and fall in red/blue shift we can work out how long they take to orbit, and their relative masses, distances apart, etc. And so on, and so on. As a (simplified!) second example, we can measure the spectra of stars at the centre of our galaxy. If we plot over time, the star's positions versus the amount of red or blue shifting of their light, we find they periodically undergo changes in shift - redder shifted, then bluer shifted. That says their velocity relative to us gets larger and smaller. Conclusion: the stars in the centre of our galaxy are all orbiting something. The amount of shift and distance, and a bit of computer work, lets us work out how "tight" all their orbits are. So we can work out the mass of whatever they are all orbiting. We can discover the object has a huge mass. But we also know the size of the smallest orbits of these stars. Whatever the object is, that they're orbiting, it has to be smaller than the orbit of the stars, otherwise the stars would quickly lose energy and spiral in/merge. When you point a telescope there, you don't see any giant mass object - but we know one is there. It turns out that to fit that amount of mass in that size space, you'd have to have a black hole. Nothing else would do it. And that's one way we can be certain there's a large black hole at the centre of our galaxy (and many others), and calculate its mass. All from stellar red/blueshift measurements! 
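The galactic-centre mass estimate sketched in the previous paragraph is, at heart, Kepler's third law. The orbital numbers below are rough, S2-like illustrative inputs of my own (semi-major axis ~970 AU, period ~16 years), not values taken from the answer.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11      # astronomical unit, m
YEAR = 3.156e7     # one year, s
M_SUN = 1.989e30   # solar mass, kg

def central_mass(a_m, period_s):
    """Enclosed mass from Kepler's third law: M = 4*pi^2*a^3 / (G*P^2)."""
    return 4 * math.pi**2 * a_m**3 / (G * period_s**2)

# A tight stellar orbit around the galactic centre (approximate values)
M = central_mass(970 * AU, 16 * YEAR)
print(f"central mass ~ {M / M_SUN:.1e} solar masses")
```

With these inputs the enclosed mass comes out at a few million solar masses, the scale the answer's argument relies on.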
Update: Hubble's law of redshifted light and the expansion of the universe Coming back to the OP, the question specifically refers to redshifted light and "deciding the universe was expanding". So I'll try a quick explanation (this is a whole question of its own!). About a hundred years ago, Hubble formulated his law (more accurately a rule of thumb) which said that light from distant galaxies was redshifted, and the more distant the galaxy, the more redshifted the light. Where galaxies were close enough to be measured directly, they turned out to be receding from Earth. Now, this might have meant they were all travelling outward at extreme velocities from some common centre, but could have many other meanings: one theory suggested matter was being created continually to replace it (the rate would have been very small). So although the Big Bang was conceptualised, there actually wasn't much evidence and it was only several decades later that other overwhelming evidence (radio astronomy, standard model, cosmological modelling, stellar lifetime cycles, expansion of space, fusion processes, and myriad other discoveries) gradually ended up supporting the Big Bang theory. We are now extremely sure, from many different kinds of observation and knowledge, that light from distant galaxies is redshifted to lower frequencies because of the expansion of space, and the Cosmic Microwave Background can be detected and identified as an extremely redshifted form of light from excited hydrogen atoms emitted at the dawn of our universe, when it was about 370,000 years old. But it's important to remember that it was not immediately obvious or accepted by many astronomers for several decades, that redshifted light meant that our known universe had to be expanding, or began at a specific point in time. People didn't just jump at that conclusion. They had extreme redshifts, that was undeniable - but what did they mean? 
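Hubble's rule of thumb mentioned above can be written $v = H_0 d$. A quick sketch, using an assumed present-day value of $H_0 \approx 70$ km/s/Mpc (my own choice of figure):

```python
H0 = 70.0              # Hubble constant, km/s per megaparsec (assumed value)
C_KM_S = 299_792.458   # speed of light, km/s

def recession_speed(distance_mpc):
    """Hubble's law: recession speed in km/s for a distance in megaparsecs."""
    return H0 * distance_mpc

for d in (10, 100, 1000):  # megaparsecs
    v = recession_speed(d)
    print(f"{d:>5} Mpc -> {v:>7.0f} km/s  (z ~ {v / C_KM_S:.3f} while v << c)")
```

The linear rule is only the low-redshift limit; at large distances the full expansion history of the universe has to be used instead.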
It was not even clear how a universe might expand, if it did, or what might lead to an expansion, if that was what was being seen. So there were many unsatisfactory questions and doubts. As with much of science, the actual observations came first. Gaining an understanding of them, and what they meant, and testing theories which might explain a universe with those observations, took many years after that time. However, once modern cosmological ideas of the Big Bang began to be taken seriously as a theory, the detailed evidence gained through redshifted radiation became crucial evidence for both of these ideas and for much of modern cosmology. | {
"source": [
"https://physics.stackexchange.com/questions/404418",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/185077/"
]
} |
405,542 | What methods would they use to predict what would happen in a situation when a probe is being acted upon by the gravity of two stars, say? | The three body problem isn’t “solved” in the sense that there is no known closed form solution that works for any general initial conditions. However, when you have two massive bodies and one that is considerably lighter, you can estimate the trajectory with almost any degree of accuracy. Furthermore, numerical techniques will allow you to do that with multiple bodies and without any restrictions on their mass. The analysis roughly comes down to this: at a given instant the position and velocity of all objects is known (the initial conditions) from the positions we know the gravitational forces on each object (magnitude and direction) given these forces and the mass of each object, we can compute the acceleration at this instant from the position, velocity and instantaneous acceleration, we can compute the velocity and position a very short time later repeat for small time steps to compute the complete orbit (obviously, if you are firing the rocket you also need to take account of the thrust and the changing mass) There is an entire field of scientific computing dedicated to doing this right. And the method briefly made an appearance in the movie “Hidden figures” - at that time these things were still under development, and of course computers were much slower, and had less memory, than today’s machines. I have written several answers on this site that used such an approach. Probably the most relevant is this one | {
"source": [
"https://physics.stackexchange.com/questions/405542",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/180708/"
]
} |
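The time-stepping recipe in the answer above (positions -> forces -> accelerations -> velocities -> positions, repeated in small steps) can be sketched in a few lines. This is a toy fixed-step semi-implicit Euler integrator for point masses under Newtonian gravity; the masses and initial conditions are illustrative, not from the answer, and real scientific computing would use higher-order adaptive methods.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(positions, masses):
    """Net gravitational acceleration on each body from all the others."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r = math.hypot(dx, dy)
            a = G * masses[j] / r**2
            acc[i][0] += a * dx / r
            acc[i][1] += a * dy / r
    return acc

def step(positions, velocities, masses, dt):
    """One small time step: forces -> accelerations -> velocities -> positions."""
    acc = accelerations(positions, masses)
    for i in range(len(positions)):
        velocities[i][0] += acc[i][0] * dt
        velocities[i][1] += acc[i][1] * dt
        positions[i][0] += velocities[i][0] * dt
        positions[i][1] += velocities[i][1] * dt
    return positions, velocities

# Toy Sun-Earth setup plus a light probe; advance one day in 60 s increments.
masses = [1.989e30, 5.972e24, 1000.0]
pos = [[0.0, 0.0], [1.496e11, 0.0], [1.5e11, 0.0]]
vel = [[0.0, 0.0], [0.0, 29780.0], [0.0, 30000.0]]
for _ in range(1440):
    pos, vel = step(pos, vel, masses, 60.0)
```

Because the probe's mass is tiny, its pull on the two massive bodies is negligible, which is exactly the "restricted" regime the answer describes as easy to estimate numerically.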
405,841 | This Wikipedia article states that the emissivity of polished copper is 0.04, and the emissivity of oxidized copper is 0.87 - more than 20x that of polished copper. So my question is: why are all copper heatsinks shiny and polished? Wouldn't an oxidized copper heatsink be much more effective at radiating the heat away from the source? Searching for "oxidized copper heatsink" I find people asking and giving advice on how to remove oxidation from the heatsink because it makes the heatsink less effective - seemingly contradicting the information presented in the given Wikipedia article. Is there something I am misunderstanding about the emissive properties of a material? Is there a way I could oxidize a copper heatsink in order to make it more effective? | Radiative heat transfer is not dominant at the temperatures at which computer/electronic heatsinks operate, so the emissivity of the heatsink fin surfaces is not important for their operation. Conduction of heat from the copper to the air, and then convection driven either by buoyancy or mechanical ventilation, is the dominant heat transfer mode that heatsinks exploit. This makes the cleanliness of the fins far more important than their emissivity, which means anything that maintains them in a dust-free state will enhance their operation. This is why heatsink fins are made as smooth as possible, and not rough, and why the fan intake will have a lint filter on it. The oxide tarnish that naturally forms on exposed copper fins at near-room temperatures is far thinner than a thousandth of an inch and therefore has a negligible effect on heat transfer. People who polish the oxides off to "improve" heat transfer are misinformed. | {
"source": [
"https://physics.stackexchange.com/questions/405841",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/195609/"
]
} |
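To put rough numbers behind the answer above (that radiation is a minor channel at heatsink temperatures), here is a back-of-the-envelope comparison. The fin area, surface and air temperatures, and forced-convection coefficient are assumed illustrative values of my own, not figures from the answer.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiated_power(emissivity, area, t_surface, t_ambient):
    """Net thermal radiation exchanged with the surroundings (temperatures in K)."""
    return emissivity * SIGMA * area * (t_surface**4 - t_ambient**4)

def convected_power(h, area, t_surface, t_ambient):
    """Newton's law of cooling with heat-transfer coefficient h (W m^-2 K^-1)."""
    return h * area * (t_surface - t_ambient)

area, t_s, t_a = 0.01, 333.0, 298.0   # 100 cm^2 of fin, 60 C surface, 25 C air
print(f"radiation, polished (e=0.04): {radiated_power(0.04, area, t_s, t_a):.2f} W")
print(f"radiation, oxidized (e=0.87): {radiated_power(0.87, area, t_s, t_a):.2f} W")
print(f"forced convection (h=50):     {convected_power(50.0, area, t_s, t_a):.2f} W")
```

Even with the high oxidized emissivity, the radiated watts stay well below the convected watts at these temperatures, and real heatsinks multiply the convective area further with dense fin arrays.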
405,848 | This is a diagram of a tension lever. Tension exerted by this tool only depends on where you hang a mass.
Suppose that the gravitational force of a mass is Mg.
If you hang it at 1 (in the diagram), T is just Mg.
But if you do it at 2, according to this diagram, T will be 2Mg. I wonder how this is possible. I managed to think that rotational movement or rotational inertia is kind of involved in it, but I don't come up with any clear idea. | Radiative heat transfer is not dominant at the temperatures at which computer/electronic heatsinks operate, so the emissivity of the heatsink fin surfaces is not important for their operation. Conduction of heat from the copper to the air, and then convection driven either by buoyancy or mechanical ventilation, is the dominant heat transfer mode that heatsinks exploit. This makes the cleanliness of the fins far more important than their emissivity, which means anything that maintains them in a dust-free state will enhance their operation. This is why heatsink fins are made as smooth as possible, and not rough, and why the fan intake will have a lint filter on it. The oxide tarnish that naturally forms on exposed copper fins at near-room temperatures is far thinner than a thousandth of an inch and therefore has a negligible effect on heat transfer. People who polish the oxides off to "improve" heat transfer are misinformed. | {
"source": [
"https://physics.stackexchange.com/questions/405848",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/195618/"
]
} |
406,057 | I'm an astrophysics student and I've been researching this topic and there is one point that keeps eluding me. How did the scientific community realize that there had to be dark matter in the Universe? | Short answer: Due to a discrepancy between the density of matter which responds to electromagnetic radiation and the average calculated density of matter in the universe. We have several ways to measure the average distribution of matter in the universe: for example, we can use the mass of an average star and the average number of stars in a galaxy, to get the average density of matter which is potentially visible (responds to electromagnetic radiation). Alternatively, we can study the redshifts of different galaxies in a cluster, and thus determine the motion of each galaxy. This can help us calculate the gravitational force acting on each of the galaxies in the cluster, and hence we can find the mass of ALL matter in the cluster. Note how different this is from the previous method, which is for visible bodies only. There are a few other techniques, but the essence is that the density of all matter is about 32% of the critical density of the universe, but the density of matter which interacts with EM radiation is only 5% of the critical density. We call the constituents of this gap 'dark matter'. | {
"source": [
"https://physics.stackexchange.com/questions/406057",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/172306/"
]
} |
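The 32% and 5% figures in the answer above are fractions of the critical density, which follows from the Hubble constant via $\rho_c = 3H_0^2/(8\pi G)$. A sketch with an assumed $H_0$ of 70 km/s/Mpc (my own choice of value):

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
MPC = 3.086e22           # metres per megaparsec
H0 = 70.0 * 1000 / MPC   # Hubble constant converted to s^-1

# Critical density of the universe: rho_c = 3 H0^2 / (8 pi G)
rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_c:.2e} kg/m^3")

# The fractions quoted in the answer
print(f"all matter   ~ {0.32 * rho_c:.2e} kg/m^3")
print(f"visible part ~ {0.05 * rho_c:.2e} kg/m^3")
```

The result is of order $10^{-26}$ kg/m^3, a handful of hydrogen atoms per cubic metre, which is why these densities are quoted as fractions rather than absolute numbers.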
406,592 | Black holes cannot be seen because they do not emit visible light or any electromagnetic radiation. Then how do astronomers infer their existence? I think it's now almost established in the scientific community that black holes do exist and certainly, there is a supermassive black hole at the centre of our galaxy. What is the evidence for this? | Black holes cannot be seen because they do not emit visible light or any electromagnetic radiation. This is not absolutely correct, in the sense that visible light is emitted by charged matter radiating as it is captured and falls into the strong gravitational potential of the black hole, but it is not strong enough to characterize a discovery of a black hole. X-rays are also emitted if the acceleration of the charged particles is high, as is expected near a black hole's attractive sink. The suspicion of the existence of a black hole comes from kinematic irregularities in orbits. For example: Doppler studies of this blue supergiant in Cygnus indicate a period of 5.6 days in orbit around an unseen companion. ..... An x-ray source was discovered in the constellation Cygnus in 1972 (Cygnus X-1). X-ray sources are candidates for black holes because matter streaming into black holes will be ionized and greatly accelerated, producing x-rays. A blue supergiant star, about 25 times the mass of the sun, was found which is apparently orbiting about the x-ray source. So something massive but non-luminous is there (neutron star or black hole). Doppler studies of the blue supergiant indicate a revolution period of 5.6 days about the dark object. Using the period plus spectral measurements of the visible companion's orbital speed leads to a calculated system mass of about 35 solar masses. The calculated mass of the dark object is 8-10 solar masses; much too massive to be a neutron star, which has a limit of about 3 solar masses - hence black hole.
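The kind of arithmetic behind the quoted Cygnus X-1 estimate starts from the binary mass function, $f(M) = P K^3 / (2\pi G)$, where $P$ is the orbital period and $K$ the visible star's measured line-of-sight orbital speed. The $K$ value below is an illustrative figure of my own of roughly the right order; it is not taken from the answer.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
DAY = 86400.0      # seconds per day

def mass_function(period_s, k_ms):
    """Binary mass function: a strict lower bound on the unseen companion's mass."""
    return period_s * k_ms**3 / (2 * math.pi * G)

f = mass_function(5.6 * DAY, 75e3)  # 5.6-day period, K ~ 75 km/s (assumed)
print(f"mass function ~ {f / M_SUN:.2f} solar masses")
```

The mass function by itself is small; it is only when combined with the visible star's estimated ~25 solar masses and the orbital inclination that the unseen object's mass is pinned at several solar masses, beyond the neutron-star limit.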
This is of course not a proof of a black hole - but it convinces most astronomers. Further evidence that strengthens the case for the unseen object being a black hole is the emission of X-rays from its location, an indication of temperatures in the millions of Kelvins. This X-ray source exhibits rapid variations, with time scales on the order of a millisecond. This suggests a source not larger than a light-millisecond or 300 km, so it is very compact. The only possibilities that we know that would place that much matter in such a small volume are black holes and neutron stars, and the consensus is that neutron stars can't be more massive than about 3 solar masses. From frequently asked questions, What evidence do we have for the existence of black holes? , first in a Google search: Astronomers have found convincing evidence for a supermassive black hole in the center of our own Milky Way galaxy, the galaxy NGC 4258, the giant elliptical galaxy M87, and several others. Scientists verified the existence of the black holes by studying the speed of the clouds of gas orbiting those regions. In 1994, Hubble Space Telescope data measured the mass of an unseen object at the center of M87. Based on the motion of the material whirling about the center, the object is estimated to be about 3 billion times the mass of our Sun and appears to be concentrated into a space smaller than our solar system. Again, it is only a black hole that fits these data in our general relativity model of the universe. So the evidence for our galaxy is based on kinematic behavior of the stars and star systems at the center of our galaxy. | {
"source": [
"https://physics.stackexchange.com/questions/406592",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/164488/"
]
} |
406,597 | Why does a glass rod or colorless gem with a refractive index close to that of water become invisible in water? I can't understand what happens that makes them blend in. Why can't our eye differentiate one (the gem) from the other (the water)? | Black holes cannot be seen because they do not emit visible light or any electromagnetic radiation. This is not absolutely correct, in the sense that visible light is emitted by charged matter radiating as it is captured and falls into the strong gravitational potential of the black hole, but it is not strong enough to characterize a discovery of a black hole. X-rays are also emitted if the acceleration of the charged particles is high, as is expected near a black hole's attractive sink. The suspicion of the existence of a black hole comes from kinematic irregularities in orbits. For example: Doppler studies of this blue supergiant in Cygnus indicate a period of 5.6 days in orbit around an unseen companion. ..... An x-ray source was discovered in the constellation Cygnus in 1972 (Cygnus X-1). X-ray sources are candidates for black holes because matter streaming into black holes will be ionized and greatly accelerated, producing x-rays. A blue supergiant star, about 25 times the mass of the sun, was found which is apparently orbiting about the x-ray source. So something massive but non-luminous is there (neutron star or black hole). Doppler studies of the blue supergiant indicate a revolution period of 5.6 days about the dark object. Using the period plus spectral measurements of the visible companion's orbital speed leads to a calculated system mass of about 35 solar masses. The calculated mass of the dark object is 8-10 solar masses; much too massive to be a neutron star, which has a limit of about 3 solar masses - hence black hole. This is of course not a proof of a black hole - but it convinces most astronomers.
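The compactness argument that follows in this answer (millisecond X-ray variability implying a source no larger than a light-millisecond) is one line of arithmetic: a source cannot vary coherently faster than light can cross it.

```python
C = 299_792_458.0  # speed of light, m/s

def max_source_size_km(variability_s):
    """Light-travel-time bound on the size of a coherently varying source."""
    return C * variability_s / 1000.0

print(f"{max_source_size_km(1e-3):.0f} km")  # ~300 km for millisecond flicker
```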
Further evidence that strengthens the case for the unseen object being a black hole is the emission of X-rays from its location, an indication of temperatures in the millions of Kelvins. This X-ray source exhibits rapid variations, with time scales on the order of a millisecond. This suggests a source not larger than a light-millisecond or 300 km, so it is very compact. The only possibilities that we know that would place that much matter in such a small volume are black holes and neutron stars, and the consensus is that neutron stars can't be more massive than about 3 solar masses. From frequently asked questions, What evidence do we have for the existence of black holes? , first in a Google search: Astronomers have found convincing evidence for a supermassive black hole in the center of our own Milky Way galaxy, the galaxy NGC 4258, the giant elliptical galaxy M87, and several others. Scientists verified the existence of the black holes by studying the speed of the clouds of gas orbiting those regions. In 1994, Hubble Space Telescope data measured the mass of an unseen object at the center of M87. Based on the motion of the material whirling about the center, the object is estimated to be about 3 billion times the mass of our Sun and appears to be concentrated into a space smaller than our solar system. Again, it is only a black hole that fits these data in our general relativity model of the universe. So the evidence for our galaxy is based on kinematic behavior of the stars and star systems at the center of our galaxy. | {
"source": [
"https://physics.stackexchange.com/questions/406597",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/174412/"
]
} |
406,604 | I have read about schlieren photography, which uses the ability of non-uniform air to create shadows. Is it really possible that air makes shadows? | Taken from this site: Yes, air can indeed make shadows. A shadow occurs when an object in a light beam prevents some of the light from continuing on in the forward direction. When the light beam hits a wall or the ground, a darker shape is visible where less light is hitting the surface. Both the light and the shadow, which is just the absence of light, travel to the surface at the speed of light. There are three ways that an object can prevent light from continuing on in the forward direction: Absorption: The light that hits the object is absorbed and converted to heat. A black table creates a shadow on the wall mostly by absorbing the light that hits it. Reflection: The light that hits the object is reflected off the front surface and redirected to another part of the room. A silvery bowl creates a shadow on the wall by reflecting away the light that hits its front surface. Refraction: The light that hits the object passes through, but the light's direction is bent by the object. If the direction is bent enough, the light that passes through the object will be angled out of the forward-traveling beam. As a result, the beam will have a dark spot; a shadow.
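The refraction mechanism can be put in numbers with Snell's law. The two refractive indices below are rough assumed values of my own for cold and warm air at one atmosphere: even a difference in the fifth decimal place deflects a grazing ray by a measurable amount.

```python
import math

def refracted_angle(n1, n2, theta1_deg):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2); angles measured from the normal."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    return math.degrees(math.asin(s))

n_cold, n_hot = 1.000292, 1.000262   # approximate indices for cold vs warm air
theta1 = 80.0                        # grazing incidence on the warm-air pocket
theta2 = refracted_angle(n_cold, n_hot, theta1)
print(f"deflection ~ {abs(theta2 - theta1) * 3600:.0f} arcseconds")
```

Tens of arcseconds is tiny, but over a few metres of travel it displaces rays by a visible fraction of a millimetre, and turbulent air stacks many such deflections, which is what schlieren imaging exploits.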
Consider completely transparent objects such as glass cups, bottles of water, or the lenses of eyeglasses. Even though such transparent objects do not absorb or reflect very much light, they still interact with light through refraction. Refraction is what makes transparent cups visible to our eyes. Refraction also enables clear objects to cast shadows. Take off your eyeglasses and place them on the table at night under the illumination of a single lamp and you will see a distinct shadow caused by the transparent lenses. Although air is almost perfectly transparent, it can still cast shadows via refraction. The key principle regarding refraction is that light is bent when the index of refraction differs from one location to the next. Air and glass are different materials and have different indices of refraction. Light therefore bends when it goes from air into glass, such as at the surface of a glass lens. Refraction does not happen inside a glass lens because the material inside the lens is uniform. Refraction happens at the surface of a glass lens because that is the only place where the index of refraction differs. Uniform air itself cannot refract light and create shadows because the index of refraction does not differ anywhere. But, when different regions of air have different indices of refraction, the air can indeed bend light away from the forward direction and create a shadow. The most common way to get a changing index of refraction in different regions of air is to heat the air. As air heats up, it expands and its index of refraction changes. A pocket of warm air sitting next to a pocket of cold air will therefore constitute regions with different indices of refraction. The interface between the cold air and the warm air will therefore bend light and cause shadows. This effect is most visible when strong direct sunlight is coming in sideways through a window, passes through cold ambient air and then passes through the hot air above a heater. 
The shadow that this air system creates on the far wall consists of waving, rolling lines mimicking the turbulent motion of the hot air as it rises. The index of refraction of air also changes as the pressure and composition changes, therefore these effects can also lead to air shadows. For instance, the pressure variations caused by a plane plowing through the air can cause shadows. Also, gases being vented into ambient air creates spatial variations in the air, and therefore shadow-causing variations in the index of refraction. The ability of non-uniform air to create shadows is used to great advantage in the imaging technique known as schlieren photography. In schlieren photography, the shadows are used to accurately map out the variations in the air. | {
"source": [
"https://physics.stackexchange.com/questions/406604",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/-1/"
]
} |
407,194 | Thermodynamics does not allow the attainment of the absolute zero of temperature. Is then the term "negative temperature" a misnomer? | Your question seems to be predicated on the idea that negative absolute temperature is somehow supposed to be "colder" than absolute zero, and you are right that that would be nonsensical. But actually, in a very precise sense, negative absolute temperatures are hotter than all positive temperatures; see also this question. This is simply a result of the statistical definition of temperature $T$, which is
$$ \frac{1}{T} = \frac{\partial S}{\partial E},$$
where $S$ is the entropy of the system and $E$ its energy content, so $\beta := \frac{1}{T}$ is actually the more natural quantity to think about in the physical formalism. See also this excellent answer by DanielSank for how the $\frac{1}{T}$ appears naturally as a Lagrange multiplier in thermodynamics. From the above, we see that temperature is negative for systems whose entropy decreases as the energy rises. Such systems are unusual, but they are not forbidden. Absolute zero, $T\to 0$, corresponds to $\beta \to \infty$. As the system gets hotter, $\beta$ decreases. $\beta\to 0$ looks weird in terms of temperature, since it corresponds to $T\to \infty$, but since it is $\beta$ that is of primary physical importance, it is not actually forbidden for it to cross zero and become negative. In terms of temperature, a system crossing the $\beta = 0$ point would have to be described as heating up all the way to "infinite positive temperature", then flipping the sign of the temperature and starting to go towards $T = 0$ again from $-\infty$. | {
"source": [
"https://physics.stackexchange.com/questions/407194",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/164488/"
]
} |
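The sign flip of $\beta = \partial S/\partial E$ described in the answer above can be seen in a toy collection of two-level systems, where entropy rises, peaks at half filling, then falls with energy. This sketch is my own illustration, with unit level spacing and $k_B = 1$, evaluating a discrete version of $\beta$.

```python
import math

K_B = 1.0  # work in units where Boltzmann's constant is 1

def entropy(n_excited, n_total):
    """S = k ln(Omega), with Omega the number of ways to excite n of N spins."""
    return K_B * math.log(math.comb(n_total, n_excited))

N = 1000
# beta ~ dS/dE with unit level spacing, via a finite difference in n_excited
for n in (100, 499, 900):
    beta = entropy(n + 1, N) - entropy(n, N)
    print(f"n_excited={n}: beta ~ {beta:+.3f}  ->  T {'> 0' if beta > 0 else '< 0'}")
```

Below half filling adding energy opens up more microstates (beta positive, ordinary temperatures); above half filling it closes them off (beta negative), which is exactly the population-inverted, "hotter than any positive temperature" regime.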
407,289 | I notice that online definitions of this experimental law always say, molecules or atoms . From the Wikipedia article on Avogadro's Law : $${\frac {V_{1}}{n_{1}}}={\frac {V_{2}}{n_{2}}}$$ The equation shows
that, as the number of moles of gas increases, the volume of the gas
also increases in proportion. Similarly, if the number of moles of gas
is decreased, then the volume also decreases. Thus, the number of
molecules or atoms in a specific volume of ideal gas is independent of
their size or the molar mass of the gas. In lumenlearning : Key Points The number of molecules or atoms in a specific volume of ideal gas is independent of size or the gas’ molar mass. This made me wonder if $n$ in $PV = nRT$ can also be the number of atoms in that volume of gas. Taking a practical example, what is the answer to the following question? Statement (I): Atoms can neither be created nor destroyed. Statement (II): Under similar conditions of temperature and pressure, equal volumes of gases do not contain an equal number of atoms. My question is, if $P$, $V$ and $T$ are equal, can we say $n$ (the number of atoms) is equal? The answer given is that no, they need not be equal, since only the number of molecules will be equal. The gas can consist of a mixture of diatomic and triatomic molecules: we can have the same number of molecules but a different number of atoms. From what I read on Kinetic Molecular Theory, the volume occupied by the molecules of the gas is negligible compared to the volume of the gas itself. This is the central assumption. So I guess the law applies only to molecules and not atoms or the generic "particles" as some sites define it. | I notice that online definitions of this experimental law always say, molecules or atoms. The problem with just calling them all "molecules" and being done with it is that some are uncomfortable with using that term for unbound atoms. If you have a container of He, there are no "molecules" in it. So when it says "molecules or atoms", it means "molecules or unbound atoms". It's not trying to say that the total number of atoms within the different molecular species matters. | {
"source": [
"https://physics.stackexchange.com/questions/407289",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/47252/"
]
} |
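As a numerical companion to the question and answer above, counting gas particles with $PV = nRT$ is mechanical. The conditions below (one litre at 1 atm and 0 °C) are my own illustrative choice.

```python
R = 8.314        # gas constant, J / (mol K)
N_A = 6.022e23   # Avogadro's number, particles per mole

def moles(pressure_pa, volume_m3, temperature_k):
    """n = PV / RT from the ideal gas law."""
    return pressure_pa * volume_m3 / (R * temperature_k)

# One litre of any ideal gas at 1 atm and 0 C
n = moles(101325.0, 1e-3, 273.15)
print(f"{n:.4f} mol ~ {n * N_A:.2e} particles (molecules or unbound atoms)")
```

The count is the same whether the particles are He atoms or N2 molecules, which is the point the answer makes: $n$ counts the free-moving units, not the atoms bound inside them.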
407,688 | Is there an answer to the question why there are only four fundamental interactions of nature? | The answer "because we do not need more" by @rubenvb is fine. Studying physics, you must realize that physics does not answer fundamental "why" questions. Physics uses mathematical tools to model measurements, and these models have to fit new data, i.e. be predictive. As long as the models are not falsified, they are considered valid and useful. Once falsified, modifications or even drastically new models are sought. A prime example is quantum mechanics, which arose when classical mechanics was invalidated: black body radiation, the photoelectric effect and atomic spectra falsified efforts at classical modelling. Physics, using the appropriate models, shows "how" one goes from data to predictions for new experimental data. Looking for "why" in the models, one goes up or down the mathematics and arrives at the answer "because that is what has been measured". | {
"source": [
"https://physics.stackexchange.com/questions/407688",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/164488/"
]
} |
407,875 | Supposing I have a piece of paper that can be folded infinitely. In the first $5 \, \mathrm{s}$, I fold it to twice its thickness. In the next $5 \, \mathrm{s}$, I fold it to 4 times. If I fold it to twice its thickness in the $n^{\text{th}}~5 \, \mathrm{s}$, since time increases linearly and thickness doubles in each $5 \, \mathrm{s}$, will I not be able to increase the speed of increase in thickness of the piece of paper to beyond the speed of light? | No, you can't, for a couple of different reasons. The first is the difficulty of folding paper more than a few times. Mythbusters managed to fold one sheet 11 times I think, using a very large sheet of paper and the help of a steamroller. It took a lot longer than 5 seconds per fold. The second issue is more fundamental. You could resolve the first issue by just cutting the paper in half and stacking one half on top of the other instead of folding it. But then you have another problem: suppose your stack of paper has reached one light year in height. Next you have to cut it in half and put one half on top of the other to make a two light-year stack. With some cleverness you can do the cutting as quickly as you want. (For example, you could cut it using a carefully timed laser pulse from far away.) But once it's cut you have to move one half of the stack upward by one light year, so that the bottom of that half lines up with the top of the other half. You can't move the stack faster than light, so no matter how you do this it has to take at least one year. The next iteration will take two years, the next four, and so on, and the top of the combined stack will never move faster than light. So really the logic of your question has to be reversed: it's not that you can move faster than light if you fold a piece of paper every five seconds, it's that you can't fold a piece of paper indefinitely every five seconds, because doing so would mean moving something faster than light. 
(There is a third issue too, which is that every time you cut the paper in half you reduce its size, and eventually you'll just have a stack of atoms that you can't cut. But of course you can always just start with a bigger sheet of paper.) As David Starkey points out in a comment, you can actually do a factor of two better than this, if you don't mind the bottom of the stack moving as well as the top. Then you can move one half of the stack down at the same time as moving the other up, so each one only has to move half a light year instead of one. But of course this doesn't change the overall argument. Each end of the stack is still limited by the speed of light, so you can't double the height of a one light-year stack in less than 0.5 years. | {
"source": [
"https://physics.stackexchange.com/questions/407875",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/196589/"
]
} |
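The light-speed bound in the answer above can be checked in a few lines (the 0.1 mm sheet thickness is an assumed value, not from the post): counting how many doublings reach one light year, after which each further doubling must take at least half a year.

```python
import math

ly = 9.461e15  # one light year in metres
t0 = 1e-4      # assumed 0.1 mm starting sheet thickness

# doublings needed before the stack is at least one light year tall
k = math.ceil(math.log2(ly / t0))
print(k)  # 67 doublings under these assumptions, i.e. under 6 minutes at 5 s each

# Once the stack is ~1 ly tall, doubling it means each end moves at least
# 0.5 ly, so a doubling can take no less than half a year -- not 5 seconds.
min_doubling_time_years = 0.5
```

So the 5-seconds-per-fold schedule necessarily fails after roughly the 67th doubling, which is the reversal of logic the answer describes.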
408,771 | I remember years ago in school my chemistry teacher showed us a tangled wire that untangled itself when a current is applied, can anyone suggest what the material may be? | You are almost certainly thinking of nitinol wire or "memory wire". However, it's not electricity that makes it untangle. It's heat. Running current through the wire is just a way to heat it. When at room temperature, nitinol wire can be easily bent. When heated, it acts like a spring trying to go back to its unbent shape. You can see the same effect by twisting some wire, then dropping it into boiling water. There have been "electric pistons" built on this principle. The piston is driven by a spring of memory wire. When cold, the piston is easily compressed. When the spring is heated by running electric current thru it, it pushes against the piston harder than what it took to push the piston in when cold. This effect has niche uses but is mostly a curiosity. The overall cycle is not very efficient. | {
"source": [
"https://physics.stackexchange.com/questions/408771",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/196982/"
]
} |
409,105 | I know you can't have work without any displacement, so I was kind of wondering as to what keeps, for example, a man on a jetpack, off the ground but with no more change in height from the initial height he was on? Is this still a form of energy or something else because if he burns fuel to keep himself off the ground, doesn't that mean energy is being used? | A table can forever keep an apple "levitated" above the ground with it's normal force. That requires no energy. No work is done. A force does not spend energy to fight against another force . The force may cost energy to be produced , though. This is a separate issue. The jetpack spends fuel to produce an up-drift force, the human body spends nutrition to extend/contract muscles to produce the "holding"-force to hold a milk can, but the table spends nothing to produce it's normal force. The jetpack falls down after a while and you feel tired after a while, not because work was done on the objects, but because work was done inside those "machines" (jetpack and body) that produce the forces. The table never gets tired. It never spends any work. The issue is clearly not about holding anything. It takes no energy to hold stuff. You are correct that no work is done on the levitating man, if he undergoes no displacement. Work may be done inside the "machine" that produces the force, but that is internal. | {
"source": [
"https://physics.stackexchange.com/questions/409105",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/197114/"
]
} |
409,108 | "By adding a neutron to $U^{235}_{92}$ an even-even nucleus is obtained. The binding energy increases in the process, so the energy gained in the process is greater." I'm confused, if the binding energy increases, how can fission be favored? Furthermore, how can the energy gained in the process be greater if $B$ is a negative contribution to the nucleus energy? | A table can forever keep an apple "levitated" above the ground with it's normal force. That requires no energy. No work is done. A force does not spend energy to fight against another force . The force may cost energy to be produced , though. This is a separate issue. The jetpack spends fuel to produce an up-drift force, the human body spends nutrition to extend/contract muscles to produce the "holding"-force to hold a milk can, but the table spends nothing to produce it's normal force. The jetpack falls down after a while and you feel tired after a while, not because work was done on the objects, but because work was done inside those "machines" (jetpack and body) that produce the forces. The table never gets tired. It never spends any work. The issue is clearly not about holding anything. It takes no energy to hold stuff. You are correct that no work is done on the levitating man, if he undergoes no displacement. Work may be done inside the "machine" that produces the force, but that is internal. | {
"source": [
"https://physics.stackexchange.com/questions/409108",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/144093/"
]
} |
409,109 | Newton's third law of motion states that every action has an equal and opposite reaction. That is the reason we do not sink into the earth, because when our weight exerts a force on the earth it also exerts an equal and opposite force on us. But when we stand on quicksand or on fluids we can sink in. How is this possible? Does it not exert an equal and opposite force on us? Or are Newton's laws different in the case of fluids and substances of low densities? | This is a common confusion when people first learn Newton's Third Law. They get the idea that it implies motion can never begin. The (wrong) argument is that, since the Earth exerts a force $F$ on you by gravity, the sand must exert an equal and opposite force $N = -F$ on you. Then the total force on you is $N + F = 0$ , so you can't fall. Of course this argument can't be right, because it either means that nothing can ever start moving, or that Newton's laws don't apply to sand, as you propose, and neither make sense. What Newton's Third Law really says here is if the Earth exerts a gravity force $F$ on you, you exert a force $-F$ on the Earth if the sand exerts a normal force $N$ on you, you exert a force $-N$ on the sand It gives no relationship at all between $F$ and $N$ , so you can indeed begin to fall. | {
"source": [
"https://physics.stackexchange.com/questions/409109",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/193927/"
]
} |
409,121 | A charged capacitor will shock you if you come too close to it or touch it, and you are connected to the ground. A generator will only generate electricity if it has a complete circuit, it cannot just be connected to something which is connected to the ground. Why is this? | This is a common confusion when people first learn Newton's Third Law. They get the idea that it implies motion can never begin. The (wrong) argument is that, since the Earth exerts a force $F$ on you by gravity, the sand must exert an equal and opposite force $N = -F$ on you. Then the total force on you is $N + F = 0$ , so you can't fall. Of course this argument can't be right, because it either means that nothing can ever start moving, or that Newton's laws don't apply to sand, as you propose, and neither make sense. What Newton's Third Law really says here is if the Earth exerts a gravity force $F$ on you, you exert a force $-F$ on the Earth if the sand exerts a normal force $N$ on you, you exert a force $-N$ on the sand It gives no relationship at all between $F$ and $N$ , so you can indeed begin to fall. | {
"source": [
"https://physics.stackexchange.com/questions/409121",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/180969/"
]
} |
409,189 | Assuming that dark matter is not made of WIMPs (weakly interacting massive particles), but interacts only gravitationally, what would be the possible mechanism giving mass to dark matter particles? If they don't interact weakly, they couldn't get mass from interacting with the Higgs field. The energy of gravitational interactions alone does not seem to be sufficient to account for a large particle mass. Would this imply that dark matter consists of a very large number of particles with a very small mass, perhaps much smaller than of neutrinos? Or do we need quantum gravity to explain the origin of mass of dark matter? | I think this question contains a misconception unfortunately caused by popular science descriptions of the Standard Model. The question seems to assume there needs to be some concrete source that particles "get" mass from, as if mass is a resource like money and the Higgs field is giving it out. But that's not right. In a generic field theory there is no issue adding a new field $\psi$ whose particles have mass. The only thing you have to do is make sure the Lagrangian has a term proportional to $\psi^2$. You might protest that this violates the conservation of energy because the mass has to "come from" somewhere, but that's not right. Mass is the energy price for creating a particle. I don't create money by changing the pricetag of an item in a store. The reason science popularizers say that mass must come from the Higgs mechanism is because of a peculiarity of the Standard Model (SM). The symmetries of the SM forbid a term such as $\psi^2$ for any field $\psi$ in the SM, so we need a trick to get a mass term. In brief, the Higgs field $\phi$ allows us to write terms like $\phi \psi^2$ which do respect the symmetry. This is an interaction term, but we can set up the Lagrangian so the Higgs field $\phi$ acquires a constant part, yielding the $\psi^2$ mass term we wanted. 
However, once you start speculating about dark matter models, especially dark matter that does not interact with the electroweak force at all, these constraints don't apply and generically there is nothing forbidding a $\psi^2$ term. There's no need for any special mechanism for "giving" mass. You just treat mass exactly like you did in high school, intro mechanics and quantum mechanics: write it down, call it $m$ and call it a day. | {
"source": [
"https://physics.stackexchange.com/questions/409189",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/164879/"
]
} |
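A minimal sketch of the "just write down a mass term" point in the answer above, using the standard free-real-scalar form it alludes to (a generic illustration, not a quote from any specific dark-matter model):

```latex
\mathcal{L} \;=\; \frac{1}{2}\,\partial_\mu \psi\,\partial^\mu \psi \;-\; \frac{1}{2}\,m^2 \psi^2
```

For a field $\psi$ that is a singlet under the Standard Model gauge symmetries, nothing forbids the $m^2\psi^2$ term, so no Higgs-like mechanism is needed; contrast this with the $\phi\psi^2$ interaction the answer describes for Standard Model fields.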
409,521 | Suppose a charge $q$ is experiencing a force due to charge $Q$. Suppose we move the charge $Q$ very slowly (no acceleration) what's the instantaneous impact on the charge $q$? How will the $q$ react? | No two things in the universe happen "instantaneously", unless they are at exactly the same location, because "instantaneously" would have different meanings for observers moving at different velocities. Maxwell's equations, which describe electromagnetic interactions perfectly for most practical purposes, contain time-dependent terms that describe the propagation of changes in an electromagnetic field. If your Q is moved at all, whether fast or slow, the resulting change in its field at a distance D does not occur until a time t = D/c . That is, the change propagates out from Q at the speed of light. This is an observable fact that, per special relativity, is the same for all observers. | {
"source": [
"https://physics.stackexchange.com/questions/409521",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/194291/"
]
} |
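The retardation rule stated in the answer above, $t = D/c$, as a two-line sketch (the one-metre and Sun–Earth distances are illustrative inputs):

```python
c = 2.998e8  # speed of light, m/s

def delay(distance_m):
    """Time before a field change at Q is felt at distance D: t = D / c."""
    return distance_m / c

print(delay(1.0))       # a few nanoseconds across one metre
print(delay(1.496e11))  # roughly 499 s (about 8.3 minutes) over one AU
```

Whether Q is moved slowly or quickly, the change in its field reaches q only after this delay.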
409,951 | I was reading Herbert Goldstein's Classical Mechanics. Its first chapter explains holonomic and non-holonomic constraints, but I still don’t understand the underlying concept. Can anyone explain it to me in detail and in simple language? | If you have a mechanical system with $N$ particles, you'd technically need $n = 3N$ coordinates to describe it completely. But often it is possible to express one coordinate in terms of others: for example, if two points are connected by a rigid rod, their relative distance does not vary. Such a condition on the system can be expressed as an equation that involves only the spatial coordinates $q_i$ of the system and the time $t$, but not the momenta $p_i$ or higher derivatives with respect to time. These are called holonomic constraints: $$f(q_i, t) = 0.$$ The cool thing about them is that they reduce the degrees of freedom of the system. If you have $s$ constraints, you end up with $n' = 3N-s < n$ degrees of freedom. An example of a holonomic constraint can be seen in a mathematical pendulum. The swinging point on the pendulum has two degrees of freedom ($x$ and $y$). The length $l$ of the pendulum is constant, so we can write the constraint as $$x^2 + y^2 - l^2 = 0.$$ This is an equation that only depends on the coordinates. Furthermore, it does not explicitly depend on time, and is therefore also a scleronomous constraint. With this constraint, the number of degrees of freedom is now 1. Non-holonomic constraints are basically all other cases: when the constraints cannot be written as an equation between coordinates (but often as an inequality). An example of a system with non-holonomic constraints is a particle trapped in a spherical shell. In three spatial dimensions, the particle then has 3 degrees of freedom. The constraint says that the distance of the particle from the center of the sphere is always less than $R$: $$\sqrt{x^2 + y^2 + z^2} < R.$$
We cannot rewrite this to an equality, so this is a non-holonomic, scleronomous constraint. | {
"source": [
"https://physics.stackexchange.com/questions/409951",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/197462/"
]
} |
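A small numerical check of the pendulum example in the answer above (the parametrization is my own, assuming a rigid rod): a single angle $\theta$ parametrizes the constrained motion, and the holonomic constraint $x^2 + y^2 - l^2 = 0$ is then satisfied identically, confirming the reduction to one degree of freedom.

```python
import math

l = 1.0  # pendulum length (arbitrary units)

# One generalized coordinate (theta) suffices once the holonomic
# constraint x^2 + y^2 - l^2 = 0 is imposed.
for theta in [0.0, 0.3, 1.0, 2.5]:
    x, y = l * math.sin(theta), -l * math.cos(theta)
    assert abs(x ** 2 + y ** 2 - l ** 2) < 1e-12  # constraint holds for every theta
```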
410,191 | Why do planets revolve around the Sun if there is a force called frictional force? Why do planets not stop rotating while the objects in motion stop after a while? Why are they continuously moving? Why are planets not affected by frictional force? | For friction to occur there must be a medium which can exert the friction forces. For example if a raindrop falls through air there is friction against the air and it reaches some terminal velocity after a few meters. If a car rolls on a street there is friction in the bearings and rolling friction between the road and the tyres which slows it down. In space, however, there is (almost) no such medium. The gas within the solar system, the so called interplanetary medium is so incredibly dilute, that the friction effects are usually negligible even on the scale of millions of years. There is, however, some research that suggests that the dynamic drag (that is, friction) of the interplanetary medium may be relevant in some processes in the evolution for planetary systems, see for example the paper Dynamical Friction and Resonance Trapping in Planetary Systems by Nader Haghighipour, where an orbital resonance is reached when the friction with the interplanetary medium is considered. For completeness, I should add there are other effects in orbital mechanics where friction is relevant. The most common one is related to tidal forces and, for example, the reason that we always see the same side of the moon (this condition is called tidal locking ). With these effects the friction is within the orbiting bodies resisting the tidal deformation and slows down their revolution until it is in sync with the orbital rotation. This exact effect also slows down earth's revolution which causes the days to become longer on the scale of hundred of millions of years. 
By now our atomic clocks are so precise, that we need to adjust our time-keeping to stay in sync with earth's slowing rotation , that is we can measure the effect of tidal deceleration (and other effects) on earth's revolution on the time-scale of years (by comparing the time our precise clocks give to the astronomic measurement of the position of earth). There is even evidence for this in old sediment rocks that formed in shallow seas with tides (see this Wikipedia article ). Also, there is Poynting-Robertson drag , where dust particles are slowed down due to the net radiation pressure tangential to their orbits. | {
"source": [
"https://physics.stackexchange.com/questions/410191",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/197607/"
]
} |
410,328 | The experiment This experiment is documented in a documentary called Convex Earth . The exact location the following information is taken from starts at 14:25 . High frequency directional antennas are set up 14 km apart, 1.5m from water level [I recall them saying 1m on the video, but in the experiment note, location and height , I've added below, it says 1.5m] . Both are on points along the coast of a large body of water, with sufficient coastal curvature for there to be only water between the two points. Thus, there are no objects or land masses obstructing them. According to the experimenters, the curvature of the Earth over that distance, for an antenna 1 m above ground, would present an obstacle of 3.84 m. This should be sufficient to prevent the antennas from remaining in radio contact. The above image illustrates what is described above. (The house is in the image as an example of an object 3.84 m high.) Research I've read online that radio waves, especially small ones, would be almost entirely unaffected by gravity. Really large radio waves can bend ever so slightly around the Earth's curvature, beyond line of sight, but it's nominal. This would seem to be in contradiction to what this experiment has demonstrated. One example of such information, and another . Coordinate Info [Added Later] On their web site (which I've looked up since posting this question, to get more exacting details), they state the locations of the two antenna as: Team A: São Lourenço do Sul, RS 31 ° 22'42.37 "S 51 ° 57'40.79" W
Team B: São Lourenço do Sul, RS 31 ° 30'0.91 "S 52 ° 0'26.88" W I've checked, and the distance between those points is 14.24 km. I used this tool to check. Here is a screen shot of the result. Radio equipment [added June 9] The equipment used in the experiment was two sets of the following : 1 Radio Ubiquiti Bullet M5 HP Approved by Anatel to 400mW operating in the band 5800 Mhz 1 satellite dish Aquarius Approved by the FCC with 24 dB gain and 4 degrees of openness 1 UHF Radio HT HT VHF Radio 1 with 2 dB omni directional antenna 1 radio HT VHF / UHF dual band All other info on the experiment is detailed here . Location and height In the video cited, I got the impression one antenna was on land, and the other antenna was on a boat out on water. But looking at the locations given in experiment notes, it looks to me that both are at points along the coastline. I was not sure why that was. I've since seen this note on the experiment notes: Note: Team B coordinate the currently appears on the water, but when
in 2011 the experiment was conducted there was a cove where the equipment
was installed. The equipment of both teams was positioned at 1.5
meters the water level height. Question What is the scientific explanation, using accepted laws of physics, to explain how these high frequency radio waves can make contact with the opposite antenna over a distance of 14 km? Additional related info A similar experiment was conducted by the same researchers, using a laser beam. It was transmitted across a distance of 33.78 km, at 1.5 m above water level. It too was successfully transmitted between the two points with that distance between them. | EDIT: In the interest of avoiding spreading misleading information, I have removed the portions of this answer that have been disputed or refuted in the comments and edits on this question. Specifically, the parts about the ACK/Distance shown on the screen at 42:47 and the calculation of the curvature have been removed. The rest of this answer, however, still stands. TL;DR: They erroneously believed that radio antennae were lasers. The antennae should still be able to connect even on a curved Earth. The video pretends that the signal leaving the radio antennae is like a laser beam, focused in the line that emanates from transmitter to receiver without diverging. In reality, this isn't even close to true, even for directional radio antennae. Both the transmitted signal and the receiver acceptance get wider farther from the respective antennae, purely due to the diffractive properties of waves. This means that the signal actually propagates in a large ellipsoidal region between the antennae called the Fresnel zone **. The rule of thumb that is used in engineering systems is that as long as at least 60 percent of the Fresnel zone is unobstructed, signal reception should be possible. The maximum radius $F$ of the Fresnel zone is given in the same Wikipedia article by $$F=\frac{1}{2}\sqrt{\frac{cD}{f}}\,,$$ where $c=3\times {10}^8 \frac{\mathrm{m}}{\mathrm{s}}$ is the speed of light, $D$ is the propagation distance and $f$ is the frequency. 
Using $D=14 \, \mathrm{km}$ and $f=5.880 \, \mathrm{GHz},$ we see that $F=13.69 \, \mathrm{m}.$ As you can see, the beam expands massively over such a distance. If you cut out the lower $3.84 \, \mathrm{m}$ of that circle, you would find that the fraction of the beam that is obstructed for obstruction height $h$ from the formula for the area of the cut-out portion given here : $$\frac{A_{\text{obstructed}}}{A_{\text{whole beam}}}=\frac{F^2\cos^{-1}\left(\frac{F-h}{F}\right)-(F-h)\sqrt{2Fh-h^2}}{\pi F^2}\,.$$ Evaluating this expression for $F=13.69 \, \mathrm{m}$ and $h=3.84 \, \mathrm{m}$ gives you an obstruction fraction of $\frac{A_{\text{obstructed}}}{A_{\text{whole beam}}}=0.085.$ So, even on a curved earth, only 8.5 percent of the beam would be obstructed. This is well within the rule of thumb (which required less than 40 percent obstruction), so the antennae should still be able to connect on a curved Earth. **In reality, propagation of radio waves between two antennae is complicated , and I'm necessarily skipping over a lot of details here, or else this post would become a textbook. What I refer to as the "Fresnel zone" here is technically the first Fresnel zone, but the distinction is not necessary here. | {
"source": [
"https://physics.stackexchange.com/questions/410328",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/197670/"
]
} |
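The two formulas in the answer above, sketched in code. The inputs here are assumptions: $c = 3\times10^8$ m/s, $D = 14$ km and a 5.8 GHz carrier. The answer quotes $F = 13.69$ m with slightly different inputs; small input changes move $F$ by a few percent, but the conclusion is unchanged.

```python
import math

c = 3.0e8   # m/s
D = 14.0e3  # m, antenna separation from the question
f = 5.8e9   # Hz, assumed carrier in the 5.8 GHz band

# maximum first-Fresnel-zone radius, F = (1/2) * sqrt(c D / f)
F = 0.5 * math.sqrt(c * D / f)

def obstructed_fraction(F, h):
    """Fraction of the zone's circular cross-section blocked by a bulge of height h."""
    return (F ** 2 * math.acos((F - h) / F)
            - (F - h) * math.sqrt(2 * F * h - h ** 2)) / (math.pi * F ** 2)

print(F)                             # roughly 13.5 m
print(obstructed_fraction(F, 3.84))  # roughly 0.09, well under the 40% rule of thumb
```

With under about 9% of the zone obstructed by the 3.84 m curvature bulge, the link closes comfortably on a curved Earth, which is the answer's point.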
410,359 | We know that in classical thermodynamics $$v_{rms} = \sqrt{\frac{3k_B T}{m}}$$ However we immediately see that this is wrong for high temperatures as there is no upper bound on velocity. How do I get the exact equation? My approach- We have, $E = \sqrt{m_o^2c^4 + p^2c^2}$ Now from thermal energy we have total energy to be(sum of rest energy and thermal energy) $E = m_o c^2 + \frac{3}{2}k_B T$ Thus, $$m_oc^2 + \frac{3}{2}k_B T = \sqrt{m_o^2c^4 + p^2c^2}$$ Here, $p = mv$ & $m = \frac{m_o}{\sqrt{1-\frac{v^2}{c^2}}} $ Then we can solve for $v$ $ ( \sim v_{rms})$ I am not sure if this is right. Can someone correct me? Can you give me atleast the final result if not the entire drivation? | The assumption that the thermal energy is $\frac{3}{2}k_bT$ is actually only valid at non-relativistic temperatures. In general we have to use the equipartition theorem to find the relation between temperature and energy: \begin{equation}
\left< x_m \frac{\partial E}{\partial x_n} \right> = \delta_{mn} k_BT,
\end{equation}
where $x$ can be a coordinate or conjugate momentum. Just taking the one-dimensional case for simplicity, in the Newtonian regime, $E = \frac{mv^2}{2}$, so that $v=\sqrt{\frac{k_BT}{m}}$. But in the relativistic case,
$E = \sqrt{p^2c^2 + m_0^2c^4}$. This means that \begin{equation}
\frac{c^2p^2}{\sqrt{p^2c^2 + m_0^2c^4}} = k_BT,
\end{equation}
so
\begin{equation}
p^2 = \frac{k_B^2T^2c^2 \pm \sqrt{k_B^4T^4c^4 + 4k_B^2T^2m_0^2c^8}}{2c^4}.
\end{equation} As $T \rightarrow \infty$ we get $p = k_BT/c$, or $E=k_BT$ (as the mass-term in the energy becomes negligible compared to the momentum). This is the well-known energy-equation for the ultra-relativistic gas .
As $T \rightarrow 0$ we get $v = \sqrt{\frac{k_BT}{m_0}}$, the Newtonian result. | {
"source": [
"https://physics.stackexchange.com/questions/410359",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/156705/"
]
} |
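The quadratic solution in the answer above and its two limits can be checked numerically (the electron rest mass is an arbitrary illustrative choice; only the positive root of the quadratic in $p^2$ is physical):

```python
import math

kB = 1.381e-23  # J/K
c = 2.998e8     # m/s
m0 = 9.109e-31  # kg, electron rest mass as a concrete example

def p_equipartition(T):
    """Positive root of c^4 p^4 - kB^2 T^2 c^2 p^2 - kB^2 T^2 m0^2 c^4 = 0."""
    a = (kB * T) ** 2
    p2 = (a * c ** 2 + math.sqrt(a ** 2 * c ** 4 + 4 * a * m0 ** 2 * c ** 8)) / (2 * c ** 4)
    return math.sqrt(p2)

# low T: p -> sqrt(m0 kB T), i.e. the Newtonian v = sqrt(kB T / m0)
T = 1.0
assert math.isclose(p_equipartition(T), math.sqrt(m0 * kB * T), rel_tol=1e-3)

# high T: p -> kB T / c, the ultra-relativistic result E = kB T
T = 1e15
assert math.isclose(p_equipartition(T), kB * T / c, rel_tol=1e-3)
```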
410,585 | Visible light (~500 THz) as well as gamma rays (~100 EHz) are electromagnetic radiation, but we can reflect visible light using a glass mirror and not gamma rays. Why is that? | Look at the electromagnetic spectrum: Visible frequencies have wavelengths of microns, $10^{-6}$ meters. Gamma rays have a wavelength of $10^{-12}$ meters, picometers. In physics, there are two main frameworks: the classical frame, which includes Maxwell's electrodynamics, Newton's mechanics, and derivative theories, and the quantum mechanical frame, which becomes necessary for small distances and high energies, where gammas (photons), electrons, atoms, nucleons and lattices belong. The classical electromagnetic wave emerges from zillions of superposed photons. Maxwell's equations describe very well the behavior of light beams when scattering or reflecting or generally interacting for macroscopic distances and small energies. Reflection, classically, needs a very flat surface so that the phases of the reflected waves are retained. Depending on the material, the classical beams may be absorbed, decohered by reflecting from many point sources, or reflected coherently if the scattering is elastic (mirrors elastically and coherently scatter incoming light). Gamma rays, though, force us to go to the micro level, because of the very small wavelength that describes them as a light beam. One has to look at the details of the surface, and whether a classically smooth surface for classical reflection can be modelled for gammas, and the answer is: no, it cannot. The spacing between atoms in most ordered solids is on the order of a few ångströms (a few tenths of a nanometer). For micron wavelengths (optical light) the fields built up by atoms with angstrom distances in the lattice appear smooth and can be classically modelled. Gamma rays, considered as a classical light beam with their picometer wavelengths, see mostly empty space between the atoms of the solid. 
An alternative analysis, still within the quantum frame, would be considering the photons which make up light, and the Heisenberg uncertainty $ΔpΔx$ in the location of the photon. For the small wavelengths of gamma rays, the photons see mostly empty space. | {
"source": [
"https://physics.stackexchange.com/questions/410585",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/114410/"
]
} |
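A back-of-envelope comparison, following the answer above, of the two wavelengths against a typical lattice spacing (the 2.5 Å spacing is an assumed representative value):

```python
c = 2.998e8  # m/s

visible = c / 500e12  # ~600 nm for ~500 THz light
gamma = c / 100e18    # ~3 pm for ~100 EHz gamma rays
lattice = 2.5e-10     # ~2.5 angstrom, an assumed typical interatomic spacing

print(visible / lattice)  # thousands: optical light sees a smooth surface
print(gamma / lattice)    # ~0.01: gamma rays resolve the gaps between atoms
```

A wavelength thousands of times the atomic spacing averages over the lattice and reflects classically; a wavelength a hundredth of the spacing sees individual atoms and mostly empty space.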
411,314 | If I take a 1 watt heating element, put it in a glass of water and I put them both inside a sealed imaginary chamber that does not conduct heat outside (again, imaginary). Will the water eventually boil? If not, I don't understand why, since electricity keeps flowing through the heating element, generating more and more energy in joules, and since temperature is just an increase in joules per kilogram, in an ideally sealed chamber, heat would accumulate and slowly raise the temperature. | In the scenario you describe, with no heat loss to the environment (perfect insulation), the temperature of the water will rise without limit so long as you keep adding energy to the system, which in the scenario is at a rate of 1 W. Given enough time (and assuming your container doesn't break down) this will exceed the temperature of the sun. At some point there will probably be one or more phase changes, and the point at which these occur will depend on the pressure. You don't indicate if this is kept at atmospheric pressure (isobaric) or if the pressure is allowed to rise and the boundary is fixed (isochoric). This is relevant as water only boils at 100°C at sea level atmospheric pressure. The temperature of the element is irrelevant because if you keep pumping energy into it, it will also continue to rise as the temperature of the water rises. The element and water system will never be in thermodynamic equilibrium so long as you continue to pump in energy. The relative temperatures of the element and water at any given moment will depend on the thermal properties of both materials. | {
"source": [
"https://physics.stackexchange.com/questions/411314",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/172965/"
]
} |
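A rough rate estimate for the scenario in the answer above, using $\mathrm{d}T/\mathrm{d}t = P/(m c_p)$ (the 250 g of water and the 80 °C rise to the sea-level boiling point are assumed inputs, not from the post, and the glass and any phase change are ignored):

```python
# Temperature rise rate for 1 W into a perfectly insulated glass of water.
P = 1.0      # W
m = 0.25     # kg of water, an assumed glassful
cp = 4186.0  # J/(kg K), specific heat of liquid water

rate = P / (m * cp)                # kelvin per second
hours_to_boil = 80 / rate / 3600   # 20 C -> 100 C under the assumptions above

print(rate)           # just under 1 mK per second
print(hours_to_boil)  # roughly a day
```

Slow, but with perfect insulation the rise never stops, which is the answer's point.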
411,322 | The energy of light is given by: $$ E = h\nu = \frac{hc}{λ} $$ which seems weird to me is that the equation has nothing to do with its amplitude. But intuitively, since the light is wave, the energy of wave should dependent on its amplitude. So I wondered why the energy of light/photon has nothing to to with its amplitude according to the above equation? | In the scenario you describe, with no heat loss to the environment (perfect insulation), the temperature of the water will rise without limit so long as you keep adding energy to the system, which in the scenario is at a rate of 1 W. Given enough time (and assuming your container doesn't break down) this will exceed the temperature of the sun. At some point there will probably be one or more phase changes, and the point at which these occur will depend on the pressure. You don't indicate if this is kept at atmospheric pressure (isobaric) or if the pressure is allowed to rise and the boundary is fixed (isochoric). This is relevant as water only boils at 100°C at sea level atmospheric pressure. The temperature of the element is irrelevant because if you keep pumping energy into it, it will also continue to rise as the temperature of the water rises. The element and water system will never be in thermodynamic equilibrium so long as you continue to pump in energy. The relative temperatures of the element and water at any given moment will depend on the thermal properties of both materials. | {
"source": [
"https://physics.stackexchange.com/questions/411322",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/194362/"
]
} |
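The formula in the question is easy to evaluate for a single photon; a quick sketch (500 nm chosen arbitrarily as a visible-light wavelength):

```python
# Sketch: evaluate E = h*c/lambda for a single visible photon
# (500 nm chosen arbitrarily).
h = 6.62607015e-34     # J*s, Planck constant
c = 2.99792458e8       # m/s, speed of light

def photon_energy(wavelength_m):
    return h * c / wavelength_m

E = photon_energy(500e-9)
print(f"{E:.3e} J per photon")   # ~3.97e-19 J, set by wavelength alone
```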
412,331 | Since stellar fusion can’t progress beyond iron, and a large enough star collapsed into a black hole because an iron core stalled fusion, wouldn’t that mean all black holes are predominantly iron? | If we are talking about stellar-sized black holes, then the object that collapses to form a black hole will have a high concentration of iron (and other iron-peak elements like manganese, nickel and cobalt) at its core, and it is the core-collapse that begins the black hole formation process, but much more material than this will eventually form that black hole. It appears, empirically, that the minimum mass of a stellar-sized black hole is around $4M_{\odot}$, but is more typically around $10-15M_{\odot}$. But the extinct core of iron in a pre-supernova star is unlikely to exceed around $1.5-2M_{\odot}$ even for the most massive of supernova progenitors (see for example these slides ). Thus most of the material that collapses into a black hole is not iron, it is actually the carbon, oxygen, silicon neon and helium that surrounded the iron core. Much of the nuclear material will be photodisintegrated into its constituent baryons (or alpha particles) during the collapse. Neutronisation reactions will turn most of the protons in the high density material into neutrons. Even at equilibrium, when densities higher than about $10^{14}$ kg/m$^{3}$ are reached then any remaining nuclear material will begin to transmute into all sorts of weird and wonderful neutron-rich nuclei (as in the crusts of neutron stars) and by the time you reach densities of $\sim 10^{17}$ kg/m$^{3}$ (which is still well outside the event horizon of a stellar-sized black hole), the nuclei will lose their identity in any case, and become a fluid of neutrons, protons and electrons. A second point to consider is whether it makes any sense to talk about the composition of a black hole. Composition is not one of the things you can measure - these are restricted to mass, angular momentum and charge. 
The other details are lost from a (classical) black hole. | {
"source": [
"https://physics.stackexchange.com/questions/412331",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/94052/"
]
} |
413,184 | Imagine you have a mirror which looks like this: Since it is an ordinary household mirror when you zoom in on it, it should have a fractal-like structure so when you zoom in it may look something like this: A surface like this should reflect light in all directions right? | The effect of the roughness of the surface on the scattering of light depends, according to the Rayleigh criterion for rough surfaces, on the wavelength of light and on the incident angle. Basically, it says that if the roughness of a surface, which could be characterized by the difference in height at peaks and valleys of the surface, $\Delta h$, is smaller than $\frac {\lambda} {8cos\theta}$, the surface could be considered smooth. It has to do with constructive and destructive interference of of the reflected light. This is a very crude definition of roughness, but it gives you an idea that the same surface will appear smoother or more specular as the wavelength and the incident angle of the light increases. So, although at the microscopic level the surface of a mirror may look rough, for the wavelengths associated with visible light, it is obviously pretty smooth and produces almost perfect specular reflection. | {
"source": [
"https://physics.stackexchange.com/questions/413184",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/198554/"
]
} |
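The Rayleigh criterion quoted in the answer is simple to evaluate; a sketch with assumed numbers (550 nm green light):

```python
import math

# Sketch: the Rayleigh height threshold dh < lambda / (8 * cos(theta))
# below which a surface behaves as optically smooth.
def roughness_limit(wavelength_m, incidence_deg):
    return wavelength_m / (8.0 * math.cos(math.radians(incidence_deg)))

print(roughness_limit(550e-9, 0.0))    # ~69 nm at normal incidence
print(roughness_limit(550e-9, 80.0))   # ~396 nm at grazing incidence
```

The same surface thus tolerates far larger bumps at grazing incidence, which is why even a rough road can look mirror-like toward a low sun.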
413,406 | The gravitational potential $G_\text{pot}$ has units of energy per unit mass: $$
\bigg[\rm\frac{J}{kg}\bigg] = \bigg[\rm\frac{kg\cdot m^2}{s^2\cdot kg}\bigg] = \bigg[\rm\frac{m^2 }{s^2}\bigg].
$$ The gravitational force is $F = - \nabla G_\text{pot}$ so this would lead me to believe that unit-wise, due to the gradient, we have a similar expression to the above, apart from an additional $\rm m$ in the denominator: $$
\bigg[\rm\frac{J}{kg\cdot m}\bigg] = \bigg[\rm\frac{kg\cdot m}{s^2\cdot kg}\bigg] = \bigg[\rm\frac{m }{s^2}\bigg].
$$ But force has units of Newtons: $$
\bigg[\rm N\bigg] = \bigg[\rm\frac{kg\cdot m}{s^2}\bigg] \neq \bigg[\rm\frac{m}{s^2}\bigg]
$$ So why am I missing a $\rm kg$ in my units when I take the gradient of the gravitational potential? | You are using a wrong relation. The relation is not " force equals the negative gradient of gravitational potential " but " force equals the negative gradient of gravitational potential energy ": $$F=-\nabla U= -\frac{dU}{dx}$$ The $U$ here is potential energy , not potential . A potential is rather a potential energy per mass . Had you used potential energy to derive the force unit, you would indeed have gotten the correct force unit of $[\mathrm{\frac{kg \; m}{s^2}}]=[\mathrm{N}]$ . But using potential to derive the unit, you get not the unit of force but that of force per mass , $[\mathrm{\frac{kg \; m}{s^2}/kg}]=[\mathrm{\frac{m}{s^2}}]=[\mathrm{\frac{N}{kg}}]$ . This is why (due to the "per-mass" feature) you are lacking one $\mathrm{kg}$ in the derived unit. | {
"source": [
"https://physics.stackexchange.com/questions/413406",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/110311/"
]
} |
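The per-unit-mass point in the answer can be checked numerically: differentiating Earth's potential $\Phi = -GM/r$ gives a quantity in m/s², an acceleration, not a force. A sketch (standard Earth values assumed):

```python
# Sketch: differentiate the potential phi = -G*M/r numerically and confirm
# the result is an acceleration (m/s^2, i.e. N/kg), not a force in newtons.
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24         # kg, Earth's mass
R = 6.371e6          # m, Earth's radius

def phi(r):
    return -G * M / r

h = 1.0                                         # m, finite-difference step
dphi_dr = (phi(R + h) - phi(R - h)) / (2 * h)   # ~ G*M/R^2
# The field per unit mass is -dphi/dr (pointing inward); its magnitude:
print(f"|g| = {dphi_dr:.2f} m/s^2")             # ~9.8, familiar as an acceleration
```

Multiplying by a test mass in kilograms restores the missing kg and yields a force in newtons.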
413,604 | From Taylor's theorem, we know that a function of time $x(t)$ can be constructed at any time $t>0$ as $$x(t)=x(0)+\dot{x}(0)t+\ddot{x}(0)\frac{t^2}{2!}+\dddot{x}(0)\frac{t^3}{3!}+...\tag{1}$$ by knowing an infinite number of initial conditions $x(0),\dot{x}(0),\ddot{x}(0),\dddot{x}(0),...$ at $t=0$. On the other hand, it requires only two initial conditions $x(0)$ and $\dot{x}(0)$, to obtain the function $x(t)$ by solving Newton's equation $$m\frac{d^2}{dt^2}x(t)=F(x,\dot{x},t).\tag{2}$$ I understand that (2) is a second order ordinary differential equation and hence, to solve it we need two initial conditions $x(0)$ and $\dot{x}(0)$. But how do we reconcile (2) which requires only two initial conditions with (1) which requires us to know an infinite number of initial informations to construct $x(t)$? How is it that the information from higher order derivatives at $t=0$ become redundant? My guess is that due to the existence of the differential equation (2), all the initial conditions in (1) do not remain independent but I'm not sure. | On the other hand, it requires only two initial conditions x(0) and
$\dot{x}(0)$, to obtain the function $x(t)$ by solving Newton's equation
For notational simplicity, let $$x_0 = x(0)$$
$$v_0 = \dot x(0)$$ and then write your equations as $$x(t) = x_0 + v_0t + \ddot x(0)\frac{t^2}{2!} + \dddot x(0)\frac{t^3}{3!} + \cdots$$ $$m\ddot x(t) = F(x,\dot x,t)$$ Now, see that $$\ddot x(0) = \frac{F(x_0,v_0,0)}{m}$$ $$\dddot x(0) = \frac{\dot F(x_0,v_0,0)}{m}$$ and so on. Thus $$x(t) = x_0 + v_0t + \frac{F(x_0,v_0,0)}{m}\frac{t^2}{2!} + \frac{\dot F(x_0,v_0,0)}{m}\frac{t^3}{3!} + \cdots$$ In other words, the initial value of the 2nd and higher order time derivatives of $x(t)$ are determined by $F(x,\dot x, t)$. | {
"source": [
"https://physics.stackexchange.com/questions/413604",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/36793/"
]
} |
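The answer's point, that $F$ fixes every higher derivative at $t=0$, can be made concrete for a specific force law. A sketch using $F = -kx$ (simple harmonic motion, an assumed example rather than anything in the question), where Newton's equation turns into a recursion on Taylor coefficients:

```python
import math

# Sketch: for the assumed force law F = -k*x, Newton's equation
# x'' = -(k/m) x fixes every higher Taylor coefficient from x0 and v0
# via the recursion a_{n+2} = -(k/m) * a_n / ((n+1)*(n+2)).
def taylor_x(t, x0, v0, k_over_m, terms=30):
    a = [x0, v0]                       # a_0 = x(0), a_1 = x'(0)
    for n in range(terms - 2):
        a.append(-k_over_m * a[n] / ((n + 1) * (n + 2)))
    return sum(c * t**n for n, c in enumerate(a))

w2 = 4.0                               # omega^2 = k/m
x_series = taylor_x(1.3, 1.0, 0.0, w2)
x_exact = math.cos(math.sqrt(w2) * 1.3)
print(x_series, x_exact)               # the two agree to high precision
```

Only $x_0$ and $v_0$ were supplied; the infinitely many remaining Taylor coefficients came from the equation of motion itself.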
413,741 | I know an electric field can exist without a magnetic field as in the case where you have a stationary point charge. But, magnetic fields are created by moving charges so wouldn't you always need an electric field to have a magnetic field? Even in the case of permanent magnets, from what I know, it's the aligned moving electrons in the atoms of the material which cause the magnetic properties so doesn't that mean there's always an electric field in order to have a magnetic field? | The "magnetic field" is a concept within classical electrodynamics. Maxwell's equations were developed in the mid 19th century at a time where basic atomic physics was still a nascent field of study. Viewed in the contemporary historical context, a permanent magnet is a perfectly fine example of a magnetic field without an electric field. Within the theory of classical electrodynamics, there is no explanation for why the magnetic field exists, only that it does exist, and how it's related to the electric field. Permanent magnets have a magnetic field as an intrinsic, fundamental property, similar to the reasons rocks have mass. They just do. In the past one and a half centuries other theories have been developed. For example the magnetic field can be explained by special relativity as length contraction apparently creating a charge imbalance, so it could be said the magnetic field doesn't exist as a fundamental property but is rather a manifestation of the electric field in moving reference frame, and quantum physics explains permanent magnets as moving charges at sub-atomic scales . So viewed in the context of modern physics, there's really no need for a fundamental magnetic field at all since it can be explained in terms of the electric field and motion. The discovery of a magnetic monopole would change this, but although it would bring an elegant symmetry to the kinds of particles that exist, no evidence of a magnetic monopole has been found by experiment yet. | {
"source": [
"https://physics.stackexchange.com/questions/413741",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/199160/"
]
} |
413,846 | In reading these discussions I often see these two different definitions assumed. Yet they are very different. Which is correct: Does gravity CAUSE the bending of spacetime, or IS gravity the bending of spacetime? Or do we not know? Or is it just semantics? Would, in the absence of spacetime, my apple still fall to the earth? | I think the correct answer should be that what we call gravity is a fictional force which we experience due to living in an accelerated reference frame (as opposed to an inertial one). Unlike other forces, the force of gravity disappears by a coordinate change. If a person is in a falling elevator, they experience free fall, i.e. they feel like they are floating, and they would conclude there is no force of gravity acting on them. However we at the surface of the Earth would say that clearly the force of gravity is causing the elevator to plunge ever faster towards the ground. Of course the solution to this odd state of affairs is that gravity is not a force at all. We live in a four dimensional universe with a pseudo-Riemannian geometry in which freely falling objects move along geodesics, or lines of extremal space-time distance. Because the geometry can be intrinsically curved (like the surface of a sphere), those geodesics are not what we think of as straight lines. The person inside the elevator moves along a geodesic, while we on the surface of the Earth are accelerated and do not move along a geodesic. The space-time paths (or worldines) of the elevator and the ground underneath it are not straight lines, and so they intersect at some point. That intersection is the point in space-time at which the elevator hits the ground. One way to think of this is to consider two ants walking along lines of longitude on a globe. Lines of longitude are great circles, and are geodesics of the sphere. The two ants start at the equator on different lines of longitude both heading due north at the same speed. 
Their paths are initially parallel to each other, but as they move along the curved surface the distance between them shrinks until they eventually collide at the North Pole. It appears as though there is a force which is pulling them together, but in fact the force is fictitious, the reason they got closer is because on the sphere the geodesics converge and cross each other, unlike in flat space where the geodesics are straight lines which never cross. If the globe is very large, the ants will never know that they are moving on a curved surface, and so would conclude that there must be some force which attracts them. This is the fundamental picture for how "gravity" works from the perspective of General Relativity. Now to your question, the difference is subtle. While what we refer to as "gravity" is subject to semantics, there is something more profound going on. General Relativity is usually referred to as a "theory of gravity", in which case we can think of the answer as the latter: by definition, gravity is the bending of space-time. On the other hand if we think of gravity as a force, the apparent force of gravity is essentially caused by the fact that space-time is curved. But we can essentially take this logic in circles if we think too much about it, it all depends on what we define "gravity" to be. But deeper than this is the question of what causes gravity ? In classical mechanics we are told that gravity is caused by mass, in the sense that massive bodies have a gravitational field which causes them to attract. But we know that's not the right picture. So to generalize your question, is spacetime curvature caused by mass? In some sense yes, in some sense no. Einstein's equation reads $$G_{\mu\nu} = \kappa T_{\mu\nu}$$ where $\kappa$ is a constant, the tensor $G_{\mu\nu}$ is a function of the metric, which encodes the curvature of spacetime, and $T_{\mu\nu}$ is the stress-energy tensor which encodes the matter/energy content of the universe. 
Because the theory of General Relativity is fundamentally four dimensional, and there is no preferred direction to call "time", we must essentially solve Einstein's equation "all at once". Clearly the matter content of the universe will determine the curvature of the universe, while the curvature of the universe will tell the matter how to move. So you have a sort of chicken and egg problem: matter tells space how to bend and space tells matter how to move. There is a Hamiltonian (i.e initial value) formalism for GR which works for globally hyperbolic spacetimes (that is, it is not valid for all possible spacetimes). It is called the ADM formalism (named after Arnowitt, Deser, and Misner). It does allow one to set up initial conditions for a spacetime (initial curvature and matter/energy state) and compute the evolution of that spacetime and its matter content over "time" in a way that is generally covariant (does not violate relativity of observers). But this still does not separate the inherent link between space-time curvature and matter/energy content. As an interesting related question, one could ask whether a massive particle moving through space can interact with itself gravitationally? That is, the mass of the particle distorts space-time and therefore alters its trajectory. There is a similar question at the end of Jackson's "Classical Electrodynamics" regarding accelerating charged particles interacting with their own radiation. I believe his conclusion is that such processes are not really considered because they would create such small corrections. In the context of GR, I would guess such questions fall in the realm of Quantum Gravity. As to your last question, perhaps you meant "in the absence of space-time curvature ". In which case the answer is no, the apple would not fall, all objects would move in straight space-time paths which never intersect and so would always remain at the same distance from each other. | {
"source": [
"https://physics.stackexchange.com/questions/413846",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/187139/"
]
} |
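The two-ant picture in the answer can be quantified: on a sphere, meridians a fixed longitude apart are separated east-west by $R\,\Delta\lambda\cos\varphi$ at latitude $\varphi$, so initially parallel geodesics converge with no force acting. A sketch (Earth-sized globe and a one-degree separation assumed):

```python
import math

# Sketch of the two-ant picture: meridians a fixed longitude apart have
# east-west separation s = R * dlambda * cos(latitude) on a sphere, so
# the initially parallel geodesics converge without any force acting.
def separation(R, dlambda_rad, latitude_deg):
    return R * dlambda_rad * math.cos(math.radians(latitude_deg))

R = 6.371e6                          # m, an Earth-sized globe (assumed)
dl = math.radians(1.0)               # ants start 1 degree of longitude apart
for lat in (0.0, 45.0, 89.0, 90.0):
    print(lat, separation(R, dl, lat))
# ~111 km at the equator, shrinking to 0 at the pole.
```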
413,862 | I am trying to do some car calculations for a game. I have a tyre (radius $r$, inertia $I$) rotating around its axle with an angular velocity $\omega$ faster than the center of mass is moving across the surface. So the tyre is not rolling along the ground, but slipping. The surface has a friction coefficient $\mu$. 
The tyre is pressed down by the car with a weight $N$. I need to calculate two things. The force the tyre exerts on the ground. The resulting (de)acceleration of the tyre. Imagine a car doing a wheelspin. I need to find the force the excess speed of the tyre transfers to the ground, giving an acceleration to the car, and thereby also the deceleration of the tyre. I have calculations for the external forces (engine, brakes, drag, lateral slipping) and this is the final piece to complete the puzzle. So help is much appreciated, as I have been trying to solve this in a myriad of ways, with no success. //Edited - It is the wheels connected to the engine I am trying to calculate. However I am in this case not interested in the torque coming from the engine, only actions from the extra energy stored in the wheel from building up the spin. The engine provides a torque $\tau$ on the tyre | While the contact patch is slipping ($\omega r$ greater than the ground speed $v$), the tyre is in kinetic friction, so the force is simply $F = \mu N$, independent of how large the slip is. That is the force the tyre exerts on the ground (backwards), and by Newton's third law the ground pushes the car forwards with the same $\mu N$, so the car accelerates at $a = \mu N / M$ for car mass $M$. For the wheel itself, take torques about the axle: the engine supplies $+\tau$ and friction supplies $-\mu N r$, giving $$I \dot\omega = \tau - \mu N r.$$ With $\tau < \mu N r$ the wheel spins down towards the rolling condition $\omega r = v$; once the slip closes, you switch to static friction, where $F$ takes whatever value up to $\mu_s N$ keeps the wheel rolling without slip. The "extra energy stored in the wheel" enters only through this torque balance: the friction torque drains the wheel's rotational kinetic energy $\frac{1}{2} I \omega^2$, part of which (the part matching $F v$) accelerates the car, while the remainder is dissipated as heat in the contact patch. | {
"source": [
"https://physics.stackexchange.com/questions/413862",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/199221/"
]
} |
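One minimal way to model the wheelspin scenario in the question is plain kinetic friction: while $\omega r > v$ the ground force is $\mu N$ and the wheel feels a net torque $\tau - \mu N r$. A rough Euler-integration sketch (all parameter values are made up, and the hard slip/no-slip cutoff is a simplification of a real tyre's slip curve):

```python
# Sketch (assumed parameters): a kinetic-friction wheelspin model. While
# the patch slips (omega*r > v) the ground force is mu*N, accelerating the
# car and applying torque -mu*N*r to the wheel; the hard cutoff at
# omega*r = v is a simplification of a real tyre model.
def wheelspin_step(omega, v, tau, mu, N, r, I, m_car, dt):
    F = mu * N if omega * r > v else 0.0   # ground reaction force
    alpha = (tau - F * r) / I              # wheel angular acceleration
    a = F / m_car                          # car acceleration
    return omega + alpha * dt, v + a * dt

omega, v = 50.0, 5.0                       # rad/s and m/s: heavy initial slip
for _ in range(100):
    omega, v = wheelspin_step(omega, v, tau=0.0, mu=1.0, N=3000.0,
                              r=0.3, I=1.2, m_car=1200.0, dt=0.001)
print(omega, v)                            # wheel has slowed, car has sped up
```

In a game loop one would call a step like this per physics tick, with the engine torque $\tau$ fed in from the drivetrain model.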
413,876 | Some presentations of the virial theorem are mechanical (see this page by John Baez for an example). They assume that there is a system of point particles interacting only via Newtonian gravity (along with other assumptions, e.g. that the particles don't fly away to infinity), and show $\langle T \rangle = -\frac12\langle V\rangle$. The physics that goes in is just Newton's laws. Other presentations are based on thermodynamics (see pp 81 of these notes by Mike Guidry, for example). They imagine a gas in hydrostatic equilibrium, find the pressure, and use the ideal gas law to derive the same result, $T = -\frac12 V$. The physical assumptions that go into these seem pretty different. In the mechanical case, we have only gravitational interactions. In the thermodynamic case, the interactions aren't even specified. Presumably the gas particles are bouncing off each other according to some sort of force law, but we only actually need to know that the ideal gas law holds (and use the condition for hydrostatic equilibrium). Although the theorems seem physically different, they have the same name and come to the same conclusion (except that the thermodynamic one doesn't need the time-averaging). How are these two versions of the virial theorem related to each other? Other than crunching through each proof separately, how can one see that they ought to give the same result? note: I'm asking about the special case of the virial theorem described above, not the general virial theorem for more general force laws, for example | The two derivations are computing the same quantity, the virial $\sum_i \mathbf{F}_i \cdot \mathbf{r}_i$, just with different bookkeeping on the kinetic side. The bridge is kinetic theory: the ideal gas law is not an independent physical input but a statement about the particles' kinetic energy, since for point particles $PV = \frac{2}{3} K$, with $K$ the kinetic energy of the particles in volume $V$. So wherever the thermodynamic derivation writes pressure, it is really writing two-thirds of the kinetic-energy density. Now take hydrostatic equilibrium, $dP/dr = -G m(r) \rho / r^2$, multiply by $4\pi r^3$ and integrate over the star. The left side integrates by parts (the boundary term vanishes) to $-3 \int P \, dV = -2K$, while the right side is exactly the gravitational potential energy $V_{\rm grav} = -\int G m(r) \, dm / r$. That gives $2K + V_{\rm grav} = 0$, i.e. $T = -\frac{1}{2} V$ with $T$ read as the total kinetic energy. The "unspecified interactions" in the thermodynamic version do exactly one job: they establish local thermal equilibrium, so the long-time average of the mechanical proof is replaced by a thermal (ensemble) average. That is why no explicit time-averaging appears; a gas in hydrostatic equilibrium is already in a stationary state. The collisions themselves are short-ranged and contribute negligibly to the virial, which is why only gravity survives in the final result, just as in the point-particle proof. | {
"source": [
"https://physics.stackexchange.com/questions/413876",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/74/"
]
} |
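However the theorem is derived, the result is easy to sanity-check on the simplest bound gravitational system, a single circular orbit, where $v^2 = GM/r$ makes $T = -\frac12 V$ an identity. A sketch (rough Sun-Earth numbers assumed):

```python
# Sketch: for a circular orbit, v^2 = G*M/r, so T = -V/2 holds exactly.
G = 6.674e-11                  # gravitational constant
M = 1.989e30                   # kg, roughly the Sun
m = 5.972e24                   # kg, roughly the Earth
r = 1.496e11                   # m, roughly 1 AU
v2 = G * M / r                 # circular-orbit speed squared
T = 0.5 * m * v2               # kinetic energy
V = -G * M * m / r             # gravitational potential energy
print(T / V)                   # -0.5, the virial ratio
```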
413,886 | I like watching different videos about space. I keep seeing all these videos saying scientists found so and so at 200 billion light years away or this happened 13 billion years ago. My question is why do scientists think that all the physics that apply in our galaxy apply in a galaxy say 200 billion light years away ? What if, say at 135 billion light years away, all of a sudden the time space relationship changes drastically and instead of linear time space relationships the difference becomes based on a "sliding scale" (to revert back to high school). What if a light they first see and estimate to be 200 billion light years away has actually been traveling for another 300 billion light years before we could detect it? Lets be serious, we can't predict the weather farther out than 10 days accurately, and usually not that long.... | I think the solution to this may be to check out Occam's razor . That leads to the idea that we accept the simplest theory which matches best with what we observe. If you're asking why we don't believe that the spacetime relationship changes drastically (among other claims), it's because: We have no reason to believe that's the case. There's no evidence which needs to be explained by such a model. No observations or reasoning suggests that other galaxies are governed by drastically different laws. We like symmetry. We have evidence that things work in a certain way around Earth and the observable universe, and hence are compelled to believe that the same laws are applicable at all scales, until we have reasons to believe otherwise. String theory predicts other situations, but those weren't observed yet in reality, and they don't emerge from a crude "Hey, why not?!" speculation. That being said, though we believe that the same laws apply, we know that there are different physical phenomena going on in other galaxies. 
For example, this link will show you that there are different types of galaxies which behave differently in spite of the same laws, because of different initial conditions. And to answer your reference to weather, that's chaos theory, and it deals with the dependence of the weather on extremely small factors which can't be observed reasonably. Check out the work of Edward Lorenz ( http://eaps4.mit.edu/research/Lorenz/publications.htm ). A gist of one of his most important experiments is that he ran the same weather simulator algorithm twice and got two entirely different predictions, even though he only neglected the 5th or 6th decimal place in one of the input datasets. The initial conditions were different in such a minute way, but the simulation algorithm yielded incredibly different results! That doesn't seem particularly relevant to whether (no pun) or not there's a symmetry of physical laws. We know there's a huge number of factors in the prediction of weather, so our errors are huge. But our attempts to observe what's going on at other scales and locations are relatively error-free. In one sentence: it's easy to believe in the symmetry of laws, and there's no reason yet to doubt their accuracy. | {
"source": [
"https://physics.stackexchange.com/questions/413886",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/199230/"
]
} |
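The Lorenz anecdote in the answer, sensitivity to the last decimal place, can be reproduced with a toy chaotic system instead of a weather model. A sketch using the logistic map (an assumed stand-in, not Lorenz's actual simulator):

```python
# Sketch: a toy stand-in for Lorenz's experiment (the logistic map, not an
# actual weather model). Two runs differ only in the 10th decimal place.
def iterate(x, n):
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)   # fully chaotic logistic map
    return x

a = iterate(0.2, 50)
b = iterate(0.2 + 1e-10, 50)
print(a, b, abs(a - b))   # typically order-one separation after ~50 steps
```

The map is fully deterministic, yet the tiny initial difference is amplified roughly exponentially, which is exactly why weather forecasts degrade while the underlying laws stay fixed.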
414,057 | Is it true that an electric current that flows through a conductor creates a magnetic field around the conductor? If yes, then why doesn't the magnetic sensor of my mobile device react in any way to changes when I bring the device near to the wire connected to 220 V electricity at home? | There are two wires, the second wire carries the equal (!) return current. The magnetic fields from the two wires cancel out, except at very short distance. For measuring the current from the field you must clamp only one of the wires. The field at a distance can be further reduced by twisting the wires ("twisted pair") or by adopting a coaxial structure ("shielding"). There you have one of the first principles of electromagnetic compatibility: the magnetic field is proportional to the area of the current loop. Two wires close together don't form much of a loop, and that is why UTP ethernet works so well. On the other hand, if you deliberately make a large loop, say a few windings around your living room, and you feed an audio current through it, then a hearing aid in the "telephone" mode will easily pick up the audio signal. Every auditorium or theater has this service, for the hard of hearing. | {
"source": [
"https://physics.stackexchange.com/questions/414057",
"https://physics.stackexchange.com",
"https://physics.stackexchange.com/users/187947/"
]
} |
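The cancellation described in the answer can be quantified with the infinite-wire formula $B = \mu_0 I / (2\pi d)$; a sketch for a point in the plane of the pair (mains-cord-like numbers assumed):

```python
import math

# Sketch: single-wire field B = mu0*I/(2*pi*d) vs the residual field of a
# pair carrying equal and opposite currents (assumed cord-like numbers).
mu0 = 4e-7 * math.pi               # T*m/A, vacuum permeability

def B_wire(I, d):
    return mu0 * I / (2 * math.pi * d)

I, s, d = 10.0, 0.003, 0.3         # 10 A, 3 mm conductor spacing, 30 cm away
single = B_wire(I, d)
pair = B_wire(I, d - s / 2) - B_wire(I, d + s / 2)   # the fields oppose
print(single, pair, pair / single)  # residual is ~1% of the single-wire field
```

For small spacing the residual scales as $s/d$, so it falls off as $1/d^2$ rather than $1/d$, which is why a phone's magnetometer sees essentially nothing near an intact two-conductor cord.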