source_id (int64): 1 to 4.64M
question (string): lengths 0 to 28.4k
response (string): lengths 0 to 28.8k
metadata (dict)
669,118
While reading my physics book, I came across a line that says: a raindrop falls with a constant velocity because the weight (which is the force of gravity acting on the body) of the drop is balanced by the sum of the buoyant force and the force due to friction (or viscosity) of air. Thus the net force on the drop is zero, so it falls down with a constant velocity. I was not satisfied by the explanation, so I searched the internet, which too had similar explanations: the falling drop increases speed until the resistance of the air equals the pull of gravity, at which point the drop begins to fall at a constant speed, its terminal velocity. My confusion regarding the matter is that if the net force acting on a body (here the raindrop) is zero, then it should remain suspended in air rather than falling towards the earth. So how come the raindrop keeps falling when the net force acting on it becomes zero? How do the air resistance and other forces stop the raindrop from acquiring accelerated downward motion?
Here is a slightly different way to think of this. If the net force is zero, the acceleration of the droplet is zero, even though its velocity is not zero. With the acceleration zero, the velocity remains constant as it falls.
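To see the numbers, here is a minimal numerical sketch of a drop with linear drag, $m\,dv/dt = mg - kv$; the mass and drag coefficient are made-up illustrative values, not real raindrop parameters:

```python
# Euler integration of a falling drop with linear drag: m dv/dt = m*g - k*v.
# The mass and drag coefficient below are illustrative, not measured values.
g = 9.81       # m/s^2
m = 1e-6       # kg, hypothetical drop mass
k = 2e-6       # kg/s, hypothetical drag coefficient
v_term = m * g / k        # velocity at which the net force vanishes

v, x, dt = 0.0, 0.0, 1e-3
for _ in range(20000):    # simulate 20 s
    a = g - (k / m) * v   # acceleration shrinks as v approaches v_term
    v += a * dt
    x += v * dt

# v has converged to v_term, yet x keeps growing: zero net force means
# zero acceleration, not zero velocity.
print(round(v, 3), round(v_term, 3))
```

The velocity settles at $v_{term} = mg/k$ while the position keeps growing: zero net force freezes the velocity, not the motion.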
{ "source": [ "https://physics.stackexchange.com/questions/669118", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
669,475
The question might have some misconceptions/sloppy intuition; sorry if that's the case (I'm not a physicist). I seem to have the intuition that, given a system of $N$ charged particles in 3D space colliding (under the effect of gravitational forces and electrostatic forces) elastically with each other, the evolution of this system is symmetric with respect to time reversal. In the sense that if I record a video of the evolution of this mechanical system and then play it backwards, the resulting video will look like something that can happen in our universe. If this intuition is correct, then it should be easy to prove mathematically from the uniqueness theorem of ordinary differential equations. I also seem to have the idea that statistical mechanics is nothing but the situation described above with $N$ being very large (particles in a gas are moving under the effect of gravitational and van der Waals forces and nothing else, no?). Thus, I would expect that the evolution of a thermodynamic system with respect to time should be symmetric with respect to time reversal. However, this seems to contradict the second law of thermodynamics. Where did I go wrong? After seeing some of the responses to my question I wish to add the following: I am NOT trying to refute the second law mathematically (lol :D). As you can see above, I don't provide any mathematical proofs. I specifically said "If my intuition is correct, then it should be easy to prove mathematically". That means I am skeptical about my own intuition because: 1) I don't back it up with a proof, 2) it is in contradiction with a well-established law such as the second law.
The arrow of time in thermodynamics is statistical. Suppose you have a deterministic system that maps from states that can have character $X$ or character $Y$, to other states that can have character $X$ or character $Y$. The system is such that, for a randomly selected state $X_n$ or $Y_n$, the probability that the system will map it uniquely and deterministically to a state with character $Y$ is $10^9$ times larger than the probability that the system will map it uniquely to a state with character $X$. Then, given any state $X_n$ or $Y_n$ and the number of times $N$ we have iterated the system, we can run time backward by reversing the iteration of the system and get the corresponding past state, because each state is mapped uniquely and deterministically. However, if we can only measure the character of the system, we might note that the system originated in a state with character $X$, and, after an unknown number of iterations of the system, it was in character $Y$. We would correctly note that states with character $X$ always evolve into states with character $Y$ if you wait a while. We could call this the "X-to-Y law" and express it mathematically. If we start with a certain number $x$ of states with character $X$ and a number $y$ of states with character $Y$, then after $N$ iterations, $x = 10^{-9N}x_0$ and $y = y_0 + x_0 - x$. However, there is no corresponding "Y-from-X law". If we don't know $N$ and $Y_n$ exactly, we can only speak statistically. And statistically, the chances are overwhelming that, given some state with character $Y$, the state at some previous iteration also had character $Y$. This means we can't reverse the direction of time in our mathematical expression of the "X-to-Y law". A more plain-language explanation: Suppose you have an oxygen tank and a nitrogen tank in a room, and their mass ratio is the same as that of air. The room pressure is assumed to always equalize with ambient pressure and temperature.
The 2nd law of thermodynamics says that, if you open the tanks and wait half an hour, the oxygen and nitrogen will all be out and the air will be exactly the same as it was before. The time-reversed 2nd law of thermodynamics says that, any time you're in a room with normal air in it, somebody must have opened an oxygen and nitrogen tank half an hour ago.
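The statistical character of the arrow can be seen in a toy simulation (an Ehrenfest-urn-style model; the particle count and step count are arbitrary): start all particles on one side of a box and, each step, move one randomly chosen particle to the other side.

```python
import random

random.seed(0)
N = 100               # particles, all starting on the left half of a box
left = N
history = []
for step in range(5000):
    # pick a side in proportion to its population and move one particle across
    if random.random() < left / N:
        left -= 1     # a left-side particle hopped right
    else:
        left += 1     # a right-side particle hopped left
    history.append(left)

# Every individual hop is reversible, yet the population relaxes to ~50/50
# and essentially never returns to the all-left state it started from.
print(history[0], history[-1])
```

Each hop is perfectly reversible, but the macroscopic "character" (mostly-left versus mixed) only ever evolves one way in practice, which is the point of the answer above.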
{ "source": [ "https://physics.stackexchange.com/questions/669475", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/21138/" ] }
669,485
If you put a non-polar gas like $\rm CO_2$ in a positive electric field, will it get polarized? What will be the force on the molecules as they are attracted to the positive field?
{ "source": [ "https://physics.stackexchange.com/questions/669485", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/315345/" ] }
669,502
This question involves two cases: electrons bound to a nucleus and free electrons. Bound electrons: Let's consider the hydrogen atom for simplicity. As far as I know, to be able to excite the electron, the energy of the photon should take discrete values corresponding to the differences between energy levels inside the hydrogen atom. By the way, in this link, the answer states that it is not the electron that absorbs the photon but the atom in general, which makes sense to me (please correct or clarify if wrong). The question is how long does the electron stay in that excited state, i.e. how quickly is the photon emitted back? Is it the same for all energy levels and all conditions, like particle density (when many atoms are together), temperature, presence of an electric field, nucleus structure (neutron count), etc.? Free electrons: Again, according to the same link, free electrons do not absorb photons, which means they only undergo Compton scattering. Is this correct? If not, how long does it take for the photon to be emitted back? Is the energy gain permanent?
{ "source": [ "https://physics.stackexchange.com/questions/669502", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/282521/" ] }
669,995
A while back, I was curious about how the optimal angle (i.e. max range) for a projectile (with no drag and so forth, so HS physics conditions) would change if it is thrown from a height $h$. So I started working on this, and about $1.5$ pages into my $4$ page solution [full of ugly trig and differentiation, I did not think to use Wolfram :( ...], I decided to assume $v$ (the velocity with which the projectile is thrown) to be equal to $\sqrt{g}$. I did this because it simplified the trig a bit and I thought that the optimal angle would be independent of $v$. Because it is well known that when $h=0$, the optimal angle is $\theta=45^\circ$ irrespective of $v$, I thought the same would hold for any arbitrary $h$. So after some more messy math I got that the optimal angle $\theta$ satisfies: $\sin^2\theta = \frac{1}{2(h+1)} \iff \cos2\theta = \frac{h}{h+1}$ Then I looked up the solution and found this Physics Stack Exchange answer, which gave the answer: $\cos2\theta=\frac{gh}{v^2+gh}$ So my answer is correct when $v=\sqrt g$, but I was surprised to see that the answer depends on launch velocity. Of course I looked at the math again (with my good friend Wolfram, this time) to see that the answer did in fact depend on $v$ also. But I still do not understand why this is the case. I mean, this is not the case when $h=0$. Also, it seems to me like increasing the velocity would just scale the answer by a positive factor, so I am not sure why the velocity is important here. So, tldr; I want to understand intuitively/qualitatively (I understand it mathematically) why the optimal angle for maximising range of a projectile is not independent of the launch velocity.
Let's take an extreme situation where $h$ is very high. According to both formulas, the best angle should be close to zero, especially if $v$ is low, so path (1) gives the best range. If the angle were independent of velocity, it would be zero degrees even if $v$ were increased. But if $v$ becomes very large, so that $h$ looks small, it's better for the angle to be like path (2). The projectile now spends nearly all its time above $h$. This is similar to a projectile launched from a lower height: an angle near 45 degrees would be best. So the best angle needs to depend on the velocity as well as the height of projection.
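This can be checked numerically. The sketch below brute-forces the best angle for example values of $g$, $v$, $h$ (chosen arbitrarily) and compares it with the closed form $\cos2\theta = \frac{gh}{v^2+gh}$ quoted in the question:

```python
import math

g, v, h = 9.81, 20.0, 10.0    # arbitrary example values

def launch_range(theta):
    # flight time from height h at launch angle theta, then horizontal distance
    vy, vx = v * math.sin(theta), v * math.cos(theta)
    t = (vy + math.sqrt(vy**2 + 2 * g * h)) / g
    return vx * t

# brute-force the best angle in 0.1-degree steps
best_deg = max(range(0, 900), key=lambda d: launch_range(math.radians(d / 10))) / 10
closed_form = math.degrees(0.5 * math.acos(g * h / (v**2 + g * h)))
print(best_deg, round(closed_form, 2))
```

With these numbers the optimum comes out a bit below 45 degrees, and changing $v$ shifts it, in agreement with the formula; at $h=0$ the closed form reduces to $\theta=45^\circ$ for any $v$.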
{ "source": [ "https://physics.stackexchange.com/questions/669995", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/287211/" ] }
670,135
Let's say there's a puddle of water on the ground. I use a magical device to give it enough thermal energy to vaporize into water vapor. The water vapor floats up into the sky. I then use the magical device to absorb the same amount of thermal energy I previously gave it. The water vapor then condenses to water and falls back on the ground and forms the same puddle of water on the ground. My device absorbed and gave the same amount of energy, therefore the net energy in the system should be the same. However, it seems like the system's energy should increase from the water vapor floating up into the sky and producing thermal energy from friction with the air molecules. It should also produce more energy from friction with the air when it falls as rain drops towards the Earth and also when it hits the ground and disperses more thermal energy from its kinetic energy. This breaks the law of conservation of energy, but I don't see what's wrong with my model. I thought about this when I read that the rain produces a lot of thermal energy from friction with the air.
The first device is not so magical - you can accomplish the same result with a fire under a pot. The second device, on the other hand, is indeed magical: you are converting heat into usable energy without any side effects. This is prohibited by the second law of thermodynamics. You are also violating the first law, though, and the thermodynamics police are coming for you ;) The problem lies in making an unwarranted assumption. The vapour rises in the air, and you say you want to consider effects such as "friction", or the exchange of heat between water and other air molecules. A proper description of this phenomenon will show that any energy gained by the air is lost by the water, so when you activate your second-law-violating device the vapour will yield less energy than what you put in initially - unless, of course, you are also able to retrieve the energy which was lost to the air.
{ "source": [ "https://physics.stackexchange.com/questions/670135", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/315770/" ] }
671,415
Since Quantum Field Theory can't handle gravity, and gravity is mathematically equivalent to acceleration (equivalence principle), does this mean Quantum Field theory can't handle accelerated frames of reference?
No, quantum field theory is perfectly capable of handling accelerated frames of reference: Quantum Field Theory is based on special relativity. Contrary to some somewhat widespread belief, special relativity is perfectly capable of handling accelerated frames. As long as the spacetime is flat, special relativity is perfectly fine. And the curvature of spacetime doesn't depend on the frame of reference (it is a tensor, a geometric object). This is why you will often hear that the twin paradox can be solved with special relativity alone. The twin on the rocket is in an accelerated frame. But as long as they don't pass near a black hole or something like that, spacetime is flat and special relativity applies. I understand your confusion: gravity is locally equivalent to acceleration. "Locally" is the important word. This means that in the presence of a gravitational field, for every point of spacetime, you can always choose a frame of reference where, in the neighborhood of that point, you are in free fall (i.e. spacetime looks flat, the frame of reference is inertial). But there is no frame of reference such that spacetime looks flat everywhere. The crucial difference between being in an accelerating frame in flat spacetime versus being in a gravitational field is that in the former case there is a global inertial reference frame, where spacetime looks flat everywhere, not just in the neighborhood of some point. So the difference between special and general relativity isn't that one treats acceleration and the other doesn't, but that one treats flat spacetime and the other treats curved spacetime. Quantum field theory is also capable of handling curved spacetime. Some things must be modified to make it work, but there aren't great issues as long as the spacetime is treated classically. A quantum field in a curved (static or time-dependent) classical spacetime works well. Problems arise when you try to quantize gravity, i.e. treat it as a quantum field: a quantum field theory of gravity. That's what we have not been able to do.
{ "source": [ "https://physics.stackexchange.com/questions/671415", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/310552/" ] }
671,423
We say electrons must jump from the valence band to the conduction band in order to be conductive. Why is that the case, and why can't electrons in the valence bands themselves be conductive? Is it because more energy levels are available unoccupied in the conduction band and electrons literally have a lot more room to roam around? If that is the case, then why isn't the valence band of silicon (the 3p band) conductive despite being just 33% occupied? What core concept of conductivity and/or band theory am I missing? Please elucidate. Also, can we explain the drift of electrons in a metal under an electric field in terms of transitions between various close energy levels within the same conduction band?
{ "source": [ "https://physics.stackexchange.com/questions/671423", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
671,462
I know this is probably not possible to use to communicate via EPR, so my question is: why? I create entangled electron pairs using pair production or some other method (each color pair is an entangled pair), and we send millions of those pairs toward two double slits separated far from each other. The person on the right side can decide to measure which slit the electron went through using detector D (which ruins the interference pattern), or they can decide not to measure, which retains the interference pattern. This would seemingly also ruin or maintain the interference pattern for the person on the left side. If the source of electrons is streaming continuously, the person on the right could send a message by using dot/dash for "interference on" or "interference off" during short 1-second intervals. Again, I presume this would not work, so why exactly? And please don't say "because you can't communicate faster than light." What specifically would go wrong in this setup that would make it not work as described? Thanks!
{ "source": [ "https://physics.stackexchange.com/questions/671462", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1040/" ] }
671,863
This may be a silly and simple question, but I've been wondering: the speed of light is constant, and when we're moving in the same direction as the light (both the emitter and the receiver moving along the direction of propagation) we would be making it take more time for the light to reach the other end. Conversely, when moving in the opposite direction we'd be shortening the time it takes. Why do we not see a non-uniform speed of light caused by solar/galactic movement? Or do we?
It is not a silly question at all, and it posed a real problem until Einstein explained the results of the Michelson–Morley experiment using special relativity in 1905. That experiment sought to measure exactly the difference in the speed of light that you would imagine results from the Earth's motion. They found that the speed remained invariant, and Einstein resolved the discrepancy by saying that the speed of light must be held constant for any observer. It's not the easiest thing to imagine, but it has yet to be disproven and it makes many useful predictions. What does happen as a result of relative movement is called a Doppler shift. As with classical sound waves, the observed frequency increases as you move towards a source and decreases as you move away. Instead of this shift corresponding to a higher/lower pitch, it results in bluer/redder light. You can then use this information to gauge the motion of celestial bodies.
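For a sense of scale, here is a sketch of the standard relativistic longitudinal Doppler factor, evaluated for something like the Earth's orbital speed (the 30 km/s figure is an illustrative round number): the speed of light stays fixed, and only the observed frequency shifts.

```python
import math

def doppler_factor(beta):
    # relativistic longitudinal Doppler: f_observed / f_source,
    # with beta = v/c > 0 for source and observer approaching
    return math.sqrt((1 + beta) / (1 - beta))

c = 299_792_458.0        # m/s, speed of light
beta = 30e3 / c          # ~Earth's orbital speed, illustrative
shift = doppler_factor(beta)
print(shift)             # just above 1: a ~0.01% blueshift
```

At such small $\beta$ the factor is essentially $1+\beta$, which is why classical and relativistic Doppler agree for everyday speeds.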
{ "source": [ "https://physics.stackexchange.com/questions/671863", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/232182/" ] }
672,520
In terms of order of magnitude, how does the energy consumption of a typical mammalian neuron (in the brain) compare with a state-of-the-art MOSFET?
Not surprisingly, it isn't so easy to get the power consumption of a cell. What is the power consumption of a cell? makes various estimates. One estimate for a human cell is $$P_{cell} = 3 \cdot 10^{-10}\ \rm W$$ When you read it, note that power is measured either in watts or in ATP/sec. ATP, or adenosine triphosphate, is the molecule that stores energy in cells; one unit is the amount of energy liberated by removing a phosphate group. As Martin Modrak pointed out, the brain has $2\%$ of the body's mass but uses $20\%$ of its energy, and the neurons use $80\%$ of that $20\%$. I will estimate that the brain is $25\%$ neurons. That means neurons use roughly $32$ times more energy than a typical human cell, or $$P_{brain \space cell} = 10^{-8}\ \rm W$$ More surprisingly, the power consumption of a MOSFET isn't as simple as you might expect either, and not all MOSFETs are created equal. Some are intended for high-voltage switching power supplies: Guide to MOSFET Power Dissipation Calculation in High-Power Supply gives an example power supply where the dissipation is $1.23\ \rm W$. But you are probably thinking of a transistor used in a computer. I found an unsupported rough estimate in If every transistor in a modern CPU was replaced with an old vacuum tube, how much power would that CPU take? that the power of a transistor is $$P_{transistor} \approx 10^{-7}\ \rm W$$ As Joao Mendez pointed out, power consumption is directly related to clock speed, because most of the power is used while switching between 1 and 0. This is the limiting factor on clock speed: too much power consumption raises the temperature of the chip too high, even with good cooling, and for mobile devices it drains the battery more quickly. Keep in mind that a brain and a computer achieve immense computing power in completely different ways. A typical computer might use $10^{10}$ MOSFETs in the CPU and GPU, and more than $10^{11}$ in a large bank of RAM (see Transistor count). A typical clock speed is above $10^9$ Hz.
It might run hundreds of threads "in parallel" using $\approx 10$ processors. On the other hand, a brain has about $10^{11}$ neurons (Are There Really as Many Neurons in the Human Brain as Stars in the Milky Way?). It also has about 3 times that many glial cells (Neuroglial Cells). It has what might loosely be called a "clock speed" of about $5-80$ Hz (What is the clock speed equivalent of the human brain?), and it is massively parallel. MP 2Ring, Joe, and Stephan Matthiesen point out that a neuron has many dendrites, is much more complex than a transistor, and is therefore a more powerful computing element. This is true, but a transistor is much faster and can do many operations in the time a neuron does one. I have no good way of defining computing power that would apply to both, and much less hope of comparing them. A brain and a computer can each do things the other can't touch. Anything simple, like comparing clock speeds and dendrite counts, is surely misleading.
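The neuron estimate above is just arithmetic on the quoted fractions; spelled out (every input is a rough figure quoted in the answer, not a measurement):

```python
# Reproducing the rough arithmetic above; every input is a quoted estimate,
# not a measurement.
P_cell = 3e-10                       # W, typical human cell
energy_over_mass = 0.20 / 0.02       # brain: 20% of body energy, 2% of mass
neuron_share = 0.80 / 0.25           # neurons: 80% of brain energy, ~25% of mass

factor = energy_over_mass * neuron_share   # ~32x a typical cell
P_neuron = P_cell * factor                 # ~1e-8 W, matching the answer
print(factor, P_neuron)
```

So on these estimates a brain neuron, at about $10^{-8}$ W, sits within an order of magnitude of the $10^{-7}$ W transistor figure, which is what makes the comparison interesting at all.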
{ "source": [ "https://physics.stackexchange.com/questions/672520", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/193421/" ] }
672,978
According to Bernoulli's principle, and for a given angle of attack, would a flat wing not lower the lift force of the airplane and increase its drag, therefore increasing its demand for thrust and fuel consumption? This is anti-economic and puts the airplane under unnecessary mechanical stress. I am not referring to the angle of attack adjusted by the pilot during flight, or to any angle of incidence preinstalled on the wings by the manufacturer to support lift, but solely to the shape of the wings. For example, aerobatic airplanes (i.e. stunt planes) and most modern fighter jets don't rely at all on wing shape for lift. Their wings are evenly shaped, flat on both the upper side and the underside (instead of being, as is usual in other airplanes, curved on top and flat underneath). Therefore their flat, evenly shaped wings do not support the lift of the plane in addition to the angle of attack and incidence angle. Why is this additional feature of an upward-curved wing shape not present in these kinds of airplanes? Is there any particular reason? What is the trade-off here? A curved wing creates lower resistance on the upper side and therefore faster air speed and lower air pressure there (Bernoulli's principle), and thus increases lift in addition to the main lift generated by the angle of attack and incidence angle. It would be logical to make use of this feature in fighter jets and aerobatic airplanes to generate larger lift with less thrust and less fuel consumption.
The "equal-time fallacy" is alive and kicking, as shown in your question and the other answers. The best explanation I've seen of how wings work and airplanes fly is here: https://www.av8n.com/how/ The "equal-time fallacy" says that when two molecules of air get separated at the leading edge of a wing, they must come together at the trailing edge, so in order to get faster flow over the top (for lift), there must be a longer curve over the top. WRONG. The molecules do NOT meet up; the upper one gets there first. The result is a vortex in which air is pushed DOWN, and that makes the lift. Wings lift because they push the air down, and they do that not because they are shaped a particular way but because they fly at an angle of attack (AOA). The shape is just an optimization for typical flight. Wings on aerobatic airplanes are symmetrical because they typically fly inverted just as well as upright. Is Bernoulli wrong? No, Bernoulli is absolutely true. What is wrong is the typical explanation, which is still being taught to kids. Again, read https://www.av8n.com/how/
{ "source": [ "https://physics.stackexchange.com/questions/672978", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/183646/" ] }
673,695
Why can a dipole not have two unequal charges separated by a distance? Is there any significance for the dipole being defined as electrically neutral?
The concept of a dipole moment , and other moments such as a monopole, quadrupole, etc, comes from the process of writing a field as a sum of components called multipoles . This is known as a multipole expansion of the field. The reason we do this is that it can make calculations quicker and easier because it allows us to approximate a complicated field by a simpler sum of multipoles. A single isolated point charge produces a field that is a pure monopole field, and two equal and opposite charges produce a field that is approximately a dipole field (it is exactly a dipole only in the limit of the distance between the charges becoming zero). So if you add a single charge to a pair of equal and opposite charges you get a total field that is a sum of monopole and dipole fields. And this is what happens in the example you give. Suppose we have two charges $+2Q$ and $-Q$ , then this is equivalent to a single charge $+Q$ and a pair of charges $+\tfrac32 Q$ and $-\tfrac32Q$ . The field from the charges would be the vector sum of a monopole field from the $+Q$ charge and a dipole field from the $\pm\tfrac32Q$ pair. So the reason a dipole cannot have two unequal charges is simply because such an arrangement would be a sum of a monopole and dipole, and not just a dipole.
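By linearity of the electrostatic potential, the decomposition above can be checked numerically. Here is a minimal sketch in Python (the 1D positions, unit constants, and variable names are my own simplifications, not from the answer):

```python
# Numeric check that the field of charges +2Q and -Q equals the superposition
# of a "monopole pair" (+Q/2, +Q/2) and a "dipole pair" (+3Q/2, -3Q/2)
# placed at the same two locations: a direct consequence of linearity.
k = 1.0   # Coulomb constant in arbitrary units
Q = 1.0

def potential(charges, x):
    """Electrostatic potential at position x from point charges [(q, pos), ...]."""
    return sum(k * q / abs(x - pos) for q, pos in charges)

a = 0.5                       # the charges sit at +a and -a on a line
original = [(2 * Q, a), (-Q, -a)]
monopole = [(Q / 2, a), (Q / 2, -a)]
dipole   = [(1.5 * Q, a), (-1.5 * Q, -a)]

x = 3.7                       # any field point not on a charge
v1 = potential(original, x)
v2 = potential(monopole, x) + potential(dipole, x)
print(abs(v1 - v2) < 1e-12)   # True: the decomposition is exact
```

The same check works for the full 3D field; only the distance function changes.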
{ "source": [ "https://physics.stackexchange.com/questions/673695", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/317547/" ] }
673,705
Recently, I have been studying quantum optics, dealing with the quantization of the EM field in a cavity. We know we can express/quantize the vector potential in terms of $\hat{a},\hat{a}^{\dagger}$ to get a quantized EM field in a cavity. $$ \vec{A}(\vec{r},t)=\sum_{n,\sigma}\sqrt{\frac{\hbar}{2\epsilon_0\omega_n V}}\vec{e}_{n,\sigma}\Big[\hat{a}_{n,\sigma}e^{i(\vec{k}_n\cdot\vec{r}-\omega_nt)}+\hat{a}_{n,\sigma}^{\dagger}e^{-i(\vec{k}_n\cdot\vec{r}-\omega_nt)}\Big] $$ The quantized Hamiltonian is: $$ \hat{H}=\sum_{k}\hbar\omega_k(\hat{n}_k+\frac{1}{2}) $$ The eigenstate of the quantized Hamiltonian is: $\left| n_1,n_2,n_3,... \right>=\left|n_1\right>\otimes\left|n_2\right>\otimes\left|n_3\right>...$ The state means there are $n_1$ photons in the first mode, $n_2$ photons in the second mode, and so on... So every mode has its own number of photons, and photons in different modes are not at the same frequencies. But why do we take photons as indistinguishable particles?
{ "source": [ "https://physics.stackexchange.com/questions/673705", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/315111/" ] }
673,836
Assume I live in a location where at any time of day and any time of year, I need to heat my house. Assume further that I have a room with no windows. In this case, does it make sense for me to buy efficient light bulbs, considering that any inefficiency in converting electricity to visible light simply leads to more heat being added to the room, which in turn, results in less heat being output by the heater to maintain constant room temperature. Although these are somewhat idealized conditions, I don't think they are too far off from being realistic. For example, say you live near the arctic circle, it might be smart not to have many windows due to heat loss, and it seems reasonable that in such a climate, heating will be required at all times of the day and year. Assuming I haven't missed something, it seems to me, somewhat unintuitively, that buying efficient light bulbs is not a logical thing to do. Is this the case?
Yes, it does make sense to buy efficient light bulbs in this case. Here is why: Modern state-of-the-art heating systems use heat pumps to do the heating work. Rather than just "burning" the electricity in resistive heating elements to "make" heat, a heat pump uses electricity to move heat from the outside of the house to the inside- and any waste heat generated in this process is exhausted to the inside of the house along with the heat drawn from outside the house. This process is inherently much more efficient than using $P=I^2R$ losses to furnish heat.
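As a toy illustration (the numbers and the accounting model here are entirely my own, not from the answer), one can compare the total electricity needed with an inefficient vs. an efficient bulb when a heat pump with COP > 1 supplies whatever heat the bulb's waste does not:

```python
# Toy comparison: heating a room partly with bulb waste heat, with a heat pump
# (coefficient of performance COP > 1) supplying the remaining heat demand.
def total_electricity(light_W, bulb_eff, heat_demand_W, cop):
    """Electrical power needed to get `light_W` of light plus
    `heat_demand_W` of heat into the room."""
    bulb_power = light_W / bulb_eff          # electricity drawn by the bulb
    waste_heat = bulb_power - light_W        # inefficiency ends up as room heat
    # The heat pump covers whatever heat the bulb's waste doesn't supply.
    pump_power = max(heat_demand_W - waste_heat, 0.0) / cop
    return bulb_power + pump_power

# 10 W of visible light, 1000 W of heat demand, heat-pump COP of 3:
inefficient = total_electricity(10, 0.05, 1000, 3)   # ~5% efficient incandescent
efficient = total_electricity(10, 0.40, 1000, 3)     # ~40% efficient bulb (optimistic)
print(inefficient, efficient)   # the efficient bulb needs less total electricity
```

Because every watt the heat pump moves costs only 1/COP watts of electricity, replacing resistive waste heat with heat-pump heat is a net win, which is the point of the answer.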
{ "source": [ "https://physics.stackexchange.com/questions/673836", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/122661/" ] }
673,851
Let's consider two overlapping particles of a mechanical wave. Since it's a mechanical wave, we can think of it as spring-like, so the energy of a particle is $ E= \frac{1}{2} k A^2 $ where A is the amplitude of the particle. Let $y_1$ and $y_2$ be the displacements of the two particles. Then the energy of the first particle is $E_1 = \frac{1}{2}ky_1^2$, the energy of the second particle is $E_2 = \frac{1}{2}ky_2^2$, and the total energy is $E = E_1 + E_2$. According to superposition, the total displacement is $y = y_1 + y_2$, so the total energy is $E = \frac{1}{2}ky^2$ $= \frac{1}{2}k(y_1 + y_2)^2 $ $= \frac{1}{2}ky_1^2 + \frac{1}{2}ky_2^2 + ky_1y_2 $ $E = E_1 + E_2 + ky_1y_2 $ So the value of the total energy comes out different; how is energy conserved?
{ "source": [ "https://physics.stackexchange.com/questions/673851", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/317626/" ] }
674,089
From what I've learned, fusion reactions are not currently economically viable because the energy required to start the reaction is more than the energy actually released. However, in stars there are immense pressures and temperatures which allow these reactions to take place. But if these reactions are effectively endothermic for us, how are they exothermic in stars? I.e. how are stars able to release energy? Moreover, why are such fusion reactions endothermic for us in the first place? Given that we are fusing elements lighter than iron, wouldn't the binding energy per nucleon of the products be higher, and hence shouldn't energy be released?
It is the fact that fusion reactions are very exothermic that makes them so hard to control. Coal releases its chemical energy so slowly that a coal fire does not need any confinement - it does not blow itself apart. Refined hydrocarbons, such as petrol, release energy more violently, but the walls of a metal cylinder are strong enough to contain the explosion (e.g. in an internal combustion engine) or at least direct the explosion products in a useful direction (jet engines, rocket engines). We have various ways of achieving nuclear fusion (see this Wikipedia article for an overview) but they all either blow themselves up (thermonuclear devices), require too much power to achieve confinement (magnetic and inertial confinement devices) or produce a low density of reactions (colliding beam devices).
{ "source": [ "https://physics.stackexchange.com/questions/674089", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/314843/" ] }
674,101
The work-energy principle states that the work done by the net force acting on a body equals the change in the kinetic energy of the body. We are talking about continuum mechanics. This principle is usually introduced in the mechanics of solid bodies. To describe the motion of a body, it is enough to know how the center of mass of the body moves in time and space. For example, we can conclude that a body accelerates if its center of mass has a different velocity at two points in time. I am not sure how to adequately apply this principle to fluids, for reasons I'll explain. Consider fluid flowing in a pipe, as in the diagram. When fluid starts entering a narrower section of the pipe, it accelerates. Newton's 2nd law states that in that case a resultant force must act on the fluid. We can see that this force originates from the difference in pressure of the surrounding fluid, i.e. the pressure gradient. If we take some volume of fluid between two cross sections of unequal area in the narrowing region of the pipe (a control volume), we can draw a free-body diagram to show all forces acting on that control volume. Doing so comes with a problem, because the fluid doesn't move like a solid body; it flows. The concept of drawing all forces acting on a control volume seems to make no sense for fluids, because the control volume doesn't move in space and time as a solid body does. Its center of mass doesn't move in space and time like in solid bodies; rather, the fluid has different velocities at different cross sections or points in the pipe. If this is true, how should we apply Newton's 2nd law or the work-energy principle to fluids? On what fluid element or control volume should we draw force diagrams and apply Newton's 2nd law?
I am thinking we should probably take some differential volume element, and if we want to know how much its velocity changes between two cross sections (for an inviscid fluid), we would need to calculate the line integral of the pressure gradient along the streamline, i.e. the fluid element's path. Usually the pressure gradient is constant, so the line integral is equal to the pressure gradient times the distance between the two cross sections. What is strange to me is that Bernoulli's equation/principle is commonly derived via the work-energy principle, where force diagrams are drawn for some fluid volume of finite size. This derivation seems wrong to me given what I said above, since force balance in fluids can only be done for differential volume elements on a particular streamline. Do you agree? What are your thoughts?
{ "source": [ "https://physics.stackexchange.com/questions/674101", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/294799/" ] }
674,415
On the basis of the observations, Rutherford drew the following conclusions regarding the structure of an atom: Most of the space in the atom is empty, as most of the alpha particles passed through the foil undeflected. A few positively charged alpha particles were deflected. The deflection must be due to an enormous repulsive force, showing that the positive charge of the atom is not spread throughout the atom as Thomson had presumed. The positive charge had to be concentrated in a very small volume that repelled and deflected the positively charged alpha particles. Calculations by Rutherford showed that the volume occupied by the nucleus was negligibly small compared to the total volume of the atom. The radius of the atom is about $\mathbf{10^{–10}\,\mathrm{m}}$, while that of the nucleus is $\mathbf{10^{–15}\,\mathrm{m}}$. I know this model is unsatisfactory, but how did Rutherford calculate the radius of the atom to be $10^{-10}\,\mathrm{m}$?
Rutherford probably estimated the size of gold atoms as already sketched by @AndrewSteane in his comment. The density of gold is $\rho=19.3\text{ g/cm}^3$ . The molar mass of gold was known from chemistry: $m_\text{mol}= 197 \text{ g/mol}$ . From this you get the molar volume $$V_\text{mol}=\frac{m_\text{mol}}{\rho}$$ Early estimations of Avogadro's constant (i.e. the number of atoms per mol) were already known from physical experiments before Rutherford's time. Later experiments refined this value: $$N_A=6.02\cdot 10^{23}\text{/mol}$$ Using this you get the volume per atom $$V_\text{atom}=\frac{V_\text{mol}}{N_A}$$ Let us assume the gold atoms form a cubic lattice (this is wrong, but good enough for an estimation). Then each atom occupies a cube of edge length $$d=\sqrt[3]{V_\text{atom}}$$ Doing the calculation we get $$\begin{align} d&=\sqrt[3]{V_\text{atom}} =\sqrt[3]{\frac{V_\text{mol}}{N_A}} =\sqrt[3]{{\frac{m_\text{mol}}{\rho\ N_A}}} \\ &=\sqrt[3]{{\frac{197 \text{ g/mol}}{19.3\text{ g/cm}^3 \cdot 6.02\cdot 10^{23}\text{/mol}}}} \\ &=\sqrt[3]{1.70\cdot 10^{-23}\text{cm}^3} =\sqrt[3]{1.70\cdot 10^{-29}\text{m}^3} =2.6\cdot 10^{-10}\text{ m} \end{align}$$ And the radius of an atom is half of this cube edge length $$r=\frac{d}{2}=1.3\cdot 10^{-10}\text{ m}$$
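The arithmetic above can be reproduced in a few lines of Python (a sketch; the variable names are mine):

```python
# Estimate the radius of a gold atom from bulk density and Avogadro's number,
# following the calculation in the answer above.
rho = 19.3          # density of gold, g/cm^3
m_mol = 197.0       # molar mass of gold, g/mol
N_A = 6.02e23       # Avogadro's constant, 1/mol

V_mol = m_mol / rho           # molar volume, cm^3/mol
V_atom = V_mol / N_A          # volume per atom, cm^3
d_cm = V_atom ** (1 / 3)      # edge of the cube each atom occupies, cm
d_m = d_cm / 100              # convert to metres
r = d_m / 2                   # atomic radius estimate

print(f"d = {d_m:.2e} m, r = {r:.2e} m")   # d ~ 2.6e-10 m, r ~ 1.3e-10 m
```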
{ "source": [ "https://physics.stackexchange.com/questions/674415", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/274481/" ] }
674,567
I had a problem in my book stating: A boy lifts a ball with mass $m$ with a force of constant magnitude $F$ to a height $h$. Calculate the force magnitude $F$ acting on the ball. I know this is a very silly question, but I am struggling to understand the answer in my book. Please help me! The answer given in my book is $mg$, but I think the force must be greater than that. To explain my answer, I have considered the following scenario: if the boy is holding the ball in his hand, then the force of gravity and the force from his hand balance each other, as a result of which the ball stays stationary. Now, if the boy has to move the ball to a greater height, he must apply an additional force on top of the force that keeps the ball stationary. Thus, my answer. So, can a force equal to the weight raise the ball up to a height $h$, or does the force need to be larger than this?
This is a horribly written question (the exercise, not your post). You are correct in your reasoning. If the ball starts at rest, the force $F$ needs to be larger than $mg$ in order for it to begin moving upwards. However, if the ball already starts with some initial upwards velocity, a force $F=mg$ would be enough to get the ball to keep moving upwards, and any force larger than $mg$ still gets the job done. With a sufficiently large initial velocity, the force $F$ could actually be smaller than $mg$ , since all we require is the ball move to a height $h$ rather than move to a height $h$ and then still keep moving upwards. So, to summarize, $F>mg$ always works, and $0<F\leq mg$ works for a sufficiently large initial upward velocity $^*$ . The issue with the problem is that it doesn't specify the initial conditions of the ball, and even if they had been specified there is not a unique answer; there is just a minimal force needed to raise the ball a height $h$ . So yeah, ignore the fact that you didn't get the answer and instead focus on the concepts that you have correct. $F=mg$ means no acceleration, and if the ball starts at rest this means, in this case, it could not reach height $h$ with only $F$ and the weight acting on it. $^*$ F could technically be negative with a sufficiently large initial upward velocity, but since we are told the ball is being lifted we will assume the force has to be upwards.
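The "minimal force" idea can be sketched via an energy balance (this formulation and its numbers are my own, not from the answer): the ball reaches height $h$ when $(F - mg)h + \tfrac12 m v_0^2 \geq 0$.

```python
# Minimum constant force F needed to raise a ball of mass m to height h,
# given an initial upward speed v0. From the energy balance
#   (F - m*g)*h + 0.5*m*v0**2 >= 0   =>   F >= m*g - m*v0**2 / (2*h).
# Note: when v0 = 0 the force must strictly exceed this bound, since
# F = m*g gives zero acceleration and the ball never starts moving.
g = 9.8  # m/s^2

def F_min(m, h, v0):
    return max(m * g - m * v0**2 / (2 * h), 0.0)

m, h = 1.0, 2.0
print(F_min(m, h, 0.0))    # ball starting at rest: F must (at least) equal m*g = 9.8 N
print(F_min(m, h, 3.0))    # with some initial speed, F can be less than m*g
```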
{ "source": [ "https://physics.stackexchange.com/questions/674567", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/312497/" ] }
674,908
Suppose, I'm on earth and my brother is moving away from earth at a constant speed, $v=0.8c$ . Now, if 5 seconds $(t_0)$ pass for me, the amount of time that will pass for my brother according to me will be $t$ : $$t=\frac{t_0}{\sqrt{1-\frac{v^2}{c^2}}}$$ $$t=\frac{5}{\sqrt{1-(0.8)^2}}$$ $$t=8.33s$$ So, if $5$ decades pass for me, $8.33$ decades will pass for my brother. He will experience rapid aging according to me. So, why is everyone saying time will go slower for him when the case is the exact opposite?
You have applied the equation incorrectly. This is because $t$ is the time you observe (the dilated time) on your brothers clock and $t_0$ is the proper time , or the time inside your brother’s frame of reference. That is, if $5$ seconds elapsed on your brother's clock as measured from your frame , then the elapsed time on his clock in his frame is $t_0$ where $$5=\frac{t_0}{\sqrt{1-\frac{v^2}{c^2}}} \\ \rightarrow t_0=5\sqrt{1-0.8^2}=3\ \ \text{seconds}$$ It's interesting to note that since your brother is also observing you to be moving away from him at $v=0.8c$ , if $5$ seconds passes for you inside your frame, then your brother will observe your clock to take $$t=\frac{5}{\sqrt{1-(0.8)^2}}=8.3\ \ \text{seconds}$$ This is probably how you meant to apply the equation. When you observe his clock you will see a dilated time and when he observes your clock, he to will also see a dilated time. So who is correct? The solution to this apparent contradiction "the twin paradox" is addressed here .
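Both directions of the calculation can be sketched in a few lines (a minimal illustration, not part of the original answer):

```python
import math

def gamma(beta):
    """Lorentz factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

beta = 0.8

# Case 1: 5 s elapse on the brother's clock as observed from Earth.
# The proper time in his frame is t0 = t / gamma.
t_observed = 5.0
t_proper = t_observed / gamma(beta)      # ~ 3.0 s

# Case 2: 5 s elapse on your own clock (your proper time).
# The brother observes your clock dilated: t = t0 * gamma.
t0 = 5.0
t_dilated = t0 * gamma(beta)             # ~ 8.33 s

print(t_proper, t_dilated)
```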
{ "source": [ "https://physics.stackexchange.com/questions/674908", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/167872/" ] }
675,075
I'm having a little trouble trying to put to words my problem and I apologize in advance for any causation of trouble in trying to interpret it. We define periodic events as those events that occur over equal intervals of time. But, don't we use periodic events themselves to measure time (like a pendulum or the SI unit definition of transition frequency of Cesium)? Then how is it we know we have equal intervals of time? Another way to put my problem would be: We metaphorically describe time in terms of the physical idea of motion, i.e., 'time moves from a to b', but how do we deal with how fast it moves because to know how fast it moves, we must know its rate and to know its rate is like taking the ratio of time with time? This is all very confusing. I apologize again for any problem in trying to understand.
Yes, we measure time as multiples of periodic events, such as the ticking of a clock, the rotation of the Earth, or the period of radiation from caesium, and we assume that each of those events has an unvarying duration. In a thought experiment, you might imagine whether it was possible for time to expand or contract, so that, for example, a second today might actually be a different duration compared with a second last week. The key question then might be, how would you know? If the duration of everything changed by the same amount- the ticking of all clocks, all natural processes- you would have no way to tell that time had expanded or contracted. Or, to put it another way, it would make strictly no difference to anything, so the question of whether it occurred would be utterly irrelevant.
{ "source": [ "https://physics.stackexchange.com/questions/675075", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/285255/" ] }
675,373
I was in a question in Electronics SE, and a lot of people said something that I'm sure is wrong. They said, "The voltage between two points that are not part of the same circuit is undefined ". Or in other words, "if I have a circuit that is fully isolated ("floating"), then the voltage between any point in it and the ground, is undefined ". Now, I'm 100% sure that the voltage between any two points in the universe is well defined. It doesn't matter if they are in different circuits, same circuits, or even the same galaxies. It may be hard or impossible to calculate, but it is not undefined ! This was driving me crazy so I ask, am I wrong? Are they wrong? A follow up question. If two pieces of the same metal, are under the same external electric fields (which may be zero), and carry the same excess charge (which may be zero), then the voltage between them must be zero , even if one piece is in Jupiter and the other one in an electric circuit in my back yard. Is this wrong?
The voltage between two points in the universe is always defined. But the voltage between two circuits is undefined because (at least from an electronics point of view) circuits are abstract models of closed systems. Suppose we have two devices that each consist of a battery connected to a resistor: one resting on the ground, the other suspended from a balloon. It would be possible to measure a real potential difference between any two points in those devices. And yet if we model these devices as different circuits, any voltage between them cannot be calculated and is undefined, because we haven't included any information about how those circuits are related. What we can do is refine these circuits by including parameters such as the resistance, capacitance and inductance of the air between the devices. But when we include such information they cease to be two distinct circuits and become a single more complex circuit, where the voltage between any two points can be calculated. Note: Here I used the simple lumped-circuit approximation to illustrate the point, but the principle remains the same even for more complex models.
{ "source": [ "https://physics.stackexchange.com/questions/675373", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/123183/" ] }
675,632
I was introduced to the term half-life as the time it takes for the number of radioactive nuclei in a sample to become half of its initial value. But there is a question in "Concepts of Physics" by H.C. Verma which says that a free neutron decays with a "half life" of 14 minutes. Now this is really confusing. Here it is: What does the term half-life even mean for a single radioactive nucleus or for a free neutron? Does it mean that the neutron is only "half transformed" (into a proton and the beta particle) by the given time?
For a single free neutron that exists a half-life of 14 minutes would mean that, over a timespan of 14 minutes, measured in the neutron's rest frame, there is a 50% chance that it will decay into a proton, an electron (beta particle), and an electron antineutrino. (As @PM 2Ring notes in their comment on the original question , the half-life of a free neutron in reality is about 10 minutes, and the book's question mistakenly substituted in the value of the free neutron mean lifetime .)
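A short numeric sketch (the values and variable names are my own) makes the statistical statement concrete, and also shows how the book's "14 minutes" relates to the roughly 10-minute half-life mentioned above:

```python
import math

t_half = 10.2 * 60          # free-neutron half-life in seconds (~10.2 min)
lam = math.log(2) / t_half  # decay constant
tau = 1 / lam               # mean lifetime = t_half / ln 2

def survival(t):
    """Probability that a single neutron has NOT decayed after time t."""
    return math.exp(-lam * t)

print(tau / 60)             # ~14.7 min: plausibly the book's "14 minutes"
print(survival(t_half))     # 0.5 by construction of the half-life
```

The mean lifetime exceeds the half-life by a factor of $1/\ln 2 \approx 1.44$, which is exactly the confusion the book's value invites.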
{ "source": [ "https://physics.stackexchange.com/questions/675632", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/271783/" ] }
675,754
As per my understanding, temperature is the movement of particles in an environment. A highly energetic environment, where particles possess high energy, has a high temperature, and low energy means low temperature. Wind in general means speed and energy, in my opinion, so what does cold wind mean? How can it exist? Is it a bunch of super-stable particles moving together? Does it move slowly? Air conditioning transfers heat, okay, but what happens at the particle level? Do particles suddenly stop moving (or move less) when they hit the heat-transferring element? How?
Technically, it is all the same motion. The difference is magnitude and direction, and how you separate out the superposition of them. Temperature is a result of the components of motion (vectors) of each individual air molecule, with a net translational movement of zero over time. This motion is random and all over the place in many directions as the molecules move around, collide and bounce off each other and objects. This speed is apparently around 500 m/s, just to give you an idea: https://pages.mtu.edu/~suits/SpeedofSound.html Wind is the collective motion of a mass of randomly moving air molecules: the average velocity of the actual motion of the individual air molecules. Averaging their velocities cancels out the vectors that point in "random" directions, which lead to net zero motion, leaving only the collective translational motion remaining. Obviously, wind is usually not 500 m/s. So, typically, the random net-zero impact vectors between air molecules and objects are much higher speed than those due to the collective/average motion of the air molecules (wind). This means the net-zero impact vectors dominate over the translational impact vectors in kinetic energy transfer (heat) in either direction into or out of your body. But if you're an SR-71 flying through the air at Mach 3, the relative wind speed is so high that the vectors of the translational motion of the air molecules are much larger than those of the net-zero, random motion. The result is a net heating of the aircraft skin due to impacts from the wind, even if the random velocities of the individual air molecules make for very cold air that would otherwise cool down the airplane's skin.
So a cold wind would be one where the kinetic vibratory motion (thermal motion) of the molecules in your body is higher than the random motion of the individual air molecules, such that energy is transferred out of your body into the air by the collisions rather than into it, while the impact vectors from the average motion of the air (wind) are too slow to transfer energy from the air into your body, and only replace the air molecules heated up by your body with fresh, cold air molecules.
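The ~500 m/s figure quoted above can be checked against kinetic theory with a quick sketch (my own numbers, for N2 at room temperature):

```python
import math

# rms speed of a nitrogen molecule at room temperature: v_rms = sqrt(3*k_B*T/m)
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 293.0                 # room temperature, K
m_N2 = 28 * 1.6605e-27    # mass of an N2 molecule, kg

v_rms = math.sqrt(3 * k_B * T / m_N2)
print(f"{v_rms:.0f} m/s")   # roughly 500 m/s, the same ballpark as the answer's figure
```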
{ "source": [ "https://physics.stackexchange.com/questions/675754", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/169703/" ] }
675,824
A question that popped into my head: if I see a picture of the sun close to the horizon, in an unknown place, can I know if it was taken at sunset or sunrise? Do sunrises and sunsets look the same in a still image? Can one tell them apart?
If you have a sufficiently advanced camera, then you can distinguish a sunrise from a sunset from a still frame. I will assume that the Sun and the horizon are visible. The Sun is rotating at roughly 2 km/s at the equator. This rotation imparts Doppler shifts in the light from the Sun, even in a still frame. So with a sufficiently advanced camera that can detect those Doppler shifts (for example, an IFU ), you can measure the rotation axis of the Sun. Now, depending on where on Earth you are (your latitude, and whether this is sunrise or sunset), the rotation axis of the Sun will appear at a different angle relative to the horizon. So, not only could you theoretically figure out whether it is sunrise or sunset, but you could also measure the rotation speed of the Sun and your present latitude. The Sun and the Earth rotate in the same direction (counterclockwise when viewed from above the North Pole). Thus, the top of the Sun relative to the horizon will rotate in the same direction as the observer. I will use this fact to describe what you would see due to the Sun's rotation: During sunset, you must be on the side of the Earth moving away from the Sun. From this angle, the top of the Sun (relative to the horizon, not necessarily North!) will appear blueshifted relative to the bottom of the Sun, since the apparent top of the Sun has a velocity towards Earth (the same direction as your side of the Earth). During sunrise, you must be on the side of the Earth moving towards the Sun. From this angle, the top of the Sun will appear redshifted instead. Side note: I think in a statistical sense you could distinguish between sunrises and sunsets based on effects mentioned in other answers/comments: temperature of the air, stillness of the air, presence of particulates. "Statistical" meaning you could theoretically do better than a 50/50 guess with a single image, and better still if you allow multiple images taken at the same place or with other variables controlled for.
So although those answers/comments do not provide a sure way to distinguish sunrise from sunset, I think they suffice to show that the two phenomena are different on some level.
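For scale, here is a quick sketch (my own numbers) of the size of the Doppler shift such a camera would have to resolve:

```python
# Order-of-magnitude size of the Doppler shift across the solar disk from the
# Sun's ~2 km/s equatorial rotation, as used in the answer above.
c = 3.0e5        # speed of light, km/s
v_rot = 2.0      # solar equatorial rotation speed, km/s

# Fractional wavelength shift between the approaching and receding limbs:
shift = 2 * v_rot / c
line = 656.28                 # H-alpha rest wavelength, nm (chosen for scale)
print(shift, shift * line)    # ~1.3e-5 fractional, ~0.009 nm at H-alpha
```

A shift of about 0.01 nm is well within reach of a high-resolution spectrograph, though far beyond an ordinary camera.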
{ "source": [ "https://physics.stackexchange.com/questions/675824", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/298938/" ] }
676,131
According to what I have read, stars are formed due to the accumulation of gas and dust, which collapses due to gravity and starts to form stars. But then, if space is a vacuum, what is that gas that gets accumulated?
Space is not a full vacuum. It's mostly a vacuum, and it's a better vacuum than the best vacuums that can be achieved in a laboratory, but there's still matter in it. See interstellar medium . In all phases, the interstellar medium is extremely tenuous by terrestrial standards. In cool, dense regions of the ISM, matter is primarily in molecular form, and reaches number densities of $10^6$ molecules per $\mathrm{cm}^3$ (1 million molecules per $\mathrm{cm}^3$ ). In hot, diffuse regions of the ISM, matter is primarily ionized, and the density may be as low as $10^{−4}$ ions per $\mathrm{cm}^3$ . Compare this with a number density of roughly $10^{19}$ molecules per $\mathrm{cm}^3$ for air at sea level, and $10^{10}$ molecules per $\mathrm{cm}^3$ (10 billion molecules per $\mathrm{cm}^3$ ) for a laboratory high-vacuum chamber.
{ "source": [ "https://physics.stackexchange.com/questions/676131", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/318676/" ] }
676,168
What part of the photons emitted from a star comes from blackbody radiation, and what part originates from fusion reactions? To my understanding these are the two sources of luminosity for a star, so I'm just wondering which phenomenon accounts for the majority of photons that come from a star.
Fusion reactions produce high energy gamma radiation. None of those photons reach the surface of the star directly. Over timescales of $10^4-10^5$ years, they scatter around as their energy propagates towards the surface. Thus, all photons from a star are from blackbody radiation. Some of these blackbody photons get absorbed by atoms or ions in the atmospheres of the star, and get re-emitted. This re-emission shows up as absorption lines in the spectrum. However, since re-emission goes in all directions, some of the photons we see are from such atomic recombination processes. You can get a qualitative understanding of the numbers when you look at any absorption spectrum from a star. The overall smooth distribution is the blackbody radiation. Absorption lines are subdominant to that, and only a tiny fraction of the difference between an absorption line and the thermal spectrum at that wavelength is going into your direction from re-emission. Hence, overall, that becomes pretty negligible and one can say that essentially all photons are blackbody radiation.
{ "source": [ "https://physics.stackexchange.com/questions/676168", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/314849/" ] }
676,994
I was reading about the gaseous state when this question struck my mind: What made us assume that, at every point inside the container, a gas exerts equal pressure? When one brings a barometer, is it true it measures the same pressure at every point inside? Is this applicable to both ideal and real gases?
An imbalance of pressure would itself cause an internal flow in the gas. So if the gas has reached equilibrium the pressure must be the same everywhere. The above is for a gas in ordinary circumstances, without any applied field such as a gravitational field. If there is such a field then the gas flows until the pressure gradient provides a force which just balances the effects of the field. To calculate these effects more fully one can use the concept of chemical potential and the second law of thermodynamics. There remains the fact that thermodynamic quantities such as pressure also undergo fluctuations. The above comments about uniformity apply to the time-averaged pressure at any point. Generalization to fluids The arguments above apply to fluids more generally, not just to gases (and therefore it is not restricted to ideal gas). As long as the fluid can flow then any pressure gradient will cause a flow so when a fluid reaches equilibrium in a closed container the pressure must be uniform.
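The field case can be made quantitative. For an isothermal ideal gas in uniform gravity, balancing the pressure gradient against the weight of each layer gives $dp/dh = -(mg/kT)\,p$, hence $p(h) = p(0)\,e^{-mgh/kT}$. A quick sketch (the dry-air numbers are assumed round values):

```python
import math

# Isothermal barometric formula: p(h) = p(0) * exp(-m*g*h / (k*T)).
# The dry-air values below are assumed round numbers.
k, g, T = 1.381e-23, 9.81, 288.0   # Boltzmann constant, gravity, temperature
m = 28.97 * 1.6605e-27             # mean mass of an air molecule, kg

H = k * T / (m * g)                # scale height, m
ratio = math.exp(-5000.0 / H)      # pressure at 5 km relative to sea level
print(f"scale height ~ {H/1000:.1f} km")   # ~8.4 km
print(f"p(5 km)/p(0) ~ {ratio:.2f}")       # ~0.55
```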
{ "source": [ "https://physics.stackexchange.com/questions/676994", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/297949/" ] }
677,041
I am a complete novice in physics beyond the high school level, so please excuse anything wrong in my question. So I have recently read that according to General Relativity, the presence of mass in spacetime causes spacetime to become curved and that is how gravity arises. Does vacuum spacetime have an inherent curvature? What I mean to say is that if we remove all kinds of matter and energy from the universe (somehow), and then try to determine the curvature of spacetime, will it be flat or will it be curved? And if vacuum spacetime does have an inherent curvature, why or how does that curvature arise, given that nothing possessing energy or mass is present in the universe I have described above.
The Einstein equation allows spacetime to have an inherent curvature, but this is an adjustable parameter. That is, general relativity does not predict what the inherent curvature would be, only that it could exist and could take any value. The only way we can tell whether spacetime has an inherent curvature or not is by observation. One of the terms allowed in the Einstein equation is a cosmological constant , normally written as $\Lambda$ . If $\Lambda$ is non-zero then spacetime has a scalar curvature (the Ricci scalar) given by: $$ R = 4\Lambda $$ And this is exactly the sort of curvature that you are asking about because it is "built in" to spacetime and exists even in a universe completely empty of matter or energy. In fact, it does appear that the universe could have a cosmological constant. When we observe the motion of supernovae in the universe it looks as if the universe is expanding faster than it should, and this could be due to a cosmological constant. The trouble is that it could also be due to a form of energy called dark energy , and at the moment we cannot tell which (if either) of these is the case. One way to tell would be to see if the effect changes with the age of the universe. A cosmological constant would be ... well ... constant , while dark energy could change with time. However at the moment our measurements are not precise enough to tell if the effect has changed over the age of the universe.
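For a sense of scale, taking the commonly quoted present-day value $\Lambda \approx 1.1\times10^{-52}\ \mathrm{m^{-2}}$ (an observed figure, and an assumption here, not something the Einstein equation predicts) gives a "built-in" curvature radius comparable to the size of the observable universe:

```python
import math

# Rough scale of the inherent curvature if the observed cosmological
# constant is taken at face value. Lambda ~ 1.1e-52 m^-2 is the commonly
# quoted present-day value, assumed here for illustration.
Lam = 1.1e-52                   # m^-2
R   = 4.0 * Lam                 # Ricci scalar of empty spacetime with this Lambda
r_curv = 1.0 / math.sqrt(Lam)   # characteristic curvature radius, m

ly = 9.46e15                    # metres per light-year
print(R, r_curv / ly)           # curvature radius ~1e10 light-years
```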
{ "source": [ "https://physics.stackexchange.com/questions/677041", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/318567/" ] }
677,906
Long ago, my high school teacher wrote the popular question on board, "Why doesn't a bird sitting on a live wire get electrocuted?" He gave us four options (I don't remember all of them) among which was the obvious "since the bird's feet don't touch the ground" and naturally we chose this one. He told us that this answer was actually not satisfactory (or rather incomplete) since the current in the wire is not a direct current but an alternating current. The bird's body should be treated as a capacitor (since the resistance of the bird owing to its small longitudinal extent can be ignored as both feet are at almost the same voltage) which for small frequencies offers large impedance . Because of this the current through the bird's body is negligible and it doesn't get shocked. Now, the following answers and links therein: Why do birds sitting on electric wires not get shocked? Birds on a wire (again) - how is it that birds feel no current? They are just making a parallel circuit, no? suggest that it's actually the lack of grounding that prevents the bird from getting fried. (Along with having both feet in effectively the same place) Most people in the previous answers seem not to have mentioned anything about the alternating nature of the current and impedances etc generated in the bird's body due to that. Which explanation is more correct? P.S: The explanation which had been given by my teacher seems more plausible to me.
Which explanation is more correct? The answer to the second question you cite is the best one. In order to be "electrocuted" a non-trivial amount of current must flow through the body. The amount of current that flows is a function of the impedance of the bird and the voltage difference between the two contact points. The second point is crucial here. The voltage difference between the two contact points is essentially zero. A bird's feet are maybe a few centimeters apart and they touch THE SAME wire. The only voltage difference between the two feet comes from losses in the wire itself and these are minimal over such a short distance. Power line wires are specifically designed to have as little loss as is practical! Large birds do indeed get electrocuted occasionally. That's simply because some part of their body touches (or gets too close to) something that's NOT the same wire their feet are on and that is at a different potential. It doesn't need to be ground, any other phase wire will do the trick just as well (if not better). The AC argument doesn't hold much water. The bird does indeed have a capacitance but it's small and the AC line frequency is very low. If we assume a capacitance of maybe 50 pF for the bird (a human has about 100 pF) and 50 Hz, that comes out to a reactance $1/(2\pi f C)$ of roughly 64 $M\Omega$ as compared to a few $k\Omega$ for the resistive impedance of the bird. So the capacitive path carries only on the order of 1/10,000th of the total current.
{ "source": [ "https://physics.stackexchange.com/questions/677906", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/266050/" ] }
678,462
Consider a slab made of two walls separated by air. Why do we need insulation material between the two walls. Air thermal conductivity is lower than most thermal conductivities of insulating material and convection cannot be an issue in the enclosed volume: hot air rises, so what? it won't go any further than the top of the cavity.
You can think of thermal conductivity as a measure of how readily heat will flow through the material while it is stationary . The low thermal conductivity of air means that it takes a long time for heat to diffuse through an air pocket. If the air is permitted to move, however, this intuition goes out the window. The air in contact with one wall gets warm and rises, and the resulting circulation causes it to be brought into contact with the other wall. In this way, the heat doesn't need to diffuse through the air, as it's being transported by bulk air flow. Insulating materials such as blown fiberglass (or a wool sweater) are good insulators precisely because they trap many small pockets of air, which shuts down convection and forces the heat to flow diffusively. Once there's no convection, the low thermal conductivity of the air pockets makes the material a good insulator. You're right that the thermal conductivity of the trapping material is usually higher than the thermal conductivity of the air itself, but that's the (fairly modest) price we have to pay for killing the convection.
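To see why still air is such a good insulator on paper, compare steady conductive fluxes $q = k\,\Delta T / d$ through a plane layer. The numbers below are illustrative assumptions, not taken from the question:

```python
# Steady-state conductive flux through a plane layer: q = k * dT / d.
# Illustrative numbers only: a 10 cm gap with a 20 K temperature difference.
k_air   = 0.026   # W/(m K), still air
k_glass = 0.040   # W/(m K), typical fiberglass batt
dT, d   = 20.0, 0.10

q_air   = k_air * dT / d     # ~5.2 W/m^2, if the air truly stayed still
q_glass = k_glass * dT / d   # ~8.0 W/m^2
print(q_air, q_glass)
```

The point of the comparison: perfectly still air would conduct less than the fiberglass itself. In a real open cavity, convection multiplies the effective air-gap transfer many times over, which is why the trapping material wins in practice.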
{ "source": [ "https://physics.stackexchange.com/questions/678462", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/153420/" ] }
678,755
For a spacecraft in orbit with radius $r$ with speed $v$ around a planet, centripetal force $F_C$ is provided by gravity: $$\frac{GmM}{r^2}=\frac{mv^2}{r},$$ which simplifies to $$\frac{GM}{r}=v^2.$$ This means that orbits closer to the planet are required to have greater speeds. However, if we want to move a spacecraft to a higher orbit, we have to increase the semimajor axis (adding energy to the orbit) by increasing velocity (source: FAA). How is this reconciled with the above equation?
The equation you have written there applies only to a circular orbit, but the orbit is not circular while the spacecraft is climbing to the higher one. As the spacecraft climbs towards the higher orbit, its initially increased velocity decreases as kinetic energy is converted to potential energy.
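The reconciliation can be made concrete with the vis-viva equation, $v^2 = GM(2/r - 1/a)$. In a Hohmann-style transfer, the burn at the low orbit raises the speed above circular speed there, yet the craft arrives at the high orbit moving slower than circular speed. A sketch with illustrative Earth-orbit numbers (roughly a 400 km orbit up to geostationary radius; these values are my assumptions):

```python
import math

# Vis-viva check of a Hohmann transfer: v^2 = mu * (2/r - 1/a).
mu = 3.986e14                 # Earth's GM, m^3/s^2
r1, r2 = 6.778e6, 4.2164e7    # low and high orbit radii, m
a = 0.5 * (r1 + r2)           # semimajor axis of the transfer ellipse

v_circ1 = math.sqrt(mu / r1)              # circular speed at r1, ~7.7 km/s
v_circ2 = math.sqrt(mu / r2)              # circular speed at r2, ~3.1 km/s
v_peri  = math.sqrt(mu * (2/r1 - 1/a))    # just after the burn, ~10.1 km/s
v_apo   = math.sqrt(mu * (2/r2 - 1/a))    # on arrival at r2, ~1.6 km/s

print(v_circ1, v_peri)   # faster than circular at r1 right after the burn
print(v_circ2, v_apo)    # but slower than circular on arrival at r2
```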
{ "source": [ "https://physics.stackexchange.com/questions/678755", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/124750/" ] }
679,165
The average person consumes 2000 kcal a day, which is equal to ~100 W. Furthermore, if one uses the Stefan–Boltzmann law to calculate how much someone loses heat due to radiation, it can be seen that it equals $$Q=\sigma T^4 \varepsilon A$$ $$Q\approx1000\ W$$ Considering a surface area of ~ 2 m² , an emissivity of 0.98 and a temperature of 36.5 °C. However, this is clearly much greater than the maximum possible heat output of a human body, and that doesn't even consider convection and conduction, which would make heat loss even greater. So what is wrong with this analysis?
Your calculation of the radiation power emitted by the human body is correct. But you forgot, that the human also absorbs radiation from the environment. The walls and all the things in your room probably have a temperature around 20 °C, and therefore emit radiation. The radiation power absorbed by the human body is roughly $$Q_\text{absorbed}=\sigma T_\text{environment}^4 \varepsilon A \approx 840 \text{ W}$$ This absorbed power partially compensates for the emitted power. The net radiation power is $$Q_\text{net} = Q_\text{emitted} - Q_\text{absorbed} \approx 1000 \text{ W} - 840 \text{ W} = 160 \text{ W}$$
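A quick numeric check of the argument, with the same Stefan-Boltzmann inputs as above; the environment temperature is taken as 295 K (an assumption) so the absorbed power matches the quoted ~840 W:

```python
# Net radiative exchange between skin and room surroundings.
sigma, eps, A = 5.67e-8, 0.98, 2.0
T_body, T_env = 309.65, 295.0   # 36.5 C skin, ~22 C surroundings (assumed)

Q_emitted  = sigma * T_body**4 * eps * A
Q_absorbed = sigma * T_env**4  * eps * A
print(Q_emitted, Q_absorbed, Q_emitted - Q_absorbed)   # ~1020, ~840, ~180 W
```

Note the quoted net of 160 W comes from rounding the emitted power down to 1000 W; before rounding, the net is closer to 180 W, still comfortably in the range of human metabolic output.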
{ "source": [ "https://physics.stackexchange.com/questions/679165", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/275186/" ] }
680,101
I was reading my old physics textbook (from middle school), and it mentioned something about the idea of having non-existing attractive forces between particles like air. "We would live in a very dull world." This made me wonder, what would've happened if there are no bonds between air particles, or what if air particles stop moving entirely one day? Will all the air particles just sink to the ground? (pulled by gravity) Hence, the question: how do air particles "stay afloat" in the first place?
I will list your questions and answer them one by one. what if air particles stop moving entirely one day? This scenario is what happens when the temperature is very low. For really no motion at all you would need absolute zero temperature. But well before you get to absolute zero you get to another case: the gas turns to liquid, and then, when colder still to solid (except for special cases such as helium). Forming a liquid usually involves the attractive forces between molecules, but even if there were no attractive forces, the gas would eventually form a type of liquid. It would then lie in a big pool on the ground (while we all die for lack of oxygen). Will all the air particles just sink to the ground? (pulled by gravity) yes, see previous ans. Hence, the question: how do air particles "stay afloat" in the first place? They stay afloat through collisions. All the particles are indeed falling down owing to gravity, but they also bump into one another. You might guess that after a while they would on average sink lower and lower, but what happens instead is that there are more particles, that is, a higher density, at the bottom than at the top. And the ones at the very bottom do not sink any lower because they bounce off the ground. If they stuck to the ground then the whole atmosphere would itself fall and fall until it was all stuck to the ground. But they bounce off, and thus they provide a layer of gas near the ground. This layer then supports the one above it, because of collisions: the particles arriving from above get bounced back up again. And that layer in turn supports the one above it. And so on. So the whole atmosphere is dynamic: between collisions every particle has a downward acceleration. During collisions the two particles bounce off one another. There is a higher density lower down, which results in more upward-directed collisions for a downward-moving particle than an upward-moving one. 
All this can be captured precisely in equations, but I guessed you preferred the picture in words. 3B. But what if the molecules in the air did not collide with one another, only with the ground. Would the atmosphere fall down then? This is an added paragraph suggested to me by some helpful comments by nanoman. He points out that in the scenario where the molecules do not collide with one another, they would still fly up high into the atmosphere after bouncing off the ground, following huge parabolas around 10 kilometres high, and overall the density distribution would still be the same! In this case the atmosphere thins as you go up because there are fewer molecules with enough energy to get that high. The above discussion in terms of layers is appropriate for the actual atmosphere because on average the molecules only travel tiny distances (less than a micron) before colliding. P.S. I would like to add that the word 'bounce' is not quite right for what happens when air molecules hit the ground. In fact they mostly arrive and stick for a very short time called the 'dwell time', and then they get kicked or shaken off and zoom off in a random direction. The energy of the molecules coming away from this process is on average equal to the thermal equilibrium energy with which they arrived. So after averaging over time the net effect is like bouncing.
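The collisionless scenario in 3B can even be checked numerically. Below is a minimal Monte Carlo sketch (my own construction, not from the original answer): launch molecules off the ground with thermal speeds, look at each parabolic flight at a random instant, and the time-averaged height distribution comes out exponential with scale height $kT/mg$, the same profile as the colliding atmosphere. For air at everyday temperatures, $kT/mg$ is about 8-9 km, matching the "around 10 kilometres high" parabolas mentioned above.

```python
import numpy as np

# Monte Carlo check of section 3B: molecules that bounce only off the
# ground (no mutual collisions) still give an exponential density profile
# with scale height H = kT/(m g). Dimensionless units: thermal speed
# sigma = 1 and g = 1, so H = 1.
rng = np.random.default_rng(1)
N = 200_000

# Vertical launch speed sampler. The flux off a thermal wall has density
# proportional to vz*exp(-vz^2/2); weighting each flight by its duration
# 2*vz/g gives vz^2*exp(-vz^2/2), which happens to coincide with the
# distribution of the norm of a 3-D standard Gaussian, used here purely
# as a convenient sampler.
v = np.linalg.norm(rng.normal(0.0, 1.0, (N, 3)), axis=1)

t = rng.uniform(0.0, 2.0 * v)   # random instant within each flight (g = 1)
h = v * t - 0.5 * t**2          # height at that instant

print(h.mean())   # ~1.0: exponential profile with scale height kT/(m g)
```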
{ "source": [ "https://physics.stackexchange.com/questions/680101", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/267515/" ] }
680,380
So it seems that Einstein returned to the concept of aether, but from the point of view of a general relativistic framework. https://mathshistory.st-andrews.ac.uk/Extras/Einstein_ether/ This is often overlooked by modern theoretical physicists. Does the introduction of this aether from a general relativistic viewpoint have anything to say about the null result of the Michelson–Morley experiment? If such an aether is hypothesized to exist, would it not cause a non-null result for the Michelson–Morley experiment?
What Einstein proposed is to reuse the word , not to reuse the historical concept . Let me make a comparison. Caloric theory was a theory of what heat is. For a material to become hot it was supposed that a massless substance was diffusing into that material. This massless carrier of heat was called Caloric . Later the concept of caloric was abandoned. Now, by that time the scientists had become accustomed to defining 'heat' in terms of presence of Caloric. If the scientist of the time would retain the definition of heat in terms of Caloric then they would have to avoid the word 'heat'. If heat = Caloric, and Caloric does not exist, then you can't use the word 'heat'. Of course, what had prompted the scientists to abandon Caloric theory was that a new way of conceptualizing 'heat' had been developed: statistical mechanics: thinking of heat in terms of motion of the atoms/molecules that make up matter. The concept of 'heat' had been redefined. In that 1920 talk, Ether and the theory of relativity Einstein points out that Lorentz had already stripped the concept of Ether of almost all physical properties. Quote: As to the mechanical nature of the Lorentzian ether, it may be said of it, in a somewhat playful spirit, that immobility is the only mechanical property of which it has not been deprived by H A Lorentz. Subsequently Einstein points out that relativistic physics goes a fundamental step further: We may assume the existence of an ether; only we must give up ascribing a definite state of motion to it, i.e. we must by abstraction take from it the last mechanical characteristic which Lorentz had still left it. In terms of relativistic physics: whatever experimental setup you are using: there is no such thing as attributing a velocity vector of the experimental setup with respect to the relativistic Ether. As we know, the physics community of the time didn't follow Einstein's suggestion to reuse the word 'Ether'. 
Sometimes a new word is adopted, or constructed, sometimes an old word is reused, in a completely new meaning. Example of reuse: There are theories of physics that involve the existence of something that is referred to as ' quintessence '. The fact that the word 'quintessence' is reused does not carry any meaning. It's just a way of expressing that the description of this Quintessence requires a high level of abstraction. [Later addition] The underlying point is that relativistic physics attributes physical properties to spacetime. John Wheeler expressed that as follows. "Matter/Energy is telling spacetime how to curve, curved spacetime is telling matter/energy how to move." (Or words to that effect; there are a lot of paraphrasings in circulation) In terms of relativistic physics spacetime is a participant in the physics taking place; spacetime acts upon and is being acted upon . Conversely, asserting that spacetime cannot have physical properties would deprive one of the means to formulate GR. Maxwell's luminiferous aether is an entity that is always present, everywhere, existing in a neutral state. In that neutral state Maxwell's aether does not affect motion of material objects. Maxwell's aether has internal degrees of freedom, and these internal degrees of freedom are such that Maxwell's aether supports propagation of electromagnetic waves. As we know, electromagnetic waves affect motion of matter, which we take advantage of in radio receivers, and of course in our microwave ovens. As we know, Maxwell's aether and Lorentz' aether have been abandoned, but of course we still need a way to account for the propagation of electromagnetic waves. (Comparison: the phenomenon of heat is something that must be accounted for. You cannot not have a theory of heat. Caloric theory was abandoned because a better way of accounting for heat had been developed: the kinetic theory of heat.) 
In terms of quantum theory there is an entity that is present everywhere, and photons are described as excitations of that omnipresent entity. In quantum theory the expression 'electromagnetic field' is a reused expression. In terms of quantum theory the meaning of 'electromagnetic field' is fundamentally different than in classical mechanics. Rather than coming up with a new expression the existing expression was reused, in a fundamentally different meaning. In terms of relativistic physics: In the absence of a source of gravitational interaction spacetime is in a neutral state, and in that neutral state the direction of motion of material objects is not affected. The presence of a source of gravitational interaction sets up a bias in the state of spacetime, and that biased state acts as the mediator of gravitational interaction. As to propagation of gravitational waves: Back in 1920 it was very much unclear whether or not GR was a field theory that allowed/implied propagation of gravitational waves. In order to support propagation of waves a medium must provide the following properties:
- it must support a biased state (a state away from neutral state)
- the medium by itself tends to return to the neutral state
- when the state is changing at a particular rate it tends to keep changing at that rate

Comparison to harmonic oscillation: An elastic string under tension will support vibration because:
- there is an elastic force that tends to return the string to the least stretched configuration
- the string has inertia, so that when it has a velocity it will overshoot

As we know: it has been confirmed beyond reasonable doubt that GR spacetime supports propagation of gravitational waves.
{ "source": [ "https://physics.stackexchange.com/questions/680380", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/287821/" ] }
680,685
How close does a photon have to get to a black hole to do ONE full loop? By full loop I mean it curves once around the black hole and ends up on the same trajectory as it was on before it approached the black hole. Like this: How many times $R_s$ does its closest approach have to be?
The motion of a photon in a Schwarzschild spacetime is described by $$ \frac{1}{L^2} \dot{r}^2 + V _{\text{eff}} (r) = \frac{1}{b^2}\,, $$ where $$V _{\text{eff}}(r) = \frac{1}{r^2} \left(1 - \frac{2GM}{r}\right) \qquad \text{and} \qquad b^2 = \frac{L^2}{e^2} $$ with $$ e = \left(1 - \frac{2GM}{r}\right) \dot{t} = \text{constant} $$ and where finally all dots denote derivatives with respect to an arbitrary parameter $\lambda$ for the photon trajectory, and $L = r^2 \dot{\varphi}$ is the angular momentum of the photon (which is conserved due to rotational symmetry). The fixed numbers $e$ and $L$ are the integrals of motion corresponding to the $\partial_t$ and $\partial_\varphi$ Killing vectors. The quantity $b$ can be shown to be the impact parameter , the distance between the BH and the asymptotic incoming photon trajectory. With this all set up, the question can be reframed as such: what is the value of $b$ such that the total variation of $\varphi$ is $3 \pi$ ? ( $1 \pi$ would correspond to going straight, it is the result you get with $M=0$ ). Further, how small is the minimum value of $r$ in this orbit? Perhaps there is a clever analytical solution to the problem, but I'll just solve the ODE numerically. After some manipulations, the problem can be reframed as $$ \frac{\mathrm{d}^2 u}{\mathrm{d} \varphi^2} = - u + 3 GM u^2, $$ where $u = 1/r$ . The impact parameter is used here as an initial condition: the initial value problem is $u(0) = 0$ (meaning infinite radius, concretely some small number is set for the integration to work) and $u_\varphi (0) = 1/b$ . For convenience, I express radii in units of $GM$ . The ODE is quite "finicky", in that the configuration you are looking for only happens in a very specific range for $b$ , which (if I didn't mess up the integration) is around $b \approx 5.2GM$ .
This is what is required for a full loop, but doing $n$ loops does not move you far from that value of $b$ : you only approach the critical value where the photon asymptotically gets closer to the photon sphere. For the specific answer to the question, the configuration yielding $\Delta \varphi = 3\pi$ seems to be $b = 5.203GM$ , and the minimum radius reached by the orbit is about $3.09GM$ (just above the photon sphere!).
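For anyone who wants to reproduce the integration, here is a minimal fixed-step RK4 sketch (my own reconstruction, not the answer author's code) in units where $GM = 1$. The photon starts effectively at infinity ($u \approx 0$), and the event handling is crude: integration simply stops once $u$ drops below zero, i.e. the photon has escaped back out, at which point the accumulated $\varphi$ is the angle between the two asymptotes.

```python
# Integrate u'' = -u + 3*u^2 (u = 1/r, units GM = 1) for a photon arriving
# from infinity with impact parameter b; report the total swept angle and
# the closest approach.

def rhs(u, w):
    """Right-hand side of the system u' = w, w' = -u + 3*u^2."""
    return w, -u + 3.0 * u * u

def trace(b, h=1e-4, phi_max=20.0):
    u, w, phi, u_max = 1e-9, 1.0 / b, 0.0, 0.0
    while u >= 0.0 and phi < phi_max:
        k1u, k1w = rhs(u, w)
        k2u, k2w = rhs(u + 0.5*h*k1u, w + 0.5*h*k1w)
        k3u, k3w = rhs(u + 0.5*h*k2u, w + 0.5*h*k2w)
        k4u, k4w = rhs(u + h*k3u, w + h*k3w)
        u   += h * (k1u + 2*k2u + 2*k3u + k4u) / 6.0
        w   += h * (k1w + 2*k2w + 2*k3w + k4w) / 6.0
        phi += h
        u_max = max(u_max, u)
    return phi, 1.0 / u_max   # swept angle; r_min in units of GM

phi, r_min = trace(5.203)
print(phi / 3.141592653589793, r_min)   # close to 3 (i.e. 3*pi), r_min near 3.09
```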
{ "source": [ "https://physics.stackexchange.com/questions/680685", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/263465/" ] }
680,924
The old Usenet Physics FAQ, which is a page originally by Matt Austern , says that all protons have identical rest masses, and so do all electrons, and so do all neutrons; these masses can be looked up in a table. As the particle is accelerated to ever higher speeds, its relativistic mass increases without limit. However, writing in 2011 in the “Chemistry” section of PNAS, Hammes-Schiffer says : This work calls into question the traditional definition of proton transfer as the movement of a hydrogen nucleus between a donor and acceptor atom. According to this perspective, a proton transfer reaction is defined in terms of the change in electronic configuration rather than the movement of the hydrogen nucleus. In this case, a proton transfer reaction is defined as the movement of electronic charge density from the donor–hydrogen covalent bond to the acceptor–hydrogen covalent bond. Such a change in electronic configuration without the motion of the hydrogen nucleus is special to optically induced processes, where the optical excitation is much faster than nuclear motion. The proton is in a highly excited vibrational state immediately after optical excitation and subsequently relaxes to its equilibrium state. If a proton can be in an excited state, then I would have thought it could have a larger rest mass, which would contradict the widely held belief that all protons have the same rest mass.
The word proton is used differently in these two quotes. Physicists use proton only for the three-quark baryon itself. Chemists sometimes use the term proton as a shorthand for a chemically bonded hydrogen group. That is the excited state to which the second quote refers, not the state of the proton internally. One way to tell is by its mention of "optical excitation," which means electron-bond energy levels. Exciting a proton internally -- which again is not what the chemistry quote meant -- requires orders of magnitude more energy. As others have noted, the result is so different from a proton that it has its own name, the $\Delta^+$ . The $\Delta^+$ is unstable, more massive, and has spin $\frac{3}{2}$ versus the proton's spin $\frac{1}{2}$ . Addendum 2021-12-07.12:16 EST Tue: This is only for folks interested in the in-depth particle physics aspects of this question. While this is more a matter of opinion in definitions, I would say that the resonance (which means "a short-lived particle") that best qualifies as an excited proton is not $\Delta^+$ but the one-positive-electric-charge $uud$ $N(1680)({\frac{5}{2}}^+)$ resonance. So little emphasis is placed on this resonance that I could not find a standard notation to distinguish between the neutron-like no-electric-charge $udd$ version and the proton-like one-positive-electric $uud$ variants of this resonance. The plus sign at the end of its designation means "even parity," not a positive charge. That accidental pre-emption of the plus sign likely makes adding an electric-charge " $+$ " a bit tricky, so perhaps folks just don't bother? The $uud$ $N(1680)({\frac{5}{2}}^+)$ resonance is more like a proton because two of its three valence (non-virtual) quarks spin in the same direction while the other has reverse spin, just as in a proton. 
In sharp contrast, all four $\Delta$ baryons (plus their Regge trajectory excitations ; if you don't immediately get a page image on that link, try refreshing the page) have valence quark spins in the same direction. The Regge trajectory resonances add spin in units of $+2$ to each of these configurations using orbital momentum. If you think of this added orbital momentum of the quarks as the angular momentum of the quarks swinging around each other at higher speeds, bola style, that's actually a pretty good heuristic model for comprehending what's going on. You can use that mental model because the strong force behaves remarkably similar to a bungee cord, increasing the force of its binding as it gets stretched. That's why protons and neutrons have well-defined surfaces. It also means that, if someone could measure it (not easy!), the higher Regge excitations should all have larger diameters than protons and neutrons. I have no idea if that has ever been tried. (On a related note: The critical need for a force with a bungee-cord-like behavior to get string-like vibrations is also why what's now called "string theory" (it was at first called super string theory) should have died after no more than a handful of papers. String theory unavoidably assumes that some variant of gravity is the binding force, and no variant of gravity shows the bungee-cord-like containment needed to make the hypothesized string vibrations physically meaningful. It's really sad that super string theory didn't get wrapped up quickly for just that reason, since instead, it went on for half a century, eating up enormous funding and wasting research careers on unproductive speculations that predicted nothing.) 
Adding orbital angular momentum arguably changes the character of the nucleon far less than flipping one of the valence quark spins relative to the other two valence quarks since the latter operation profoundly changes the nature of the nucleon and enables entirely new (and quite weird) particles such as $\Delta^{++}$ . Terminology note: My thanks to user Christoffer for replacing my "canonical quarks" phrase with "valence quarks," a phrase I did not know before. Standard terms are always better!
{ "source": [ "https://physics.stackexchange.com/questions/680924", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/295887/" ] }
680,964
Sigh. Approaching retirement age and still deeply confused about something I first encountered in high school 40 years ago. Consider the usual double slit experiment. Make the light source be a laser with a beam 1 mm wide. And put it 5 meters away from the slits. On the other side of the slits the photon shows the well known diffraction pattern, alternating dark and light bands. Good, very tidy. But consider that single photons can diffract. So a single photon comes down the beam. The beam is 1 mm wide with very little scatter, and 5 m long. So the momentum of a single photon is very tightly bounded. And moving objects near but not in the beam don't change things on the other side of the slits. Objects such as students doing the experiment in high school, for example. If you don't get any red on you, you won't change the pattern or the brightness. On the other side of the screen the photon can turn quite a corner, for example 30 degrees. The energy does not change very much, since it is still the same pretty red color from the laser. How does it manage to turn this corner and conserve momentum?
The slits themselves receive a tiny impulse from each photon. If a photon is diffracted to the left, the slits get nudged to the right. Every time a photon changes direction, it requires something else to gain momentum in the opposite direction, whether a solar sail or a star bending light by gravity. Since the slits are usually anchored to the ground and the impulse is so small, the effect is not observable. Your question actually came up in a series of debates between Albert Einstein and Niels Bohr on whether quantum mechanics made any sense. Einstein argued that the impulse of a photon on the slits would allow the measurement of the photon's position and momentum at the same time, contrary to quantum theory. Bohr replied that the necessary precision of the slit momentum measurement would--through Heisenberg's Uncertainty Principle--make the slit's position uncertain enough to destroy the interference pattern, negating any measurement of the photon's position.
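For scale, here is the size of the impulse involved, assuming a typical 633 nm red laser (the wavelength is my assumption; the question only says "red"):

```python
import math

# Momentum kick the slits absorb when a 633 nm photon is deflected 30 degrees.
h_planck = 6.626e-34          # Planck constant, J s
lam      = 633e-9             # assumed wavelength, m
p        = h_planck / lam     # photon momentum, ~1.05e-27 kg m/s

p_transverse = p * math.sin(math.radians(30.0))
print(p, p_transverse)        # slit recoil ~5e-28 kg m/s, far too small to notice
```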
{ "source": [ "https://physics.stackexchange.com/questions/680964", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/318761/" ] }
681,793
I am just curious because sound is longitudinal waves, meaning the energy is passed from one particle to another nearby particle and produces a wave that continues on until it reaches our ear as a sound wave or hits our skin as a pressure wave, so is this considered quantum because of atoms and molecules?
The detailed physics of sound wave transmission through air were mathematically worked out and found to be accurate before the invention of QM. No quantum-mechanical effects need to be taken into account to accomplish this; it is solidly in the domain of classical physics.
{ "source": [ "https://physics.stackexchange.com/questions/681793", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/75502/" ] }
682,380
solar day = time between solar noons
sidereal day = period of Earth's spin

Wikipedia says "relative to the stars, the Sun appears to move around Earth once per year. Therefore, there is one fewer solar day per year than there are sidereal days." Shouldn't it be relative to the Earth instead of relative to the stars? I'm having trouble following this argument. Can someone please explain it in more detail?
Shouldn't it be relative to the Earth instead of the relative to the stars? We need some reference background to plot the "movement" of the Sun. If we could see the stars during the day, and we were to go to a fixed point on the equator and mark the location of the Sun each day at noon on a star chart, this point would move in a circle through the stars once per year. The Sun rotates around the Earth more slowly than the stars do, so the number of solar rotations is one fewer than the number of sidereal rotations. Imagine walking counterclockwise around a circular track, facing North the whole time. Suppose there's a light in the middle of the track. If you start out in the Eastern part of the track, the light will start out on your left. Once you get to the Northern part of the track, the light will be at your back. When you get to the Western part, it will be on your right. At the Southern part, it will be in front of you. So the light will appear to rotate around you counterclockwise. So if the Earth didn't rotate at all, the Sun would appear to rise and set once over the course of the year. This one circuit due to the revolution around the Sun cancels out one of the 366 circuits due to the rotation of the Earth, leaving only 365 solar cycles.
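The "one fewer solar day per year" bookkeeping can be checked numerically. The sketch below assumes a year of 365.25 solar days and derives the sidereal day length from the fact that, relative to the stars, the Earth makes exactly one extra rotation per year:

```python
solar_day = 86400.0           # seconds in a mean solar day
solar_days_per_year = 365.25  # approximate number of solar days per year

year_seconds = solar_days_per_year * solar_day

# Relative to the stars, the Earth spins one extra time per year
sidereal_rotations_per_year = solar_days_per_year + 1

sidereal_day = year_seconds / sidereal_rotations_per_year
print(f"sidereal day ~ {sidereal_day:.1f} s "
      f"(about {solar_day - sidereal_day:.0f} s shorter than a solar day)")
```

This reproduces the familiar result that a sidereal day is roughly four minutes shorter than a solar day.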
{ "source": [ "https://physics.stackexchange.com/questions/682380", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/288361/" ] }
682,546
Most existing telescopes are located on Earth since it is easier and cheaper to construct and operate them there. Space launches are very expensive, and, moreover, if there is some problem with a telescope orbiting in space, it is quite complicated to fix, since a team of astronauts has to be sent, and work in open space is much harder than work on Earth. However, telescopes that operate in space offer a lot of benefits compared to those on Earth, since the precision of Earth-based telescopes is limited by the atmosphere. Hubble was a real breakthrough at the time of its launch; it discovered a lot of things and extended our knowledge about space and the Universe. As far as I understand, this progress would have been impossible with any of the existing telescopes located on Earth. The upcoming launch of the Webb telescope is supposed to reveal a lot of facts about the early universe. In comparison to Hubble, this telescope has a larger reflector: $\sim 7$ meters for Webb vs $2.4$ meters for Hubble. In addition, it will orbit far from Earth, about 1.5 million kilometers away. Therefore, there would be even less noise and fewer hindrances in the observations. At the same time, there are actively developing projects for the construction of very large stationary telescopes: the Extremely Large Telescope and the Thirty Meter Telescope. Why do we actually need these, if they are located on Earth and will be limited in precision compared to the Hubble and Webb telescopes? Is it the case that they solve somehow different tasks, and that the huge diameter of the reflector will allow them to observe something that cannot be seen by either Hubble or Webb?
Disclaimer, I'm no expert on the details, but I know the general idea. Large ground telescopes are great, and often superior to orbital telescopes. The reason is as you said - it's cheaper to build ground telescopes, which means for the same budget you can build a bigger, more powerful ground telescope. It's true that space-based telescopes don't have to deal with the Earth's atmosphere obscuring things, but then there are also so-called adaptive optics that mitigate this advantage. Three of the biggest advantages of ground telescopes are: You can make them very large. Because of the way optical resolution works ( $\theta = 1.22 \lambda/D$ ), big telescopes have a fundamental advantage over small telescopes, and you can make bigger ground telescopes for the same price. You can actually see this in your numbers. The James Webb telescope has a 6.5 m mirror, while the Thirty Meter Telescope is more than four times as big. In the same way, because they are larger, they can collect more light in a given amount of time. To get the same amount of light with a smaller telescope, you need to observe for longer, which is bad (telescope time is at a premium in astronomy; most of the time astronomers need to apply for time). They are easy to repair. If something in the Hubble Space Telescope breaks, you might need to send up astronauts and conduct a space walk, which is obviously very expensive. Comparatively even the most inaccessible ground locations (like the South Pole ) can be reached for a fraction of the price of going to space. The discrepancy is actually such that many space probes aren't worth funding because one should just build a bigger, more powerful ground telescope. Given the above, one could flip the question around and ask, why bother with space-based telescopes then? There are reasons, some of the most important being: The atmosphere obscures some wavelengths of light. If you want to observe in those wavelengths you must go to space.
Ground telescopes are susceptible to local weather conditions. If it's raining or cloudy, you can't observe. Space telescopes can observe in all directions all the time. On the ground, you can only observe at night, and even then you can only observe half the celestial sphere at best (because the Earth is in the way of the other half). See e.g. this source for more details.
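To make the resolution advantage concrete, here is a quick comparison using the $\theta = 1.22\lambda/D$ criterion quoted above. The 550 nm wavelength is an assumed representative visible-light value; the mirror diameters are the actual primary sizes of the three telescopes:

```python
import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh criterion: smallest resolvable angle, in arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600

wavelength = 550e-9  # assumed mid-visible wavelength

for name, d in [("Hubble (2.4 m)", 2.4),
                ("Webb (6.5 m)", 6.5),
                ("Thirty Meter Telescope (30 m)", 30.0)]:
    print(f"{name}: {diffraction_limit_arcsec(wavelength, d):.4f} arcsec")
```

All else being equal, the 30 m aperture resolves angles more than ten times smaller than Hubble's 2.4 m mirror can.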
{ "source": [ "https://physics.stackexchange.com/questions/682546", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/261877/" ] }
683,949
Imagine a simple setup: me, a long cylindrical straw, a party cup filled with water. When I drink from the straw, I'd like to know: is the water inside the straw being pushed or pulled? Which action is more dominant in this case? Is it the suction from my mouth drawing the water up the straw, or the atmospheric pressure pushing on the water in the cup?
Those are just different ways to name the same thing. Ultimately it is atmospheric pressure that pushes the liquid up the straw but normally the atmosphere wouldn't do that: the reason the water moves is because you created a low-pressure zone in your mouth which allowed atmospheric pressure to push the water up. If you define sucking as creating a low-pressure zone to move liquids or gasses then both options are correct: the water is being sucked up by you and it is pushed up by the atmosphere.
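One consequence of this picture: since it is atmospheric pressure doing the pushing, no amount of suction can raise water higher than the atmosphere can push it. A quick estimate, assuming standard sea-level values:

```python
P_atm = 101325.0  # Pa, standard atmospheric pressure
rho = 1000.0      # kg/m^3, density of water
g = 9.81          # m/s^2

# Even with a perfect vacuum in your mouth, the atmosphere can only push the
# water column up until the column's weight balances atmospheric pressure
h_max = P_atm / (rho * g)
print(f"maximum theoretical straw height ~ {h_max:.2f} m")
```

This is the classic ~10 m limit on suction lift.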
{ "source": [ "https://physics.stackexchange.com/questions/683949", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/75502/" ] }
684,411
I have heard that there is less oxygen as you go higher (that's what my teacher told me). A reason that supports that is, as you go to higher altitudes, it becomes more and more difficult to breathe. But I also read on the internet that the amount of oxygen remains the same but the air pressure drops, making it difficult for us to breathe. Which one is correct? Why?
For elevations less than about 100 km (for reference, the peak of Mt. Everest is about 8.8 km above sea level), the relative concentration of oxygen in the air is fairly constant at about 21%. Source It's true that there's less oxygen (more specifically, the partial pressure of oxygen is lower) with increasing altitude - and this is simply because there is less gas overall. Source The reason it's difficult to breathe at higher altitudes is that the ability of your lungs to oxygenate your blood depends on the partial pressure of O $_2$ in your lungs when you take a breath. At sea level and under ordinary conditions, the partial pressure of O $_2$ in your lungs is approximately $21\% \times 100\ \mathrm{kPa} \approx 21\ \mathrm{kPa}$ . This defines normal, at least in a limited sense. If you're breathing pure oxygen, then you could potentially have an O $_2$ partial pressure of $100\ \mathrm{kPa}$ , which can help compensate for damage to the lungs (e.g. from scarring) which reduces their ability to oxygenate blood - though as user Arsenal points out in a comment, under ordinary circumstances this would induce hyperoxia, which is bad news. On the other hand, at the top of Mt. Everest the partial pressure in your lungs would drop to approximately $21\% \times 30\ \mathrm{kPa} \approx 6\ \mathrm{kPa}$ - nowhere near enough to sustain for extended periods, especially under increased physical stress. Breathing pure oxygen from a tank boosts this number to closer to $30\ \mathrm{kPa}$ , which is why most climbers take their own oxygen with them. However, above an altitude of 12 km (roughly the altitude at which commercial airliners fly) the pressure drops below $20\ \mathrm{kPa}$ , which means that even breathing pure oxygen won't give you the normal required partial pressure of O $_2$ , and you will risk hypoxia. That is why planes which fly above this altitude have positive-pressure respirators for pilots in case of emergency.
Rather than being simply a higher concentration of oxygen, the gas in a positive-pressure respirator is (as the name suggests) actively pressurized above the ambient atmospheric pressure to force the required amount of oxygen into your lungs. See e.g. page 4 of this booklet from the US FAA and the associated regulations.
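The partial-pressure arithmetic above can be collected into one small script. The ambient pressures are the approximate values quoted in the answer:

```python
O2_FRACTION = 0.21  # roughly constant below ~100 km

# Approximate total pressures quoted in the answer above, in kPa
total_pressure = {
    "sea level": 100.0,
    "top of Mt. Everest": 30.0,
    "12 km (airliner cruise altitude)": 20.0,
}

# Partial pressure of O2 when breathing ordinary air
partial_o2 = {place: O2_FRACTION * p for place, p in total_pressure.items()}

for place, p_o2 in partial_o2.items():
    # Breathing pure O2 raises the partial pressure up to the full ambient pressure
    print(f"{place}: pO2 ~ {p_o2:.1f} kPa (air), "
          f"{total_pressure[place]:.0f} kPa (pure O2)")
```

Note that at 12 km even pure oxygen only reaches about 20 kPa, right at the ~21 kPa sea-level baseline, which is why positive pressure becomes necessary.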
{ "source": [ "https://physics.stackexchange.com/questions/684411", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/318676/" ] }
684,631
Whenever I see a 2D drawing of dispersion occurring when light travels through a solid prism, I see the rays get bent downwards on entry and downwards on exit again. For example here: https://www.wikiwand.com/en/Dispersion_(optics) To my understanding of optics, when entering a medium with a higher optical density the ray should get bent towards the normal of the surface (rotated CW), and rotated CCW when entering one with a lower IOR. However, the drawings suggest that it gets bent in the same direction upon both entry and exit.
The normals in consideration for the incident and emergent rays are different. For simplicity, take a monochromatic beam of light incident on a prism, as shown in this figure: When light is incident on a medium with a higher index of refraction ( $n$ ), it bends towards the normal. When light is incident on a medium with a lower $n$ it bends away from the normal. In reference to this figure, the incident ray should bend towards the normal, which would mean a clockwise rotation ( $\phi_1 < \theta_1$ ). And the ray within the prism would bend away from the new normal at the new interface, corresponding to another clockwise rotation ( $\phi_2 < \theta_2$ ). For a beam of light, dispersion will cause different wavelengths of light to bend at different angles, but they will all bend in the same sense. Hope this helps. Image source.
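The two same-sense rotations can be checked with Snell's law. The sketch below assumes a $60°$ prism, $n = 1.5$, and a $45°$ angle of incidence; none of these numbers come from the answer, they are purely illustrative:

```python
import math

n = 1.5        # assumed refractive index of the glass
apex = 60.0    # assumed apex angle of the prism, degrees
theta1 = 45.0  # assumed angle of incidence at the first face, degrees

# First face: entering glass, the ray bends TOWARD the normal
phi1 = math.degrees(math.asin(math.sin(math.radians(theta1)) / n))

# Prism geometry relates the internal angles at the two faces
theta2 = apex - phi1

# Second face: leaving glass, the ray bends AWAY from the new normal
phi2 = math.degrees(math.asin(n * math.sin(math.radians(theta2))))

print(f"entry: {theta1}° -> {phi1:.1f}° (toward normal)")
print(f"exit:  {theta2:.1f}° -> {phi2:.1f}° (away from normal)")
print(f"total deviation: {theta1 + phi2 - apex:.1f}°")
```

Both refractions deviate the ray toward the base of the prism, i.e. in the same rotational sense, which is exactly what the standard drawings show.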
{ "source": [ "https://physics.stackexchange.com/questions/684631", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/323467/" ] }
684,736
I'm learning about rotational motion and the moment of inertia. Unlike the inertia I learned about before, there is a formula to calculate rotational inertia. I'm having trouble understanding why it's possible to calculate inertia for a rotating object, but not for a regularly moving object. After doing research, I found that not only is there no formula for normal objects, but also different interpretations of what inertia is. Is there a fundamental difference between the moment of inertia and the inertia of an object, or am I misunderstanding something?
Classically, the inertia of something is just its mass. If you want an analogous equation, just integrate the mass density $\rho$ of the object over the volume of the object: $$m=\iiint \text dm=\iiint\rho\,\text dV$$ Compare this to what you usually see in introductory physics as $$I=\iiint r^2\,\text dm=\iiint r^2\rho\,\text dV$$ which, for a given axis, is one element of the moment of inertia tensor . Is there a fundamental difference between the moment of inertia and the inertia of an object? Yes. The inertia of an object does not depend on where the mass is within the body, only on how much mass there is. The moment of inertia about a given point does depend on how that mass is distributed about the point / axis you are calculating the moment of inertia about.
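The distinction is easy to see numerically: take two bodies built from point masses with the same total mass but distributed differently about the rotation axis. The bodies below are made up for the example:

```python
# Each body: list of (mass_kg, distance_from_axis_m) point masses
compact = [(1.0, 0.1), (1.0, 0.1)]  # mass close to the axis
spread  = [(1.0, 1.0), (1.0, 1.0)]  # same masses, far from the axis

def total_mass(body):
    """Inertia in the translational sense: just the total mass."""
    return sum(m for m, _ in body)

def moment_of_inertia(body):
    """Moment of inertia about the axis: sum of m * r^2."""
    return sum(m * r**2 for m, r in body)

print(total_mass(compact), total_mass(spread))                 # identical
print(moment_of_inertia(compact), moment_of_inertia(spread))   # very different
```

The two bodies are indistinguishable under linear pushes (same $m$), yet the spread-out one is a hundred times harder to spin up (much larger $I$).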
{ "source": [ "https://physics.stackexchange.com/questions/684736", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/243228/" ] }
684,991
Our teacher suggested that Newtonian Mechanics only applies in cartesian coordinates. Is this true? He gave this example. Suppose there a train moving with constant velocity $\vec{v}=v_0\hat{x}$ , with initial position vector $\vec{r}=(0, y_0)$ , where $v_0,y_0$ are constants. He argued that Newton's second law would not hold in polar coordinates. Any ideas? (We can assume 2D or 3D cases as well, so spherical or polar, it doesn't really matter)
Your teacher is incorrect. $\vec F = m \vec a$ is valid in any inertial (non-accelerating) coordinate system. You must account for the fact that the unit vectors for position in some coordinate systems (polar for example) do not have constant direction and change with time. See a good physics mechanics text, such as Symon's Mechanics, for the correct acceleration $\vec a$ in such coordinate systems, where the time derivatives of the unit position vectors are correctly accounted for.
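The teacher's own example can be checked numerically. For the train moving at constant velocity, the polar acceleration components $a_r = \ddot r - r\dot\theta^2$ and $a_\theta = r\ddot\theta + 2\dot r\dot\theta$ must both vanish, even though $\ddot r$ and $\ddot\theta$ individually do not. A finite-difference sketch, with arbitrary values of $v_0$, $y_0$ and the sample time:

```python
import math

v0, y0 = 2.0, 3.0  # arbitrary constants for the example

def r(t):
    # radial distance of the point (v0*t, y0) from the origin
    return math.hypot(v0 * t, y0)

def th(t):
    # polar angle of the point (v0*t, y0)
    return math.atan2(y0, v0 * t)

def d(f, t, h=1e-4):   # central first derivative
    return (f(t + h) - f(t - h)) / (2 * h)

def dd(f, t, h=1e-4):  # central second derivative
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

t = 0.7  # arbitrary sample time

a_r = dd(r, t) - r(t) * d(th, t) ** 2
a_th = r(t) * dd(th, t) + 2 * d(r, t) * d(th, t)

print(f"a_r = {a_r:.2e}, a_theta = {a_th:.2e}")  # both ~0: zero force, zero acceleration
print(f"but r'' alone = {dd(r, t):.3f}  (nonzero!)")
```

The point is that $\ddot r \neq 0$ on its own, but the full polar expression for $\vec a$ correctly gives zero acceleration for force-free straight-line motion.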
{ "source": [ "https://physics.stackexchange.com/questions/684991", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/180589/" ] }
685,559
It's cold outside right now, and the biggest river in the country has frozen over. We're talking about a minimum of 500m in width, and I've no idea how deep (but some pretty big ships can sail there). And this got me wondering - how does a big river like that freeze over? When water freezes, it starts off small - thin membranes of ice, tiny grains, etc. But the water is constantly moving. Any paper-thin layer ice that could form would be broken apart immediately. And yet, given the right conditions, it can somehow freeze over thickly enough that a car can be safely driven over. So how does this process happen? How can a large, moving, undulating river just freeze over without the ice breaking apart as it does so?
You know that ice is less dense than water. Then, water that freezes will stay at the surface. Also, keep in mind that water will only freeze at the surface. Then, as you said, any ice that forms will break apart. But there are places on the river where these tiny pieces of ice can accumulate, forming mushy ice. There are many studies of mushy ice and ice formation at the North Pole if you want to check. If the temperatures stay under the fusion point of the river water, the mushy ice will eventually become layers of ice. Once the layers form in the low-velocity stream places they will start growing and expanding. Because ice is solid, water will dodge the ice; then, if the layer is thick enough, water won't break it and will eventually manage to cover the whole river. Only the surface will freeze. Beneath the ice water still flows normally. You'll be surprised at how thin the ice layer can be without breaking. To simulate this, I'll start with the Navier-Stokes equations for the water motion. Then, you have to add some advection-diffusion heat transport equations. To couple them to the fluid just make the density of water a function of temperature. You could try to make viscosity also a function of temperature but that can result in some numerical instabilities. For the phase change, the enthalpy method is quite simple to implement. You know that most of the water in the river will be almost at the fusion point but won't lose enough energy to become solid. Just the water on top will. Then, you have to take into account that you have 3 mediums: ice, water, and air. All those have different densities and heat capacities that are important for the simulation. You'll notice that ice will become some sort of heat insulator preventing the water underneath from freezing after a certain thickness. Then the difficult part. You want the ice to move and stay on top. So, you can have 2 approaches. The first one is to use some boundary tracking method to separate ice from water.
Then, calculate what the drag force would be and the buoyancy so the ice floats, and apply that to the boundary to move it around. As you probably noticed, the problem with this approach is that the boundary has to move, and that can be tricky. The second way is using some Brinkman penalization for the fluid, but you'll need an extra auxiliary field for the ice, and somehow you have to calculate the forces from the fluid velocity field and pass them over to the auxiliary field. Lastly, you have to implement a fracture model for the ice. This model won't be that hard because ice is very well studied and there are plenty already proposed. I hope you got a general idea of the physical phenomena. If you have any questions I'm happy to answer them.
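The enthalpy method mentioned above can be illustrated in 1D, ignoring the flow entirely: a column of near-freezing water cooled from the top. All material values below are textbook-ish constants, and using a single thermal conductivity for both phases is a deliberate simplification; this is a sketch of the method, not of the full river simulation described above:

```python
# 1D enthalpy-method sketch of ice growing down from a cold surface.
# Assumptions: one conductivity for ice and water, explicit time stepping,
# fixed temperatures at both boundaries.

RHO = 1000.0      # kg/m^3
K = 2.0           # W/(m K), single conductivity for both phases (simplified)
C_ICE = 2100.0    # J/(kg K)
C_WATER = 4200.0  # J/(kg K)
LF = 334000.0     # J/kg, latent heat of fusion

N, DX, DT = 50, 0.01, 10.0    # 50 cells of 1 cm, 10 s time steps
T_TOP, T_BOTTOM = -10.0, 0.5  # boundary temperatures, deg C

def temperature(hh):
    """Map specific enthalpy (J/kg, zero = ice at 0 C) to temperature."""
    if hh < 0.0:
        return hh / C_ICE        # solid ice, below 0 C
    if hh <= LF:
        return 0.0               # mushy zone: freezing/melting at 0 C
    return (hh - LF) / C_WATER   # liquid water, above 0 C

# Start as water just above freezing
h = [LF + C_WATER * T_BOTTOM] * N
COEF = DT * K / (RHO * DX * DX)

for _ in range(3000):  # roughly 8 hours of cooling
    T = [temperature(x) for x in h]
    new_h = h[:]
    for i in range(N):
        left = T_TOP if i == 0 else T[i - 1]
        right = T_BOTTOM if i == N - 1 else T[i + 1]
        new_h[i] += COEF * (left - 2.0 * T[i] + right)
    h = new_h

frozen = sum(1 for x in h if x < LF)  # cells no longer fully liquid
print(f"ice/mush thickness ~ {frozen * DX * 100:.0f} cm; liquid water remains below")
```

Note the two features the answer points out: the freezing front slows as the ice thickens (the ice layer insulates the water below), and the bulk of the column stays liquid near the fusion point.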
{ "source": [ "https://physics.stackexchange.com/questions/685559", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/3381/" ] }
685,570
Let us consider a $+q$ charge and we are trying to find out the electric field intensity $E$ at a distance $r$ from $+q$. The conventional way is this: We take a Gaussian sphere of radius $r$. We know the electric flux through this sphere is $\frac{q}{\epsilon_0}$. Then they use the integral definition of flux $\int E \, dS$ and they say that $E$ here is the same everywhere due to symmetry. BUT while deriving the flux through a sphere using integration, we say $E$ is the same because of the formula $\frac{kq}{r^2}$, which is what we want to prove now. So we are using circular logic here. I need to know how the electric field is constant as mentioned in the conventional solution without using symmetry (I don't understand why the electric field has to be constant for symmetric figures).
{ "source": [ "https://physics.stackexchange.com/questions/685570", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/311011/" ] }
685,916
The fact that two balls charged with 1 coulomb each would repel/attract each other from a distance of 1 metre with a force sufficient to lift the Seawise Giant would suggest me otherwise, but has anyone ever charged an object with 1 coulomb of net charge? Why was such a ridiculously large charge chosen as the unit of charge? Or better, why did we give the Coulomb constant such a big value instead of using a value in the same order of magnitude of the Newton constant ( $10^{-11}$ )? EDIT For the historical reasons that explain why the coulomb was chosen as the unit of charge please refer to the good answers given to this question. After a bit of research I have found that the highest voltage ever created is $32\,\mathrm{MV}$ at the Oak Ridge National Laboratory . With such a voltage the best we can do is charging a copper sphere the size of a basketball with around 424 microcoulombs: $$Q = 32 \times 10^6\,\mathrm{V} \times 4\pi\epsilon_0 \times 0.119\,\mathrm{m} = 4.237 \times 10^{-4}\,\mathrm{C}$$ Such a sphere, when placed at a distance of $1\,\mathrm{m}$ from the surface of a similarly charged sphere, would experience a repulsion of $1052\,\mathrm{N}$ (the force needed to lift $107\,\mathrm{kg}$ ). If the maximum voltage we can access is $32\,\mathrm{MV}$ and we want to charge a sphere with $1\,\mathrm{C}$ , all we need is a sphere $561.7\,\mathrm{m}$ in diameter. It might often snow on the top.
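For anyone who wants to reproduce or vary these figures, here is the arithmetic as a short script (the Coulomb constant is the standard $k \approx 8.99 \times 10^9\ \mathrm{N\,m^2\,C^{-2}}$):

```python
k = 8.9875e9  # Coulomb constant, N m^2 / C^2
V = 32e6      # Oak Ridge record voltage, V
r = 0.119     # basketball-sized sphere radius, m

# Charge on an isolated conducting sphere at potential V: Q = V * r / k
Q = V * r / k
print(f"Q = {Q:.3e} C")

# Two such spheres with surfaces 1 m apart -> centres (1 + 2r) apart
d = 1.0 + 2 * r
F = k * Q**2 / d**2
print(f"F = {F:.0f} N  (enough to lift about {F / 9.81:.0f} kg)")

# Sphere radius needed to hold 1 C at 32 MV
r_1C = k * 1.0 / V
print(f"diameter needed for 1 C: {2 * r_1C:.1f} m")
```

This reproduces the roughly 424 microcoulombs, the ~1052 N repulsion, and the ~562 m sphere quoted above.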
Actually the ampere (SI unit for electric current) was defined first (in 1881, see Wikipedia: Ampere - History ). They chose this size for $1$ ampere, probably because at this time such a current could be produced with a decent electrochemical battery, and was easily measurable with a galvanometer by its magnetic effect. The natural consequence of this is: A flowing charge of $1$ coulomb (i.e. a current of $1$ ampere flowing for $1$ second) is also a convenient unit, neither ridiculously large nor ridiculously small. The fact, that a static charge of $1$ coulomb is a really big thing, is an entirely different story, which has to do with the large electric force between charges.
{ "source": [ "https://physics.stackexchange.com/questions/685916", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/264705/" ] }
686,588
I've seen that in the case of concave mirrors if the object is between focus and the pole - the reflected rays diverge and never meet. But if the object is at the focus, it's defined to be meeting at infinity. Why is it so?
If you align your viewing direction parallel to some set of parallel lines, you will visually see them ending at some "point" at infinite distance. The typical example is railroad tracks. If you take lines that are not parallel, then no matter what perspective you take, the visual point of intersection (if there is one) will always be a finite distance away, and thus not at infinity. E.g. the pole in the image is skew to the rails of the track, so no matter how you orient your view they will never appear to intersect at all, whether at infinity or not. Or take the rails and the wooden rail ties. They make right angles at points in real space, and no matter how you orient yourself you can never make them appear to intersect anywhere but at those points. Non-parallel lines are defined not to meet at infinity because our vision tells us they don't meet at infinity. Also note that there are different points at infinity. The point at infinity at which the railroad tracks intersect is visually different from the one at which all the vertical lines in this photo intersect. And both are different from the one at which the horizontal wooden ties intersect. This is in disagreement with @nu's answer. This is because there are many ways to mathematically construct points at infinity given a suitable definition of "real space". My definition corresponds to projective space , instead of a one-point compactification. The usage of many different points at infinity is justified by our visual intuition, and also by optical intuition. E.g. we usually idealize stars as point sources at infinity. But there are many stars that visually appear at different places in the sky. This is hard to make sense of if there is only one point at infinity, but if you instead construct many points at infinity, each star can get its own. 
Similarly, if you have a beam of parallel light rays and stick your eye in the beam, you will see the light as a "star" at the one point at infinity at which the parallel rays intersect, and not at a different point at infinity. If the rays instead intersect at some finite point, you will see a light source at that point, and not at any point at infinity.
{ "source": [ "https://physics.stackexchange.com/questions/686588", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/318769/" ] }
687,335
This question is inspired by a question about oven lightbulbs over on the DIY stack. It spawned a lengthy comment discussion about whether an incandescent lightbulb with a color temperature of 2500 K actually has a filament at a temperature of 2500 K. The articles I could Google are focused on explaining how other types of bulbs like LEDs are compared to an idealized blackbody to assign a color temperature, which makes sense to me. I couldn't find one that plainly answers my more basic question: Does any component in an incandescent lightbulb actually reach temperatures in the thousands of degrees? If so, how are things like the filament insulated from the filament leads or the glass, which stay so (comparatively) cool? Is this still true of bulbs with crazy-high 20000 K color temps, such as metal halide aquarium lamps? Do they actually sustain an arc that hot?
Does any component in an incandescent lightbulb actually reach temperatures in the thousands of degrees? Yes, the filament. This is why the filaments are made of tungsten, which has a melting point of 3695K and can comfortably tolerate the temperature. The actual limitation is not the melting point but the evaporation rate of tungsten which becomes significant if you try to run it much hotter. This quickly turns the bulb into a mirror - a can which is kicked down the road by the halogen bulb and its halogen cycle. If so, how are things like the filament insulated from the filament leads or the glass, which stay so (comparatively) cool? Heat experiences thermal resistance so the heat from the filament takes time to push its way down the glass encapsulation around the leads. It's also leaking heat from a very small mass (the filament) into a large mass (the glass, base, etc), and it has only a very small contact area where heat can conduct down to the base, effectively not much more than the cross-section of the filament, which is extremely thin. Remember that incandescent filaments are usually coiled coils (double-coiled), so they look much thicker than the actual wire used to construct them. Stretched out, a 60W incandescent filament would be over half a meter long (or about 20 inches) and is only 46 microns in diameter, which is about the thickness of a human hair. Image: Дагесян Саркис Арменакович, Wikipedia CC BY-SA 4.0 In reality, the rest of the bulb is such a good heatsink compared to the filament that it is the filament that changes temperature most dramatically where it contacts the much heavier power leads and the rest of the bulb. The centre of the filament, away from the ends, reaches 2700K but the ends of the filament are cooler and dim. Temperature is a measure of the density of heat in a material, so the same amount of heat in two objects of different mass will produce two different temperatures (with the smaller mass being hotter).
Even though the filament is very hot it holds a comparatively small amount of heat, so the amount of heat it loses to the rest of the bulb is small enough, due to its low mass, and happens slowly enough, due to the tiny contact area for conduction, that it can be shed from the bulk bulb components to the environment through normal processes (conduction, convection, radiation) without excessively raising their temperature. Is this still true of bulbs with crazy high 20000 K color temp such as metal halide-aquatic? Do they actually sustain an arc that hot? No, metal halide arc lamps, like fluorescent tubes, produce light by fluorescence of an ionized gas (the plasma arc), sometimes also with a phosphor. The colour temperature there is dictated by the mix of colours produced by the various elements in the gaseous medium and/or the phosphors used to convert fluorescent UV photons to lower energy visible ones. It is possible, of course, to produce arcs with a real temperature in the 20,000K range, but in halide arc lamps the temperature of the arc is closer to about 1300K. There are types of arc lamp that use hotter arcs, of course, such as xenon arc lamps which burn an arc at over 10000K. Even with such a hot arc, the colour temperature of the emitted light is lower, around 6200K for a pure xenon bulb since the emission, like in the halide arc lamp, is not purely blackbody but also includes emission lines. A metal halide arc lamp, for example, will have a bluer colour (a higher effective colour temperature) when run underpowered below its nominal voltage since, at this lower physical operating temperature, the halide salts responsible for the warmer colours (reds, yellows) do not fully ionize and so emit less light at these frequencies.
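As a rough cross-check that a hair-thin, half-meter filament at 2700 K really radiates about 60 W, one can apply the Stefan-Boltzmann law. The emissivity below is an assumption (hot tungsten is roughly 0.3), and conduction and convection losses are ignored, so this is an order-of-magnitude sketch only:

```python
import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
EMISSIVITY = 0.3  # assumed emissivity of hot tungsten (approximate)

T = 2700.0        # filament temperature, K (from the answer above)
length = 0.5      # m, uncoiled filament length (from the answer above)
diameter = 46e-6  # m, filament wire diameter (from the answer above)

# Lateral (side) surface area of the thin wire
area = math.pi * diameter * length

power = EMISSIVITY * SIGMA * area * T**4
print(f"radiated power ~ {power:.0f} W")  # same ballpark as a 60 W bulb
```

Getting within tens of percent of the nominal 60 W from such a crude estimate is a decent sanity check on the 2700 K figure.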
{ "source": [ "https://physics.stackexchange.com/questions/687335", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/133662/" ] }
688,085
I read, I think, some time ago that the "weight" of photons from the Sun hitting an area the size of a football field at noon on a sunny day would be about the "weight" of a dime? Would appreciate it someone could flesh that out, verify if correct or false?
Photons are massless so their weight is 0. However, photons do have momentum so they can exert force. This force is due to their momentum and would occur even in the absence of gravity, so it is not a weight. The solar irradiance during peak hours is approximately $1000 \mathrm{ \ W \ m^{-2}}$ and the size of a football field is about $7200 \mathrm{ \ m^2}$ for a total radiant power of $7.2 \mathrm{ \ MW}$ . Since $p=E/c$ and $F=\frac{dp}{dt}$ we get that the force from this energy is $(7.2 \mathrm{\ MW})/c = 0.024 \mathrm{\ N}$ . In comparison, a dime has a mass of $2.268 \mathrm{\ g}$ which on the earth turns into a gravitational force, or weight, of $0.022 \mathrm{\ N}$ . So the force of the sunlight on a football field during peak solar hours is close to the weight of a dime.
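For anyone who wants to vary the numbers, here is the same computation as a short script. The light is assumed fully absorbed, as in the estimate above; a perfectly reflecting field would feel twice the force:

```python
c = 2.998e8         # speed of light, m/s
g = 9.81            # m/s^2

irradiance = 1000.0 # W/m^2, peak solar irradiance at the surface
area = 7200.0       # m^2, roughly a football field

radiant_power = irradiance * area  # total power landing on the field, W
force = radiant_power / c          # radiation force for fully absorbed light, N

dime_mass = 2.268e-3               # kg
dime_weight = dime_mass * g        # N

print(f"light force: {force:.4f} N")
print(f"dime weight: {dime_weight:.4f} N")
```

The two forces come out within about 10% of each other, confirming the dime comparison.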
{ "source": [ "https://physics.stackexchange.com/questions/688085", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/325073/" ] }
688,461
If we sprinkle iron particles on a cardboard sheet where a bar magnet is kept and tap the board gently, then the particles get arranged in a way that they look like field lines. But I am confused: why do we have to tap on the board? Why won't they get arranged like that normally? (Sorry for this stupid question, I have started studying proper magnetism recently.)
It’s like shaking a measuring cup half full of sugar to make it level out—in both cases there’s an energetically favored configuration you’re trying to reach, but without agitation, friction prevents the grains from moving to that configuration. Each time you tap the cardboard or shake the cup, you give the grains a new opportunity to settle in a new position, and the magnetic/gravitational forces, though not strong enough to overcome friction on their own, determine the end configuration.
{ "source": [ "https://physics.stackexchange.com/questions/688461", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/324377/" ] }
688,598
Imagine I am inside an isolated rocket of arbitrarily small size, and I have a spinning flywheel right next to me. Now suppose my rocket passes through the event horizon / Schwarzschild radius of a simple Schwarzschild black hole. By the equivalence principle, I should not notice myself and the rocket passing through the event horizon. However, since classically no object can escape the black hole once it passes the event horizon, it seems as though the flywheel should break as it passes through the event horizon, because for every piece going one way, the antipodal piece of it goes the opposite direction. Once the flywheel is half-way through the event horizon, the part of the flywheel inside the black hole cannot come out even though it must rotate, so it seems as though a part of the flywheel would split in half . How does this square with the equivalence principle? I am aware that the equivalence principle only applies locally in the limit of smaller and smaller regions. For example, tidal effects can allow you to distinguish regions with gravity and regions without gravity. However, I don't think that's enough to resolve my quandary. We can assume the black hole is sufficiently large so that no issues of tidal effects or spaghettifications occur. We can make the black hole as large as we like and the rocket as small as we like to remove second-order gravitational effects, and it seems like my paradox involving the flywheel crossing the Schwarzschild radius still exists. Am I wrong in this assertion?
since classically no object can escape the black hole once it passes the event horizon, it seems as though the flywheel should break as it passes through the event horizon, because for every piece going one way, the antipodal piece of it goes the opposite direction. This analysis is incorrect. The event horizon is a lightlike surface. In a local inertial frame it moves outward at c. So while it is true that there is an antipodal piece going the other way it doesn’t matter. The antipodal piece is going slower than c in the local inertial frame. So the horizon is going faster and the antipodal piece cannot possibly cross back through the horizon. The flywheel continues spinning without interruption and without risk of crossing the horizon backwards.
{ "source": [ "https://physics.stackexchange.com/questions/688598", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/123113/" ] }
689,057
What kind of matter is positronium ? Normal matter, antimatter, exotic matter or something else? We know: Matter is made up of electrons, protons and neutrons. Antimatter is made up of positrons , antiprotons and antineutrons . But positronium is made up of an electron and a positron. Where an electron is a normal particle and a positron is an antiparticle . So, how do we classify positronium? As normal matter or antimatter or a hybrid of both?
You should consider positronium as evidence that your partition of the universe into matter and antimatter is overly simplistic. You write that Matter is made up of electron, proton, and neutron. But that only describes stable matter, and only stable matter that happens to interact with electromagnetism. A census of stable matter should also include the neutrino; we leave it out of chemistry classes because it doesn’t form molecules. (However it is possible that neutrinos indirectly influence the stability of molecules .) An inventory of “electromagnetic matter” should also include the mu and tau leptons, and baryons like the lambda and sigma which contain heavy quarks. If you complain that particles from the second and third generations are exotic because they are unstable … well, the neutron is unstable. Should your list of “matter” be “electrons, protons, and nuclei”? Do unstable nuclei like tritium and carbon-14 count as “normal” matter, or are they “exotic” until they decay into helium-3 or nitrogen-14? (And of course those decays send an anti -neutrino off in the process, to be someone else’s classification problem.) In order to compute the energy eigenstates of positronium, and their properties like spin, parity, and lifetime, we use the same tools that intro-quantum students use to describe the hydrogen atom. So in the “quacks like a duck” sense, it makes sense to call positronium an “atom.” Because positronium doesn’t really fit on the periodic table, and because it doesn’t make a big contribution to chemistry, you might call it an “exotic” atom. But “exotic” might give you the mistaken impression that positronium is uncommon or hard to find, in the same way that I’m probably never going to encounter a rhinoceros or a scarlet macaw unless I make some special arrangements. That’s not really the case. Formation of positronium is a normal step in the annihilation of fast positrons in matter , and is therefore no less common than positrons. 
If you are a human person who is partially made of potassium from Earth, you contain positronium many times per hour. In quantum electrodynamics, we learn that the electron and positron are really excitations of the same field: a four-component spinor with two charge states and two spin states. In a very real sense, positronium is the simplest “matter-ful” state of electromagnetism, and is mathematically simpler than the more familiar state where there are lots of electrons and not very many positrons. This perspective has important physical consequences. The force-carrying photon can be said to “spend part of its time” as a virtual electron-positron “loop.” The loop corrections to electromagnetism are related to observables like the change in the effective electron charge at short distances (more frequently called the “running of the fine-structure constant” due to “vacuum polarization”), as well as corrections to the magnetic moment of an electron at rest. I disagree with your other answer and your comments that the classification is “duzzit matter” or that your question is silly. That punts on the issue. Near the middle of his excellent Ancestor’s Tale , Dawkins writes about “the tyranny of the discontinuous mind.” Categorizations are extremely useful, he says; but sometimes categorization is impossible, or categories which are useful in one context are useless in another. Recognizing when this has happened is an opportunity to learn interesting things.
{ "source": [ "https://physics.stackexchange.com/questions/689057", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/325462/" ] }
689,129
This is a follow-up question based on some discussions in one of my other questions posted here . Imagine a standard Schwarzschild black hole of sufficiently large size so that tidal forces are negligible at the event horizon. The question is, when a spatially extended object enters the black hole, does the event horizon surface sweep past the object or do all parts of the object fall into the black hole at the same instant? In one of the answers, Dale says, The event horizon is a lightlike surface. In a local inertial frame it moves outward at c. So while it is true that there is an antipodal piece going the other way it doesn’t matter. The antipodal piece is going slower than c in the local inertial frame. So the horizon is going faster and the antipodal piece cannot possibly cross back through the horizon. However, in the comments of my question, safesphere says, The flaw in your question is assuming that things cross the horizon gradually. For example, if you fall feet forward, you assume your feet cross the horizon before your head. This is incorrect. This thinking follows the intuition based on the flat spacetime. The horizon is not a place, it is not spacelike, but lightlike. This means that your head and your feet cross at the same instant. The entire flywheel, no matter how large, crosses the horizon all at once. There is no “front” or “back” when you cross. There is no direction in space pointing “back” to the horizon from the inside. [...] See Kevin Brown: “ One common question is whether a man falling (feet first) through an even horizon of a black hole would see his feet pass through the event horizon below him. As should be apparent from the schematics above, this kind of question is based on a misunderstanding. Everything that falls into a black hole falls in at the same local time, although spatially separated, just as everything in our city is going to enter tomorrow at the same time. 
” - Falling Into and Hovering Near A Black Hole In coordinates of any external observer, no matter where he is or how he moves, nothing crosses or touches the horizon ever. Thus in the coordinates of your head, your feet don’t cross for as long as your head is outside. Therefore your feet and your head cross at the same proper time of your head. Also, everything that ever falls crosses at the same coordinate time of r=rs where r is the coordinate time inside the horizon in the Schwarzschild coordinates. Note that in these coordinates everything becomes infinitely thin in the radial direction near the horizon and crosses all at once. Perhaps the best description is as follows, because it is both intuitive and coordinate independent. Consider two objects falling along the same radius one after the other. Consider the events of these objects crossing the same given radius. These events are timelike separated outside, spacelike separated inside, and lightlike separated at the horizon. So the spacetime interval between the events of two sides of your flywheel crossing the horizon is zero (or “null” as it is commonly called) in any coordinate system. These two quotes seem like they contradict each other. Which one is correct? Questions I want to do some analysis of my own involving Eddington-Finkelstein coordinates and Kruskal–Szekeres coordinates, but I am short on time today. I intend to update my post with more info later. In order to even make the claim that all parts of an object pass the event horizon at the same instant, we need a notion of local simultaneity. What notion of local simultaneity is safesphere using in their claim? User safesphere says, " This means that your head and your feet cross at the same instant. The entire flywheel, no matter how large, crosses the horizon all at once. " But I don't understand this claim. 
If you send an arbitrarily long pole, say extending from Earth to Sagittarius A*, this claim seems to imply that if one end of the stick crosses the Schwarzschild radius, then entire pole would be instantaneously sucked into the black hole at once. This violates the speed of light limit and causality. Doesn't this thought experiment demonstrate that your head and feet don't cross the horizon all at once? Is the conflict between the two quotes a matter of using different coordinates? Could both be true? Or is there something deeper? Edit: The comments on that question have been moved to the discussion here . Please help me understand.
Dale is right. Falling feet-first through an event horizon means the horizon sweeps over you in the feet-to-head direction at the speed of light, which means your feet cross first. The event of your feet crossing is in the causal past of the event of your head crossing. safesphere writes a lot of comments on questions and answers related to general relativity, despite not seeming to understand the subject well. I don't see any interpretation of the comment that can make it correct. Rather than trying to answer your three questions, I'll respond to parts of the comment. if you fall feet forward, you assume your feet cross the horizon before your head. This is incorrect. This thinking follows the intuition based on the flat spacetime. The intuition based on flat spacetime is correct, because the spacetime in the region of interest is close to flat (if we're talking about a human being falling into a stellar-mass black hole, at least). In coordinates of any external observer, no matter where he is or how he moves, nothing crosses or touches the horizon ever. Thus in the coordinates of your head, your feet don’t cross for as long as your head is outside. Coordinate systems are just ways of assigning numeric labels to spacetime points, and can only be more or less useful, not more or less correct, for any observer. There is no such thing as the coordinate system of your head. Judging from this and some other comments, safesphere believes that different observers occupy different "private universes," and it can be true in one observer's coordinate system that an object crosses the horizon and in another's that it never does. That simply isn't true. Schwarzschild coordinates don't cover the event horizon. $r=2M$ is not the horizon, but a coordinate singularity. If you plot a worldline that crosses the horizon on a Schwarzschild chart, it has to leave the chart to reach the horizon, and the actual crossing can't be seen. 
This has no bearing on whether the worldline crosses the horizon on the physical manifold (which it does by assumption). Also, everything that ever falls crosses at the same coordinate time of r=rs where r is the coordinate time inside the horizon in the Schwarzschild coordinates. Note that in these coordinates everything becomes infinitely thin in the radial direction near the horizon and crosses all at once. Nothing crosses the horizon in Schwarzschild coordinates at any $t$ or $r$ . The limit that safesphere is trying to take here doesn't make sense; the coordinates don't behave sensibly in this limit. Perhaps the best description is as follows [...] So the spacetime interval between the events of two sides of your flywheel crossing the horizon is zero (or “null” as it is commonly called) in any coordinate system. Most of this paragraph is technically correct. The crossing events are lightlike separated. It doesn't really make sense to talk about the interval between them, except in the flat-space limit, nor to say "in any coordinate system" since this isn't related to coordinates in the first place. But none of that really matters. Lightlike separated or not, one of the events causally precedes the other. This is no different from the causal relationship of events at $(x,t)=(0,0)$ and $(x,t)=(1,1)$ in Minkowski space.
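The closing Minkowski example can be made completely explicit; a tiny numerical check in 1+1 dimensions, with units where $c = 1$:

```python
# Spacetime interval between the events (t, x) = (0, 0) and (1, 1) in
# Minkowski space, signature (-, +), units with c = 1.
t0, x0 = 0.0, 0.0
t1, x1 = 1.0, 1.0

interval_sq = -(t1 - t0)**2 + (x1 - x0)**2
print(interval_sq)   # 0.0: the events are lightlike separated ...
print(t1 > t0)       # ... yet the first still causally precedes the second: True
```

A vanishing interval does not mean "simultaneous"; a light signal emitted at the first event arrives exactly at the second, so the causal order is unambiguous.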
{ "source": [ "https://physics.stackexchange.com/questions/689129", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/123113/" ] }
689,253
I have seen books and papers mentioning "In the semiclassical limit, $\hbar$ tends to zero", "the scaled Planck's constant goes as $1/N$ where $N$ is the Hilbert space dimension", etc. Could anyone explain these variable values taken by Planck's constant? What is the idea behind them? For all I know, $\hbar$ is a constant equal to $1.05 \times10^{-34} \ [\mathrm{m^2 \cdot kg/s}].$
In a purely classical (Newtonian) universe, quantum effects would be absent, and the way to pretend this is true mathematically is to allow Planck's constant to approach zero, and see what the consequences are. Similarly, in a classical universe, relativistic effects would not exist, and this would be expressed by letting c approach infinity. This does not mean that $\hbar$ or c are not constants: it means we can see the effects of removing quantum mechanical or relativistic considerations from our mathematical description of the world by assigning a value of zero to those constants, and seeing what happens.
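One way to make the limit concrete is to compare the quantum of energy $\hbar\omega$ of an oscillator with a macroscopic energy scale; a small sketch (the 1 Hz pendulum and its 1 mJ energy are made-up illustrative values):

```python
# When hbar is negligible compared with every energy scale in the problem,
# quantum effects are invisible -- which is what "hbar -> 0" formalizes.
import math

hbar = 1.055e-34                  # J s
omega = 2.0 * math.pi * 1.0       # rad/s, a 1 Hz pendulum

quantum_of_energy = hbar * omega  # smallest allowed energy step, ~6.6e-34 J
macroscopic_energy = 1e-3         # J, a gently swinging pendulum

print(quantum_of_energy / macroscopic_energy)  # ~6.6e-31: utterly negligible
```

For this pendulum, pretending $\hbar = 0$ changes nothing observable, which is why Newtonian mechanics works; for an electron in an atom the same ratio is of order one, and the limit fails.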
{ "source": [ "https://physics.stackexchange.com/questions/689253", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/304347/" ] }
690,066
Can antiparticles only be created in pair production? Which laws of physics prohibit the direct conversion of, say, an electron to a positron? A neutron to an antineutron? I have seen a comment that it is thermodynamically impossible. True? How exactly? Are there any other ways it is theoretically impossible?
It depends on the particular kind of particle. Assuming the Standard Model holds, then: Electrons can't convert to positrons because that would violate conservation of charge. Neutrons can't convert to antineutrons because that would violate conservation of baryon number. When permitted by conservation laws, any particle can and generally does convert to its antiparticle. Examples include neutral kaons , $D^0$ mesons , and $B^0$ mesons . Kaon oscillations in particular have been measured with exquisite sensitivity, and provide some of the strongest known constraints on physics beyond the Standard Model. The story for neutrinos is more complicated, but to oversimplify it a bit, we still don't know whether they can or can't convert to their antineutrinos.
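The bookkeeping behind these rules can be sketched as a table of conserved quantum numbers; a minimal illustration (the `conversion_allowed` helper is purely hypothetical, and only the strict charges Q, B, L are checked):

```python
# Quantum numbers (electric charge Q, baryon number B, lepton number L)
particles = {
    "e-":    (-1, 0, +1),
    "e+":    (+1, 0, -1),
    "n":     ( 0, +1, 0),
    "nbar":  ( 0, -1, 0),
    "K0":    ( 0, 0, 0),
    "K0bar": ( 0, 0, 0),
}

def conversion_allowed(initial, final):
    """True if every strictly conserved quantum number matches."""
    return particles[initial] == particles[final]

print(conversion_allowed("e-", "e+"))     # False: Q and L would both change
print(conversion_allowed("n", "nbar"))    # False: B would change
print(conversion_allowed("K0", "K0bar"))  # True: all strict charges are zero
```

Strangeness does distinguish the $K^0$ from the $\bar K^0$, but strangeness is not conserved by the weak interaction, which is exactly why the oscillation can proceed.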
{ "source": [ "https://physics.stackexchange.com/questions/690066", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/174726/" ] }
690,259
I know that an object can become net negative or net positive by losing or gaining electrons, and having more or fewer protons than electrons but why can't protons be transferred too?
If an atom gained a proton, it would become a different atom. For example, if a hydrogen atom gained a proton, it would become a helium atom (forget for a moment that the helium you find in nature also has 2 neutrons). With this in mind, it is perfectly possible to have a process that changes the number of protons, but as a result, we get a different atom (a particle with a different name). It is also important to note that it is far easier to exchange electrons than protons. This is because we extract an electron by overcoming the Coulomb force, while the proton is bound by the nuclear force (thus, the processes in which this occurs are called nuclear processes).
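The "far easier" claim can be put in numbers by comparing the two binding-energy scales; a rough sketch (the 19.8 MeV proton separation energy of helium-4 is a standard nuclear-data value, added here for illustration):

```python
# Energy needed to remove an electron (Coulomb binding) vs. a proton (nuclear binding)
ionization_hydrogen_eV = 13.6      # first ionization energy of hydrogen, eV
proton_separation_he4_eV = 19.8e6  # proton separation energy of helium-4, ~19.8 MeV

ratio = proton_separation_he4_eV / ionization_hydrogen_eV
print(f"removing a proton costs ~{ratio:.0e} times more energy")  # ~1e+06
```

An eV-scale process (chemistry, friction, rubbing a balloon) can therefore shuffle electrons freely but cannot touch the nucleus; that takes MeV-scale nuclear reactions.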
{ "source": [ "https://physics.stackexchange.com/questions/690259", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/325926/" ] }
690,267
From my understanding of quantum mechanics, when a wavefunction is observed, it collapses into a single state instantaneously (or at least within the length of a Planck time). Is there a reason it has to take no time? Could it be briefly observed in a different state before settling into its collapsed state?
It’s important to remember that quantum mechanics is a tool that we use to describe the world — it is not the same as the world. For all that we love to talk about wavefunctions, it’s not clear at all that the wavefunction is a real thing. Questions like “can the wavefunction collapse instantaneously?” highlight this. Let’s consider a specific example where nonrelativistic quantum mechanics and the Copenhagen probability interpretation of the wavefunction are both useful: a neutron interferometer. The fundamentally quantum-mechanical feature of any interferometer is that incident particles take two paths to the same destination, which you can demonstrate using interference effects. Neutron interferometers have the pedagogical advantage that, in every existing implementation , the number of neutrons in the interferometer at once is always zero or one. A neutron in an interferometer has an intrinsic (strong-interaction) radius of about a femtometer. Its wavevector along its direction of motion is characterized by its de Broglie wavelength, typically a few angstroms. Perpendicular to its direction of motion, in a slice through the middle of the interferometer, the neutron’s probability distribution corresponds to a two-well potential: the neutron may be found in this arm or that arm of the interferometer, but not in the middle. The forbidden middle can be macroscopic. In a neutron interferometer it is centimeters. (In the LIGO optical interferometers, each laser’s path is several kilometers, and the probability of detecting a laser photon in the Louisiana swampland outside of the vacuum system is zero.) You can establish this neutron wavefunction by solving the Schrödinger equation, $$ \left( \frac{\hat p{}^2}{2m} + V(\vec x) \right)\psi(\vec x, t) = \hat E\psi(\vec x, t) $$ for a three-dimensional potential $V(\vec x)$ . 
Your potential must correspond to the material which forms the interferometer (a lattice of silicon nuclei), plus any “sample” present in one of the arms, plus, ending somewhere upstream, a two-dimensional infinite well to explain that neutrons cannot originate from outside of the beamline. Solving this equation gives the probabilities that neutrons will be observed in each of the detectors downstream from the interferometer, and predicts how those probabilities change if the “sample” is adjusted. An interferometer experiment measures those probabilities by counting lots of neutrons at those downstream detectors, then adjusting the sample and measuring the new distributions. When we say that “an observation collapses the wavefunction,” what we usually mean is the following: if we were to stick detectors inside the interferometer, where we have demonstrated that a single neutron follows both paths, we would never observe the same neutron being “classically detected” in both arms. Even in the case where the detection opportunities are spacelike-separated, so that relativity suggests they cannot influence each other, the probability of “detecting” the same neutron in both arms is zero. So the Copenhagen interpretation makes an ad-hoc adjustment to account for this by introducing the idea of collapse. There are some kinds of interactions which change the wavefunction “instantaneously,” using a mechanism to be worked out later. Pop-science books make a lot of hay out of instantaneous collapse because, in relativity, nothing is instantaneous. This is a little silly, because the Schrödinger equation is non-relativistic. An early hint at a way out of this puzzle came from Mott, 1929 ( see also ): if the detection events are also quantum-mechanical, the behavior of “classical” measurements becomes a question of correlated probabilities. 
There has been an enormous amount of research on the subject, especially in the last twenty years, using relativistic quantum mechanics, “weak measurements,” and sneaky business about entanglement. You currently have another answer which links to two papers from 2019 and 2020, and suggests that the mystery has mostly been wrung out of this historic puzzle.
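As a sanity check on the neutron numbers quoted earlier (a de Broglie wavelength of a few angstroms), here is the standard thermal-neutron estimate (the 2200 m/s reference speed is a conventional value, not a figure from the answer):

```python
# de Broglie wavelength of a thermal neutron: lambda = h / (m v)
h = 6.626e-34    # Planck constant, J s
m_n = 1.675e-27  # neutron mass, kg
v = 2200.0       # m/s, conventional thermal-neutron reference speed

wavelength = h / (m_n * v)
print(f"lambda = {wavelength * 1e10:.2f} angstroms")  # ~1.8 A, i.e. "a few angstroms"
```

Comparing this sub-nanometre wavelength with the centimetre-scale arm separation makes the point of the answer vivid: the neutron's wave nature spans a region ten orders of magnitude larger than its wavelength.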
{ "source": [ "https://physics.stackexchange.com/questions/690267", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/321878/" ] }
691,489
Would the emission lines of a very distant galaxy show not many traces of heavier elements as that part of universe was very young? Or was there enough time for an abundance of heavier elements?
The short answer is yes, but the longer answer is more complex. There is a general trend that more distant galaxies are poorer in metals. However, there is a large scatter in such a relationship because age is not the only important parameter.

Figure 1, taken from Maiolino & Mannucci (2018), but originally appearing in Zahid et al. (2014), shows the general trend that in well-defined redshift samples, more distant galaxies of the same mass are poorer in metals; i.e. you have to compare galaxies of similar mass. This is important because galaxies are probably built up from smaller building blocks and the average galaxy in the past is smaller than the average galaxy now, and the overall mass of a galaxy appears to play a role (or perhaps the mergers of small galaxies play a role) in generating star formation and metal enrichment.

Figure 1: The dependence of oxygen abundance (on a logarithmic scale and as a proxy for metallicity) on galaxy mass for samples at increasing redshift. The data have been binned in mass, producing rather smooth relationships. (From Zahid et al. 2014).

Taking studies to higher redshifts is difficult for several reasons. The galaxies are faint, and adequate diagnostics of the metallicity that can be observed consistently over a wide range of redshifts are hard to find. Additionally, it matters where in a galaxy you make the measurement, since there are gradients of metallicity within a galaxy. Observations of unresolved, distant galaxies can be difficult to compare with closer galaxies, especially if different tracers are being used because of the different redshifts. Some progress can be made by studying Damped Lyman alpha Absorbers (DLAs) - clouds of gas/forming galaxies seen along sight lines to very high redshift quasars.
Some of these have very low inferred metal abundances (almost down to a thousandth of solar at redshifts $>3$ ), and what is present appears to be more enriched in the "alpha elements" (O, Mg, Si) that are primarily produced by short-lived massive stars and their type II supernovae explosions. Figure 2, taken from Berg et al. (2016) , shows the logarithmic metal abundances (with respect to the Sun) of a set of DLAs from their work and the literature. Figure 2: The metallicities (determined from a variety of diagnostics) for individual damped Lyman alpha absorber systems as a function of their redshift (from Berg et al. 2016 ). A problem with trying to resolve the very beginnings of star formation and metal enrichment is that it tends to be massive star formation that makes it possible to see a galaxy. And because the first stages of enrichment, caused by the first short-lived massive stars and their supernovae, are fast, it is difficult to catch galaxies "in the act". Thus I think the answer to your question is that the observations of very high redshift galaxies ( $z>5$ ) where we might hope to see the beginnings of enrichment with metals are still not good enough to trace this evolution. This is an area where the James Webb Space Telescope (JWST) is expected to make a significant contribution. Its mid-infrared spectroscopy can probe metallicity indicators in distant galaxies with $z>5$ using diagnostics at the same rest-frame wavelengths that are probed by optical/near-IR telescopes at lower redshifts (e.g. see this Science Case "white paper" ), and with excellent spatial resolution and sensitivity (e.g. the OIII and NII rest-frame optical emission lines).
{ "source": [ "https://physics.stackexchange.com/questions/691489", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/253499/" ] }
691,509
I know two different descriptions of how light propagates in space; (1) like particles traveling and reflecting in straight lines. And (2) like waves spreading and interfering in space. And that both these descriptions are true. It seems to me that scenario (1) is how I perceive the world. I can see things from which the light is reflected into my eyes in a straight line, but I cannot see behind opaque objects, around corners etc. But if scenario (2) is an equally or more correct description of how light behaves, spreading like waves, filling space, interfering etc.: how come the light hitting my eyes is not equally likely to have travelled from behind objects and around corners? I.e. if this is the true description, all I would expect see is a bright blur, with no way of telling from where the light hitting my eyes have originated. Any enlightening (zing!) answers much appreciated. Edit: maybe a clearer way to phrase my question is: can light change direction in empty space by interacting with itself?
The bending of waves around corners is known as “diffraction,” and its natural length scale is the wavelength of the diffracted wave. So if you want to block the sound from a speaker playing a middle C, with a wavelength in air of about a meter, then you need an obstacle which is many meters across. (A building is a good size.) But to block visible light, with sub-micron wavelength, a millimeter-scale object is a sufficiently enormous obstacle.
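The two wavelengths mentioned can be checked quickly (343 m/s and 261.6 Hz are assumed standard values for the speed of sound in air and for middle C):

```python
# Diffraction length scales: sound vs. visible light
v_sound = 343.0      # m/s, speed of sound in air at ~20 C
f_middle_c = 261.6   # Hz
lambda_sound = v_sound / f_middle_c
print(f"middle C wavelength: {lambda_sound:.2f} m")  # ~1.3 m -> building-sized obstacles

lambda_light = 500e-9  # m, green light
print(f"visible light wavelength: {lambda_light * 1e9:.0f} nm")
print(f"ratio: {lambda_sound / lambda_light:.1e}")   # sound ~2.6 million times longer
```

The six-orders-of-magnitude ratio is the whole story: everyday objects are huge compared with optical wavelengths, so light appears to travel in straight lines, while the same objects are small compared with audible wavelengths, so sound bends around them.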
{ "source": [ "https://physics.stackexchange.com/questions/691509", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/231168/" ] }
691,553
Some time ago, I read about a certain isotope that is stable when neutral but decays with electron emission (beta) when being completely ionized, but I can't find which one it was. Which isotope decays when fully ionized?
I guess you have read about an isotope decaying via electron capture (which is kind of an inverse $\beta$ decay). There are quite a few radioisotopes that decay in this way, for example: $${}^{59}\text{Ni} + e^- \to {}^{59}\text{Co} + \nu$$ $${}^{40}\text{K} + e^- \to {}^{40}\text{Ar} + \nu$$ In this decay the atomic nucleus captures one of the surrounding electrons (usually one from the innermost $K$ shell). Of course this process can only happen if the atom or ion has at least one electron. It cannot occur if the ion is completely ionized, i.e. it has no electrons at all.

Another quite rare phenomenon is bound-state $\beta^-$ decay. Here the created anti-neutrino takes almost all of the decay energy, and the created electron gets very little energy, so that it fails to escape the atom and instead settles into an atomic orbital. You have probably read about the completely ionized rhenium ion, which decays quickly (with half-life $32.9$ years) by bound-state $\beta^-$ decay $${}^{187}\text{Re}^{75+} \to {}^{187}\text{Os}^{76+} + e^- + \bar{\nu}$$ whereas the neutral rhenium atom is almost stable (with half-life $42$ billion years) $${}^{187}\text{Re} \to {}^{187}\text{Os}^+ + e^- + \bar{\nu}$$ This particular rhenium isotope ${}^{187}$Re has a very small $\beta^-$ decay energy (only about $3$ keV). This energy (or more precisely: the energy part delivered to the electron, not to the antineutrino) is not enough for the electron to escape the neutral atom. And it cannot find a place in the shell because all electron orbits of the atom are already occupied. But when the ion is fully ionized (i.e. all electrons stripped off), then the energy is enough for the electron to reach a low electronic orbital of the ion. See also the original article by Bosch et al. (1996) "Observation of Bound-State $\beta^-$ Decay of Fully Ionized ${}^{187}$Re: ${}^{187}$Re- ${}^{187}$Os Cosmochronometry".
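The size of the effect can be quantified from the two half-lives quoted above:

```python
import math

# Half-life of 187Re: neutral atom vs. fully stripped ion
t_half_neutral = 42e9  # years, ordinary continuum beta decay
t_half_bare = 32.9     # years, bound-state beta decay dominates

# Decay constant: lambda = ln(2) / t_half, so the rate enhancement is the
# ratio of the half-lives.
lam_neutral = math.log(2) / t_half_neutral
lam_bare = math.log(2) / t_half_bare

print(f"decay-rate enhancement on full ionization: {lam_bare / lam_neutral:.2e}")  # ~1.3e9
```

A factor of about a billion in decay rate, achieved purely by removing the electron cloud, is why this system is such a striking demonstration that the atomic environment can control a "nuclear" process.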
{ "source": [ "https://physics.stackexchange.com/questions/691553", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/10606/" ] }
691,966
I am wondering why do we really need the concept of tensor. I think it is like vectors, just as a notation of a set of related parameters. I could write the Navier–Stokes equations with scalars, or vectors, or tensors. If this is the case, we struggle to learn the algebra and calculus rules of tensors only to simplify the notation of a complex equation. My question is: Are there some examples to show the power of tensors instead of just simplifying the notation? I know that when I use tensors of rank 3 or higher, it is often hard to use a matrix (equivalent to a rank-2 tensor). Tensors do make life easier in this kind of situation. Here is a quote from the preface to Wilhelm Flügge's Tensor Analysis and Continuum Mechanics : Many of the recent books on continuum mechanics are only "tensorized" to the extent that they use cartesian tensor notation as a convenient shorthand for writing equations. This is a rather harmless use of tensors. The general, noncartesian tensor is a much sharper thinking tool and, like other sharp tools, can be very beneficial and very dangerous, depending on how it is used. Much nonsense can be hidden behind a cloud of tensor symbols and much light can be shed upon a difficult subject. The more thoroughly the new generation of engineers learns to understand and to use tensors, the more useful they will be.
It is well known that the exercise of logic never adds to our knowledge: its role is to make a certain aspect of that knowledge clearer or more explicit, while keeping all the rest conveniently out of our sight. This is a quote by Tommaso Toffoli, in "Entropy? Honest!". Entropy 18 , 247 (2016). doi: 10.3390/e18070247 . Reading your question reminded me of it because it is pretty much the point: we don't need to write our equations in tensorial form, we could indeed just write them in terms of each individual component (important note: those would not be scalars, for they wouldn't transform as scalars). However, this often hides interesting properties of what we are dealing with that might make our life considerably easier (see, e.g., this brilliant answer by Terence Tao on a similar question in Math Overflow). In the case of tensors, our main interest is that their components have well-defined transformation properties under changes of coordinates and are, deep down, geometrically invariant, and that allows you to see and comprehend much more than you otherwise would. For example, take the expression $\mathbf{F} = m \mathbf{a}$ in Classical Mechanics. This is written in terms of rank-1 tensors (vectors), but we could write it as three equations in components. It would read $$ \left\lbrace \begin{aligned} F_x &= m a_x, \\ F_y &= m a_y, \\ F_z &= m a_z, \end{aligned} \right. $$ where I employed Cartesian coordinates. Suppose you have already measured $F_x$ , $F_y$ , and $F_z$ experimentally and now want to determine the acceleration in each direction. However, you notice you messed up your setup: you actually wanted the components of the acceleration along axes rotated by $45°$ around the $z$ axis with respect to the system you chose.
You then compute the transformation and find that the previous equations, after the change of axes, read $$ \left\lbrace \begin{aligned} \frac{\sqrt{2}}{2} F_x - \frac{\sqrt{2}}{2} F_y &= m a_{x'}, \\ \frac{\sqrt{2}}{2} F_x + \frac{\sqrt{2}}{2} F_y &= m a_{y'}, \\ F_z &= m a_{z'}, \end{aligned} \right. $$ assuming I didn't get anything wrong. Had you chosen to work with vectors, the equation would read $$\mathbf{F} = m \mathbf{a},$$ because while vector components transform under a change of basis, vectors themselves are geometrical objects which do not depend on the basis. Of course, this is just an example: you'd have to compute the components to get to your final answer either way. However, notice how writing in vector notation automatically employs the invariance properties of vectors. You can choose to write component-wise, but that won't destroy the vectors that lie beneath your computations, it just hides them. The components still transform as vectors and the vectors are still there; you are just not looking at them, and, as a consequence, you are missing some information that could make things way simpler. The point is not that they are a way of organizing calculations (you'll often have to work with the components sooner or later), but that there are "hidden" symmetries and properties that become explicit once you cast the formulae into an appropriate notation. At this point, it is worth pointing out the comment made by Dvij D.C. (which I'm adding here in case it is deleted in the future): Tensors not only make the underlying symmetries more manifest but they are the only basis-independent/coordinate-independent way of expressing physics. The tensor components $T_{\mu\nu}$ transform under a change of basis but the tensor itself $\textbf{T} = T_{\mu\nu} \textbf{e}^{\mu} \otimes \textbf{e}^{\nu}$ remains invariant. Notice how this is similar to us writing Newton's second law in components and in vector form.
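This rotation argument is easy to check numerically. Below is a small sketch (not part of the original answer; the force and acceleration values are arbitrary illustrative numbers): rotating the components of both $\mathbf F$ and $\mathbf a$ with the same matrix leaves $\mathbf F = m\mathbf a$ intact component-by-component, and a dot product is unchanged by the rotation.

```python
import math

def rotate(v, theta):
    """Components of the 2D vector v in axes rotated by theta (passive rotation)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] + s * v[1], -s * v[0] + c * v[1])

m = 2.0
a = (0.3, -1.1)                        # illustrative acceleration components
F = (m * a[0], m * a[1])               # Newton's second law in the original axes

theta = math.pi / 4                    # the 45-degree rotation from the text
F_rot = rotate(F, theta)
a_rot = rotate(a, theta)

# F = m a still holds component-wise in the rotated axes
residual = max(abs(F_rot[i] - m * a_rot[i]) for i in range(2))

# and the dot product p.q is invariant under the rotation
p, q = (1.0, 2.0), (-0.5, 4.0)
dot_before = p[0] * q[0] + p[1] * q[1]
p_r, q_r = rotate(p, theta), rotate(q, theta)
dot_after = p_r[0] * q_r[0] + p_r[1] * q_r[1]
```

The invariance is exact up to floating-point roundoff, whatever angle you pick, which is the numerical face of "vectors are geometrical objects".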
Vectors are simply rank-1 tensors, and when employing them our equations are automatically expressed in a way that is invariant under rotations. As a second example, consider the expression $\mathbf{p \cdot q}$ written in two different notations: first in Einstein notation, which reads $$\mathbf{p \cdot q} = p_i q^i,$$ and second in Cartesian coordinates, $$\mathbf{p \cdot q} = p_x q_x + p_y q_y + p_z q_z.$$ Both expressions are tautological. Neither of them adds any intrinsic knowledge to us. However, the expression in Einstein notation makes it clear that the object is invariant under rotations, while the second one has a lot of objects that transform in a sort of complicated way, and you are not sure whether the expression would change if you chose to work with different coordinates. In principle, one can do everything we do with tensors component-wise. It is similar to searching for a needle in a haystack: you can do it by looking carefully and eventually you'll find it, but you can also use the extra knowledge that the needle is made of iron and use a magnet.
Extra Examples
Quantum Mechanics
One other example of the usage of tensors in Physics, although in a particularly specific notation, is within Quantum Mechanics. For simplicity, I'll stick to the rotation example. Suppose you made a measurement, noticed you should have rotated your coordinates before, and so on. If you denote the states in terms of wavefunctions, you'll figure out that the probability for a particle prepared in the state $\psi(x)$ to be measured in the state $\phi(x)$ is $$P(\phi|\psi) = \left\vert \int \phi^*(x) \psi(x) \mathrm{d}^3{x} \right\vert^2$$ before the rotation and, after the rotation, $$P'(\phi|\psi) = \left\vert \int \phi'^*(x) \psi'(x) \mathrm{d}^3{x} \right\vert^2,$$ where $\psi'$ is the wavefunction $\psi$ after the rotation of the system of coordinates.
Opening up the appropriate expressions for $\psi'$ and $\phi'$ , one will eventually find out that the probability is, of course, the same. In Dirac notation, however, which employs the fact that the objects we're dealing with are vectors, one would write $$P(\phi|\psi) = \vert\langle \phi \vert \psi \rangle\vert^2$$ before the rotation and $$P'(\phi|\psi) = \vert\langle \phi' \vert \psi' \rangle\vert^2$$ after it. Since rotations are implemented in Quantum Mechanics by means of unitary operators, one has $\vert\psi'\rangle = U \vert\psi\rangle$ for some unitary operator $U$ . Hence, the rotated version is $$P'(\phi|\psi) = \vert\langle \phi' \vert \psi' \rangle\vert^2 = \vert\langle \phi \vert U^\dagger U \vert \psi \rangle\vert^2 = \vert\langle \phi \vert \psi \rangle\vert^2 = P(\phi|\psi),$$ and the result is immediate using properties of linear algebra, without the need to manipulate anything with calculus. Another example is how one can solve the quantum harmonic oscillator in terms of ladder operators instead of solving differential equations. It exploits the linear structure hidden in the wavefunctions and allows one to get a solution with much less labor.
Relativity
Expressions written in terms of tensors in Relativity are assured to work in all reference frames, since tensors are geometrical objects. This is similar to my previous example of how working in vector notation made taking the rotations into account much easier. Take, for example, the famous expression $$E^2 = p^2 c^2 + m^2 c^4.$$ In this form, it is not obvious that this formula holds in any inertial frame (unless, of course, you are already well acquainted with Relativity and know the fact by heart). However, the same formula can be written as $$p^\mu p_\mu = m^2 c^2,$$ where $p^\mu = (\frac{E}{c}, \mathbf{p})^\intercal$ , which explicitly shows the expression is invariant.
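The invariance of $p^\mu p_\mu$ can also be checked numerically. A small sketch (not from the original answer; units with $c = 1$, signature $(+,-,-,-)$, and an arbitrary mass, momentum, and boost velocity):

```python
import math

m = 0.511                                   # illustrative mass (units with c = 1)
px, py, pz = 0.3, -0.2, 1.4                 # arbitrary 3-momentum
E = math.sqrt(px**2 + py**2 + pz**2 + m**2)  # on-shell energy

def minkowski_sq(E, px, py, pz):
    """p^mu p_mu with signature (+,-,-,-)."""
    return E**2 - px**2 - py**2 - pz**2

def boost_x(E, px, py, pz, v):
    """Lorentz boost along x with velocity v (c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v**2)
    return g * (E - v * px), g * (px - v * E), py, pz

inv_before = minkowski_sq(E, px, py, pz)
inv_after = minkowski_sq(*boost_x(E, px, py, pz, 0.8))
```

Both invariants come out equal to $m^2$, for any boost velocity $|v| < 1$ you substitute.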
Another example is the fact that the relativistic Doppler effect for a source in uniform linear motion boils down to noticing that $k^\mu u_\mu$ is an invariant, $k^\mu = (\frac{\omega}{c}, \mathbf{k})^\intercal$ being the wave four-vector and $u^\mu$ being the source's four-velocity. By computing the invariant in two different frames, one of which has the source at rest, one arrives at the expression for the frequency shift in an incredibly straightforward way. Indeed, suppose that in the frame of rest of the source the wave has $k^\mu = (\frac{\omega_0}{c}, \mathbf{k}_0)^\intercal$ , while in some other frame the source has $u^\mu = (\gamma c, \gamma \mathbf{v})^\intercal $ and the wave has $k^\mu = (\frac{\omega}{c}, \mathbf{k})^\intercal$ . Then $$ \begin{align} k^\mu u_\mu\vert_{\text{rest}} &= k^\mu u_\mu\vert_{\mathbf{v}}, \\ - \omega_0 &= \gamma(-\omega + \mathbf{k \cdot v}). \end{align} $$ Let's write $\mathbf{k} = \frac{\omega}{c} \mathbf{\hat{n}}$ , where $\mathbf{\hat{n}}$ is a unit vector (the fact that this is possible comes from usual wave mechanics). Then $$ \omega_0 = \omega\gamma\left(1 - \frac{\mathbf{\hat{n} \cdot v}}{c}\right), $$ and we conclude that $$\frac{\omega}{\omega_0} = \frac{\sqrt{1 - \frac{v^2}{c^2}}}{1 - \frac{\mathbf{\hat{n} \cdot v}}{c}}.$$ This derivation comes out so directly because we noticed from the start that $k^\mu u_\mu$ is a relativistic invariant, something that tensor notation makes easy to spot: the absence of "free" indices tells us that the quantity is an invariant.
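The Doppler derivation can be replayed numerically: compute $\omega_0$ once by contracting $k^\mu u_\mu$ directly, and once from the closed formula, and check they agree. A sketch (not in the original answer; units with $c = 1$ and illustrative values for the source velocity, propagation direction, and frequency):

```python
import math

v = (0.6, 0.0, 0.2)                  # source velocity (units with c = 1)
n = (1 / math.sqrt(3),) * 3          # unit propagation direction n-hat
omega = 2.5                          # frequency seen in the observer frame

speed2 = sum(vi * vi for vi in v)
gamma = 1.0 / math.sqrt(1.0 - speed2)
n_dot_v = sum(ni * vi for ni, vi in zip(n, v))

# k^mu u_mu with signature (+,-,-,-): equals omega_0, the rest-frame frequency
k = (omega, omega * n[0], omega * n[1], omega * n[2])
u = (gamma, gamma * v[0], gamma * v[1], gamma * v[2])
omega0_invariant = k[0] * u[0] - k[1] * u[1] - k[2] * u[2] - k[3] * u[3]

# closed-form result from the text: omega / omega_0 = sqrt(1 - v^2) / (1 - n.v)
omega0_formula = omega * (1.0 - n_dot_v) / math.sqrt(1.0 - speed2)
```

The two routes agree to machine precision, which is just the statement that the contraction has no free indices.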
{ "source": [ "https://physics.stackexchange.com/questions/691966", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/287975/" ] }
692,338
To understand how gravity influences objects, time, and space, I have been thinking about how a planet's shape would change the orbits of its moons. More specifically: can I design a planet whose moon moves in a square orbit? Below is a diagram of my first intuitive try. For simplicity I imagine a two-dimensional shape, extruded, with the moon moving around it. 1. Is there in theory a shape that would create a square orbit for objects moving around it? 2. If yes, what is that shape?
Let us consider what forces are needed for a square orbit. As Newton pointed out, as long as there are no forces the moon will move in a straight line... so there must be no gravity along the sides. Then suddenly the moon turns 90 degrees, which implies a lot of force accelerating it. So there must be an enormous force just near the corners, and not along the sides. This is awkward to achieve with real gravity. The gravitational force is a central force: each particle with mass exerts a $GMm/r^2$ force directed towards it, and you cannot shield from this by putting other masses in front: all the contributions from different mass particles sum together. So you cannot have just gravity bending the trajectory 90 degrees at the corner, since the gravitation from there will also affect the trajectory along the edge. A general point is that a complex planet shape produces a gravitational field that can be expressed using spherical harmonics . These tend to decay fast with distance if they are high-frequency/sharp ("higher order"): weird planet shapes only affect very nearby orbits.
A four-planet trick
If you allow for four fixed pointlike "planets" in a square, I think one can prove there exists a nearly square orbit. Think of the moon approaching one of them, with the impact parameter $b$ (how far from the straight collision course it starts) as a free variable. If $b$ is too large, the trajectory will just bend slightly and sweep past. If $b$ is too small, you get more than a 90 degree turn. By continuity there is some $b$ that gives an exact 90 degree turn. That means, by conservation of energy, that the moon will move with exactly the same speed as it started when it gets far away from the planet. So, we can arrange that it does the same with the next, next and next planets and return to the starting point. The result is an orbit that is like a smoothed square. But it is not so much an orbit around a planet.
There is a subtlety above: the influence of the other three mass points will be felt at all places, so the gravitational bending is not going to be the perfect 2-body encounter I am assuming. Inside the Hill sphere of a planet its gravity dominates over all others and 2-body dynamics is a good approximation. However, proving the existence of the closed orbit requires more. Fortunately this is a continuous situation: if we color the starting points of the moon by how close they get to the second planet at closest approach, there will be some point that reaches the right distance to do a perfect 90 degree turn. Near that point, if we instead color by distance to the third planet, there will be an optimal point that causes three near-90 degree turns. Using the same method for the last planet and the starting point, I think one can convince oneself that such an orbit must exist. Things are slightly trickier, since we should also do the same for how velocities change: we want to find a fixed point of the mapping $f:(x_{start},v_{start})\rightarrow (x_{end},v_{end})$ created by the planets, so that $f(x,v)=(x,v)$ (technically, finding a fixed point of the Poincaré map ). Doing this analytically is likely a nightmare, but one can use software optimization methods. My plot above was made by starting with a region of initial values selected by hand, finding the one trajectory that got closest to its initial condition (in terms of position and velocity), zooming in on that to find even better starting values, and so on.
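The "right impact parameter for a 90 degree turn" step can be illustrated for a single flyby. For two-body gravitational scattering the deflection obeys $\tan(\theta/2) = GM/(b\,v_\infty^2)$, so $b = GM/v_\infty^2$ gives roughly a quarter turn. A rough sketch (not from the original answer; units with $GM = 1$, unit approach speed, and a finite starting distance, so the measured angle is only approximately 90 degrees):

```python
import math

GM = 1.0
b = 1.0                      # impact parameter chosen so tan(theta/2) = GM/(b v^2) = 1
x, y = -50.0, b              # start far to the left, offset vertically by b
vx, vy = 1.0, 0.0            # unit approach speed

def accel(x, y):
    """Point-mass gravity toward the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

dt = 0.002
ax, ay = accel(x, y)
for _ in range(60000):       # velocity-Verlet integration, total time ~120
    x += vx * dt + 0.5 * ax * dt * dt
    y += vy * dt + 0.5 * ay * dt * dt
    ax2, ay2 = accel(x, y)
    vx += 0.5 * (ax + ax2) * dt
    vy += 0.5 * (ay + ay2) * dt
    ax, ay = ax2, ay2

speed = math.hypot(vx, vy)
deflection_deg = math.degrees(math.acos(max(-1.0, min(1.0, vx / speed))))
```

The trajectory leaves with its velocity turned by roughly 90 degrees and, far from the planet, with nearly the incoming speed, which is the energy-conservation point made above. Sweeping $b$ and bisecting on the measured angle is exactly the continuity argument in numerical form.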
{ "source": [ "https://physics.stackexchange.com/questions/692338", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/231168/" ] }
692,341
When describing rigid body dynamics during collision, one can make use of impulse and energy equations to describe the velocities of both objects after the collision. Below are 2 equations, with 2 unknowns ( $v_{1,new}$ and $v_{2,new}$ ), which can thus be solved. In case of an added rotational degree of freedom, two additional degrees of freedom emerge ( $\omega_{1,new}$ and $\omega_{2,new}$ ). However, I still have only 2 equations below, and thus I don't know how to solve this. Which two equations am I missing to solve the problem? Or is there another approach to this problem?
{ "source": [ "https://physics.stackexchange.com/questions/692341", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/327094/" ] }
693,507
I am a qft noob studying from Quantum Field Theory: An Integrated Approach by Fradkin, and in section 13 it discusses the one-loop corrections to the effective potential $$U_1[\Phi] = \sum^\infty_{N=1}\frac{1}{N!}\Phi^N\Gamma^{N}_1(0,...,0)$$ and how the first $D/2$ terms are divergent, where $D$ is the dimensionality. The book then explains that the solution is to define the renormalized mass $\mu^2 = \Gamma^{(2)}(0)$ and the renormalized coupling constant $g = \Gamma^{(4)}(0)$ ; expressions relating the bare mass and bare coupling constant to their renormalized counterparts are then found, where the integrals now only run up to a UV cutoff $\Lambda$ . The effective potential is then written in terms of the renormalized mass and renormalized coupling constant, and the result is magically finite. Somewhere in this process I am a bit lost. First, I am not really seeing the intuitive motivation for defining the renormalized mass and coupling constant as the two- and four-point vertex functions at zero external momenta. What is the motivation behind this? Second, I feel a bit lost about how the resulting effective potential becomes finite. I suppose I can see mathematically that the result is finite, but I do not at all understand it. At what point in our scheme does the finiteness really come from? What is the point of all this?
Here's the thing: renormalization and divergences have nothing to do with each other. They are conceptually unrelated notions.
Renormalization
Simply put, renormalization is a consequence of non-linearities. Any theory (other than those that are linear) requires renormalization. Even in classical Newtonian mechanics. Renormalization means: when you change an interaction, the parameters of your theory change. The simplest example is the classical (an)harmonic oscillator. Say you begin with $L=\frac12m\dot q^2-\frac12kq^2$ . If you prepare this system in a laboratory, you will observe that $q$ oscillates with frequency $\omega^2=k/m$ . Now, say you add the (nonlinear) interaction $\gamma q^4$ to your Lagrangian. The frequency that you will measure in a laboratory is no longer $\omega^2=k/m$ , but rather $\omega^2\sim k/m+\gamma$ . This trivial phenomenon also occurs in quantum mechanics, in particular QFT. You typically have a set of measured parameters, such as $\omega$ above, and a set of coefficients in your Lagrangian, such as $k,m$ . The latter are not directly observable. Solving your theory gives you a function $\omega=f(k,m,\dots)$ for some $f$ . You can use the measured value of $\omega$ , and the calculable function $f$ , to fix the value of your Lagrangian parameters $m,k,\dots$ . If you change your Lagrangian by adding a new term, the function $f$ will change, and therefore the value of your parameters $m,k,\dots$ will change. Of course, $\omega$ stays the same, as this is something you measure in a laboratory (and does not care about which Lagrangian you are using to model the dynamics). For example, say you have a QFT that describes a particle. The system will have several parameters $c_1,c_2,\dots$ , which multiply the different terms in your Lagrangian, say $L=(\partial\phi)^2-c_1\phi^2-c_2\phi^4+\cdots$ .
This Lagrangian predicts that there is a particle with some mass $m=f(c_1,c_2,\dots)$ , where $f$ is a function that you obtain by solving the theory (say, using Feynman diagrams). The value of $m$ is fixed by experiments, you can measure it in a lab. From this measured value, and the known form of the function $f$ , you can fix the value of your model-dependent parameters $c_i$ . For example, if you begin with the model $L=(\partial\phi)^2-c_1\phi^2$ , then the function $f$ takes the form $m=c_1$ , and therefore the value of $c_1$ is identical to what you measure $m$ to be. If you take, instead, $L=(\partial\phi)^2-c_1\phi^2-c_2\phi^4$ , then you now have $m=c_1+\frac{3}{16\pi}c_2+\mathcal O(c_2^2)$ . Using this (and some other measured observable, such as a cross section), you can fix the value of $c_1,c_2$ . Note that $c_1$ will not in general take the same value as before, namely $c_1\neq m$ . This is what we mean by "interactions renormalize your couplings". We just mean that, by adding interactions to your model, the numerical value of your coupling constants will change. This is all true even if your theory is finite. Theories without divergences still require renormalization, i.e., the value of the coupling constants are to be determined by comparing to observable predictions, and changing interactions changes the numerical value of your coupling constants. (The exception is, of course, linear theories and some other special cases such as those involving integrability). Renormalizing a theory typically means: fix the value of your model-dependent parameters $\{c_i\}$ as a certain function of the measurable parameters, such as $m$ and other observable properties of your system. The value of $m$ is fixed by nature, and we have no control over it. The value of $\{c_i\}$ depends on which specific Lagrangian we are using to model the physical phenomenon, and it changes if we change the model. 
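The classical anharmonic-oscillator shift described above can be seen in a short simulation. A sketch with assumed parameters (not from the original answer): $m = k = 1$, the interaction taken as a stiffening potential term $\gamma q^4$, so the equation of motion is $m\ddot q = -kq - 4\gamma q^3$; the first-order estimate $\omega \approx \omega_0 + 3\gamma A^2/2$ for amplitude $A$ is standard Duffing perturbation theory, not a claim of the answer.

```python
import math

def oscillator_frequency(gamma, A=1.0, m=1.0, k=1.0, dt=1e-3, t_max=10.0):
    """Integrate m q'' = -k q - 4 gamma q^3 with RK4 and measure the frequency
    from the interval between the first two downward zero crossings of q."""
    def deriv(q, p):
        return p / m, -k * q - 4.0 * gamma * q**3

    q, p = A, 0.0                        # released from rest at amplitude A
    t, crossings = 0.0, []
    while t < t_max and len(crossings) < 2:
        k1q, k1p = deriv(q, p)
        k2q, k2p = deriv(q + 0.5 * dt * k1q, p + 0.5 * dt * k1p)
        k3q, k3p = deriv(q + 0.5 * dt * k2q, p + 0.5 * dt * k2p)
        k4q, k4p = deriv(q + dt * k3q, p + dt * k3p)
        q_new = q + dt * (k1q + 2 * k2q + 2 * k3q + k4q) / 6
        p_new = p + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
        if q > 0.0 >= q_new:             # downward zero crossing, interpolated
            crossings.append(t + dt * q / (q - q_new))
        q, p, t = q_new, p_new, t + dt
    return 2.0 * math.pi / (crossings[1] - crossings[0])

omega_harmonic = oscillator_frequency(0.0)    # should recover sqrt(k/m) = 1
omega_shifted = oscillator_frequency(0.05)    # stiffened by the q^4 interaction
```

Turning on the interaction shifts the measured frequency upward, which is the "interactions renormalize your couplings" statement in its most classical form: the observable $\omega$ is no longer the bare $\sqrt{k/m}$.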
Divergences
In QFT divergences are the result of mishandling distributions. In $d>1$ , quantum fields are not functions of spacetime, but rather distributions. As such, you cannot manipulate them as if they were regular functions, e.g. you cannot in general multiply distributions. When you do, you get divergences. This is manifested in the fact that the function $f$ from before typically has divergent terms inside. The cute thing is: for a large class of theories, if you write $c_i=\tilde c_i+\delta_i$ , with $\delta_i$ a (specifically constructed) divergent contribution, and $\tilde c_i$ a finite part, then the relation $m=f(c_i)=\tilde f(\tilde c_i)$ is rendered finite, i.e., the function $\tilde f$ no longer has divergent terms inside. But note that renormalization did not actually cure the divergences. The divergences were eliminated by writing $c_i=\tilde c_i+\delta_i$ and carefully compensating the divergences with another divergent part, $\delta_i$ . This is not what renormalization is. Renormalization is the statement that, if you were to change the model, the numerical values of the constants $c_i$ (and, by extension, of $\tilde c_i$ ) change accordingly. For a general theory, you need to perform both steps: 1) cancel the divergences by splitting your model-dependent parameters as $c_i=\tilde c_i+\delta_i$ , with finite $\tilde c_i$ and a suitable divergent $\delta_i$ , and 2) renormalize your theory, i.e., fix the value of your model-dependent parameters $\tilde c_i$ as a function of observable parameters such as $m$ .
n-point functions
What measurable parameters can we use beyond $m$ ? In general there will be multiple parameters $c_i$ , so you need as many observables in order to fix the former as a function of the latter. For a completely general theory, it is hard to come up with a concrete list of observable parameters. For specific systems this is easy, e.g.
for a thermodynamic material we could use the susceptibility at criticality, while for QCD we could use the pion decay constant. Both of these are measurable in a laboratory, and they can both be predicted as a function of the parameters in the Lagrangian. But what if we are dealing with a more general QFT, one for which we do not have a clear picture of what it is describing in real life? Which observables can we use then, if we don't even know what the theory is modeling in the first place? In this situation, it is convenient to use correlation functions at specific values of the external momenta as "observables". So for generic theories, instead of using a decay constant as a measurable parameter, we use $\Gamma^{(n)}(0)$ , as if this were something we could measure. Often enough, it actually is, but this really depends on which QFT you are working with.
{ "source": [ "https://physics.stackexchange.com/questions/693507", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/279299/" ] }
693,512
Say I have a wavefunction and a potential energy function that satisfy the time-independent schrodinger equation. I now change $x$ to $-x$ in both functions, does the time-independent schrodinger equation still hold for them? Do I still have a valid potential energy function and a valid wavefunction that satisfy the differential equation?
{ "source": [ "https://physics.stackexchange.com/questions/693512", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/290438/" ] }
693,684
First, I would like to clarify that I know why a mirage is formed; what I want to know is why, when a mirage is formed, it appears that a pool of water is present. Like for a palm tree in a desert: an inverted image is formed, accompanied by a virtual pool of water. But where does that pool of water come from? The only object whose image is inverted is the tree, isn't it?
The reflected image of the palm tree is accompanied by the reflected image of the sky above and surrounding it. So in the reflection, you see the palm tree and the sky. There is no pool of water: the light coming towards you from the palm tree and the sky behind it is reflected up away from the ground by a hot layer of air right next to the ground, which acts like a mirror and inverts the image of the palm tree. Note that at a shallow angle close to sea level, the sea far from you also acts like a mirror, reflecting the blue of the sky when the surface of the sea is calm and nearly flat. The reflections swirl around a bit because of small waves that "bend" the mirror surface around. Because there are weak air currents next to the hot ground in the mirage case, the reflections from distant objects get swirled around in a manner that looks like the surface of a pond with small waves in its water. These effects are easy to demonstrate if you put a table tennis table out in the sun and, at one end, install miniature palm trees. Somewhere near the middle of the table, place a large flat mirror and, next to it, a very shallow plate with a thin layer of water in it. Then go to the opposite end of the table, get down on your knees, put your eye right next to the edge of the table, and look towards the little palm trees, moving your head until the reflectors are in your line of sight. With care you can do this experiment with a big flat plate of hot metal in place of the mirror, but you might set the table on fire.
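The "mirror" behavior can be put in numbers with Snell's law. Using illustrative refractive indices ($n-1$ for air scales with density, so the hot layer has a slightly smaller $n$), total internal reflection only happens for rays within a fraction of a degree of grazing:

```python
import math

# Illustrative refractive indices: (n - 1) for air scales with density,
# roughly 1/T at fixed pressure, so the hot layer has a slightly smaller n.
n_cool = 1.000277    # air well above the ground
n_hot  = 1.000250    # much hotter air in the thin layer next to the ground

# Going from cooler (denser) into hotter (rarer) air, total internal
# reflection needs an incidence angle (from the vertical) above critical:
theta_c = math.degrees(math.asin(n_hot / n_cool))
grazing = 90.0 - theta_c     # steepest grazing angle that still mirrors

print(f"critical angle: {theta_c:.3f} degrees from vertical")
print(f"only rays within {grazing:.3f} degrees of grazing are reflected")
```

A fraction of a degree: that is why the "water" only appears far away, where your sight line grazes the ground, and why it recedes as you walk towards it.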
{ "source": [ "https://physics.stackexchange.com/questions/693684", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/263560/" ] }
693,691
A rod has a length of 50 cm and a radius of 5 cm, with ends named A and B. CASE 1: A bag is hung at the MIDDLE of the rod; call that point C. A person holds the rod with one hand at end B. The bag and rod together weigh 2 kg, so the total downward force is almost 20 N, which means the person has to apply only 20 N upwards to hold it, and indeed the person applies 20 N and holds it in the air easily. CASE 2: The rod and bag are the same, and the same person holds the rod at end B, BUT the bag is no longer at the middle of the rod: it has been moved from C almost to end A, the end opposite the person's hand. The observation is that the person has to put in SO much MORE effort to hold the rod (with the bag hanging on it) in the air in CASE 2 than in CASE 1. So, how much force is needed to hold the rod in the air in CASE 2: A) more than 20 N, B) only 20 N, C) less than 20 N, D) none of the above? And why is it more difficult to lift the rod in CASE 2 than in CASE 1, even though the mass is the same and only the bag has moved from point C to A?
{ "source": [ "https://physics.stackexchange.com/questions/693691", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/327770/" ] }
693,694
The parallel axis theorem states that you can relate the moments of inertia defined with the center of mass as the origin to the moments of inertia defined with respect to some other origin. It is summarized in this equation: $I=I_c+Mh^2$ For a cone, the center of mass is $1/4$ of the height from the base, so we can define $I_{base}$ using the parallel axis theorem. However, since the parallel axis theorem does not care about the sign of $h$ , the moments of inertia defined at $h/4$ above the center of mass are equivalent to those at the base. The moment of inertia can be thought of as the opposition that a body exhibits to having its speed of rotation about an axis altered by the application of a torque. This means that turning the cone about an axis in the base will require the same torque as it takes to rotate it about the parallel axis at the center of the cone. Intuitively, I would say that it is harder to rotate the cone about its base than it is to spin it around its center, so the fact that they have the same moments of inertia is confusing to me. Can someone please point out the flaw in my logic? Also, as a bonus question, is the parallel axis theorem still valid when you extend the origin outside of the shape?
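A quick numeric check of the symmetry described above, using the standard transverse moment of inertia of a solid cone about its center of mass (the numbers are illustrative):

```python
M, R, h = 2.0, 0.05, 0.5    # kg, m, m (illustrative numbers)

# Standard result for a solid cone: transverse moment of inertia about the
# center of mass (which sits h/4 above the base):
I_c = (3 / 20) * M * R**2 + (3 / 80) * M * h**2

def parallel_axis(d):
    return I_c + M * d**2    # only d**2 enters, so the sign of d is irrelevant

I_base  = parallel_axis(-h / 4)   # transverse axis through the base plane
I_above = parallel_axis(+h / 4)   # transverse axis h/4 above the center of mass

print(I_c, I_base, I_above)       # I_base == I_above, as the question notes
```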
{ "source": [ "https://physics.stackexchange.com/questions/693694", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/327756/" ] }
693,955
Could mass just be perceived as light moving along a geodesic through an additional spatial dimension (either invisible or somehow curled up into itself)? Since the light would be moving in another dimension, to us its velocity will be less than the speed of light, giving the illusion of mass.
In order to illustrate the difficulties associated with such an approach, I will mention an example. One way to obtain a toy model according to your requirement is Kaluza-Klein theory, which assumes space-time is 5-dimensional, and which kind-of obtains Maxwell's equations (classical electrodynamics) from Einstein's field equations in 5D. The reason I say "kind of" is that it depends on what letters of the alphabet you give the variables in the obtained equations. The mere fact that some equations of a synthetic theory formally correspond to equations that are already known to satisfy experiments does not automatically imply that this theory is useful. Only if the theory does not "predict" a lot of quite arbitrary things that have never been observed, AND it predicts the things that are already known, can it be considered useful. To give you quick access to what Kaluza-Klein theory predicts here, let's consider the special case where the 5D spacetime is (approximately) completely flat (i.e. there is no curvature/gravity) and one of its dimensions is "rolled up" on a small scale. You can imagine the 5th dimension together with some other spatial dimension (e.g. "x") as a piece of paper, which you literally roll up into a tube with your hands. Since this nowhere requires the paper to stretch or compress, the tube remains geometrically flat. Then, rolling up the 5th dimension is equivalent to having a dimension that has finite extent and periodic boundary conditions (due to the tube-like topology). If you want to explore what happens to a (massless) wave that propagates in this 5D spacetime, you could for example write down the d'Alembert equation for a wave function $\psi$ , which can be considered as representing one component of electromagnetic waves (e.g. 
the electric field strength in x-direction): $$\frac{1}{c^2}\frac{\partial^2 \psi}{\partial t^2}-\frac{\partial^2 \psi}{\partial x^2}=\frac{\partial^2 \psi}{\partial w^2}$$ where $w$ shall denote the coordinate in the 5th dimension, and $x$ may represent one of the usual macroscopic spatial dimensions. The fact that the $w$ -term on the right-hand side has positive sign (it would have obtained a negative sign as well if we had written it on the left-hand side) indicates that this is an additional spatial dimension (as opposed to an additional temporal dimension). Now, due to the periodic boundary conditions, the coordinate value $w+L$ is equivalent to $w$ where $L$ is the circumference of the rolled-up 5th dimension. For the wave function $\psi$ this means $$\psi(w+L)=\psi(w)$$ A function with this periodic property can generally be written as a Fourier series: $$\psi(w)=\sum_{k=-\infty}^\infty \psi_k \cdot \exp(i 2\pi k w/L)$$ Consider only one component of this Fourier series, for example $$\psi(w)=\psi_{k_0} \cdot \exp(i 2\pi k_0 w/L)$$ Then the second order derivative on the right-hand side of the wave equation becomes $$\frac{1}{c^2}\frac{\partial^2 \psi}{\partial t^2}-\frac{\partial^2 \psi}{\partial x^2}=\frac{\partial^2 \psi}{\partial w^2}=-\frac{4\pi^2 k_0^2}{L^2}\psi$$ The factor on the right-hand side before $\psi$ is what is called an eigenvalue (in this case of the second order derivative under the assumed periodic boundary conditions). Sounds good, eigenvalues... quantum mechanics... seems we are on the right track. So, let's see where this approach can carry us with respect to mass. Up to now, we have only considered massless particles ("light"). 
One of the quantum-mechanical equations that might describe massive particles relativistically is Klein-Gordon equation, which is $$\frac{1}{c^2}\frac{\partial^2 \psi}{\partial t^2}-\frac{\partial^2 \psi}{\partial x^2}=-\frac{m^2c^2}{\hbar^2}\psi$$ If we compare our massless wave equation in 5D with the Klein-Gordon equation, it seems pretty obvious that we "should" identify the mass term according to $$\frac{m^2c^2}{\hbar^2}=\frac{4\pi^2 k_0^2}{L^2}$$ or finally $$m=k_0\frac{h}{cL}$$ We have identified mass as resulting from the finite circumference of a rolled up dimension! And moreover, it does what you think it should do: if you carefully consider what the above Fourier component part of the wave function does, you will see that it rotates in the fifth dimension, and if there is also motion in the other spatial dimensions, this circular motion will become a spiralling motion at the speed of light in 5D (since we have started with a wave equation, there is no doubt about this). If we look at this spiralling motion macroscopically, we will be blind for the tiny extra dimension and see only the averaged motion in the ordinary dimensions, which will have a speed $<c$ . Another interesting point about this is that $L$ can be considered the Compton wavelength of a particle with mass $m$ , at least for $k_0=1$ . If that doesn't sound interesting, then what does? But wait, $k_0=1$ ? And what about $k_0=2,3,\dots$ ? If $k_0=1$ shall represent the electron (and probably $k_0=-1$ the positron, yeah!), then what do the higher integer numbers represent? Duh! There is simply no particle with twice or three times the mass of the electron! This integer spectrum of an infinite number of masses/particles is called the Kaluza-Klein tower. It illustrates that a naive interpretation of seemingly corresponding equations in toy models can be problematic. 
As a theoretical physicist, you have the responsibility to assign a meaning to every quantity in a new model that allows it to be tested against experiments. And the problems don't even stop at the Kaluza-Klein tower. The next question is why there exists a rolled-up dimension of length $L$ in the first place. Well, that may perhaps be explained by boundary conditions of the universe around us ("tube in, tube out"). But electrons are fermions, and don't even satisfy the Klein-Gordon equation (as assumed above), but rather Dirac's equation. Moreover, we have supposed that the wave that propagates into the 5th dimension and thereby gets mass is light, but light (photons) is actually massless in reality. At least there, we might try to weasel out by saying that the photon is represented by the component $k_0=0$ , which is massless and hence always travels at the speed of light. Then, finally, we have more than one known elementary particle besides the electron, so this is probably not making it easier to model all of them by extra dimensions and simultaneously answer all of the above questions. And so on and so forth. That is why, as of today, nobody can claim to have found something better than the standard model, where masses (or rather coupling constants) are more or less designed-in from the start.
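A numeric sanity check of the relation $m=k_0 h/(cL)$ derived above: with $L$ set to the electron's Compton wavelength, $k_0=1$ reproduces the electron mass, and the higher rungs of the tower come out as exact integer multiples of it, which, as noted, is not what we observe:

```python
h   = 6.62607015e-34    # Planck constant, J s (exact in SI)
c   = 2.99792458e8      # speed of light, m/s (exact in SI)
m_e = 9.1093837e-31     # electron mass, kg (CODATA, rounded)

# Compton wavelength of the electron:
L = h / (m_e * c)
print(f"L = {L:.4e} m")              # ~2.426e-12 m

# The Kaluza-Klein "tower" m_k = k * h / (c * L) for k = 1, 2, 3, ...
for k in (1, 2, 3):
    m_k = k * h / (c * L)
    print(f"k = {k}: m_k = {m_k / m_e:.1f} electron masses")
```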
{ "source": [ "https://physics.stackexchange.com/questions/693955", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/327405/" ] }
693,964
Is a photon emitted beyond the Earth cosmological event horizon towards Earth actually moving away from Earth due to space expansion? Is that the reason why we can't see beyond the horizon?
{ "source": [ "https://physics.stackexchange.com/questions/693964", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/253499/" ] }
694,432
I'm comparing lightning and fire - both are related to ionisation of air, but lightning happens in the blink of an eye while fire goes on until it runs out of fuel. My question is: despite being plasma just like a fire, why does lightning only occur for such a brief time?
Lightning is an electrostatic discharge. After that discharge, the atmosphere and ground temporarily neutralize themselves; hence, it is over in the blink of an eye. Fire is a byproduct of some chemical reaction, so it can go on for a longer time. https://www.quora.com/Scientifically-speaking-is-there-any-connection-between-fire-and-lightning
{ "source": [ "https://physics.stackexchange.com/questions/694432", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/75502/" ] }
694,736
As we know from Newton's law, we have that $\mathbf{F} = m\cdot\mathbf{a}$ . This means that as long as the mass stays constant, force depends solely on acceleration. But how does this agree with what we can observe in our day-to-day lives? If I drop a coin on someone's head with my hand standing just a couple centimeters above their hair, they won't be bothered too much; but if I drop the same coin from the rooftop of a skyscraper, then it could cause very serious damage or even split their head open. And yet acceleration is pretty much constant near the surface of the earth, right? And even if we don't consider it to be constant, it definitely has the same value at $\sim1.7\text{ m}$ from the ground (where it hits the person's head) regardless of whether the motion of the coin started from $\sim1.72\text{ m}$ or from $\sim1 \text{ km}$ . So what gives? Am I somehow missing something about the true meaning of Newton's law?
The acceleration on the way down is the same. The acceleration when it strikes the person's head is different. Both coins have to be stopped by the skull (we hope). The coin that is going 2 m/s will not require as much acceleration to stop in the same distance as a coin that is going 10 m/s. That slower acceleration will require less force. After the coin is stopped, it will still produce a force equal to its weight. But that's only part of the force involved.
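A rough numeric sketch of that argument, with an assumed coin mass and stopping distance (both illustrative) and air drag ignored; in reality drag caps the coin near its terminal velocity well below the no-drag value, but the point stands that the average stopping force scales with $v^2$, i.e. with drop height:

```python
import math

g = 9.81        # m/s^2
m = 0.005       # kg: a small coin (illustrative)
d = 0.003       # m: assumed stopping distance in hair/skin (illustrative)

def impact(drop_height):
    """Impact speed (no air drag) and average stopping force over distance d."""
    v = math.sqrt(2 * g * drop_height)
    a_stop = v**2 / (2 * d)          # deceleration needed to stop within d
    return v, m * a_stop

for h in (0.02, 100.0):              # a couple of centimetres vs. a rooftop
    v, F = impact(h)
    print(f"h = {h:6.2f} m: v = {v:5.2f} m/s, average stopping force = {F:9.2f} N")
```

Under these assumptions the stopping force grows in direct proportion to the drop height (5000 times larger for 100 m vs. 2 cm), while the coin's weight of course stays the same.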
{ "source": [ "https://physics.stackexchange.com/questions/694736", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/122044/" ] }
694,742
We learnt that a spring stores and releases energy in either direction from the resting position when extended by some distance. When I tried doing this is real life by creating a very low friction surface and a spring and a mass, I noticed that the spring was far better at pulling the mass compared to pushing it. Any possible reasons or is my experimentation wrong?
Note that we are assuming the spring is a coil-type spring, the likes of which you will find in school labs and such. Most springs don't follow Hooke's law when compressed to the point where the windings of the spring start touching each other. So if by pulling you mean extending the spring and then letting go, it is expected that this works better than compressing the spring if the spring is already "tightly wound". A good way of seeing a situation where pulling and pushing are actually symmetrical is by hanging a spring from a table with a small load attached. That way, the spring+load system settles into an equilibrium position where the spring is not tightly wound. You can then give it a small push either up or down, and you will notice that it starts to oscillate with approximately the same amplitude in the two cases.
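A minimal simulation of that symmetric case (velocity-Verlet integration of an ideal Hooke spring, with illustrative parameters): equal-size pushes up and down from equilibrium produce exactly the same oscillation amplitude, because a linear restoring force has no preferred direction. The asymmetry you saw only appears once the model includes something nonlinear, such as coil binding under compression.

```python
def simulate(x0, k=10.0, m=0.1, dt=1e-4, steps=20000):
    """Velocity-Verlet integration of m*x'' = -k*x: an ideal Hooke spring,
    with x the displacement from the load's equilibrium position."""
    x, v = x0, 0.0
    peak = abs(x0)
    for _ in range(steps):
        a = -(k / m) * x
        x = x + v * dt + 0.5 * a * dt**2
        a_new = -(k / m) * x
        v = v + 0.5 * (a + a_new) * dt
        peak = max(peak, abs(x))
    return peak

# Equal-size pushes in the two directions (parameters are illustrative):
up = simulate(+0.02)      # start 2 cm to one side of equilibrium
down = simulate(-0.02)    # start 2 cm to the other side
print(up, down)           # identical amplitudes: a linear spring is symmetric
```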
{ "source": [ "https://physics.stackexchange.com/questions/694742", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/328244/" ] }
695,789
Having read that atomic clocks are more accurate than mechanical clocks as they lose a second only in millions of years, I wonder why it is necessary for a reference clock to worry about this, if the definition of the second itself is a function of the number of ticks the clock makes. Why don't we just use a single simple mechanical clock somewhere with a wound up spring that makes it tick, and whenever it makes a tick, treat it as a second having elapsed? (Assuming this clock was broadcasting its time via internet ntp servers to everyone in the world)
why it is necessary for a reference clock to worry about this, if the definition of the second itself is a function of the number of ticks the clock makes. The concern is that somebody else (say a scientist in France or China or Botswana) needs to be able to build a clock that measures seconds at the same rate mine does. If we both have atomic clocks, we can keep our clocks synchronized to within microseconds per year. If we have mechanical clocks, they might be different from each other by a second (or at least some milliseconds) by the end of a year. If we're doing very exact measurements (comparing the arrival times of gamma rays from astronomical events at different parts of the Earth, or just using a GPS navigation system) then a few milliseconds (or even microseconds) can make a difference in our results.
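To put illustrative numbers on this (the drift rates are assumptions for the sake of the sketch, not measured specs): even a millisecond of clock disagreement is an eternity for timing-based ranging, because light covers about 300 km in a millisecond.

```python
c = 299_792_458          # speed of light, m/s

# Drift rates (illustrative): a good mechanical clock vs. an atomic clock
drifts = {"mechanical": 1.0,      # ~1 s gained or lost per year
          "atomic":     3e-6}     # a few microseconds per year

for name, drift_s_per_year in drifts.items():
    # If two labs' clocks disagree by this much, a timing-based distance
    # measurement (GPS-style) is off by c * (time error):
    pos_error_m = c * drift_s_per_year
    print(f"{name:10s}: {drift_s_per_year:g} s/yr -> {pos_error_m / 1000:,.1f} km of ranging error")
```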
{ "source": [ "https://physics.stackexchange.com/questions/695789", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/328798/" ] }
698,471
I am studying the inkjet printer in detail. I have come across thermal inkjet printing technology (bubble inkjet technology) and the short discussion below. We create a water vapour bubble by heating a resistor, which displaces ink and forms a drop. But after that, why does the bubble collapse in 10 to 20 microseconds? Bubble Lifetime The bubble lifetime can be determined from the reflectance measurements. If the bubble collapse is considered complete when the reflectance recovers to 0.75 of its initial value, then the typical lifetime for this printhead is about 11 μs, depending on the voltage applied to the heater. A higher voltage tends to lengthen the bubble lifetime as seen in Figure 3b. The reflectance does not assume its initial value quickly until the heater cools to its steady state temperature. (screenshot of original)
Here is why. The exploding vapor bubble launches up off the surface of the heater resistor at an initial speed of about 5 meters/second and reaches a maximum thickness (top to bottom) of between 25 to 100 microns, depending on which HP printhead design you are studying. That vapor bubble acts as a piston to push a droplet of ink out of the nearby nozzle. (BTW as soon as the resistor surface has been covered by a thin layer of vapor, heat transfer between the hot resistor surface and the liquid is essentially shut off and the bubble expands ballistically.) Because that bubble is expanding ballistically upwards, the inertia of the liquid next to it causes the bubble to overexpand . Its internal temperature falls due to the expansion and also due to heat loss to the cold ink immediately surrounding it. By the time its expansion stops, its internal temperature is ~ambient and its internal pressure has already fallen to subatmospheric, and the bubble begins a very rapid contraction as the vapor inside it quickly condenses back to liquid. The shape of the bubble as it expands and collapses is that of a cushion or pillow with an almost flat top and rounded sides. It has a scale length of order ~1/2(heater length) and for HP's earliest inkjet heaters this varied from 30 to 55 microns. This means that for almost all times during bubble collapse except for the very end stages (where the advancing liquid front is ~1 micron off the heater surface) the effects of surface tension can be safely ignored in comparison to the magnitude of the inertial effects. Also note that the momentum of the inrushing liquid creates a huge overpressure or water-hammer effect at the instant the bubble vanishes, and will swiftly pound holes in the protective layer atop the heater, causing it to fail. The walls enclosing the sides of the heater must be carefully shaped to urge the bubble collapse impingement point off the active surface of the heater to avoid this cavitation damage . 
I spent 28 years in the business of designing and building thermal inkjet printheads for HP and have lots more information on inkjet device physics to share if you need it.
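A back-of-the-envelope check that the numbers above hang together (order of magnitude only): a bubble wall launching at ~5 m/s and travelling 25-100 μm out and back implies a lifetime of tens of microseconds, bracketing the ~11 μs reflectance measurement and the 10-20 μs quoted in the question.

```python
v_launch = 5.0                  # m/s, initial bubble-wall speed (from above)

for h_max in (25e-6, 100e-6):   # m, max bubble thickness, design-dependent
    # order-of-magnitude estimate: out a distance h_max at ~v_launch,
    # then back in about as fast
    lifetime = 2 * h_max / v_launch
    print(f"h_max = {h_max * 1e6:5.0f} um -> lifetime ~ {lifetime * 1e6:4.0f} us")
```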
{ "source": [ "https://physics.stackexchange.com/questions/698471", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/317815/" ] }
699,047
According to Wikipedia , all forces can be decomposed to four fundamental forces: gravity, electromagnetism, strong interaction and weak interaction. When I push someone, this generates a force. Which of the 4 forces is this composed of?
If I push someone, what fundamental force do I create? When a human pushes an object through physical contact, the nature of the force between the human and the object is electromagnetic. Atoms consist of positively charged nuclei with negatively charged electrons in orbitals around the nuclei. When two atoms are close enough, the electromagnetic force of repulsion between the electrons becomes significant, and the electron orbitals become deformed. The strong, weak and gravitational forces are insignificantly small in this context.
{ "source": [ "https://physics.stackexchange.com/questions/699047", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/326874/" ] }
699,060
Suppose we have 2 identical conducting spheres, and we remove 3 electrons from one of the spheres. Now how will the charge distribute over each sphere after charging the uncharged sphere by conduction? Charging by conduction involves bringing a charged object into contact with a neutral object; in this process both spheres will reach a common potential. Then how does the 3rd electron's charge distribute between the spheres after they are separated? Will it break into smaller components, will it be projected away from both spheres, or will it remain midway between them?
{ "source": [ "https://physics.stackexchange.com/questions/699060", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/319695/" ] }
699,799
How can the Lorentz factor $\frac{1}{\sqrt{1-\frac{v^2}{c^2}}}$ be understood? What does it mean? For example, what is the reason for the second power and square root? Why not $\frac{1}{1-\frac{v}{c}}$ , or what would happen if it took that form? Can you point me to other physical laws that make use of $\sqrt{1-r^2}$ so as to "translate" it. Let $T_0$ be the local period and $L_0$ the local length Light-clock moves $\bot$ light $\rightarrow$ $T=\frac{T_0}{\sqrt{1-\frac{v^2}{c^2}}}$ Light-clock moves $\parallel$ light $\rightarrow$ $T=\frac{2L}{c\left(1-\frac{v^2}{c^2}\right)}$ Referring to 1, how could the photons not miss the mirror? Q: How could I understand the Lorentz factor formula intuitively, or what is your concept of it?
Just to add to the other answers, one might ask, "why the square root"? The heart of it is the underlying geometry. When we ask "what is the diagonal of a rectangle with sides $a$ and $b$?", Euclid's rules eventually land us at the conclusion $\sqrt{a^{2} + b^{2}}$ , which then comes from the fact that, in a flat Euclidean space, distances are given by $ds^{2} = dx^{2} + dy^{2} + dz^{2}$ For reasons that ultimately come down to "we need a way to differentiate space from time in order to preserve causality, while also combining the two", it works out that the distance in spacetime is given by (the sign is an arbitrary choice that once made, must be adhered to, but is otherwise arbitrary -- most people in quantum field theory choose the top signs, most in general relativity choose the bottom signs) $$ds^{2} = \pm(c\, dt)^{2} \mp dx^{2} \mp dy^{2} \mp dz^{2}$$ where, for these purposes, $c$ has to be some "conversion factor" with the units of velocity for this to make sense, and it works out that this has been shown to be the "speed of light". Now, choosing the top signs, and noting that $\frac{dt}{dt} =1$ , we can divide both sides by $(c\,dt)^{2}$ , and the above becomes: $${}^{4}v^{2} = 1 - \left(\frac{{}^{3}v}{c}\right)^{2}$$ where the left-sided superscripts indicate three- and four-dimensional velocities. We can definitely be a lot more rigorous here, and there are great depths to work through, but if you're to ask "where does the square root and that minus sign come from?", the heart of it is that negative sign in the distance function, which can be taken as a core axiom of special relativity.
{ "source": [ "https://physics.stackexchange.com/questions/699799", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/330871/" ] }
700,012
When two like magnetic poles are brought together, there's a repulsive force felt that's inversely proportional to their separation. In the standard model, the answer to "What is transmitting this repulsive force through empty space between the two magnets?" is described as virtual photons. If I want to measure 15 newtons of force between two north poles of adjacent magnets, I can position my magnets accordingly and measure the force directly. I'll never see photons involved because of their virtual nature, but the force they're delivering is very real, and easy to measure. If I want to produce the same amount of force on my magnet by directly bombarding it with real photons, however, it would take an enormous amount of energy. It seems strange to think that the same particles responsible for producing a force strong enough to keep two massive objects apart, are barely capable of moving a light sail in microgravity. Why are real photons so much less efficient in carrying momentum than virtual photons? I have to believe virtual particles are the topic for a sizeable portion of questions on SE; if this is a duplicate please feel free to close, but from my review I haven't seen this addressed directly.
Within the usual handwavy accounts of virtual particles, the answer is "simple": Virtual particles aren't required to obey on-shell mass-energy relations (in this case $E=pc$ ), so there can be virtual particles with large momenta but very small energies. However and as usual, I would advise not to think in terms of virtual particles at all - they are artifacts of drawing perturbation theory as Feynman diagrams and you cannot even say non-perturbatively what they are supposed to be. The reason "virtual photons" act so differently from actual photons is that the term "virtual photon" doesn't describe a quantum state that would resemble a free, real photon at all, it describes a certain computation in an interacting quantum field theory. There isn't really any reason except for the name to expect this to have anything to do with the behaviour of actual photons. See this answer of mine for a lengthier discussion on why it is misleading to think of "virtual particles" as particles or as actual intermediate states at all.
{ "source": [ "https://physics.stackexchange.com/questions/700012", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/68611/" ] }
700,040
This question is from electrodynamics although I am stuck in the mechanics part here. $AB$ , $BC$ , $CA$ are wires/rods, and we need to calculate the moment of inertia (MOI) about an axis $xx'$ parallel to BC and in the plane of the loop; each wire/rod is of mass $m$ and length $l$ . I tried calculating the MOI about the centroid i.e. center of mass (COM), and then using the parallel axes theorem. The distance between a side and the centroid is d = $\frac{l}{2 \sqrt3}$ , so the MOI perpendicular to the plane about the COM would be 3( $\frac{ml^2}{12}$ + $md^2$ ). Now using the perpendicular axes theorem $I_x$ + $I_y$ = $I_z$ , here I know $I_y$ would equal $\frac {ml^2}{12}$ but how do I calculate $I_x$ ? Should I use this approach or is there any other approach for this? The answer given is $\frac{5ml^2}{4}$ . Any help would be appreciated
{ "source": [ "https://physics.stackexchange.com/questions/700040", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/301314/" ] }
700,440
This is the definition on Wikipedia : It is equal to the amount of work done when a force of 1 newton displaces a body through a distance of 1 metre in the direction of the force applied. I take that to mean, it is equal to the amount of work done when you push on an object with a force of 1 newton until the object has moved one metre. When I imagine examples it doesn't make sense though. Let’s say there is a small ball bearing floating in space, completely motionless relative to me. I push on it with a force of 1 newton until it moves 1 metre. OK, I've done 1 joule of work. But now let's replace the ball bearing with a bowling ball. If I push on it with a force of 1 newton until it moves 1 metre, it will accelerate much slower. It will take much longer to move 1 m, so I'm pushing on it with a force of 1 N for longer, so I feel like I have done more work moving it compared to the ball bearing.
Pushing the ball bearing with 1 N for one meter and pushing a bowling ball with 1 N for 1 meter do exactly the same amount of work: 1 joule. As you say: it will take a much longer time for the bowling ball to move one meter. This means that at the end of the 1 meter trip the bowling ball and the ball bearing will have the same kinetic energy , but the bowling ball will have much more momentum . Remember: Force $\times$ distance = change in energy Force $\times$ time = change in momentum . Failing to make this distinction confused everyone right up to the time of Newton, and still confuses people meeting mechanics for the first time.
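Pushing the numbers through makes this concrete. A minimal sketch in Python (the masses 0.01 kg for the ball bearing and 7 kg for the bowling ball are illustrative choices, not values from the question):

```python
import math

F, d = 1.0, 1.0  # 1 N applied over 1 m, starting from rest

def push(m):
    a = F / m                  # acceleration under the constant force
    v = math.sqrt(2 * a * d)   # final speed, from v^2 = 2 a d
    t = v / a                  # time needed to cover the metre
    work = F * d               # work done by the force
    ke = 0.5 * m * v**2        # kinetic energy gained
    p = m * v                  # momentum gained (equals F * t)
    return work, ke, p, t

w1, ke1, p1, t1 = push(0.01)   # ball bearing
w2, ke2, p2, t2 = push(7.0)    # bowling ball
print(ke1, ke2)   # both exactly 1 J
print(p1, p2)     # bowling ball carries far more momentum
print(t1, t2)     # because the push lasted far longer
```

Both pushes deliver exactly 1 J of kinetic energy; the bowling ball just takes longer, so the same 1 N force delivers more momentum, $p = Ft$.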
{ "source": [ "https://physics.stackexchange.com/questions/700440", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/331252/" ] }
701,325
I have read a passage in Wikipedia about the List of unsolved problems in physics and dimensionless physical constants: Dimensionless physical constants : At the present time, the values of various dimensionless physical constants cannot be calculated; they can be determined only by physical measurement.[4][5] What is the minimum number of dimensionless physical constants from which all other dimensionless physical constants can be derived? Are dimensional physical constants necessary at all? One of these fundamental physical constants is the Fine-Structure Constant . But why does Wikipedia say that these constants, such as the fine-structure constant, can only be measured and not theoretically calculated? As far as I know, the fine-structure constant $\alpha$ for the electromagnetic force, for example, can be theoretically calculated from this expression: $$ \alpha=\frac{e^{2}}{4 \pi \varepsilon_{0} \hbar c} \approx \frac{1}{137.03599908} $$ So why then does Wikipedia say that it can only be measured but not calculated? I don't understand the meaning of the above-quoted Wikipedia text.
In "natural units", we set $\hbar=c=\epsilon_0=1$ . In these units, the equation you wrote down becomes \begin{equation} \alpha = \frac{e^2}{4\pi} \end{equation} In this notation, it is perhaps more obvious that the equation you wrote down is not really a way to calculate $\alpha$ , since $\alpha$ depends on another dimensionless constant $e^2$ that we do not know how to calculate. In fact, you can read this equation as defining $e^2$ , given $\alpha$ . The dream of "calculating $\alpha$ from first principles" would be to have a formula where $\alpha$ was expressed purely in terms of mathematical constants like $2$ and $\pi$ .
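To see the circularity concretely: the formula in the question only "computes" $\alpha$ from other measured constants. A quick sketch in Python using the SI/CODATA values (every numerical constant below is a measured or defined input, not a theoretical output):

```python
import math

# SI (2019 redefinition) / CODATA values:
e = 1.602176634e-19        # elementary charge, C (exact by definition)
hbar = 1.054571817e-34     # reduced Planck constant, J s
c = 299792458.0            # speed of light, m/s (exact by definition)
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m (measured)

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)   # ~137.036 -- the dimensionless number nobody can derive
```

Changing the measured value of $\varepsilon_0$ would change $\alpha$; nothing in the formula predicts the number $137.036$ from first principles.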
{ "source": [ "https://physics.stackexchange.com/questions/701325", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/183646/" ] }
701,978
I was reading about Larmor precession of the electron in a magnetic field in Griffiths QM when I came across the equation $$ i\hbar \frac{\partial \mathbf \chi}{\partial t} = \mathbf H \mathbf \chi, $$ where $\mathbf\chi(t)$ is a 2D vector that represents only the spin state and does not include information of the wave function. The Hamiltonian is $$ \mathbf H = - \gamma \mathbf B \cdot \mathbf S = - \frac{\gamma B_0 \hbar}{2} \begin{pmatrix} 1 & 0 \\ 0 & -1\end{pmatrix} $$ for a uniform magnetic field $\mathbf B = B_0 \hat k$ . Why should these spinors also obey the Schrödinger equation? The book does not provide any further information as to why this should hold.
The Schrödinger equation applies to any quantum system or quantum field theory, as long as you have a continuous time dimension. The equation $$\hat H\psi = i\hbar\frac{\partial \psi}{\partial t} $$ is just the statement that the Hamiltonian $\hat H$ is the infinitesimal translation operator in time. By Noether's theorem, it corresponds to the total energy of your system. In order to describe a quantum system, you need to first decide what the degrees of freedom are. This decides which Hilbert space this equation is defined on. For example, for a single particle in $\mathbb R^d$ , the Hilbert space is $\mathcal H = L^2(\mathbb R^d)$ . And thus the wavefunction is $\psi(\mathbf r,t)$ . Since the Hamiltonian is the total energy, we have (here for a non-relativistic particle) $$\left[-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf r)\right]\psi(\mathbf r,t) = i\hbar \frac{\partial}{\partial t}\psi(\mathbf r, t).$$ Now imagine that the particle has some internal degree of freedom, such as spin. For a spin- $\tfrac 12$ particle, the Hilbert space is two dimensional $\mathbb C^2$ . Quantum mechanics tells us that the Hilbert space of the full system is a tensor product $$ \mathcal H = L^2(\mathbb R^d)\otimes\mathbb C^2$$ and wavefunctions are thus $$ \psi(\mathbf r,t) = \begin{pmatrix} \psi_\uparrow(\mathbf r,t) \\ \psi_\downarrow(\mathbf r,t)\end{pmatrix}.$$ The Hamiltonian will now also contain terms coupling spins together, but it depends on which forces are present. For example in $d=3$ you could have $$\left[-\frac{\hbar^2}{2m}\nabla^2\,\mathbf 1 + V(\mathbf r)\,\mathbf 1 + \alpha \,\mathbf B\cdot\mathbf\sigma\right]\psi(\mathbf r,t) = i\hbar \frac{\partial}{\partial t}\psi(\mathbf r, t).$$ Here $\mathbf 1$ is a $2\times 2$ identity matrix and $\mathbf\sigma = (\sigma_x,\sigma_y,\sigma_z)$ are the Pauli matrices. 
Sometimes people write down a Schrödinger equation for only position or only spin degrees of freedom, if they don't couple to other degrees of freedom in the problem at hand. For example, if the Hamiltonian only has $\mathbf 1$ for the spin then the Hamiltonian will be block diagonal and $\psi_\uparrow$ and $\psi_\downarrow$ will be completely decoupled. You could therefore forget spin, and only keep position in your description. Or if the particle is trapped somewhere and cannot move, then only the spin degrees of freedom are relevant for the description. But the Schrödinger equation is valid for any quantum system, it just describes how the system evolves in time.
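For the spin-only Hamiltonian quoted in the question ($\mathbf B = B_0\hat k$, so $H$ is diagonal), the two-component Schrödinger equation can be solved by inspection and checked numerically: each component just picks up a phase $e^{-iEt/\hbar}$. A sketch in Python, with $\hbar = \gamma B_0 = 1$ as arbitrary illustrative units and the spin starting along $+x$:

```python
import cmath, math

# H = -(gamma*B0*hbar/2) * sigma_z is diagonal with eigenvalues E_up, E_dn.
hbar, gammaB0 = 1.0, 1.0
E_up, E_dn = -gammaB0 * hbar / 2, +gammaB0 * hbar / 2

def chi(t):
    # Spinor starting along +x: (1, 1)/sqrt(2); each component evolves
    # with its own phase factor exp(-i E t / hbar).
    a = cmath.exp(-1j * E_up * t / hbar) / math.sqrt(2)
    b = cmath.exp(-1j * E_dn * t / hbar) / math.sqrt(2)
    return a, b

def sx_expect(t):
    a, b = chi(t)
    # <S_x> = (hbar/2) chi^dagger sigma_x chi = hbar * Re(a* b)
    return hbar * (a.conjugate() * b).real

for t in (0.0, math.pi / 2, math.pi):
    print(t, sx_expect(t))   # traces out (hbar/2) cos(gamma*B0*t)
```

The resulting $\langle S_x\rangle = (\hbar/2)\cos(\gamma B_0 t)$ is exactly the Larmor precession that motivated the question.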
{ "source": [ "https://physics.stackexchange.com/questions/701978", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/294400/" ] }
702,077
According to the Larmor formula, the power radiated by an accelerated electron is $P_0=\frac{e^2 a^2}{6\pi \varepsilon_0 c^3}$ . Radio waves can be radiated from an antenna by accelerating electrons in the antenna. Suppose the number of accelerating electrons is $n$ . The total power radiated can be calculated by two methods: In the formula above for $P_0$ , we can replace $e$ by $ne$ . The total power is $n^2$ times $P_0$ , giving $P_1=n^2 P_0$ . For a scalar quantity like power, we can simply add (superpose) the power radiated by each electron to calculate the total radiation power. Therefore, $P_2=nP_0$ . How can we resolve this paradox?
You cannot add the power of two waves. You can only add the electric and magnetic fields. Imagine that you have two charges producing e.m. waves with power $P$ . If both waves have electric and magnetic fields that cancel each other, you could not add the powers; indeed, the powers would have cancelled as well. However, if the e.m. waves have constructive interference, then the power will have been multiplied by $4$ as: $$ P \propto E \times B = 2E_0 \times 2B_0 = 4 (E_0 \times B_0) $$
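This is easy to check with a toy model: treat each source as a unit complex amplitude and compare a fixed relative phase with an average over all relative phases (a sketch; the scalar amplitude stands in for the transverse fields):

```python
import cmath, math

E0 = 1.0

def intensity(phi):
    # Two unit-amplitude waves with relative phase phi; intensity ~ |E|^2.
    E = E0 + E0 * cmath.exp(1j * phi)
    return abs(E) ** 2

constructive = intensity(0.0)       # fields add: (2 E0)^2 = 4, not 2
destructive = intensity(math.pi)    # fields cancel: 0

# Incoherent (independent) sources: average |E|^2 over a uniform
# relative phase; |1 + e^{i phi}|^2 = 2 + 2 cos(phi) averages to 2.
N = 10000
avg = sum(intensity(2 * math.pi * k / N) for k in range(N)) / N

print(constructive, destructive, avg)
```

With $n = 2$ this is the whole paradox in miniature: coherent currents (the antenna) give $n^2 P_0 = 4P_0$, while independent emitters average to $n P_0 = 2P_0$.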
{ "source": [ "https://physics.stackexchange.com/questions/702077", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/271429/" ] }
702,333
In uniform circular motion, acceleration is expressed by the equation $$a = \frac{v^2}{r}. $$ But this is a differential equation, and solving it gives the result $$v = -\frac{r}{c+t}.$$ This doesn't make any sense. Shouldn't velocity be constant? Or at least something trigonometric?
That equation is misleading. You wrongly assume that $a$ is $dv/dt$ here, which it isn't. Let us first rewrite the centripetal acceleration equation properly: $$|\vec {\mathbf a}_N| = \dfrac {\left|\vec {\mathbf v}\right|^2} {R} \Longleftrightarrow a_N = \dfrac {v^2} R$$ where $a_N$ is the normal acceleration. The normal acceleration can be expressed as $\vec{\mathbf a} - \mathbf{\vec a}_T$ where $a_T$ is the tangential component of the acceleration $a$ . For uniform circular motion, $a_T=0$ so what we're left with is $$\left|\dfrac{d\mathbf{\vec v}}{dt}\right| = \dfrac{|\mathbf{\vec v}|^2}{R} \Longleftrightarrow a=\dfrac{v^2}R$$ Note that $a$ here is $a=\sqrt{a_x^2 +a_y^2}$ in 2-D. Since $\mathbf{\vec v} = \left( v_x, v_y\right)$ , we can say $$a=\sqrt{\left(\dfrac{dv_x}{dt}\right)^2+\left(\dfrac{dv_y}{dt}\right)^2}.$$ Putting this into our centripetal acceleration equation, we get $$\sqrt{\left(\dfrac{dv_x}{dt}\right)^2+\left(\dfrac{dv_y}{dt}\right)^2}=\dfrac{v_x^2 + v_y^2 }R \tag{*}$$ This is the proper differential equation we are solving. Note that since $a_T=0$ , we have that $\dfrac d{dt} \left| \mathbf{\vec v}\right| = \dfrac d{dt} \sqrt{v_x^2 + v_y^2}=0$ , so $v_x a_x + v_y a_y = \mathbf{\vec v}\cdot\mathbf{\vec a}=0$ , which serves as our second equation alongside $(*)$ . Now, since you know that the solution to uniform circular motion is $\mathbf{\vec r} = \left( R\cos(\omega t), R\sin (\omega t)\right)$ , you can go ahead and verify that it indeed satisfies $(*)$ .
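You can also run that final check numerically rather than by hand: differentiate $\mathbf{\vec r}(t) = (R\cos(\omega t), R\sin(\omega t))$ with finite differences and confirm both $|\mathbf{\vec a}| = v^2/R$ and $\mathbf{\vec v}\cdot\mathbf{\vec a} = 0$. A sketch in Python ($R$, $\omega$ and $t$ are arbitrary illustrative values):

```python
import math

R, w = 2.0, 3.0   # illustrative radius and angular frequency

def r(t):
    return (R * math.cos(w * t), R * math.sin(w * t))

def diff(f, t, h=1e-5):
    # Central finite difference of a 2-D curve.
    (x1, y1), (x2, y2) = f(t - h), f(t + h)
    return ((x2 - x1) / (2 * h), (y2 - y1) / (2 * h))

t = 0.7
vx, vy = diff(r, t)                      # velocity
ax, ay = diff(lambda s: diff(r, s), t)   # acceleration

speed = math.hypot(vx, vy)
a_mag = math.hypot(ax, ay)
print(a_mag, speed**2 / R)    # both ~ R w^2: |a| = v^2 / R
print(vx * ax + vy * ay)      # ~ 0: no tangential component
```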
{ "source": [ "https://physics.stackexchange.com/questions/702333", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/332266/" ] }
703,101
Disclaimer: I'm not talking about FTL travel here. I'm also not talking about any weird space-warping mechanics like wormholes and such. I've always thought that if a star was 4 light years away, then it would be impossible to reach it with less than 4 years of travel time. Therefore, any star more than 100 light years away would require multiple generations on the ship in order to get there, even if they travelled at the speed of light (or close to it). The people who set off on the mission wouldn't be alive when the spaceship arrived at the star. But the other day I had a realisation that, for anyone travelling close to the speed of light (0.999c), the length between them and the star would contract and they could get there almost instantly (in their reference frame). This also makes sense for someone observing from Earth; they would see me take about 100 years to get to this star, but, because I'm going so fast, they would see me barely age. By the time I had got to the star, they would observe me still being about the same age as I was when I set off, even though they are 100 years older. In my reference frame, I would have also barely aged and I would have reached a star that's 100 light years away in a lot less than 100 years of travel time. So is this assessment correct? Could I reach a star that's 100 light years away in my lifetime by going close to the speed of light? It would be good to get an answer in two parts: in a universe that's not expanding, and in a universe that is expanding.
I think your argument is correct. For example, in the Earth reference frame version, you would have aged $L/c\gamma$ , where $L$ is the distance from Earth to the star in Earth's frame, and $\gamma=(1-v^2/c^2)^{-1/2}$ . Since $v$ can be arbitrarily close to $c$ , then $\gamma$ can be arbitrarily large, so your age can be arbitrarily small for fixed $L$ . (Of course, it takes increasingly huge amounts of energy to increase $v$ as you get close to $c$ , which will kill any attempt to do this in practice, but I take it you are ignoring this constraint.) In terms of an expanding universe, what you would want to do is solve the geodesic equation for a massive particle in an FLRW spacetime, and then compute your proper time to travel on a path that led from Earth to the star. There are definitely cases where you could reach the star: if the time it took you to reach the star in the Earth frame (which is also the cosmic rest frame) is less than the Hubble time (the timescale over which $L$ changes significantly). There are also definitely cases where you could not reach the star: in an accelerating Universe like the one we find ourselves in now, if the star was outside of our cosmological horizon, you can never reach the star no matter how fast you travel. In between those extremes, the answer will in general depend on $L$ and the expansion history of the Universe. Essentially, the expansion of the Universe will increase the amount by which you will age to reach the star compared to what you would get in a non-expanding Universe, for fixed $v$ . If it's possible to reach the star in a finite amount of time, then you can make that age as small as you want by cranking up your velocity. However, depending on the expansion history, there can be some stars where you cannot reach them with any amount of time. These would be stars where light could not travel from Earth to the star.
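For the flat-spacetime (non-expanding) part, the numbers for the question's example are easy to produce. A sketch in Python for a 100-light-year trip at $v = 0.999c$ (ignoring the acceleration phases and the energy cost):

```python
import math

def proper_time_years(L_ly, beta):
    # Earth-frame trip time is L/v; the traveller ages L/(v*gamma).
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    earth_time = L_ly / beta          # years, with L in light-years
    return earth_time / gamma, earth_time, gamma

tau, t_earth, gamma = proper_time_years(100.0, 0.999)
print(gamma)     # ~22.4
print(t_earth)   # ~100.1 years pass on Earth
print(tau)       # ~4.5 years of ageing for the traveller
```

So roughly 100 years pass on Earth while the traveller ages about four and a half.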
{ "source": [ "https://physics.stackexchange.com/questions/703101", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/104527/" ] }
703,268
I'm wondering whether angular momentum is just a convenience that I could hypothetically solve any mechanics problems without ever using the concept of angular momentum. I came up with this question when I saw a problem in my physics textbook today. In the problem, a puck with known velocity hits a lying stick. The puck continues without being deflected, and the stick starts both linear and angular motion. There are three unknowns: velocity of puck and stick after collision, and the angular speed of the stick. So, we need three equations: conservation of linear momentum, kinetic energy, and angular momentum. So, for instance, is it possible to solve this problem without using angular momentum? Also, how would a physics simulator approach this problem?
I'm wondering whether angular momentum is just a convenience that I could hypothetically solve any mechanics problems without ever using the concept of angular momentum. If your criterion for something being a convenience is that you could solve problems without it then everything in physics is just a convenience. There are an infinite number of possible mathematical formulations. So, in principle, it should be possible to convert any mathematical problem into a different formulation that avoids the use of any specific concept that you would like to avoid (or at least hides it so that it is not apparent that you are using the concept). That said, angular momentum is conserved and it is related (by Noether's theorem) to the fact that the laws of physics are symmetric under spatial rotation. Both conserved quantities and symmetries are very important in modern physics. So even if you classify it as a convenience, it is one of the most important and pervasive conveniences in physics.
{ "source": [ "https://physics.stackexchange.com/questions/703268", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/298720/" ] }
703,273
I wonder if, with the rise of the quantum computing era, we could somehow test for failures of quantum mechanics (e.g. non-linearities) up to certain scales. That is, how could quantum computers test key assumptions of quantum mechanics? 1st. Linearity/superposition. 2nd. Entanglement. 3rd. Contextuality (QM is contextual, but how to test contextuality failures?) 4th. Uncertainty principles/modified uncertainty principles. 5th. Unitarity. In other words, could quantum computers modified by nature behave non-linearly, unentangledly, non-contextually, certainly and non-unitarily? Are there experimental set-ups to probe those features, or how could a quantum computer show a single non-standard quantum computing behaviour? Also, I wonder if the failure or success of any particular quantum algorithm could show hints of "ultra-quantum" theories beyond standard quantum mechanics, or weaken some hypotheses/axioms of the current quantum mechanics postulates.
{ "source": [ "https://physics.stackexchange.com/questions/703273", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/22916/" ] }
704,112
I was recapping the forced oscillations, and something troubled me. The equation concerning forced oscillation is: $$ x=\frac{F_0}{m(\omega_0^2-\omega^2)}\cos(\omega t) $$ I don't understand why this equation predicts that the amplitude will approach infinity as $\omega$ approaches $\omega_0$ . One can come up with the argument that in the actual world, there are damping forces, friction etc. The trouble is, however, even in the ideal world, the amplitude wouldn't approach infinity as the spring's restoring force will catch the driving force at some point, and the system will stay in equilibrium. What I'm wondering is Is my suggestion in the last paragraph correct? If it is correct, what assumption led us to the erroneous model of $x$ ? If it is not correct, what am I missing?
$x(t)$ does not instantaneously go to infinity; the case $\omega=\omega_0$ needs special care. I think an answer directly from the math will help. The driven SHM equation of motion is $$\tag{1} x''+\omega_0^2x=\cos(\omega t) $$ Where I have set all other constants to unity. The general solution of (1) is equal to the homogeneous plus any particular solution $$ x(t)=x_H(t)+x_P(t) $$ The homogeneous solution is simply $$ x_H(t)=C\sin(\omega_0 t)+D\cos(\omega_0 t) $$ Where $C$ and $D$ are determined by the initial conditions. Note that the amplitude of $x_H$ is fixed for all time by the constants $C$ and $D$ . To find a particular solution, I will use undetermined coefficients . Starting with the ansatz $$\tag{2} x_P(t)\stackrel{?}{=}A\sin(\omega t)+B\cos(\omega t) $$ We substitute (2) into (1). If we can consistently solve for the constants $A$ and $B$ , then we are done. When $\omega\neq \omega_0$ , the result is $$ A=0\\ B=\frac{1}{\omega_0^2-\omega^2} $$ Which corresponds to the solution in OP. However, when $\omega=\omega_0$ , there are no $A$ and $B$ that make (2) a solution of (1). Try it and see! In this case, we must modify the ansatz (2) to read $$\tag{3} x_P(t)\stackrel{?}{=}At \sin(\omega_0 t)+B t\cos(\omega_0 t) $$ This is a standard procedure with undetermined coefficients: when the homogeneous solution has a term that is equal to the RHS of (1). Substitute (3) into (1) and solve for $A$ and $B$ , which yields $$ A=\frac{1}{2\omega_0} \\ B=0\\ \therefore x_P(t)=\frac{t \sin(\omega_0 t)}{2\omega_0} $$ Therefore: the particular solution is an oscillating function with amplitude that grows linearly in time . This conclusion follows only from the differential equation (1) with no other physical input or hand-waving. Edit: With the general solution $$\tag{4} x(t)=C\cos(\omega_0 t+\phi) + \frac{F_0}{m}\frac{t}{2\omega_0}\sin(\omega_0 t) $$ in hand, we may answer your question about the relative phases of the spring and driving force. 
I've reinstated units and written $x_H$ in a more convenient, equivalent, form. The spring force is $$ F_{\text{sp}}(t)=-k x(t) $$ If $C\ll \frac{F_0 t}{\omega_0 m} $ (large times) then the second term on the RHS of (4) is large compared to the first. So, using $\omega_0^2=k/m$ , we can write approximately $$ F_{\text{sp}}(t)\approx-\frac{F_0 \omega_0 t}{2}\sin(\omega_0 t) \qquad ;\qquad C\ll \frac{F_0 t}{\omega_0 m} $$ While the driving force defined in (1) is $F_0 \cos(\omega_0 t)$ , compare the $\sin$ to the $\cos$ . So for large time, the driving force is approximately $\pi/2$ out of phase with the spring force.
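The linear growth of the resonant solution can be confirmed by integrating (1) directly. A sketch in Python with $\omega = \omega_0 = 1$ and $F_0/m = 1$ (so the equation is $x'' + x = \cos t$) and zero initial conditions, for which the exact solution is just $x_P(t) = t\sin(t)/2$:

```python
import math

def rhs(t, x, v):
    # x'' + x = cos(t), written as the first-order system (x', v').
    return v, math.cos(t) - x

def rk4(t_end, dt=1e-3):
    # Classical 4th-order Runge-Kutta from rest: x(0) = x'(0) = 0.
    n = int(round(t_end / dt))
    x, v = 0.0, 0.0
    for i in range(n):
        t = i * dt
        k1x, k1v = rhs(t, x, v)
        k2x, k2v = rhs(t + dt / 2, x + dt * k1x / 2, v + dt * k1v / 2)
        k3x, k3v = rhs(t + dt / 2, x + dt * k2x / 2, v + dt * k2v / 2)
        k4x, k4v = rhs(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return x

t_end = 20.0
x_num = rk4(t_end)
x_exact = t_end * math.sin(t_end) / 2   # envelope grows linearly in t
print(x_num, x_exact)
```

The numerical solution tracks $t\sin(t)/2$, whose envelope has already grown to about 10 by $t = 20$ even though the drive amplitude is 1.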
{ "source": [ "https://physics.stackexchange.com/questions/704112", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/298720/" ] }
704,530
In my physics class, I learned about "nonluminous objects" - these are objects which don't produce their own light. But, don't all objects emit light by black body radiation? So are all objects luminous objects? (except objects at 0K, if we get there somehow), or am I missing some point here?
Black bodies are in equilibrium with their surroundings - they absorb radiation from their surroundings and then re-emit it. Luminous bodies have internal energy sources, i.e., there is energy produced within these bodies, which is then emitted in the form of radiation. In this sense these bodies are not in thermal equilibrium, as there is a constant energy transfer from within the body to its surroundings, and this energy does not return to the body. Thermonuclear reactions within the Sun are an obvious example. Note, however, that the radiation emitted in this way is reabsorbed/scattered many times before it reaches the surface, which results in the radiation spectrum being very similar to the black body spectrum - if the Sun is considered as isolated, the radiation reaching its surface can be considered as thermal/equilibrium radiation. However, the Sun is obviously not in thermal equilibrium with the surrounding vacuum (they have different temperatures - 6000K vs 0K). See also How does radiation become black-body radiation?
{ "source": [ "https://physics.stackexchange.com/questions/704530", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/333379/" ] }
704,818
The well-known Heisenberg commutator relation $$[p,q]=\cfrac{\hbar}{i} \cdot \mathbb{I}$$ introduces the imaginary unit $i$ into quantum mechanics. I ask for the deeper reason: why does the correspondence with real coordinates $q$ and $p$ introduce complex numbers in the commutator? Is the reason from physics or from mathematics? Aside: I'm familiar with complex numbers and with the fact that some results from the real domain only find a satisfactory explanation after generalization to the complex domain.
Because operators $p$ and $q$ represent physical observables (i.e. they have real eigenvalues), they need to be Hermitean (i.e. $p^\dagger=p$ and $q^\dagger=q$ ). From this it is easy to show that their commutator $[p,q]$ is anti-Hermitean. $$[p,q]^\dagger = (pq-qp)^\dagger = (pq)^\dagger-(qp)^\dagger = q^\dagger p^\dagger-p^\dagger q^\dagger = qp-pq = -[p,q]$$ You can get a Hermitean operator from this anti-Hermitean $[p,q]$ only by multiplying it by $i$ . $$(i[p,q])^\dagger = i[p,q]$$ So you can write Heisenberg's commutator relation also as $$i[p,q]=\hbar\mathbb{I}$$ with Hermitean operators on both sides. The operator on the right side corresponds to the very trivial physical observable, which always gives the same measurement value $\hbar$ .
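The same algebra can be spot-checked in finite dimensions: the commutator of any two Hermitean matrices is anti-Hermitean. A sketch in pure Python (the two $2\times 2$ matrices are arbitrary Hermitean examples, not representations of $p$ and $q$, which require an infinite-dimensional space):

```python
# Two arbitrary 2x2 Hermitean matrices (A = A^dagger, B = B^dagger).
A = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]
B = [[0 + 0j, 2j],
     [-2j, 1 + 0j]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(X):
    return [[X[j][i].conjugate() for j in range(2)] for i in range(2)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

C = sub(matmul(A, B), matmul(B, A))   # the commutator [A, B]

# [A,B] is anti-Hermitean: [A,B]^dagger = -[A,B] ...
anti = all(dagger(C)[i][j] == -C[i][j] for i in range(2) for j in range(2))
# ... so i[A,B] is Hermitean:
iC = [[1j * C[i][j] for j in range(2)] for i in range(2)]
herm = all(dagger(iC)[i][j] == iC[i][j] for i in range(2) for j in range(2))
print(anti, herm)   # True True
```

Every entry here is a Gaussian integer, so the comparisons are exact; the nonzero commutator also shows that anti-Hermiticity, not vanishing, is the generic outcome.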
{ "source": [ "https://physics.stackexchange.com/questions/704818", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/89746/" ] }
704,819
I'm working on this problem to determine the ground state angular momenta ( $S,L,J$ ) for nitrogen using Hund's rules, and I want to see if the total orbital angular momentum $L=2$ is possible (the states have to be antisymmetric). In this case, the three $2p$ electrons all have $l=1$ . To my understanding, if the "top of the ladder" ( $L = M$ ) of the composite states with $L=2$ is symmetric/antisymmetric, then the whole collection of states with $L=2$ is symmetric/antisymmetric. Hence, we can examine the composite state $|22\rangle$ . I am unable to find a Clebsch-Gordan table for 3 particles, so I just added up the first two electrons, and then added the third to the composite state of the two. Since the third electron has $l=1$ , the composite state of the first two electrons can have either $L_{12}=1$ or $L_{12}=2$ in order to obtain that final $L=2$ . With reference to the Clebsch-Gordan table, if $L_{12} = 1$ , then $$ |22\rangle = |11\rangle_{12}|11\rangle_3 = \frac1{\sqrt 2}(|11\rangle_1 |10\rangle_2 - |10\rangle_1 |11\rangle_2 )|11\rangle_3, $$ and if $L_{12} = 2$ , then $$ |22\rangle = \sqrt{\frac23}|22\rangle_{12}|10 \rangle_3 - \sqrt{\frac13}|21\rangle_{12}|11 \rangle_3 = \sqrt{\frac23}|11\rangle_1|11\rangle_2|10 \rangle_3 - \sqrt{\frac13}\sqrt{\frac12}(|11\rangle_1 |10\rangle_2 + |10\rangle_1 |11\rangle_2)|11\rangle_3. $$ I noticed that not only are these states not antisymmetric, they are not even symmetric, meaning that if I put the three electrons in the $|22\rangle$ state, exchanging two of the electrons will change the wave function. How could that be if electrons are identical? Otherwise, what went wrong with my procedure, and what is the proper way of adding the angular momenta of three particles?
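The symmetry check in the question can be run numerically; the sketch below (pure Python, with ad hoc helper names of my own) stores the $L_{12}=1$ candidate $|22\rangle$ as amplitudes over product kets $|m_1\, m_2\, m_3\rangle$ and applies particle exchanges. It confirms the observation above: the state is antisymmetric under swapping particles 1 and 2, but neither symmetric nor antisymmetric under swapping particles 1 and 3.

```python
from math import sqrt

# |22> built from L12 = 1, as amplitudes over product kets (m1, m2, m3):
psi = {(1, 0, 1): 1 / sqrt(2), (0, 1, 1): -1 / sqrt(2)}

def swap(state, i, j):
    """Exchange particles i and j (0-based) in a product-basis state."""
    out = {}
    for ms, amp in state.items():
        ms = list(ms)
        ms[i], ms[j] = ms[j], ms[i]
        key = tuple(ms)
        out[key] = out.get(key, 0) + amp
    return out

def equal(a, b, sign=1):
    """Is a == sign * b as state vectors?"""
    keys = set(a) | set(b)
    return all(abs(a.get(k, 0) - sign * b.get(k, 0)) < 1e-12 for k in keys)

p12 = swap(psi, 0, 1)
p13 = swap(psi, 0, 2)
print(equal(p12, psi, -1))                      # True: antisymmetric under 1<->2
print(equal(p13, psi, 1), equal(p13, psi, -1))  # False False: no symmetry under 1<->3
```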
{ "source": [ "https://physics.stackexchange.com/questions/704819", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/294400/" ] }
704,820
I don't understand why torques produced by internal forces cancel in the sum $\sum \tau$ . My textbook gives the following explanation: due to N3L, if a particle exerts a force on another particle of the body, there will be an equal in magnitude but opposite in direction reaction force. If the line of action of the two forces is the line joining the particles, the lever arm will be equal, and thus the torque produced by each force will be equal and opposite again. However, what if multiple forces act on the particles (if more than 2 particles are interacting)? Then the torque produced by the net force on one particle might not lie on the same line of action as before and might not have the same magnitude anymore, so how come the torques cancel nonetheless (think of a system of three particles, where the first and the second particle interact with the third). [I haven't studied angular momentum yet and have only seen torque in 2D, $\tau = Fl = rF\sin\phi = F_{\tan}r$ ]
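A numerical sketch of the three-particle worry in the question (positions and coupling strengths below are arbitrary values of my own choosing): even when each particle feels forces from two others, the internal torques still cancel, because they cancel pair by pair. Each action-reaction pair contributes $\vec r_i \times \vec F_{ij} + \vec r_j \times \vec F_{ji} = (\vec r_i - \vec r_j) \times \vec F_{ij}$, which vanishes when the force lies along the joining line, so the sum over pairs is zero without the net forces ever needing to be collinear.

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1])

def cross2d(a, b):
    # Scalar (z-component) cross product in 2D: the signed torque.
    return a[0] * b[1] - a[1] * b[0]

# Arbitrary particle positions and pairwise central-force strengths:
pos = [(0.0, 0.0), (2.0, 1.0), (-1.0, 3.0)]
k = {(0, 1): 1.7, (0, 2): -0.4, (1, 2): 2.3}

tau = 0.0
for (i, j), kij in k.items():
    # Force on i from j lies along the line joining the particles ...
    f_on_i = tuple(kij * c for c in sub(pos[j], pos[i]))
    # ... and Newton's third law gives the reaction on j.
    f_on_j = tuple(-c for c in f_on_i)
    tau += cross2d(pos[i], f_on_i) + cross2d(pos[j], f_on_j)

print(abs(tau) < 1e-9)  # True: the internal torques sum to zero
```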
{ "source": [ "https://physics.stackexchange.com/questions/704820", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/332598/" ] }
705,017
It is often claimed that Special Relativity had a huge impact on humanity because its understand enabled the invention of the nuclear bomb. Often, the formula $E = mc^2$ is displayed in this context, as if it would explain how nuclear bombs work. I don't understand this line of reasoning. The key to understanding nuclear fission is understanding neutron scattering and absorption in U-235. For this, quantum mechanics is key. Bringing quantum mechanics and special relativity together correctly requires quantum field theory, which wasn't available in the 1940's. When U-236 breaks up, it emits neutrons at 2 MeV (says Wikipedia ), which is a tiny fraction of their rest mass, that is to say, far below light speed. Meaning hopefully that non-relativistic QM calculations should give basically the same answers as relativistic ones. So it seems to me that in an alternate universe where quantum mechanics was discovered and popularised long before special relativity, people would still have invented fission chain reaction and the nuclear bomb, without knowing about special relativity. Or, more precisely: You don't need to understand special relativity to understand nuclear bombs. Right? Of course it's true that you can apply $E = mc^2$ and argue that the pieces of the bomb, collected together after the explosion, are lighter than the whole bomb before the explosion. But that's true for a TNT bomb as well. And what's more, it's an impractical thought experiment that has no significance for the development of the technology.
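The "far below light speed" claim in the question is easy to check with a back-of-envelope calculation (the neutron rest energy of about 939.6 MeV is a standard value, not quoted above):

```python
from math import sqrt

KE = 2.0          # fission-neutron kinetic energy, MeV
MC2 = 939.565     # neutron rest energy, MeV

beta_classical = sqrt(2 * KE / MC2)    # v/c from (1/2) m v^2 = KE
gamma = 1 + KE / MC2                   # total energy / rest energy
beta_exact = sqrt(1 - 1 / gamma**2)    # exact relativistic v/c

print(round(beta_classical, 4), round(beta_exact, 4))  # both about 0.065
```

Both values come out near 6.5% of the speed of light and agree to within a fraction of a percent, which is the sense in which the non-relativistic treatment of the neutrons is adequate.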
It's certainly relevant. Mass is very measurably non-conserved in nuclear reactions. Using special relativity allows us to determine the potential energy release of a given nuclear reaction just by directly measuring the masses of the nuclei involved and using $E=mc^2$ to convert the mass difference to an energy. For instance, consider the reaction: $$ \rm ^{235}U+n\to {^{140}Xe}+ {^{94}Sr}+2n $$ The masses on the left add up to about 236.05 atomic mass units, while the masses on the right add up to about 235.85 atomic mass units. Multiplying the difference by $c^2$ gives an energy of $185~\rm MeV$ . In other words, special relativity allows us to take measurements of a few hundred masses and determine fission candidates that way, rather than having to painstakingly determine the energy release experimentally for every possible isotope. It also gives us a way to measure the energy release independent of having to actually precisely measure the kinetic energy of all the daughter particles (which is hard), and instead only requires us to identify the daughter isotopes and measure their masses (which is less hard).
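The arithmetic in the answer can be sketched as follows (the amu-to-MeV conversion factor is the standard CODATA value, not stated in the answer):

```python
M_LEFT = 236.05        # amu: U-235 + n, from the answer
M_RIGHT = 235.85       # amu: Xe-140 + Sr-94 + 2n, from the answer
AMU_TO_MEV = 931.494   # 1 amu * c^2 in MeV (standard value)

delta_m = M_LEFT - M_RIGHT       # mass defect, amu
energy = delta_m * AMU_TO_MEV    # E = (delta m) c^2, in MeV
print(round(energy))             # ~186 MeV, close to the quoted ~185 MeV
```

The small discrepancy with the quoted figure comes entirely from rounding the masses to two decimal places before subtracting.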
{ "source": [ "https://physics.stackexchange.com/questions/705017", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/1592/" ] }
705,385
I'm reading this Physics Today article on magnetic monopoles, and I'm a bit confused by a discussion of the necessity of Dirac strings for compatibility with quantum mechanics. I'll reproduce the relevant discussion here: At first sight, magnetic monopoles seem to be incompatible with quantum mechanics. This is because, in quantum mechanics, electromagnetic fields have to be described in terms of a scalar potential $\phi$ and vector potential, $\vec{B} = \nabla \times \vec{A}$ , and it follows [that] the field must then be sourceless, $\nabla \cdot \vec{B} = 0$ It's clear, of course, that if $\vec{B} = \nabla \times \vec{A}$ , then $\nabla \cdot \vec{B} = 0$ . I don't understand, however, what quantum mechanics has to do with any of this. Under classical electrodynamics, the magnetic vector potential is defined to be that whose curl gives the magnetic field. Together with electric potential $\phi$ , we may specify the electric field. It is stated earlier in the article that magnetic monopoles are compatible with classical electrodynamics. It goes on to suggest that they are (seemingly) incompatible with quantum mechanics, but then argues this using what seems to be classical electrodynamics. How is this related to QM?
The argument here is supposed to go something like this: Classical electrodynamics, as long as you don't insist on a Lagrangian formulation, is fully described by Maxwell's equations in terms of the field strength tensor/the electric and magnetic fields. The 4-potential of electrodynamics is a convenient description of electromagnetism, but adding a magnetic charge current to Maxwell's equations and thereby destroying the sourcelessness of the magnetic field that is necessary for the 4-potential to exist is straightforward and poses no obvious problems on the level of Maxwell's equations (see also this answer of mine ). Quantum mechanics doesn't work with "Maxwell's equations", it's based on quantizing an action formulation (e.g. Lagrangian formulation via path integral quantization or Hamiltonian formulation via canonical quantization) of electromagnetism where the dynamical variable is the 4-potential. Introducing a magnetic charge makes it impossible to define a 4-potential, and therefore there is no straightforward modification of the "quantizable version" (i.e. the action formulation) of electromagnetism that is compatible with the existence of magnetic charges. (The observability of the Aharonov-Bohm effect is often interpreted as the experimental sign of this reliance on the 4-potential, but this is a different can of worms) I discuss various ways to treat magnetic monopoles in an action formulation in more detail e.g. in this answer of mine
{ "source": [ "https://physics.stackexchange.com/questions/705385", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/273521/" ] }
706,398
It is possible to produce strong gravitational accelerations on the free electrons of a conductor in order to obtain electrical current. This allows the conversion of gravitational energy directly into electrical energy. Considering that there is formal analogy between gravitational theory and electromagnetic theory, then it seems like that such a proposition is possible, at least theoretically. And if it is indeed possible to convert gravitational energy into electrical energy, will it imply potential destruction of natural gravitational field?
Suppose you have a charged parallel plate capacitor, with fixed equal and opposite charges on the plates. Both plates are parallel to the ground (i.e. perpendicular to the gravitational field). The upper plate is fixed, and the lower plate can be released to fall a certain distance under the influence of gravity. You release the lower plate, and the plate spacing gets bigger as it falls. Neglecting fringe fields, the electric field between the capacitor plates stays the same, but now occupies a greater volume, so more energy is stored in the electric field. You have converted gravitational potential energy directly into electrical energy. You have not destroyed the gravitational field in the process.
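The energy bookkeeping in this thought experiment can be sketched numerically (the plate parameters below are hypothetical values of my own choosing): with fixed charge, the field stays the same as the plates separate, and the extra field energy equals the energy density times the newly created volume.

```python
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

# Hypothetical plate parameters (my own choice, not from the answer):
Q = 1e-6     # fixed charge on each plate, C
A = 0.01     # plate area, m^2
dd = 0.001   # distance the lower plate falls, m

E = Q / (EPS0 * A)        # field between the plates, fixed by the charge
u = 0.5 * EPS0 * E**2     # energy density of that field, J/m^3
dU = u * A * dd           # extra field energy in the newly created volume, J

print(dU > 0)  # True: gravitational PE has become electric field energy
```

Equivalently, dU equals the attractive force between the plates, Q^2 / (2 * EPS0 * A), times the drop distance, which is the work gravity must do against that attraction.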
{ "source": [ "https://physics.stackexchange.com/questions/706398", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/329778/" ] }