Columns: source_id (int64), question (string), response (string), metadata (dict)
354,314
Explanations of conductors in electrostatics that I have encountered seem to describe positive charge spreading out, because you could say that a lack of electrons can be thought of as an abundance of protons (that in itself is not trivial to me: can any system of negative charges be replaced by another system of positive charges, creating identical field lines?). If we theoretically removed all electrons from the conductor (this is theoretically possible, isn't it?) I would be left with a bunch of stationary positive charges (the protons of the atoms) which are spread somewhat evenly across the volume of the conductor (and not the surface, as would have happened with mobile positive charges). As far as I know, this makes an electric field inside possible, which is not what textbooks and lectures say happens. What would happen in reality?
If you suddenly removed all the electrons from a piece of material, or even just the valence electrons, you would be left with a huge concentration of positive ions in a small volume, which would exert a huge electrostatic repulsion on each other. Since you no longer have the bonding influence of the electrons to counteract this repulsion, the material would blast apart, in very short order, in a process that's known as a Coulomb explosion. To put some numbers into things, suppose that you have one cubic millimeter of iron, and you suddenly remove one electron per atom. This turns out to be about $0.00014\:\mathrm{mol}$ of iron, but because Avogadro's number is so huge, that's about $8.491\times 10^{19}$ electrons, and a corresponding charge of about $13.6\:\rm C$ in a sphere of that volume, an electrostatic charge distribution that holds about $1.6\times 10^{15}\:\rm J$ of energy, or about $385$ kilotons of TNT, i.e. about twenty times bigger than the explosion that flattened Hiroshima. (And, obviously, that's the amount of energy that you will need to put in to be able to suddenly remove all of those electrons. In more human terms, that's a $1\:\rm GW$ power station running nonstop for 18 days. And, as mentioned in the comments, this amount of energy is roughly two thousand times the rest energy $mc^2$ of the iron itself.) That said, if you scale things down significantly, then Coulomb explosions can become quite reasonable things and indeed important research tools. Normally you do this with small(ish) molecules and atomic clusters (so, from a few to a few hundred atoms), where you have a few hundred electrons or so (instead of tens of quintillions), and you remove them with a high-intensity, high-photon-energy beam coming from a free-electron laser (FEL). In the process you might then get single-molecule x-ray diffraction spectra, information about the initial structure from where the atoms flew off to after the explosion, or you might just learn about the physics of the ionization and explosion processes. For a nice overview, see these slides by Christoph Bostedt, or the papers in this Google search.
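For readers who want to check these figures, here is a minimal Python sketch. It assumes pure iron (density ≈ 7.874 g/cm³, molar mass ≈ 55.85 g/mol) and treats the cubic millimetre as a uniformly charged sphere of equal volume; the numbers are an order-of-magnitude check, not part of a rigorous calculation.

```python
import math

# Physical constants (SI)
N_A = 6.022e23        # Avogadro's number, 1/mol
e   = 1.602e-19       # elementary charge, C
k_e = 8.988e9         # Coulomb constant, N m^2 / C^2

# Assumed iron properties
rho = 7.874e3         # density, kg/m^3
M   = 55.85e-3        # molar mass, kg/mol

V    = 1e-9                       # 1 mm^3 in m^3
mass = rho * V                    # ~7.9 mg
mol  = mass / M                   # ~1.4e-4 mol
n_e  = mol * N_A                  # one electron removed per atom
Q    = n_e * e                    # total charge, ~13.6 C

# Electrostatic self-energy of a uniformly charged sphere: U = (3/5) k_e Q^2 / R
R = (3 * V / (4 * math.pi)) ** (1 / 3)
U = 0.6 * k_e * Q**2 / R

print(f"electrons removed : {n_e:.3e}")
print(f"charge            : {Q:.1f} C")
print(f"energy            : {U:.2e} J")
print(f"kilotons of TNT   : {U / 4.184e12:.0f}")     # 1 kt TNT = 4.184e12 J
print(f"days at 1 GW      : {U / 1e9 / 86400:.1f}")
print(f"ratio to m c^2    : {U / (mass * (2.998e8)**2):.0f}")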
{ "source": [ "https://physics.stackexchange.com/questions/354314", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/167699/" ] }
354,554
Typical atmospheric pressure near sea level, in ambient conditions, is around 100,000 pascals. But the pascal, as a unit, is not defined through Earth's atmospheric pressure. It's defined as one newton per square meter. The newton is $\mathrm{kg\:m/s^2}$. So, $\rm[Pa] = [ {kg \over {m \: s^2}} ]$. Nowadays, definitions of units are often fixed to various natural phenomena, but it wasn't quite so when they were being created. The second is an ancient unit, derived from a fraction of the day: 1/86400 of the synodic day on Earth. The meter is derived from the circumference of the Earth: $10^{-7}$ of the distance from the North Pole to the equator. The kilogram came to be as the mass of a cubic decimeter of water. 100,000 pascals, or 1 bar, though, is about the average atmospheric pressure at sea level. That's an awfully round number, while Earth's atmospheric pressure doesn't seem to have anything in common with the rest of the "sources" of the other units. Is this "round" value accidental, or am I missing some hidden relation?
This is a coincidence. There's nothing about the atmosphere that would make it have a nice relationship with the Earth's rotation or diameter, or the fact that water is plentiful on the surface. On the other hand, it's important to note that the coincidence isn't quite as remarkable as you note, because of a version of Benford's law. Given absolutely zero prior knowledge about how much air there is in the atmosphere, our guess about the value of the atmospheric pressure would have to be evenly distributed over many orders of magnitude. This is akin to throwing a dart at a piece of log-scale graph paper: note that the squares in which the coordinates start with $1.\ldots$ are bigger than the others, so they're rather more likely to catch the dart. A similar (weaker) effect makes the probability of the second digit being 0 be 12% instead of the naive 10%.
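A quick way to see the Benford-type effect is to simulate it: assume the "unknown" pressure is spread log-uniformly over many orders of magnitude (the dart on log paper) and just count leading digits. A small sketch:

```python
import math
import random

random.seed(0)
N = 200_000

first_is_1 = 0
second_is_0 = 0
for _ in range(N):
    # Sample a value whose logarithm is uniform over 9 decades.
    x = 10 ** random.uniform(0, 9)
    digits = f"{x:.6e}"          # e.g. '3.141593e+04'
    if digits[0] == "1":
        first_is_1 += 1
    if digits[2] == "0":         # digit after the decimal point = second significant digit
        second_is_0 += 1

print(f"P(first digit = 1)  ~ {first_is_1 / N:.3f}  (Benford: {math.log10(2):.3f})")
expected = sum(math.log10(1 + 1 / (10 * d)) for d in range(1, 10))
print(f"P(second digit = 0) ~ {second_is_0 / N:.3f}  (Benford: {expected:.3f})")
```

The second-digit probability comes out near 0.12, matching the 12% quoted above.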
{ "source": [ "https://physics.stackexchange.com/questions/354554", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/2754/" ] }
354,889
I am a layman in physics and just read about black holes on the internet. I read that matter encounters geodesic incompleteness at the singularity in an uncharged black hole. I heard an analogy for geodesic incompleteness: a straight line on a piece of paper reaches a hole in the paper, so it cannot continue. But in this analogy, couldn't the straight line continue into 3D (down through the paper)? So, if matter reaches the singularity, is it possible too (to reach another dimension)? I also heard that the matter is annihilated when reaching the singularity; does it mean it disappears from this world and violates conservation of energy?
Strictly speaking geodesic incompleteness doesn't mean the worldline of the particle ends at the singularity, but rather that we can't predict what happens to it. The trajectory of a freely falling particle is given by an equation called the geodesic equation: $$ \frac{d^2x^\alpha}{d\tau^2} = -\Gamma^\alpha_{\,\,\mu\nu}\frac{dx^\mu}{d\tau} \frac{dx^\nu}{d\tau} $$ It's a scary looking equation but you don't need to understand all the details to see what the problem is. What happens at the singularity in a black hole is that some of the parameters $\Gamma^\alpha_{\,\,\mu\nu}$ become infinitely large and we're left with an equation that has infinity on the right hand side. Since we can't do arithmetic with infinity (because it's not a number) we have no way to calculate the trajectory of the particle at the singularity. Incidentally much the same happens when we try to work backwards in time towards the Big Bang, and that's why it's commonly said that time started at the Big Bang. See my answer to How can something happen when time does not exist? for more on this. Anyhow, the upshot is that GR cannot tell us what happens to matter falling into a black hole when it hits the singularity. However most of us believe that general relativity ceases to be a good description of the physics when we get close to the singularity and some form of quantum gravity theory will take over. The trouble is that we currently have no theory of quantum gravity.
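To see the blow-up concretely, here is a small sketch using the Schwarzschild metric in Schwarzschild coordinates (units with $c = 1$ and $r_s = 1$). One of the connection coefficients that appears on the right-hand side of the geodesic equation, $\Gamma^r_{\,\,tt} = \frac{r_s}{2r^2}\left(1 - \frac{r_s}{r}\right)$, grows without bound as $r \to 0$, and the coordinate-independent Kretschmann invariant $K = 12 r_s^2/r^6$ confirms that this is a genuine curvature singularity rather than a coordinate artifact. The specific coefficient chosen here is just one convenient example.

```python
# Illustration only: behaviour of one Schwarzschild connection coefficient and a
# curvature invariant as r -> 0, in units with c = 1 and r_s = 1.
r_s = 1.0

def gamma_r_tt(r):
    # Gamma^r_{tt} = (r_s / (2 r^2)) * (1 - r_s / r)
    return (r_s / (2 * r**2)) * (1 - r_s / r)

def kretschmann(r):
    # Curvature invariant K = 12 r_s^2 / r^6 (independent of the coordinates used)
    return 12 * r_s**2 / r**6

for r in [10.0, 1.0, 0.1, 0.01, 0.001]:
    print(f"r = {r:>6}: Gamma^r_tt = {gamma_r_tt(r):.3e},  K = {kretschmann(r):.3e}")
```

Both quantities blow up as $r \to 0$, which is the "infinity on the right hand side" described above.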
{ "source": [ "https://physics.stackexchange.com/questions/354889", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/167973/" ] }
354,897
Simple question, hopefully there's a simple answer. I'm about half a piano tuner, not a physicist. A musical tone has a fundamental frequency, say $220\,\text{Hz}$. Its second harmonic is $440\,\text{Hz}$, its third harmonic is $660\,\text{Hz}$, etc. My question is: Does a harmonic have its own harmonic series with itself as the fundamental? For example, does a $220\,\text{Hz}$ vibration, a 2nd harmonic which exists only because someone banged on a piano string that sounded a $110\,\text{Hz}$ fundamental, have its own second harmonic that is $440\,\text{Hz}$, a third harmonic, etc. If not why not? It seems to me that if harmonics are real which I know they are because I learned to hear them, they must have their own harmonic series too. If so, do these other harmonics have a name? I couldn't find them on google or wikipedia. I would think that if they exist, they must have a relatively low amplitude.
{ "source": [ "https://physics.stackexchange.com/questions/354897", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/23099/" ] }
354,914
Earlier today, I noticed my little brother playing with his skateboard. He had the strange idea of attaching it to his electric toy car so that he could be dragged by the car using the remote. When I came, he said I was too heavy and it probably wouldn't work. I found it weird as I thought this logic would probably be correct if the skateboard didn't have wheels, as the friction would be greater, but I don't see how added mass would make the skateboard slower with the same force. In this case friction is necessary for the skateboard to move. Can anyone help me?
{ "source": [ "https://physics.stackexchange.com/questions/354914", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/167987/" ] }
355,224
I recall being told once that hollow rods/shafts tend to resist torsion more than solid rods/shafts...but I wasn't told why this is the case. Now that I'm a little older, running this "fact" through my mind, my intuition tells me that a solid rod should be able to resist torsion better than a hollow one (assuming the wall of the hollow one is thick enough to withstand being "dimpled" in the process). But I need this verified. A quick spot of Googling didn't lead to any answers (maybe it did, but I couldn't comprehend most of the stuff out there...way off-topic for a high-school student). Physics.SE has this post that's similar to my question; however, that post requires a comparison between a hollow shaft's and a solid shaft's resistance to bending , whereas I want to know about their resistance to torsion . I (vaguely) understood the ideas expressed in the answers there (still trying to wrap my head around the "second moment of inertia")...but I'm not sure if those ideas apply to torsion as well as bending of the shafts. Can someone tell me (in a simplified way...not too simplified though, I don't want to miss out on all the good stuff) why a hollow shaft tends to resist torsion better than a solid one? (Since it was mentioned in the question I linked) The two possible cases that arise are : $1)$ Solid and hollow rods of same diameter , $2)$ Solid and hollow rods of same mass .
A solid rod will be stiffer (both in torsion and in bending) than a hollow rod of the same diameter. But it won't be much stiffer because almost all the stiffness comes from the outer layers of the rod (this is what the whole second-moment-of-area thing is about). So if you have a certain amount of material to use (a certain mass, or a certain mass per unit length), then rather than making a rod out of it, you get a stiffer structure by making a tube with a larger diameter. The tradeoff between diameter and wall thickness is complicated: larger-diameter tubes with thinner walls tend to be stiffer, but much more fragile and have nastier (more abrupt) failure modes, and as the walls get very thin I think they effectively get less stiff again as they become so fragile that they collapse under ordinary loads. But what people mean by saying tubes are stiffer is that they are stiffer for a given amount of material, not stiffer for a given diameter. (This whole area is something engineers spend a lot of time thinking about and there is a lot of literature. I don't have any pointers to it, sadly, as what little I know about this stuff comes from speaking to people about bike frames & car chassis design.) An earlier version of this answer mistakenly used the term 'strength' to mean 'stiffness'. Here's the difference, as I understand it (disclaimer: not an engineer): strength tells you at what point something will fail -- either completely or by deforming in some way from which it does not then recover (passing its elastic limit in other words); stiffness tells you how the deformation of the object grows with the stress on it -- it's basically the slope of the linear section of the stress-strain curve before the proportional limit is reached. I think the definition of strength I've used is slightly dependent on the regime: some engineering components are designed to deform permanently in use (think of crumple zones in cars, for instance), and for those you'd need a more complicated definition of strength. Finally it should be clear that stiffness and strength are not the same thing: something can be very stiff but can have a rather low maximum stress.
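To put rough numbers on "stiffer for a given amount of material": for circular sections the torsional stiffness per unit length is proportional to the polar second moment of area $J$. The sketch below compares a solid shaft with a tube of the same outer diameter and with a tube of the same mass (same cross-sectional area); the dimensions in millimetres are purely illustrative.

```python
import math

def J_solid(d):
    """Polar second moment of area of a solid circular shaft."""
    return math.pi * d**4 / 32

def J_tube(d_o, d_i):
    """Polar second moment of area of a circular tube."""
    return math.pi * (d_o**4 - d_i**4) / 32

def area(d_o, d_i=0.0):
    return math.pi * (d_o**2 - d_i**2) / 4

# Case 1: same outer diameter (20 mm), tube with a 2 mm wall
solid_J = J_solid(20)
tube_J  = J_tube(20, 16)
print(f"same diameter: tube has {tube_J / solid_J:.0%} of the stiffness "
      f"with {area(20, 16) / area(20):.0%} of the mass")

# Case 2: same mass (same cross-sectional area), tube with a 30 mm outer diameter
d_i = math.sqrt(30**2 - 20**2)          # inner diameter chosen so the areas match
print(f"same mass    : tube is {J_tube(30, d_i) / solid_J:.1f}x stiffer in torsion")
```

With these numbers the same-diameter tube keeps about 59% of the stiffness with 36% of the mass, while the same-mass tube is roughly 3.5 times stiffer, which is the trade the answer describes.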
{ "source": [ "https://physics.stackexchange.com/questions/355224", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/158854/" ] }
355,525
I'm very confused about the Pauli exclusion principle. Wikipedia states it as "two identical fermions cannot occupy the same quantum state in a quantum system". I understand this for electrons: for each energy level in an atom there are two possible electrons that may occupy this energy state, but with opposite spin numbers. What about for protons and neutrons? Protons and neutrons are both fermions, so why can multiple protons and neutrons exist simultaneously in a nucleus? I understand that neutrons and protons are not identical fermions, but considering them individually, suppose in a nucleus with X protons, are the energies of individual protons different from one another (and similarly for neutrons in the nucleus)? Apologies, I'm not very familiar with quantum theory or the maths involved. I'm super confused about how the exclusion principle works for protons and neutrons. The only explanations I've been able to find consider 2 protons and state that they can have different spin. What happens when we consider more than 2 protons/neutrons?
To a reasonable approximation the protons and neutrons in a nucleus occupy nuclear orbitals in the same way that electrons occupy atomic orbitals. This description of the nucleus is known as the shell model . The exclusion principle applies to all fermions, including protons and neutrons, so the protons and neutrons pair up two per orbital, just as electrons do. Note that the protons and neutrons have their own separate sets of orbitals. I say to a reasonable approximation because neither nuclear orbitals nor atomic orbitals really exist. The atomic orbitals we all know and love, the $1s$, $2s$, etc, appear in an approximation known as the mean field . However the electron-electron pair repulsion mixes up the atomic orbitals so strictly speaking they don't exist as individual separate orbitals. This effect is small enough to be ignored (mostly) in atoms, but in nuclei the nucleons are so close that the nuclear orbitals are heavily mixed. That means we have to accept that the shell model may be a good qualitative description, but we have to be cautious about pushing it further than that.
{ "source": [ "https://physics.stackexchange.com/questions/355525", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/130326/" ] }
355,726
In my textbook, it is written that "For collision, physical contact is not a necessary condition". How can collision occur without physical contact? If there is no physical contact, then there would be no contact force between particles to act as impulsive force. What would act as impulsive force in such a collision where there is no physical contact between the particles? Can you give an example of such a collision?
In science, language is specific and unambiguous. That means that terms are defined in ways often different from colloquial usage. I'll quote Wikipedia on the definition of a collision. "A collision is an event in which two or more bodies exert forces on each other for a relatively short time." Note that there is no requirement for contact. Of the four fundamental forces, both electromagnetism and gravity are long range. Despite being long range, they both fall with the inverse square of the distance (for simply distributed objects). This means you can mostly ignore the effects of the force at large distances relative to their closest approach. A charged particle being deflected by another charged particle as they pass by each other is an example of a collision where no contact takes place. A gravitational slingshot where a small object moves around a much heavier object to gain speed is another example.
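As a concrete (if idealized) illustration of a contact-free collision, here is a small numerical sketch in arbitrary units: a light charged particle scatters off a fixed like charge through the repulsive Coulomb force alone, and its direction of motion changes sharply even though the two never touch. All parameter values are illustrative.

```python
import math

# Toy "collision without contact": a light charged particle scattering off a
# fixed like charge. Arbitrary units with k*q1*q2 = 1 and m = 1; impact
# parameter and incoming speed are both 1, for which Rutherford's formula
# predicts a deflection close to 90 degrees.
kqq, m = 1.0, 1.0
x, y = -100.0, 1.0        # start far to the left, offset by the impact parameter
vx, vy = 1.0, 0.0
dt = 2e-3

def accel(px, py):
    r = math.hypot(px, py)
    a = kqq / (m * r**2)   # Coulomb force magnitude / m, directed away from the origin
    return a * px / r, a * py / r

ax, ay = accel(x, y)
r_min = math.hypot(x, y)
for _ in range(120_000):   # velocity-Verlet integration; contact never occurs
    x += vx * dt + 0.5 * ax * dt**2
    y += vy * dt + 0.5 * ay * dt**2
    r_min = min(r_min, math.hypot(x, y))
    ax_new, ay_new = accel(x, y)
    vx += 0.5 * (ax + ax_new) * dt
    vy += 0.5 * (ay + ay_new) * dt
    ax, ay = ax_new, ay_new

print(f"closest approach: {r_min:.2f} (never zero), "
      f"deflection: {math.degrees(math.atan2(vy, vx)):.0f} degrees from the incoming direction")
```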
{ "source": [ "https://physics.stackexchange.com/questions/355726", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/137647/" ] }
355,741
Suppose you want to estimate the number of atoms in a rectangular sheet of graphene. You might estimate the sheet to have $10^{7}$ atoms along one edge and $2*10^{7}$ atoms along the other edge. Multiplying while keeping track of units, we get $$10^{7}\text{atoms} * 2*10^{7} \text{atoms} = 2*10^{14} \text{atoms}^2$$ But obviously, there are $2*10^{14}$ atoms, not $2*10^{14} \text{atoms}^2$. What is wrong with this calculation?
Your "dimensions" are not quite "correct". The calculation should be something like $10^{7} \frac{\hbox{atoms}}{\hbox{row}}\times 2\times 10^7 \hbox{row}=2\times 10^{14}$ atoms. In fact, atoms are objects to be counted and added, like cars or pears. I believe I remember from a math course that the Greeks could not (apparently) abstract numbers and so would always think of "$5$" as associated to objects: $5$ apples, $5$ pebbles, etc. Thus you could add apples: $5$ apples + $5$ apples + $5$ apples is $15$ apples. Multiplication was different and considered as a geometrical operation. A rectangle of sides $3$ m and $4$ m had an area of $3\times 4 =12\hbox{m}^2$. As a result, they (apparently) never "discovered" the general abstract result that $a\times b=b+b+b\ldots$ ($a$ times) since the two operations were in some sense "incompatible". Moreover, since we live in $\mathbb{R}^3$, it didn't make sense to them to multiply more than $3$ numbers together. The OP likewise would like to equate two "incompatible" operations (in the sense of the Greeks), the outcome of which numerically agree because one needs to sum all atoms of all rows rather than "multiply" atoms together. Unfortunately, I cannot find a source to confirm this.
{ "source": [ "https://physics.stackexchange.com/questions/355741", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/74/" ] }
355,774
I'm giving a high school lecture and I want to introduce the potential energy of a spring. My students have not learned the Hooke's Law and the notion of integral is too advanced. I'm really trying to justify with a hand waving argument that the energy is given by $$U = \frac{1}{2}kd^2.$$ To do so I let them realize that stretching/compressing the spring will change its energy. So this lets me justify why it only depends on the properties of the spring captured by $k$ and the deformation $d$. Then by looking at the units of energy they should realize that the deformation $d$ has to be squared and that the constant $k$ takes care of the remaining units. But if a student argues that $k$ could be defined with other units so that the dependence in $d$ is linear, I could answer that the energy should be identical whether the spring is stretched/compressed so that only $|d|$ or $d^{2n}$ are possible solutions. I see how to justify that $|d|$ is not a physical solution because it would create a cusp in the energy in $d=0$ and that nature does not like that (at their level at least). Additionally, having $n=1$ is just the simplest case. My missing argument is therefore how to justify that the energy is the same when a spring is stretched/compressed by $d$. Please keep answers light on the mathematics.
One way is to explain how a spring actually works. A coil spring is a large wire that is wound into a helix. When you compress or extend a spring, from the wire’s perspective, you aren’t really pushing or bending. Instead, you are twisting the wire one way or another. Twisting a bar clockwise or counterclockwise should be the same thing.
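If you also want to make the $\frac{1}{2}kd^2$ result plausible without any calculus, one option is to add up force times small displacement numerically. A minimal sketch, assuming Hooke's law $F = kx$ with an illustrative spring constant:

```python
# Add up (force) x (small step) while deforming a spring, with no calculus.
# Assumes Hooke's law F = k*x, k = 100 N/m, and a final deflection of 0.2 m.
k = 100.0          # N/m
d = 0.2            # m
steps = 100_000
dx = d / steps

work = 0.0
x = 0.0
for _ in range(steps):
    work += k * x * dx          # force at the current stretch times the small step
    x += dx

print(f"summed work        : {work:.4f} J")
print(f"formula 0.5*k*d^2  : {0.5 * k * d**2:.4f} J")
print(f"compression to -d  : {0.5 * k * (-d)**2:.4f} J  (same energy either way)")
```

The running sum converges to $\frac{1}{2}kd^2$, and because the deflection appears squared, stretching and compressing by the same amount store the same energy.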
{ "source": [ "https://physics.stackexchange.com/questions/355774", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/28965/" ] }
356,173
I was discussing about the theory that claims that "every emitter also behaves like a receptor": Are emitters always receptors? I was brilliantly told that this theory would be false for fluorescent lights and also for resistors, because of entropy messing with time-reversed operations (condensed version!). However, as I'm curious by nature, I made this little experiment: heating a 330 Ohm resistor with a flame and measure its voltage with a cheap multimeter. What a surprise to discover that some current flowed from this hot resistor! At 20°C, I measured 0.0mV, after ±3 seconds of heating it was 0.6 mV (current around 1μA). Have I just discovered some kind of "reversed" first Joule's law? :) Or did I made some logical, methodological or experimental mistakes?
Nice experiment! But think: why would a current flow in a particular direction and not the other? Is the system somehow asymmetric? As the other answers have suggested, this is probably a thermo-electric current due to the Seebeck effect (a complication I didn't want to get into for your other question). But now that we're there, here's another thing to try: can you change which direction the current flows by how you are heating the resistor? Does the magnitude or direction of current change when you heat one wire of the resistor versus the other? The Seebeck effect comes into play at electrical interfaces, where there's asymmetry, so the direction of the interface would determine the direction of the current! The asymmetry issue is another way how you can see that you can't reverse the standard Joule heating: You will get the same heating for a current running either direction through the resistor. But if you cool the resistor to try to undo it, which way would the resulting current flow?
{ "source": [ "https://physics.stackexchange.com/questions/356173", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/126581/" ] }
356,257
This was a question on a worksheet during my first week in a class on Electromagnetism. The answer is essentially: No. Life would be no different if electrons were positively charged and protons were negatively charged. Opposite charges would still attract, and like charges would still repel. The designation of charges as positive and negative is merely a definition. But how would we have negative charges within the nucleus? Taking a look at the Wikipedia page for residual strong force, it seems that down quarks are required in this new "negative proton" to help with creating the pion to "transmit a residual part of the strong force even between colorless hadrons". I set out to find a particle with $-1$ charge containing a down quark (by going through this list of baryons), but none of the particles are stable, with the exception of two whose lifetimes are unknown: the Bottom Xi baryon and the Double Bottom Xi baryon. Assuming one of the above is stable, could the strong force act between one of them and a neutron, implying it's alright to have a negatively charged nucleus? Edit: If it wasn't clear, my question concerned whether or not there existed stable baryons of $-1$ charge with down quarks that could "replace" the proton.
The point is that whether we call it 'positive' charge or 'negative' charge makes no difference, as long as we are consistent. If we decided to label the charge of a proton as 'negative' then, to be consistent, we must also relabel the charges of the quarks (i.e. d would become +1/3e, and u would become -2/3e). In which case your question is void.
{ "source": [ "https://physics.stackexchange.com/questions/356257", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/87728/" ] }
356,366
In free body diagrams, such as a beam attached horizontally to a wall, $F_g$ is always shown acting on the center of gravity of an object. My question - is this the case in real life, where gravity only acts on this point of the object? Or is gravity acting on all parts of the object, but that point is at the exact center of all the force?
Gravity (treated as homogeneous) is acting the same on all parts of the object, but if the object is rigid, internal forces allow the simplification that the centre of mass is where all the force acts. Torque: the mass is distributed about the centre of mass in such a way that the gravitational torques from the two sides cancel, so there is no net torque about it. If the pivot is at the centre of mass, the object will not turn, it will balance.
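A small numerical sketch of that cancellation, using a deliberately non-uniform rod so that the two sides do not carry equal mass, yet the gravitational torque about the centre of mass still sums to zero (the density profile is arbitrary):

```python
# Gravitational torque about the centre of mass of a rigid rod, computed by
# summing over small pieces of equal width. The rod is non-uniform (denser at
# one end), so the mass is NOT split 50/50 about the centre of mass, but the
# torques about that point still cancel.
g = 9.81
N = 100_000
L = 2.0                                     # rod length, m

xs = [(i + 0.5) * L / N for i in range(N)]
masses = [1.0 + 2.0 * x / L for x in xs]    # relative mass of each piece (linear density ramp)

M = sum(masses)
x_cm = sum(m * x for m, x in zip(masses, xs)) / M

torque_about_cm = sum(m * g * (x - x_cm) for m, x in zip(masses, xs))
left_mass = sum(m for m, x in zip(masses, xs) if x < x_cm)

print(f"centre of mass at x = {x_cm:.3f} m")
print(f"mass left of the CoM = {left_mass / M:.1%} of the total (not 50%)")
print(f"net gravity torque about the CoM = {torque_about_cm:.2e} (zero up to rounding)")
```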
{ "source": [ "https://physics.stackexchange.com/questions/356366", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/149370/" ] }
356,412
What is meant by enthalpy ? My professor tells me "heat content". That literally makes no sense. Heat content, to me, means internal energy. But clearly, that is not what enthalpy is, considering: $H=U+PV$ (and either way, they would not have had two words mean the same thing). Then, I understand that $ΔH=Q_{p}$. This statement is a mathematical formulation of the statement: "At constant pressure, enthalpy change may be interpreted as heat." Other than this, I have no idea, what $H$ or $ΔH$ means. So what does $H$ mean?
Standard definition: Enthalpy is a measurement of energy in a thermodynamic system. It is the thermodynamic quantity equivalent to the internal energy of the system plus the product of pressure and volume. $H=U+PV$ In a nutshell, the $U$ term can be interpreted as the energy required to create the system, and the $PV$ term as the energy that would be required to "make room" for the system if the pressure of the environment remained constant. When a system, for example, $n$ moles of a gas of volume $V$ at pressure $P$ and temperature $T$, is created or brought to its present state from absolute zero, energy must be supplied equal to its internal energy $U$ plus $PV$, where $PV$ is the work done in pushing against the ambient (atmospheric) pressure. More on enthalpy: 1) The total enthalpy, $H$, of a system cannot be measured directly. Enthalpy itself is a thermodynamic potential, so in order to measure the enthalpy of a system, we must refer to a defined reference point; therefore what we measure is the change in enthalpy, $\Delta H$. 2) In basic physics and statistical mechanics it may be more interesting to study the internal properties of the system and therefore the internal energy is used. But in basic chemistry, experiments are often conducted at constant atmospheric pressure, and the pressure-volume work represents an energy exchange with the atmosphere that cannot be accessed or controlled, so that $\Delta H$ is the expression chosen for the heat of reaction. 3) Energy must be supplied to remove particles from the surroundings to make space for the creation of the system, assuming that the pressure $P$ remains constant; this is the $PV$ term. The supplied energy must also provide the change in internal energy, $U$, which includes activation energies, ionization energies, mixing energies, vaporization energies, chemical bond energies, and so forth. Together, these constitute the change in the enthalpy $U + PV$. For systems at constant pressure, with no external work done other than the $PV$ work, the change in enthalpy is the heat received by the system. For a simple system, with a constant number of particles, the difference in enthalpy is the maximum amount of thermal energy derivable from a thermodynamic process in which the pressure is held constant. (Source: https://en.wikipedia.org/wiki/Enthalpy) OP's question: What does "make room" mean? For instance, you are sitting on a chair. Then you stand up and stretch your arms. Doing this, you displace some air to make room for yourself. Similarly a gas does some work to displace other gases or any other constraint to make room for itself. To make it more understandable, imagine yourself in a box just big enough to contain you. Now try stretching your arms. You will certainly have to do a lot of work to stretch your arms completely. Air is just like this box, except that in the case of air you have to do only negligible work to make room for yourself.
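A minimal worked example of $\Delta H = \Delta U + P\Delta V = Q_p$, assuming one mole of a monatomic ideal gas heated by 10 K at constant pressure (the gas type and numbers are just an illustration):

```python
# Heat one mole of a monatomic ideal gas by 10 K at constant pressure and
# compare Q_p with Delta H = Delta U + P * Delta V.
R  = 8.314      # J/(mol K)
n  = 1.0        # mol
dT = 10.0       # K

dU  = 1.5 * n * R * dT          # internal energy change, (3/2) n R dT
PdV = n * R * dT                # the "making room" term: P dV = n R dT at constant P
dH  = dU + PdV
Q_p = 2.5 * n * R * dT          # heat supplied at constant pressure, C_p = (5/2) R

print(f"dU   = {dU:.1f} J")
print(f"P dV = {PdV:.1f} J")
print(f"dH   = {dH:.1f} J")
print(f"Q_p  = {Q_p:.1f} J   (equals dH, as expected)")
```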
{ "source": [ "https://physics.stackexchange.com/questions/356412", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/148764/" ] }
356,413
Can anyone explain intuitively what actually are components of forces (in the $x,\,y,\,z$ directions)? Are they actual forces? Can the components have their own components in other directions?
{ "source": [ "https://physics.stackexchange.com/questions/356413", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/166537/" ] }
356,551
Here is a picture of my power adapter. You can see it has one green LED lit when charging. Now here is a picture of my mirror with beveled edges. When I view the power adapter in the mirror, I see three (3) projections of the LED: the original, plus two "ghost" LEDs (one on either side). Can someone explain how the mirror's bevels are able to cause these triplicate projections? And why does it only appear to affect the LED (except near the edges)? If the mirror had a different bevel pattern (for example, in the following image), would it affect the number of "ghost" projections? UPDATE: For what it's worth, I took another picture late at night, with the camera lens as close to the mirror as possible to maximize the angle of reflection. Doing this showed me 6 (!) ghost projections, which is (mostly) in line with @Agent_L's answer (There are actually many more "after", but as every next one is, let's say, 90% dimmer than the previous, only the first is noticeable.). Apologies for the poor image quality.
I believe this is what's happening: The first "beforeghost" is the faint reflection off the surface of the glass. The second solid image is the intended reflection off the metalized layer. The third "afterghost" is the faint internal reflection off the inside of the glass, then properly reflected again off the metalized layer. There are actually many more "after", but as every next one is around 96% dimmer than the previous (exact number courtesy of Jan Hudec), only the first is noticeable. They're all spaced nicely and evenly, because this spacing is determined by the thickness of the glass. This effect has only a coincidental relation to the beveled edge. If the glass is very thin, all images appear so close to each other that they merge into a blur, not distinct images. The glass has to be quite thick - and thick mirrors tend to have large, decorative bevels. (Also, the observed object needs to be small and standing out - just what your LED is.) Similar effects happen in eyeglasses, and those are sometimes specially treated to minimize it, known as "antireflection coating". An effect similar to the "afterghost" is deliberately exploited in so-called "infinity mirrors". Instead of internal reflection they employ a second, partially transmitting mirror facing inward to increase the reflections to useful levels.
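For rough relative brightnesses, assume near-normal incidence, uncoated glass with $n \approx 1.5$ (Fresnel reflectance about 4%), and a metal backing with an assumed reflectivity of about 0.9; all numbers below are illustrative, not measured.

```python
# Rough relative brightness of the reflections from a back-silvered mirror.
n_glass = 1.5
R_glass = ((n_glass - 1) / (n_glass + 1)) ** 2   # Fresnel reflectance at normal incidence, ~4%
T_glass = 1 - R_glass
R_metal = 0.90                                   # assumed metal reflectivity

before_ghost = R_glass                           # bounce off the front glass surface
main_image   = T_glass * R_metal * T_glass       # in through the glass, off the metal, back out
print(f"front-surface ghost: {before_ghost:.3f}")
print(f"main image         : {main_image:.3f}")

ghost = main_image
for k in range(1, 4):                            # successive internal reflections
    ghost *= R_glass * R_metal                   # one extra glass bounce + metal bounce each time
    print(f"after-ghost #{k}      : {ghost:.4f}  ({ghost / main_image:.1%} of the main image)")
```

Each after-ghost comes out a few percent of the previous one, i.e. roughly 96% dimmer, which matches the figure quoted above.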
{ "source": [ "https://physics.stackexchange.com/questions/356551", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/122656/" ] }
357,119
If gravity - a field force - has an elementary particle, the graviton, why don't other field forces like magnetic fields have their elementary particles? I mean, why isn't there a magneton? Or, what elementary particle is associated with the magnetic field? Is there a boson for the magnetic field? If one considers the magnetic field to be a special type of EM field with 0 amplitude electric field, then should you expect to detect a photon when you place a photon detector near a magnetic field?
The gauge boson associated with the magnetic field is the photon. Electric and magnetic fields are in effect different views of the same thing, i.e. the electromagnetic field, and the gauge boson for the electromagnetic field is of course the photon. Suppose you are looking at a static charge, which obviously has just a static electric field. But now suppose I am moving relative to that charge. This means the charge is moving relative to me, and a moving charge generates a magnetic field. So you see an electric field generated by the charge while I see a magnetic field. That's why I say electric and magnetic fields are just different views of the same thing. Footnote: I see Lupus Liber has added an answer that goes into more detail about how the electric and magnetic fields are different views of the EM field, and I recommend reading his answer though you may find it hard going. You might also be interested to read the answers to Do photons truly exist in a physical sense or are they just a useful concept like $i = \sqrt{-1}$?
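A small numeric sketch of this frame-dependence, using the standard field-transformation formulas for a boost along $x$: a purely electric field in one frame acquires a magnetic component in a frame moving relative to the charge. The field value and boost speed are arbitrary.

```python
import math

# Standard field transformation for a boost with speed v along x
# (components along x unchanged, g = Lorentz factor):
#   E'_y = g (E_y - v B_z),        E'_z = g (E_z + v B_y)
#   B'_y = g (B_y + v E_z / c^2),  B'_z = g (B_z - v E_y / c^2)
c = 2.998e8
v = 0.5 * c
g = 1 / math.sqrt(1 - (v / c) ** 2)

E = (0.0, 1000.0, 0.0)     # V/m: a purely electric field in the charge's rest frame
B = (0.0, 0.0, 0.0)

E_prime = (E[0], g * (E[1] - v * B[2]), g * (E[2] + v * B[1]))
B_prime = (B[0], g * (B[1] + v * E[2] / c**2), g * (B[2] - v * E[1] / c**2))

print(f"E' = ({E_prime[0]:.3g}, {E_prime[1]:.3g}, {E_prime[2]:.3g}) V/m")
print(f"B' = ({B_prime[0]:.3g}, {B_prime[1]:.3g}, {B_prime[2]:.3g}) T")   # a nonzero B_z appears
```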
{ "source": [ "https://physics.stackexchange.com/questions/357119", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/157019/" ] }
357,387
I get that forces can be classified as either conservative or non-conservative, depending on whether the work done in a round trip is zero or non-zero. What property of the force makes it conservative or non-conservative, so that the work done in a round trip is zero or non-zero? Note: I'm not asking the conditions for a force to be conservative. I'm asking what exactly makes it conservative.
All fundamental forces are conservative, and I would say that this is a postulate. Fundamental physics is constructed in such a way that there is a quantity called energy which can be assigned to every possible state. If any fundamental process seems to violate conservation of energy, we nowadays believe that there are some states, processes or even interactions that we have failed to take into account. Once we are able to take into account every state and interaction, the system and its interactions are conservative. On the other hand, at the macroscopic level, most of the time we are not able to describe the system in terms of fundamental forces. We need to replace the zillions of coupled equations describing the dynamics of the system by a single equation or force, which we shall call effective, and which can describe the macroscopic results we observe. However, in this process we may miss many of the states and processes occurring, such that we are no longer able to keep track of the mechanical energy balance. Energy balance would fail unless we consider other forms of energy, such as heat, which is also an effective quantity. A classic example is friction. We are not able to describe two macroscopic surfaces interacting in terms of every microscopic particle participating in the process. So we forget about it and assume there is an effective force called friction. Mechanical energy balance fails and we need to assume that the effective missing energy is present in the form of heat. That is why friction is non-conservative. Another example is that of a time-varying potential. It is only non-conservative because we are effectively replacing a large, closed system with many particles by a small system with few particles under external interaction. There is something we are not able to keep track of, whose effect is the same as that of a time-varying potential.
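To make the closed-loop criterion from the question concrete, here is a small numerical contrast between gravity and a constant-magnitude, motion-opposing (friction-like) force taken around the same rectangular loop; the mass, friction coefficient and loop size are arbitrary.

```python
import math

# Work done around a closed rectangular path A -> B -> C -> D -> A by
# (a) gravity and (b) a drag-like force of constant magnitude mu*m*g that
# always points opposite to the motion.
m, g, mu = 1.0, 9.81, 0.3
corners = [(0, 0), (2, 0), (2, 1), (0, 1), (0, 0)]   # x horizontal, y vertical, metres

W_gravity, W_friction = 0.0, 0.0
N_sub = 1000
for (x0, y0), (x1, y1) in zip(corners, corners[1:]):
    dx, dy = (x1 - x0) / N_sub, (y1 - y0) / N_sub
    ds = math.hypot(dx, dy)
    for _ in range(N_sub):
        W_gravity  += -m * g * dy          # F = (0, -mg) dotted with the small step
        W_friction += -mu * m * g * ds     # always opposite to the motion, so always negative

print(f"gravity  around the loop: {W_gravity:+.2e} J  (zero: conservative)")
print(f"friction around the loop: {W_friction:+.2f} J  (nonzero: not conservative)")
```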
{ "source": [ "https://physics.stackexchange.com/questions/357387", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/134658/" ] }
357,410
Classic momentum problems sometimes suppose two masses will stick together after a collision and move at same speed regardless of initial motions (such as one at rest, both travelling in same direction or opposite directions). When and why would two objects stick together after collision?
{ "source": [ "https://physics.stackexchange.com/questions/357410", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/167831/" ] }
357,722
I was told that a use of both alpha and beta decay is in radiometric dating. Why is radiometric dating not also considered to be a use of gamma radiation?
Radiometric dating tends to use a nucleus that changes into some other easily distinguishable nucleus. For example, uranium decays to lead-206 and lead-207, which can be easily measured in a mass spectrometer. We measure both the uranium concentration and the lead concentration and infer the age from how much of the uranium has changed into lead. The problem with gamma radiation is it doesn't produce a chemically distinguishable product. Gamma decay is effectively a decay of the excited state of a nucleus to a lower-energy state of the same nucleus. So there is no way to tell how much of the original parent nuclide has decayed. By contrast, alpha decay produces a daughter atom with an atomic number lower than the parent by two, and beta decay produces a daughter atom with an atomic number higher than the parent by one. In both cases a mass spectrometer can easily tell the difference between the original atom and the product.
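A sketch of how the age is actually extracted from a parent/daughter ratio, assuming a closed system with no daughter atoms present initially and using the U-238 half-life; the ratios in the loop are just example values.

```python
import math

# Age from a parent/daughter ratio:
#   N_parent(t) = N0 * exp(-lambda t),  D = N0 - N_parent
#   =>  t = ln(1 + D/P) / lambda,  with lambda = ln(2) / t_half
t_half = 4.468e9          # U-238 half-life in years
lam = math.log(2) / t_half

def age(daughter_per_parent):
    return math.log(1 + daughter_per_parent) / lam

for ratio in [0.1, 0.5, 1.0, 3.0]:
    print(f"Pb-206/U-238 = {ratio:>4}:  age ~ {age(ratio) / 1e9:.2f} billion years")
```

A ratio of 1 (half the original uranium gone) gives one half-life, about 4.5 billion years, which is why the uranium-lead system is useful for dating rocks as old as the Earth.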
{ "source": [ "https://physics.stackexchange.com/questions/357722", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/29532/" ] }
357,860
I have been asked this question by school kids, colleagues and family (usually less formally): When ascending a flight of stairs, you expend mechanical work to attain potential energy ($W_\text{ascend} = E_\text{pot} = mgh$). However, when descending, you have to exert an equivalent force to stop yourself from accelerating and hitting the ground (with $v_\text{splat} = \sqrt{2 g h}$). If you arrive downstairs with $$v_\text{vertical} \ll v_\text{splat}$$ you counteracted basically all of your potential energy, i.e. $$\int F(h) \cdot \mathrm dh = W_\text{descend} \approx E_\text{pot} = m g h$$ So is the fact that ascending stairs is commonly perceived as significantly more exhausting than descending the same stairs purely a biomechanical thing, e.g. having joints instead of muscles absorb/counteract kinetic energy? Or is there a physical component I am missing? Edit-1: I felt I need to clarify some points in reaction to the first answers. A) The only reason I introduced velocity into the question was to show that you actually have to expend energy going downstairs to prevent ending up as a wet spot on the floor at the bottom of the steps. The speed with which you ascend or descend doesn't make a difference when talking about the energy, which is why I formulated the question primarily using energy and mechanical work. Imagine that while ascending you pause for a tiny moment after each step ($v = 0$). Regardless of whether you ascended very slowly or very quickly, you would have invested the same amount of work and gained the same amount of potential energy ($\delta W = m \cdot g \cdot \delta h_\text{step} = \delta E_\text{pot}$). The same holds true while descending. After each step, you would have gained kinetic energy equivalent to $$E_\text{kin} = m \cdot g \cdot \delta h_\text{step}$$ but again, imagine you take a tiny pause after each step. For each step, you will have to exert a force with your legs such that you come to a complete stop (at least in the $y$ direction). However fast or slow you do it, you mathematically will end up expending $$W_\text{step} = \int F(h) \cdot \mathrm dh = m \cdot g \cdot \delta h_\text{step}$$ If you expended any less "brake" work, some of your kinetic energy in the $y$ direction would remain after each step, and adding that up over a number of steps would result in an arbitrarily high terminal velocity at the bottom of the stairs. Since we usually survive descending stairs, my argument is that you will have to expend approximately the same amount of energy going down as going up, in order to reach the bottom of arbitrarily long flights of stairs safely (i.e. with $v_y \approx 0$). B) I am fairly sure that friction does not play a significant role in this thought experiment. Air friction as well as friction between your shoes and the stairs should be pretty much the same while ascending and descending. In both cases, it would be basically the same amount of additional energy expenditure, still yielding identical total energy amounts for ascending and descending.
Anna v is of course right in pointing out that you need the friction between your shoes and the stairs to be able to exert any force at all without slipping (such as on ice), but in the case of static friction without slippage, no significant amount of energy should be dissipated, since said friction exerts force mainly in $x$ direction, but the deceleration of your body has a mostly y component, since the $x$ component is roughly constant while moving on the stair (~orthogonal directions of frictional force and movement, so no energy lost to friction work). Edit-2: Reactions to some more comments and replies, added some emphasis to provide structure to wall of text C) No, I am not arguing that descending is subjectively less exhausting, I am asking why it is less exhausting when the mechanics seem to indicate it shouldn't be. D) There is no "free" or "automatic" normal force emanating from the stairs that stops you from accelerating. The normal force provided by the mechanic stability of the stairs stops the stairs from giving in when you step on them, alright, but you have to provide an equal and opposite force (i.e. from your legs) to decelerate your center of gravity, otherwise you will feel the constraining force of the steps in a very inconveniencing manner. Try not using your leg muscles when descending stairs if you are not convinced (please use short stairs for your own safety). E) Also, as several people pointed out, we as humans have no way of using or reconverting our stored potential energy to decelerate ourselves. We do not have a built-in dynamo or similar device that allows us to do anything with it - while descending the stairs we actually have to "get rid of it" in order to not accelerate uncontrollably. I am well aware that energy is never truly lost, but also the "energy diversion instead of expenditure" process some commenters suggested is flawed (most answers use some variation of the argument I'm discussing in C, or "you just need to relax/let go to go downhill", which is true, but you still have to decelerate, which leads to my original argument that decelerating mathematically costs exactly as much energy as ascending). F) Some of the better points so far were first brought up by dmckee and Yakk: Your muscles have to continually expend chemical energy to sustain a force , even if the force is not acting in the sense of $W = F \cdot s$ . Holding up a heavy object is one example of that. This point merits more discussion, I will post about that later today. You might use different muscle groups in your legs while ascending and descending , making ascending more exhausting for the body (while not really being harder energetically). This is right up the alley of what I meant by biomechanical effects in my original post. Edit-3: In order to address $E$ as well as $F_1$ , let's try and convert the process to explicit kinematics and equations of motion. I will try to argue that the force you need to exert is the same during ascent and descent both over $y$ direction (amount of work) and over time (since your muscles expend energy per time to be able to exert a force). When ascending (or descending stairs), you bounce a little to not trip over the stairs. Your center of gravity moves along the $x$ axis of the image with two components: your roughly linear ascent/descent (depends on steepness of stairs, here 1 for simplicity) and a component that models the bounce in your step (also, alternating of legs). 
The image assumes $$h(x) = x + A \cdot \cos(2 \pi \cdot x) + c$$ Here, $c$ is the height of your CoG over the stairs (depends on body height and weight distribution, but is ultimately without consequence) and $A$ is the amplitude of the bounce in your step. Differentiating, we obtain velocity and acceleration in the $y$ direction: $$\begin{align} v(x) &= 1 - 2 \pi \cdot A \sin(2 \pi \cdot x)\\ a(x) &= -(2 \pi)^2 \cdot A \cos(2 \pi \cdot x) \end{align}$$ The total force your legs have to exert has two parts: counteracting gravity, and making you move according to $a(x)$, so $$F(x) = m \cdot g + m \cdot a(x)$$ The next image shows $F(x)$ for $A = 0.25$ and $m = 80\ \mathrm{kg}$. I interpret the image as showing the following: In order to gain height, you forcefully push with your lower leg, a) counteracting gravity and b) gaining momentum in the $y$ direction. This corresponds to the maxima in the force, plotted roughly in the center of each step. Your momentum carries you to the next step. Gravity slows your ascent, such that on arriving on the next step your velocity in the $y$ direction is roughly zero ($v(x)$ not plotted). During this period of time right after completely straightening the pushing lower leg, your leg exerts less force (the remaining force depending on the bounciness of your stride, $A$) and you land with your upper foot, getting ready for the next step. This corresponds to the minima in $F(x)$. The exact shape of $h(x)$ and hence $F(x)$ can be debated, but they should look qualitatively similar to what I outlined. My main points are: Walking down the stairs, you read the images right-to-left instead of left-to-right. Your $h(x)$ will be the same and hence $F(x)$ will be the same. So $$W_\text{desc} = \int F(x) \cdot \mathrm dx = W_\text{asc}$$ The spent amounts of energy should be equal. In this case, the minima in $F(x)$ correspond to letting yourself fall to the next step (as many answers pointed out), but crucially, the maxima correspond to exerting a large force on landing with your lower leg in order to a) hold your weight up against gravity and b) decelerate your fall to near zero vertical velocity. If you move with roughly constant $x$ velocity, $x$ is proportional to $t$, so $F(x)$ has the same shape as $F(t)$. This is important for the argument that your muscles consume energy based on the time they are required to exert a force: $$W_\text{muscle} \approx \int F(t) \cdot \mathrm dt$$ Reading the image right-to-left, $F(t)$ is read right-to-left, but keeps its shape. Since the time required for each segment of the ascent is equal to the equivalent "falling" descent portion (time symmetry of classical mechanics), the integral $W_\text{muscle}$ remains constant as well. This result carries over to non-linear muscle energy consumption functions that depend on higher orders of $F(t)$ to model strength limits, muscle exhaustion over time and so on.
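A short numerical sketch of this model, using the same $A = 0.25$ and $m = 80\ \mathrm{kg}$. As in the text, $x$ is measured in steps and also stands in for time (constant horizontal speed), so the numbers are illustrative rather than strictly dimensional.

```python
import math

# h(x) = x + A*cos(2*pi*x) + c, F(x) = m*(g + a(x)), as in the model above.
m, g, A = 80.0, 9.81, 0.25

def accel(x):
    return -(2 * math.pi) ** 2 * A * math.cos(2 * math.pi * x)

def force(x):
    return m * (g + accel(x))

F_max = max(force(i / 1000) for i in range(1000))
F_min = min(force(i / 1000) for i in range(1000))

# Work done by the leg force on the body over one step: sum of F * dh with
# dh = h'(x) dx. The legs supply m*g*(rise) on the way up and absorb the same
# magnitude on the way down.
N = 100_000
W = 0.0
for i in range(N):
    x = (i + 0.5) / N
    dh = (1 - 2 * math.pi * A * math.sin(2 * math.pi * x)) / N
    W += force(x) * dh

print(f"leg force swings between {F_min:.0f} and {F_max:.0f} N in this toy model "
      f"(the minimum dips to about zero at the top of each bounce)")
print(f"work per step = {W:.1f}, m*g*(unit rise) = {m * g:.1f}")
```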
However, when descending, you have to exert an equivalent force to stop yourself from accelerating and hitting the ground... Absolutely correct. So is the fact that ascending stairs is commonly perceived as significantly more exhausting than descending the same stairs purely a biomechanical thing, e.g. having joints instead of muscles absorb/counteract kinetic energy? Right. When going up the stairs, you must exert large forces by your large muscles. When your legs raise your torso, your muscles are supplying sufficient forces (with an energy cost) to do so. When you go down the stairs, it is not the reverse of ascending. Instead of using your large muscles to decelerate, most people will take a straightened leg and plant it on the lower step. The deceleration is accomplished by plastic deformation in joints, fluid displacement in your foot, and the materials in your shoes and the floor. There is still some energy demand on the muscles for coordination and moving the legs, but it is significantly less than if the muscles were doing the deceleration job.
{ "source": [ "https://physics.stackexchange.com/questions/357860", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/55547/" ] }
358,894
If I arranged an experiment where light raced electricity what would be the results? Let's say a red laser is fired at the same time a switch is closed that applies 110 volts to a 12 gauge loop of copper wire with a meter at a distance of ten meters. Also, does the speed of the electricity depend on the voltage applied or the resistance of the conductor? For this test let's say the distance is ten meters through air. I'm not looking for an exact answer. An approximation is fine.
The speed of electricity is conceptually the speed of the electromagnetic signal in the wire, which is somewhat similar to the concept of the speed of light in a transparent medium. So it is normally lower, but not too much lower than the speed of light in the vacuum. The speed also depends on the cable construction. The cable geometry and the insulation both reduce the speed. Good cables achieve 80% of the speed of light; excellent cables achieve 90%. The speed does not directly depend on the voltage or resistance. However, different frequencies have different attenuation. In your example, the very moment of switching on represents a high frequency front that will be attenuated. While at the input the voltage would increase very fast, at the output it would increase gradually, as if with a delay. It is not really a delay per se, because the initial low level signal would get there almost with the speed of light, but its amplitude would only gradually increase and reach the full voltage with a substantial delay that would depend on the cable and circuit impedance (mostly on the cable inductance). If you use a high speed coaxial cable (like a 3GHz satellite TV cable) instead of a wire, the delay would be much shorter (80-90% of the speed of light to the full voltage). Hope this helps.
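Some illustrative numbers for the ten-metre race in the question, assuming typical dielectric constants for common coaxial insulations (the velocity factor of a cable is roughly $1/\sqrt{\varepsilon_r}$; the values below are assumed, not measured):

```python
import math

c = 2.998e8      # m/s
d = 10.0         # m, the race distance in the question

# Velocity factor ~ 1 / sqrt(relative permittivity of the dielectric).
cables = {
    "light in air (~vacuum)":    1.00,
    "solid-PE coax (er ~ 2.25)": 2.25,
    "foamed-PE coax (er ~ 1.5)": 1.5,
}

for name, er in cables.items():
    vf = 1 / math.sqrt(er)
    delay = d / (vf * c)
    print(f"{name:26s} speed = {vf:.0%} of c, 10 m delay = {delay * 1e9:5.1f} ns")
```

Over ten metres the laser wins by only a few nanoseconds, which is consistent with the 80-90% figures quoted above for good cables.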
{ "source": [ "https://physics.stackexchange.com/questions/358894", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/129433/" ] }
358,956
I am reading The Elegant Universe by Brian Greene. In many places it's directly/indirectly mentioned that the LHC may not be able to detect (with the current technology) heavy particles to prove Super-Symmetry. What prevents such accelerators from detecting such heavy particles? I always thought it would be the opposite as heavy particles can make a stronger impression on detectors than light particles.
It's not detecting the particles that is hard, it's making them in the collisions. Although the LHC collision energy is 14 TeV, collisions aren't between the protons but rather between individual quarks inside the protons. Since the energy is shared between the three quarks in a proton, the actual quark-quark collision energy is a lot less than 14 TeV. Even then, for various reasons to do with conservation of momentum, not all that energy can go into creating new particles. The end result is that it's hard to create particles much above a TeV or so in mass. More on this in What is the maximal particle mass one can create via the LHC? Can we create dark-matter particles via the LHC? if you want to pursue this further. The upshot is that if the heavy particles have a mass much greater than a TeV the LHC can't create them, and obviously if they can't be created they can't be detected. All is not completely lost, since we might be able to detect heavy particles indirectly by the influence they have on the collisions we can detect. Even so, the upper mass limit is still restricted.
{ "source": [ "https://physics.stackexchange.com/questions/358956", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/70204/" ] }
359,002
I've already read the below questions (and their answers) regarding neutrinos vs. electromagnetic waves propagating through space, but I'm still not clear on something. Neutrinos arrived before the photons (supernova) The delay between neutrinos and gammas in a supernova, and the absolute mass scale of neutrinos Neutrino Speed in Supernova Speed of neutrinos (Especially dmckee's answer ) Given that Light from SN 1987A arrived 2 or 3 hours after its neutrinos, implying that it was "slowed down" relative to the neutrinos Light from SN Refsdal has been "lensed" multiple times to re-appear on time scales of several decades, implying that light interacts with matter (mass) Neutrinos interact extremely little with matter but are known to have mass and energy Question Why did neutrinos (with their mass and momentum) arrive before the light (considered to be massless) from SN 1987A ? Considering SR and GR, this seems to be a contradiction. What am I missing? Postscript I've tried desperately to avoid using the word "photon" above (in reference to light) after learning of the Lamb Controversy™ (via related discussions here and here on Phys SE).
Both neutrinos and photons were produced in the core of the star but photons have a much stronger probability of interacting with the outer layer of the star than the neutrinos. Thus the photons were trapped whereas the neutrinos easily escaped. This has nothing to do with mass and all to do with the cross-section of interaction with protons/electrons for photons on one hand and for neutrinos on the other. Reading @dmckee's answer made me realise that the phrasing of the previous paragraph makes it sound like the light flash we observe might be due to those photons eventually escaping. This is not what I meant: it would take millions of years for those photons to escape, as is well known for our own Sun. It is only because the outer layers of the star are eventually blown off that we see a light flash. I should also have pointed out that electron neutrinos can escape only in the early stages of the collapse of type II supernovae. As the density increases beyond a few times $10^{11} \text{g}\ \text{cm}^{-3}$, the scattering of neutrinos with stellar matter is sufficient to make the timescale of the diffusion of neutrinos out of the star shorter than the collapse timescale. This is a combination of increasing density (and therefore increasing interactions) and accelerating collapse. So the neutrino flash measured on Earth came from the very beginning of the evolution into a supernova. Let me add some orders of magnitude. The cross-section of photon-electron scattering is of the order of $10^{-24} \text{cm}^2$. Compare this with the neutrino-nucleon scattering. It varies as the square of the neutrino energy: $$\sigma_\nu \approx 10^{-44} E_\nu^2\ \text{cm}^2$$ with the energy in MeV. So that's 20 orders of magnitude, give or take. Where does this huge difference come from? Neutrinos interact solely through the weak interaction whereas photons interact through the electromagnetic interaction with charged nuclei and electrons in the star plasma. So this is just a reflection of the relative strength of both interactions. There is no reason it should be like that: it is just the way our universe is! We would not be here to discuss these matters if it were not, actually…
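To get a feel for those 20 orders of magnitude, here is a rough mean-free-path estimate (my own sketch; the solar-core target density and the 1 MeV neutrino energy are assumed round numbers, and only Thomson scattering is counted for the photons):

```python
# Mean free path: lambda = 1 / (n * sigma)
n = 9e25                          # targets per cm^3, rough solar-core value (assumed)
sigma_photon   = 6.65e-25         # cm^2, Thomson scattering off electrons
sigma_neutrino = 1e-44 * 1.0**2   # cm^2, the ~1e-44*E^2 formula above at E = 1 MeV

mfp_photon   = 1.0 / (n * sigma_photon)     # cm
mfp_neutrino = 1.0 / (n * sigma_neutrino)   # cm

light_year_cm = 9.46e17
print(f"photon mean free path   ~ {mfp_photon:.1e} cm (a small fraction of a cm)")
print(f"neutrino mean free path ~ {mfp_neutrino:.1e} cm "
      f"(~{mfp_neutrino/light_year_cm:.1f} light-years)")
# Roughly 20 orders of magnitude apart, mirroring the cross-section comparison.
```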
{ "source": [ "https://physics.stackexchange.com/questions/359002", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/108554/" ] }
359,088
I understand that particles interact via the fundamentals forces of nature. For example photons interact with matter because they carry the change in the electromagnetic field. Neutrinos, on the other hand, do not interact with the electromagnetic field, and so go through matter with (almost) no interaction. However neutrinos still have a size (and a mass), so can a neutrino "hit" an electron (which also has a size) though unlikely that is? Since neutrinos do not interact with the electromagnetic field, they would not be deflected, so can two particles, like a neutrino and an electron, hit "head-on"?
Yes, neutrinos "hit" electrons all the time inside the sun, on their way out, which results in the resonant conversion of their flavor, predicated on the changing effective index of refraction. They interact with electrons, protons, neutrons, etc. through their favorite interaction, the weak (not the electromagnetic) interaction. (They can also interact through the gravitational interaction, which is puny for them.) They are detected on Earth through their (rare) weak interactions with nucleons in the detectors built for that very purpose. Your "head-on collision" metaphor would hardly apply for any particle, let alone neutrinos. (For all practical purposes, particles, indeed, never quite hit each other. "Hit each other" is a useful shared metaphor to summarize an interaction, quite well described mathematically by Quantum Field Theory. At some point, taking the metaphor to be more "real" than an informal summary of the math will lead you astray and inflict conceptual casualties.) In any case, talking about the "size" of neutrinos and electrons makes very little sense. You may wish to think of their inverse mass, their Compton wavelength (which, for the ν, would exceed 0.1 μm), as some sort of a "size", but you are likely to run into nasty absurdities unless you were very, very careful.
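A quick illustration of that Compton-wavelength "size" (my own sketch; the ~0.1 eV neutrino mass scale below is an assumed illustrative value, roughly of the order of current upper bounds):

```python
# Compton wavelength: lambda = h / (m*c) = h*c / (m*c^2), with h*c = 1239.84 eV*nm
hc_eV_nm = 1239.84
for label, mc2_eV in [("neutrino (assumed ~0.1 eV)", 0.1),
                      ("electron (511 keV)", 511e3)]:
    lam_nm = hc_eV_nm / mc2_eV
    print(f"{label:28s}: Compton wavelength ~ {lam_nm:.3g} nm = {lam_nm*1e-3:.3g} um")
# ~12 um for a 0.1 eV neutrino vs ~0.0024 nm for the electron, consistent with
# the "> 0.1 um" figure above -- and a reminder that this "size" is a quantum
# length scale, not a hard-sphere radius.
```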
{ "source": [ "https://physics.stackexchange.com/questions/359088", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/104419/" ] }
359,604
In QED, the fine structure constant $\alpha$ runs upwards in the UV, with a loop calculation (involving a geometric series of the vacuum polarisation diagram) indicating a divergence in $\alpha$ at $\sim 10^{286}\,\text{eV}$. It is often claimed (see, for instance, Schwartz, QFT and the Standard Model, section 21.2) that this means QED is an incomplete theory at high energies, or that it is not predictive at these energies, and that some UV completion is required. However, QCD is another theory with a Landau pole (in the IR this time), at $\sim100\,\text{MeV}$. Nevertheless, QCD is a theory valid down to arbitrarily low energies; it is merely non-perturbative in this regime. My understanding is that the Landau pole is an artefact of extrapolating a perturbative calculation of the coupling strength $\alpha_s$ into the non-perturbative regime. In fact, there is no divergence in $\alpha_s$, although explicitly calculating it is impossible (or perhaps not even meaningful) with current tools and understanding. Therefore, whilst perturbation theory clearly breaks down in QED at very high energies, is it not possible that QED is a perfectly legitimate and consistent theory up to arbitrarily high energies, in much the same way that QCD is at low energies? Is the QED Landau pole really there? Said another way, is there really any link between "the point at which perturbation theory breaks down" and "the point at which the theory stops being predictive"? Perhaps these are linked when we're working with an EFT with infinitely many terms whose coefficients are unconstrained, but if we postulate the QED Lagrangian as fundamental, is it not, at least in principle, predictive up to arbitrarily high energies?
You are completely correct that the perturbative calculation of the Landau pole can't be trusted, as it will clearly become invalid long before the putative pole is reached. The only method that we know of that can give accurate predictions for the high-energy behavior of QED is numerical simulation. According to https://arxiv.org/abs/hep-th/9712244 and http://www.sciencedirect.com/science/article/pii/S092056329700875X , numerics suggests that QED is indeed quantum trivial (i.e. $e$ always renormalizes to zero for any choice of bare coupling), but not because of a Landau pole, which is the usual explanation. Instead, chiral symmetry breaking kicks in before the Landau pole is reached. So there is no Landau pole at high energies, but there is a different phase transition that causes QED to break down.
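For reference, the $\sim 10^{286}\,\text{eV}$ figure quoted in the question comes straight from the one-loop running with only the electron in the loop; a quick sketch reproducing it (this is exactly the perturbative extrapolation that, as argued above, cannot be trusted near the pole):

```python
import math

# One-loop QED running:  1/alpha(mu) = 1/alpha(m_e) - (2/(3*pi)) * ln(mu / m_e)
# The Landau pole is where 1/alpha(mu) would hit zero.
alpha = 1 / 137.035999      # low-energy fine-structure constant
m_e_eV = 0.511e6            # electron mass in eV

mu_pole = m_e_eV * math.exp(3 * math.pi / (2 * alpha))
print(f"one-loop Landau pole ~ 10^{math.log10(mu_pole):.0f} eV")
# prints roughly 10^286 eV, matching the figure in the question.
```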
{ "source": [ "https://physics.stackexchange.com/questions/359604", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/30217/" ] }
360,007
There is a question that explains work and energy on Stack Exchange, but I did not see this aspect of my problem. Please just point me to my error and to the correct answer that I missed. What I am asking is this: why, in physics, does having the same units not necessarily mean you have the same thing? Let me explain. Please let me use m for meter, sec = second, and kg = kilogram as the units for brevity's sake. The units for work are kg * m/sec^2 * m. The units for kinetic energy are kg * (m/sec)^2. They look the same to me. I need them to be the same so I can figure out the principle of least action. Comments are welcome.
One definition of work is "a change in energy." Any change in a physical quantity must have the same units as that quantity. Different kinds of work are associated with different kinds of energy: conservative work is associated with potential energy, non-conservative work with mechanical energy, and total work with kinetic energy. In fact, that's one way to see the oft-quoted Law of Conservation of Energy: $$ W_{total}=W_{non-conservative}+W_{conservative}\\ \Delta KE=\Delta E - \Delta PE \\ \therefore \Delta E=\Delta KE + \Delta PE $$ So just like impulse (which is a change in momentum) has the same units as momentum, work has the same units as energy. Any change in a physical quantity must have the same units as that quantity. A change in velocity has units of velocity, etc. A more difficult question might be why torque has the same units as energy. This is more subtle, but the key concept is this: units are not the only thing that determines a quantity's interpretation. Context matters too. Energy and torque may have the same units, but they are very different things and would never be confused for one another because they appear in very different contexts. One cannot blindly look at the units of a quantity and know what is being discussed. A dimensionful quantity might be meaningless or meaningful depending on the context, and its meaning can change with that context. Action times speed divided by length has the same units as energy but without any meaningful interpretation (as far as I'm aware).
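A minimal dimensional-analysis sketch (my own illustration) confirming the observation in the question that work and kinetic energy reduce to the same SI base units:

```python
from collections import Counter

# Represent a unit as exponents of the SI base units kg, m, s.
def unit(**exps):
    return Counter(exps)

def mul(*units):
    total = Counter()
    for u in units:
        total.update(u)   # adds exponents
    return {k: v for k, v in total.items() if v != 0}

kg, m = unit(kg=1), unit(m=1)
inv_s2 = unit(s=-2)

force = mul(kg, m, inv_s2)        # N = kg m / s^2
work  = mul(force, m)             # N * m
ke    = mul(kg, m, m, inv_s2)     # kg * (m/s)^2

print("work:", work)              # {'kg': 1, 'm': 2, 's': -2}
print("KE:  ", ke)                # {'kg': 1, 'm': 2, 's': -2}
print("same units?", work == ke)  # True -- same units, but as the answer
                                  # explains, context fixes the meaning.
```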
{ "source": [ "https://physics.stackexchange.com/questions/360007", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
360,582
What do I mean? There are two kinds of equalities, or two ways to interpret an equality. Take for example the ideal gas law $$PV = Nk_BT$$ We all know what this equation means: when you calculate both sides of the equation, you find the same physical quantity. This equation, in other words, is saying that the temperature of an ideal gas is proportional to the pressure and the volume of the enclosing container, and inversely proportional to the number of molecules of the gas. There are so many of these equations in physics, but there is another, more subtle kind. Take this other equation, from the statistics of the ideal gas: $$\langle E\rangle = \frac{1}{2}mv_{rms}^2 = \frac{3}{2}k_BT$$ Now, this equation can also be taken as an expression of proportionality. However, it can also be taken as a definition of temperature. We can read this equation to mean that temperature (a macroscopic phenomenon) is the average kinetic energy of a gas particle (up to a multiplication). Incidentally, one can take $$F=ma$$ in a similar way. For example, when we are in a rotating or generally accelerated frame, the equation actually defines the fictitious force in terms of the acceleration. So is $$G = 8\pi T$$ an expression of proportionality, or a definitional identity? And why? Follow-up: The ideal gas law does not tell us why $lhs = rhs$; it expresses a law, it does not explain nature. On the other hand, the second equation informs us about the nature of temperature, it explains nature, it tells us: this is what temperature is. I find these kinds of equations very satisfying, and they are much rarer in physics. If you disagree on any of this, please leave a comment, I am interested.
There is a pure geometric definition of the Einstein tensor $G_{\mu\nu}$ in terms of derivatives of the metric. Independent of any physics. Likewise, given a field theory, you could in principle calculate the stress energy tensor. GR is a physical theory which couples the geometry, through the Einstein tensor, to the matter content, via the stress energy tensor. There are other self-consistent theories which couple geometry to matter in different ways. In this sense, the equation $G=8\pi T$ is model dependent. Just like the ideal gas law $PV=NRT$, which only applies for ideal gases, not interacting gases.
{ "source": [ "https://physics.stackexchange.com/questions/360582", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/77496/" ] }
361,590
My friend and I are discussing whether or not physical phenomena are deterministic. Let's say, for example, that we have a 3-dimensional box with balls inside of it upon which no gravitational forces are acting. The balls each have their own size, mass, starting position and starting velocity. After a given amount of time, the balls will have changed positions and possibly also velocities due to movement and possible collisions with other balls. The question is, does the same initial state always lead to the same state after a given amount of time? In other words, if we have two boxes of the same size with the same number of balls of the same size, starting at the same positions with the same initial velocities, will the balls inside of each box be at the same positions, as we would expect from a deterministic system, or would there be any randomness involved?
There are several layers to this, so I get to have fun uncovering them. The first layer is simple Newtonian mechanics. If we assume Newtonian mechanics applies, and that the universe only consists of this box and its contents, and the contents of the box are set up exactly the same every time, then the resulting positions of the balls as time goes on are deterministic. They will be exactly the same, every single time. However, it gets more interesting. Newtonian mechanics can be chaotic. A chaotic system is sensitive to initial conditions. A slight perturbation of the setup can yield drastically different results. Perhaps you put one of the balls in the wrong place: off by 0.5 mm. This can cause the collisions to occur differently, and lead to drastically different results. A classic example of this is the double pendulum. In many regions, its motion is very sensitive to initial conditions. In this sense the box is unpredictable but deterministic. There's only one way the balls can move, but it's impossible to predict because properly predicting it would require infinitely precise measurements, and we don't have any way of measuring things like that. Which brings us to widening our universe. Up to this point, we only considered a universe containing this box and this box alone. But there are outside influences on real-world boxes. For example, there are gravitational forces being applied. Literally speaking, the position of Jupiter could affect the positions of these balls colliding around by subtly changing the velocities of the balls. Of course, what I just said sounds like astrology, so I should back off a bit. In practical scenarios, Jupiter is not going to noticeably affect the results. In a truly chaotic system, all inputs matter, but in our practical box, forces like friction are eventually going to make the system highly predictable. There's no need to go to a fortune teller to find the alignment of the planets before doing this experiment in real life! But we are good at making experiments which are closer and closer to these ideal chaotic environments. So we can ask ourselves what happens as we push this to the extreme. What happens when we make an experiment so refined that Jupiter is having an effect? Well, we also start seeing other effects: quantum effects. Quantum effects will perturb the setup, just like failing to perfectly set up all of the balls, or failing to account for the gravitational effects of Jupiter. These effects are tiny, so in any practical situation, you will not observe them. However, they are there. And what's interesting about them is that, to our best knowledge, they are truly random. We know of no way to predict the effects of quantum interactions at a particle-by-particle level. As far as we can tell, their effects are truly nondeterministic, and so your box is nondeterministic too. But, taking a step back, if you look at the sum total of many trillions of quantum interactions occurring each and every second, the results are statistically predictable. If you take the laws of quantum mechanics, and apply them to incredibly large non-coherent bodies (like a billiard ball or a box), you find that the equations simplify out to Newtonian mechanics (more or less). So unless you carefully craft your box and balls with the express intent of detecting the nondeterministic effects of quantum mechanics, you will find that the balls behave very much deterministically (although if you build a chaotic system, they may still not be predictable).
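To make the "deterministic but unpredictable" point concrete, here is a small sketch using the double pendulum mentioned above (my own illustration; the equations of motion are the standard textbook form, e.g. as popularized by myphysicslab, and all parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two runs of the same deterministic system, differing by one nanoradian
# in the initial angle, end up far apart after a few seconds.
g, L1, L2, m1, m2 = 9.81, 1.0, 1.0, 1.0, 1.0

def rhs(t, y):
    th1, w1, th2, w2 = y
    d = th1 - th2
    den = 2*m1 + m2 - m2*np.cos(2*d)
    dw1 = (-g*(2*m1 + m2)*np.sin(th1) - m2*g*np.sin(th1 - 2*th2)
           - 2*np.sin(d)*m2*(w2**2*L2 + w1**2*L1*np.cos(d))) / (L1*den)
    dw2 = (2*np.sin(d)*(w1**2*L1*(m1 + m2) + g*(m1 + m2)*np.cos(th1)
           + w2**2*L2*m2*np.cos(d))) / (L2*den)
    return [w1, dw1, w2, dw2]

t_eval = np.linspace(0, 20, 2001)
y0a = [np.pi/2, 0.0, np.pi/2, 0.0]
y0b = [np.pi/2 + 1e-9, 0.0, np.pi/2, 0.0]   # perturbed initial condition

sol_a = solve_ivp(rhs, (0, 20), y0a, t_eval=t_eval, rtol=1e-10, atol=1e-10)
sol_b = solve_ivp(rhs, (0, 20), y0b, t_eval=t_eval, rtol=1e-10, atol=1e-10)

for t_check in (5, 10, 20):
    i = np.argmin(np.abs(t_eval - t_check))
    diff = abs(sol_a.y[2, i] - sol_b.y[2, i])
    print(f"t = {t_check:2d} s: |delta theta2| = {diff:.2e} rad")
# The separation grows by many orders of magnitude: fully deterministic,
# yet practically unpredictable without infinitely precise initial data.
```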
{ "source": [ "https://physics.stackexchange.com/questions/361590", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/169475/" ] }
361,679
If I write on the starting page of a notebook, it will write well. But when there are few or no pages below the page where I am writing, the pen will not write well. Why does this happen?
I'd say the culprit is the contact area between the two surfaces relative to the deformation. When there are other pieces of paper below it, all the paper is able to deform when you push down; because the paper is fairly soft and deformable fiber. If there is more soft deformable paper below it, the layers are able to bend and stretch more. (A simplified example of this is Springs in series , where the overall stiffness decreases when you stack up multiple deformable bodies in a row) This deformation creates the little indents on the page (and on pages below it; you can often see on the next page the indents for the words you wrote on the page above). The deeper these indents are, the more of the ballpoint is able to make contact with the surface. If there is barely any deformation, then the flat surface doesn't get to make good contact with the page. This makes it hard for the tip of the pen to actually roll, which is what moves the ink from the cartridge to the tip. It would also make a thinner line due to less contact area. Here is an amazing exaggerated illustration I made on Microsoft Paint: The top one has more pages, the bottom one has fewer. I've exaggerated how much the pages deform obviously; but the idea is that having more pages below with make that indent larger; leading to the increased surface area on the pen tip. Note that this doesn't really apply to other types of pens. Pens that use other ways to get the ink out have less of an issue writing with solid surfaces behind; but ballpoint pens are usually less expensive and more common.
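A tiny sketch of the springs-in-series point referenced above (my own illustration; the per-sheet stiffness and pen force are made-up placeholder numbers, not measurements):

```python
# N identical sheets of stiffness k each act like one spring of stiffness k/N
# (1/k_eff = sum of N terms 1/k), so a thicker pad deforms more under the pen.
k_sheet = 1.0e5   # N/m per sheet (illustrative only)
force = 2.0       # N, rough pen pressure (illustrative only)

for n_sheets in (1, 10, 50, 100):
    k_eff = k_sheet / n_sheets
    print(f"{n_sheets:3d} sheets: indentation ~ {force/k_eff*1e6:7.1f} micrometres")
```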
{ "source": [ "https://physics.stackexchange.com/questions/361679", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/153578/" ] }
361,681
My understanding of motional EMF is that one of the ways it is created is by moving a conductor (moving such that its orientation doesn't change) in a uniform, non-changing magnetic field. The EMF is produced due to the segregation of charges caused by the Lorentz force experienced by the charges while moving in a magnetic field. Is that correct? If that is the case, how do you reconcile this with Faraday's law, since Faraday's law requires a change of flux and here the flux is not changing? Of course, if you are moving a conductor in a field such that the flux is changing (like changing the orientation of the conductor), an EMF is induced and that can be given by Faraday's law. But what about the case where the flux is not changing and yet an EMF is still being produced? How do we explain that?
{ "source": [ "https://physics.stackexchange.com/questions/361681", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/31058/" ] }
361,690
I'm not sure whether this has already been asked, but I post here as I couldn't find a satisfactory answer anywhere. This classic example was used by my teacher to illustrate the effect of an inductor in a circuit. Consider the circuit given above. What happens to the bulbs when you turn the switch off? Here's my teacher's argument: when you switch the circuit off, the inductor opposes a change in current and hence bulb A is supposed to glow for a longer time. But I had a different answer. The inductor resists the current decay and makes it still flow through the circuit. Since the external circuit is open, the circuit containing both the bulbs would act as a separate unit, with flowing current. Hence, both the bulbs would glow for a while. When my teacher insisted on his answer, I had to convince myself that the current due to the inductor is just enough to feed bulb A. I am very sure that this is not the case, so what is? Am I wrong in considering the inductor as an EMF source? By the way, please explain from where the inductor pulls charges to maintain the current in the circuit. Is it from the conducting wire? Or from the battery?!
{ "source": [ "https://physics.stackexchange.com/questions/361690", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/155230/" ] }
361,935
I am wondering about this question since I asked myself: why do people feel more weightless in the rear car of a roller coaster than in the front car? To feel the effect of weightlessness, you must accelerate at the acceleration of gravity (around 9.8 m/s^2). Thus, you do not feel that effect in the front car but more likely in the rear car. But all the cars are connected together, and one individual car cannot accelerate faster or go faster, because it will get pulled/pushed by the other cars. I am stuck right now trying to get the answer. If all cars must have the same acceleration and the same speed at any given moment on the track, why does the rear car feel more weightless? To have that feeling you must accelerate near the gravitational acceleration... it doesn't make sense! I have left air resistance and other frictional forces out of this, since I am guessing they shouldn't need to be taken into consideration in that kind of situation.
The acceleration along the track is always equal for every car, but for each car that acceleration aligns with the hills/gravity in different ways. As the front car crests a hill, the coaster is decelerating; the front car is being pulled backward by the other cars. But as the rear car crests a hill, it's being pulled forward by the rest of the cars. The front car is accelerated down hills. The rear car is accelerated over hills. This is why they feel different to ride.
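Here is a toy model of my own that puts numbers on this (not from the answer above): a frictionless train crawls off a flat lift hill onto a drop whose crest has radius of curvature R; the speed comes from energy conservation of the train's centre of mass, and the apparent weight of the car at the crest is roughly $1 - v^2/(gR)$ in units of g. All parameter values are illustrative assumptions, and car spacing is measured horizontally for simplicity:

```python
import numpy as np

g  = 9.81   # m/s^2
R  = 10.0   # m, radius of curvature at the crest (assumed)
d  = 2.0    # m, horizontal spacing between cars (assumed)
N  = 7      # number of cars
v0 = 2.0    # m/s, crawl speed coming off the lift (assumed)

def height(x):
    # flat lift hill for x <= 0, parabolic drop beyond the crest at x = 0
    return np.where(x <= 0, 0.0, -x**2 / (2 * R))

def crest_g_factor(lead_x):
    xs = lead_x - d * np.arange(N)      # horizontal positions of all cars
    com_drop = -height(xs).mean()       # how far the centre of mass has fallen
    v2 = v0**2 + 2 * g * com_drop       # energy conservation for the whole train
    return 1 - v2 / (g * R)             # apparent weight of the car at the crest

print(f"front car at crest: {crest_g_factor(0.0):.2f} g")          # ~1 g
print(f"rear  car at crest: {crest_g_factor((N - 1) * d):.2f} g")  # much lighter
# The rear car crosses the same crest much faster, because the rest of the
# train is already descending, so it is whipped over and feels far lighter.
```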
{ "source": [ "https://physics.stackexchange.com/questions/361935", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/170996/" ] }
362,305
Every now and then a question comes up about the status of the momentum operator in the infinite square well, and while we have two good answers on the topic here and here , I'm generally not satisfied by their level of detail and by how easy (not very) it is to find them when you need them. I therefore posit that it is high time we settle things and build a canonical Q&A thread for this, so, with that in mind: What's the deal with momentum in the infinite square well? Is the momentum operator $\hat p=-i\hbar \frac{\mathrm d}{\mathrm dx}$ symmetric when restricted to the compact interval of the well? Are there any subtleties in its definition, via its domain or similar, that are not present for the real-line version? Is the momentum operator $\hat p=-i\hbar \frac{\mathrm d}{\mathrm dx}$ self-adjoint in these conditions? If not, why not, and what are the consequences in terms of the things we normally care about when doing one-dimensional QM? If it is not self-adjoint, does it admit a self-adjoint extension ? If it does, is that extension unique? If the extension is not unique, what are the different possible choices, and what are their differences? Do those differences carry physical meaning / associations / consequences? And what is a self-adjoint extension anyways, and where can I read up about them? What is the spectrum and eigenvectors of the momentum operator and its extensions? How do they differ from each other? Is there such a thing as a momentum representation in this setting? If not, why not? What is the relationship between the momentum operator (and its possible extensions) and the hamiltonian? Do they commute? Do they share a basis? If not, why not? Do these problems have counterparts or explanations in classical mechanics? ( nudge, nudge ) And, more importantly: what are good, complete and readable references where one can go and get more information about this?
Is the momentum operator $P=-i\hbar \frac{\mathrm d}{\mathrm dx}$ symmetric when restricted to the compact interval of the well? Are there any subtleties in its definition, via its domain or similar, that are not present for the real-line version? (I henceforth assume the Hilbert space is $L^2([0,1],dx)$ .) It depends on the precise definition of the domain of $P$ . A natural choice is $$D(P) = \{\psi \in C^2([0,1])\:|\: \psi(0)=\psi(1)=0\}\:.\tag{1}$$ With this definition $P$ is symmetric : (a) the domain is dense in $L^2([0,1], dx)$ and (b) the operator is Hermitian $$\langle P\psi| \phi \rangle = \langle \psi| P\phi \rangle\quad \mbox{for $\psi, \phi \in D(P)\:.$}\tag{2}$$ You can consider different definitions of the domain more or less equivalent. The point is that the self-adjoint extension are related with the closure of $P$ and not to $P$ itself, and you may have several possibilities to get the same closure stating from different domains. The situation is similar to what happens on the real line. There $P$ can be defined as a differential operator on $C_0^\infty(\mathbb R)$ or $\cal S(\mathbb R)$ (Schwartz' space), or $C^1_0(\mathbb R)$ and also interpreting the derivative in weak sense. In all cases the closure of $P$ is the same. Is the momentum operator $P=-i\hbar \frac{\mathrm d}{\mathrm dx}$ self-adjoint in these conditions? If not, why not, and what are the consequences in terms of the things we normally care about when doing one-dimensional QM? It is not self-adjoint with the said choice of its domain (or with every trivial modification of that domain). The consequence is that it does not admit a spectral decomposition as it stands and therefore it is not an observable since there is no PVM associated with it. Defining $P= -i \hbar \frac{\mathrm d}{\mathrm dx}$ on the real line with one of the domain said above, the same problem arises. The general fact is that differential operators are never self-adjoint because the adjoint of a differential operator is not a differential operator since it cannot distinguish between smooth and non-smooth functions, because elements of $L^2$ are functions up to zero-measure sets. At most a symmetric differential operator can be essentially self-adjoint , i.e., it admits a unique self-adjoint extension (which coincides to the closure of the initial operator). This unique self-adjoint operator is the true observable of the theory. If it is not self-adjoint, does it admit a self-adjoint extension ? Yes. The canonical way is checking whether defect indices of $P$ with domain (1) are equal and they are. But the shortest way consists of invoking a theorem by von Neumann: If a (densely defined) symmetric operator commutes with an antilinear operator $C$ defined on the whole Hilbert space and such that $CC=I$ , then the operator admits self-adjoint extensions. In this case $(C\psi)(x):= \overline{\psi(1-x)}$ satisfies the hypothesis. If it does, is that extension unique? NO it is not, the operator is not essentially self-adjoint. If the extension is not unique, what are the different possible choices, and what are their differences? Do those differences carry physical meaning / associations / consequences? And what is a self-adjoint extension anyways, and where can I read up about them? There is a class of self-adjoint extensions parametrized by elements $\chi$ of $U(1)$ . 
These extensions are defined on the corresponding extension of the domain $$D_\chi(P) := \{\psi \in L^2([0,1],dx)\:|\:\psi' \mbox{in weak sense exists in $L^2([0,1],dx)$ and $\psi(1) = \chi\psi(0)$} \}\:. $$ (It is possible to prove that with the said definition of $D_\chi$ the definition is consistent: $\psi$ is continuous, so that $\psi(0)$ and $\psi(1)$ make sense.) Next, the self-adjoint extension of $P$ over $D_\chi(P)$ is again $-i\hbar \frac{d}{dx}$, where the derivative is interpreted in the weak sense. The simplest case is $\chi=1$, and you have the standard momentum operator with periodic boundary conditions, which is self-adjoint. The other self-adjoint extensions are trivial changes of this definition. I do not know the physical meaning of these different choices (if any): the theory is too elementary at this stage to imagine some physical interpretation. Maybe with an improved model a physical interpretation arises. What is the spectrum and eigenvectors of the momentum operator and its extensions? How do they differ from each other? Is there such a thing as a momentum representation in this setting? If not, why not? You can easily compute the spectrum, which is a pure-point spectrum, and the eigenvectors are shifted exponentials. If $\chi = e^{i \alpha}$ where $\alpha \in \mathbb R$, and we denote by $P_\alpha$ the associated self-adjoint extension of $P$, a set of eigenvectors is $$\psi^{(\alpha)}_n(x) = e^{i(\alpha + 2\pi n)x}$$ with eigenvalues $$p^{(\alpha)}_n := \hbar(\alpha + 2\pi n)\quad n \in \mathbb Z\:.$$ The set of the $\psi^{(\alpha)}_n$ is a Hilbert basis because it is connected with the standard basis of exponentials by means of the unitary operator $(U_\alpha \psi)(x) = e^{i\alpha x} \psi(x)$. Essentially Nelson's theorem and the spectral decomposition theorem prove that $P_\alpha$ has pure point spectrum made of the reals $p_n^{(\alpha)}$. So a momentum representation exists, as you can immediately prove. What is the relationship between the momentum operator (and its possible extensions) and the hamiltonian? As you know, if you start from $H := -\hbar^2 \frac{d^2}{dx^2}$ on $D(H):= \{\psi \in C^2([0,1]) \:|\: \psi(0)=\psi(1) =0\}$ (I assume $2m=1$), this is essentially self-adjoint, though the corresponding momentum operator with domain (1) is not. (The proof immediately arises from Nelson's theorem since $H$ is symmetric and admits a Hilbert basis of eigenfunctions.) However, there are also different candidates for the Hamiltonian operator arising by taking the second power of each self-adjoint extension of $P$ with domain (1). The spectrum is made of the second powers of the elements of the spectrum of the corresponding self-adjoint extension, $\hbar^2(\alpha + 2\pi n)^2$. Do they commute? Do they share a basis? Momentum and the associated Hamiltonian commute, and a common basis is that written above for the momentum. Different self-adjoint extensions and different Hamiltonians do not commute, as you easily prove by direct inspection. Do these problems have counterparts or explanations in classical mechanics? ( nudge, nudge ) I do not know. And, more importantly: what are good, complete and readable references where one can go and get more information about this? I do not know; many results are spread in the literature. It is difficult to collect them all. A good reference is Reed and Simon's textbook: Vol I and II. ADDENDUM. A technical point deserves a little discussion.
Sometimes, when introducing self-adjointness domains as above in the space $L^2(I)$, where $I\subset \mathbb{R}$ is a bounded interval, the functions $\psi$ are required to be absolutely continuous. This requirement is actually included in the condition that the weak derivative $\psi'$ exists and is included in $L^1$ (or $L^2$, since $I$ is bounded). In fact, a measurable function $\psi:I \to \mathbb{C}$ is absolutely continuous if and only if it admits a weak derivative in $L^1(I)$. In this case, as the function is absolutely continuous, its derivative exists almost everywhere and coincides with $\psi'$.
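A small numerical sanity check of the claims above (my own sketch, with $\hbar = 1$ and an arbitrary choice of $\alpha$): the functions $\psi_n(x) = e^{i(\alpha + 2\pi n)x}$ satisfy the twisted boundary condition $\psi(1) = e^{i\alpha}\psi(0)$, are orthonormal on $[0,1]$, and are eigenfunctions of $-i\,d/dx$ with eigenvalue $\alpha + 2\pi n$:

```python
import numpy as np

alpha = 0.7                      # arbitrary choice of self-adjoint extension
x = np.linspace(0.0, 1.0, 20001)

def psi(n):
    return np.exp(1j * (alpha + 2 * np.pi * n) * x)

for n in (-1, 0, 2):
    p = psi(n)
    bc = abs(p[-1] - np.exp(1j * alpha) * p[0])   # psi(1) = e^{i alpha} psi(0)?
    dpdx = np.gradient(p, x)                      # finite-difference derivative
    eig = np.mean((-1j * dpdx / p).real)
    print(f"n={n:2d}: boundary residual {bc:.1e}, "
          f"eigenvalue {eig:.4f} vs expected {alpha + 2*np.pi*n:.4f}")

# Orthonormality of two different eigenfunctions:
inner = np.trapz(np.conj(psi(0)) * psi(2), x)
norm  = np.trapz(np.conj(psi(0)) * psi(0), x)
print(f"<psi_0|psi_2> ~ {abs(inner):.1e},  <psi_0|psi_0> ~ {norm.real:.4f}")
```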
{ "source": [ "https://physics.stackexchange.com/questions/362305", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/8563/" ] }
362,780
I was reading up on the Clay Institute's Millenium prizes in mathematics. And I noticed the Navier-Stokes equations were described as minimally understood. As far as I was taught in physics a few weeks ago(SCQF Level 6), they are used but solutions to them are hard to find in three dimensions because they require large amounts of computational power due to the complexity of the equations and so approximations are used. How were the equations discovered in the first place if we can't solve them?
I just wanted to give a more concrete idea of how we know these equations even though we have trouble proving analytical theorems about them. Stuff moving in space Consider any stuff (as in, any conserved quantity) distributed over space. We know that we can describe this with a time-dependent density field $\rho(x,y,z,t)$ such that any little volume $dV$ has some amount of stuff $\rho~dV$ at that point. We also know that this stuff might be flowing around over time and we formally treat this by saying that we want to know the flow through a little flat surface of area $dA,$ which is oriented in the $\hat n$ direction: that is, the surface is normal to $\hat n$ and "positive" flow will be in the $+\hat n$ direction. Combined together this is a vector $d\mathbf A = \hat n~dA$ and there is some vector field $\mathbf J(x,y,z,t)$ such that the amount of stuff which flows through this area over a time $\delta t$ is $\delta t~d\mathbf A\cdot\mathbf J(x,y,z,t).$ With $\rho$ and $\mathbf J$ we know almost everything. Since the stuff is conserved, we can say that in this box of volume $dV,$ if the amount of stuff in the box changes, it is either because there was a net flow into or out of the sides of the box, so we are doing some $\iint d\mathbf A\cdot \mathbf J$ which turns out by Gauss's theorem to be just $dV~\nabla\cdot\mathbf J,$ or else it came from outside the system we're studying, so there is some term $dV~\Phi$. Equating that to the change in the box $dV~(\partial\rho/\partial t)$ gives the simple starting equation $${\partial \rho\over\partial t} = -\nabla\cdot \mathbf J + \Phi.$$Now when we've got a flow field $\mathbf v(x,y,z,t)$ dictating how a fluid flows, the most dominant transport term is that the box flows downstream, $\mathbf J = \rho~\mathbf v + \mathbf j$ for some deviation $\mathbf j.$ Usually the principal deviation then comes from Fick's law , that there is a flow proportional to the difference in density between adjacent points, $\mathbf j = -D~\nabla \rho,$ but there may be more complex terms there; in particular we shall see pressure here. Conservation of momentum The key point here is that $p_x$, the momentum in the $x$-direction, is a stuff. It is a known conserved quantity. It is conserved as a direct result of Newton's third law which turns out, under Emmy Noether's celebrated theorem , to be the same as the statement that the laws of physics are the same at position $x$ as they are at position $x+\delta x$, for a suitable definition of "laws of physics." We are pretty sure about this, and we are pretty sure that the momentum of the fluid itself in the $x$-direction must therefore also be conserved, and this is $\rho~v_x$ where I am shifting definitions a bit on you: $\rho$ now refers to the mass density field and $v_x$ still refers to the fluid velocity in the $x$-direction. Now a flow of momentum per unit time, which we said is what $\mathbf J\cdot d\mathbf A$ is, is a force . Therefore $\mathbf J$ naturally takes the form of a force per unit area in this context. 
Now we know that Newton's expression for viscous forces was in fact to write $F_x = \mu~A~v_x/y$ where I am moving a surface of a fluid at speed $v_x$ at a perpendicular distance $y$ from a place where it is being held still; it will not surprise you at all to see that this is very similar to Fick's law and can be written as just $\mathbf j_\text{viscosity} = -\mu~\nabla v_x.$ To that we also need to add the effects of pressure, as a lowering in pressure also drives a fluid motion; this is a little bit harder to reason out but it takes the form that we can imagine a constant flow in the $x$-direction of $p~\hat x$ and then deviations in this flow would produce the change in momentum per unit time $-\partial p/\partial x$ through this divergence term. (That's a little bit of a sloppy way to show that we are talking about a stress tensor and part of it is $p~\mathbf 1$, the identity matrix multiplied by the pressure.) Combining these two components of $\mathbf j$ we have $${\partial \over\partial t}(\rho~v_x) = -\nabla\cdot (\rho~v_x~\mathbf v - \mu \nabla (v_x)) - \frac{\partial p}{\partial x} + \Phi_x.$$The external contribution $\Phi$ comes from forces influencing the fluid from outside, like gravity. In the Navier-Stokes equations the Millenium Prize has restricted itself to a considerably simpler case where $\nabla\cdot\mathbf v = 0$ and $\rho$ and $\mu$ are constant, which we call "incompressible flow." This is generally a valid assumption when you're interacting with a fluid at speeds much lower than the speed of sound in that fluid; then the fluid would rather move away from you than be compressed into any one place. In this case we can commute $\rho$ out of all of the spatial derivatives and then divide by it, so that the only impact is to rewrite $\nu=\mu/\rho$ and $\lambda=p/\rho$ and $a_x=\Phi_x/\rho$, eliminating the unit of mass from the equation. For $v_x$ we have specifically, $${\partial v_x\over\partial t} + \mathbf v\cdot\nabla v_x - \nu \nabla^2 v_x = - \frac{\partial \lambda}{\partial x} + a_x,$$ and then we can extend the above analysis to the directions $y,z$ too to find, $$\dot{\mathbf v} + (\mathbf v\cdot\nabla)\mathbf v - \nu \nabla^2 \mathbf v = - \nabla \lambda + \mathbf a.$$This is the version of the Navier-Stokes equations written down in the Millenium Prize; we have a very straightforward explanation of this as "The flow of momentum in a small box flowing downstream in an incompressible homogeneous Newtonian fluid is due entirely to Fick's-law diffusion of the momentum due to the viscosity of the fluid, plus a force due to pressure gradients inside the fluid, plus forces imposed by the external world." Why this equation? The understanding of the physics of how we got to this equation is not in question. What's at stake is the mathematics of this equation, in particular this $(\mathbf v \cdot \nabla) \mathbf v$ term which contains $\mathbf v$ twice and thereby makes it a nonlinear partial differential equation: given two flow fields $\mathbf v_{1,2}$ which are valid, in general $\alpha \mathbf v_1 + \beta \mathbf v_2$ will not solve this equation, removing our most powerful tool from our toolbox. 
Nonlinearity turns out to be unbelievably hard to solve in general, and essentially the Clay Mathematics institute is giving the million-dollar prize for anyone who cracks nonlinear differential equation theory strongly enough that they can answer one of the more basic mathematical questions about these Navier-Stokes equations, as a "most basic example" for their new theoretical toolkit. The idea of the Clay prizes is that they are specific problems (which is important for awarding a prize for their solution!) but that they seem to require powerful new general ideas which would allow our mathematics to go into places where it has historically been unable to go. You see this for example in $\text{P} = \text{NP}$, it's a very specific question but to answer it we would seem to need to have a better handle on "here's a classification of set of stuff which computers can do, and here are some things which a computer can't efficiently do" which nobody has yet been able to convincingly present. A new toolbox which could resolve this "stupid little" question would therefore profoundly improve our ability to work on a huge class of related problems in computation.
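To see the trouble caused by that $(\mathbf v\cdot\nabla)\mathbf v$ term in the simplest possible setting, here is a sketch of my own using the 1D viscous Burgers equation, a toy cousin of Navier-Stokes that keeps only the nonlinear advection term and the viscous term; it checks numerically that solutions do not superpose (the grid and parameter choices are arbitrary):

```python
import numpy as np

# u_t + u*u_x = nu*u_xx on a periodic domain, explicit finite differences.
nx, nu, dt, nsteps = 200, 0.05, 0.004, 250
x  = np.linspace(0, 2 * np.pi, nx, endpoint=False)
dx = x[1] - x[0]

def step(u):
    ux  = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)      # central d/dx
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # central d2/dx2
    return u + dt * (-u * ux + nu * uxx)

def evolve(u):
    for _ in range(nsteps):
        u = step(u)
    return u

u1, u2 = np.sin(x), 0.5 * np.cos(2 * x)
sum_of_solutions = evolve(u1) + evolve(u2)
solution_of_sum  = evolve(u1 + u2)
print("max |evolve(u1)+evolve(u2) - evolve(u1+u2)| =",
      np.max(np.abs(sum_of_solutions - solution_of_sum)))
# Clearly nonzero: because of the u*u_x term, superposition fails, which is
# exactly why the linear-theory toolbox is of so little help here.
```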
{ "source": [ "https://physics.stackexchange.com/questions/362780", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/149395/" ] }
363,056
Several people, including Fusion for Energy, have posted this infographic: to meet the energy needs of a city of 1 million people one would need either 250,000 tonnes of oil, or 400,000 tonnes of coal, or 60 kg of fusion fuel. Ignoring the fact that the infographic doesn't say for how long, what I'm asking about here is the rough equality of the two energies. I assume by fusion fuel they mean deuterium/tritium. Is 60 kg of fusion fuel energy-equivalent to 400,000,000 kg of coal?
Yes. See for example this table of energy densities. Let's take 30 MJ/kg for coal (the middle of the range in the table); then 400,000 tonnes of coal gives $1.2\times 10^{16}$ joules. Assuming they're talking about deuterium-tritium fusion (which is the easiest form of nuclear fusion), we have 340,000,000 MJ/kg, and the 60 kg gives us $2.04\times 10^{16}$ joules. Of course, both the coal and the (as-of-yet hypothetical) fusion plant will have inefficiencies that prevent us from extracting 100% of this energy. The 340,000,000 MJ/kg number can be calculated from the energy released in a single deuterium-tritium reaction, see for example here.
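The back-of-the-envelope arithmetic, spelled out (using the energy densities quoted above):

```python
coal_mass_kg   = 400_000 * 1000        # 400,000 tonnes
fusion_mass_kg = 60

E_coal   = coal_mass_kg * 30e6         # 30 MJ/kg, in joules
E_fusion = fusion_mass_kg * 3.4e14     # 340,000,000 MJ/kg, in joules

print(f"coal   : {E_coal:.2e} J")      # ~1.2e16 J
print(f"fusion : {E_fusion:.2e} J")    # ~2.0e16 J
print(f"ratio fusion/coal = {E_fusion/E_coal:.2f}")   # same order of magnitude
```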
{ "source": [ "https://physics.stackexchange.com/questions/363056", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/114596/" ] }
363,126
As I understand it, during boiling the input of heat destroys or re-arranges the hydrogen bonds. It is used, in other words, against the potential energy of the intermolecular bonds. But if some hydrogen bonds between molecules are destroyed then why is the kinetic energy of these particular molecules not increased and, consequently, temperature?
This is because the external pressure is constant (at one atmosphere). If you increase the pressure, e.g. by using a pressure cooker, then the temperature goes up, or likewise if you reduce the pressure the temperature goes down. Water boils when the chemical potential of the water is the same as the chemical potential of the steam. If we consider steam as an ideal gas then the chemical potential is controlled by the pressure and temperature. If you start with just water at below 100 °C then the water evaporates, and the partial pressure of the water vapour increases until the chemical potentials of the vapour and water match. At that point there is no net evaporation of the water. However at 100 °C the partial pressure of the steam in equilibrium with the water rises to one atmosphere and it can't get any higher. So if you raise the temperature above 100 °C the water and vapour cannot be in equilibrium, so the water boils continuously in a desperate but hopeless attempt to raise the steam pressure. This is how pressure cookers raise the boiling point. At 100 °C the water boils, but in a pressure cooker it can raise the steam pressure to above an atmosphere, so the water can remain in equilibrium with the steam above 100 °C.
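To put rough numbers on how the boiling point follows the pressure, here is a sketch using the integrated Clausius-Clapeyron relation with a constant molar heat of vaporization (an approximation; the 40.7 kJ/mol value for water is an assumed round number):

```python
import math

# 1/T = 1/T0 - (R/L) * ln(P/P0), with L treated as constant
R, L = 8.314, 40_700          # J/(mol K), J/mol
T0, P0 = 373.15, 1.0          # boiling point at 1 atm

for P in (0.5, 1.0, 2.0):     # atmospheres
    T = 1.0 / (1.0 / T0 - (R / L) * math.log(P / P0))
    print(f"P = {P:3.1f} atm -> boiling point ~ {T - 273.15:5.1f} C")
# Roughly 81 C at half an atmosphere (high mountain), 100 C at sea level,
# and about 120 C in a 2 atm pressure cooker: the pressure, not the rate of
# heating, sets the temperature at which water and steam can coexist.
```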
{ "source": [ "https://physics.stackexchange.com/questions/363126", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/110669/" ] }
363,179
The ABC , reporting on the announcement of gravitational wave GW170817, explained that for the first time we could identify the precise source of a gravitational wave because we also observed the event in the electromagnetic spectrum. It notes however that the gamma ray burst detected by the FERMI space telescope was observed nearly two seconds later than the gravitational wave. Did the gamma ray burst actually arrive at Earth two seconds after the gravitational wave, or is this time delay just some kind of observational artefact? If the delay is real, what is its cause? Is the delay due to the gamma ray burst somehow being slowed, or was it 'emitted' at a different time in the merger event? What does a delay even mean when the gravitational wave was detected for over 100 seconds? Is it a delay from peak GW to peak gamma ray?
This is addressed in section 4.1 "Speed of Gravity" of one of the GW170817 companion papers: Gravitational Waves and Gamma-Rays from a Binary Neutron Star Merger: GW170817 and GRB 170817A. General relativity predicts that GWs travel at the speed of light. The difference in time of arrival could come from a difference in speed or a difference in the time of emission, i.e. the gamma rays are emitted after the merger. Under conservative assumptions described in the paper, the fractional difference between the speed of light and the speed of gravity is bounded as: $$-3\times 10^{-15} \le \frac{v_\mathrm{GW}-c}{c} \le +7\times 10^{-16}$$ I think this is the strongest bound on the speed of gravity to date. They also discuss dispersion through the intergalactic medium. The speed of light in a medium depends on the frequency of the light, with low frequencies traveling slower than high frequencies. Gamma-rays have very high frequencies and should not be slowed very much: "The intergalactic medium dispersion has negligible impact on the gamma-ray photon speed, with an expected propagation delay many orders of magnitude smaller than our errors on ${v}_{\mathrm{GW}}$." To answer your question 3, the delay is measured as the time from the merger of the two neutron stars to the start of the gamma-ray burst. The gravitational waves are emitted during the inspiral phase of the binary evolution too. They are detectable for about 100 s before the merger. During the merger, the material at the core of the event will be very dense. Even gamma rays won't be able to propagate through it. To answer question 2, the gamma rays that were observed are probably generated slightly after the merger, outside of the newly formed single body.
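If I recall the paper's assumptions correctly (a conservative lower bound of ~26 Mpc on the distance, an intrinsic emission delay of the gamma rays between 0 and 10 s, and the observed ~1.7 s gap), the quoted bound can be reproduced with simple arithmetic; a hedged sketch:

```python
Mpc = 3.086e22                 # metres
c   = 2.998e8                  # m/s
D   = 26 * Mpc                 # assumed conservative lower bound on distance
travel_time = D / c            # ~2.7e15 s

dt_observed = 1.7              # s, gamma rays arrived after the GW signal
for dt_emission in (0.0, 10.0):                       # assumed intrinsic delays
    frac = (dt_observed - dt_emission) / travel_time  # ~ (v_GW - c)/c
    print(f"emission delay {dt_emission:4.1f} s -> (v_GW - c)/c ~ {frac:+.1e}")
# Gives roughly +6e-16 and -3e-15, i.e. the bounds quoted above: a ~2 s
# offset over ~10^15 s of travel pins the two speeds to one another.
```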
{ "source": [ "https://physics.stackexchange.com/questions/363179", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/62379/" ] }
363,185
Imagine I have an iron ball in vacuum, and it is moving and hits an iron wall. All the kinetic energy gets converted into heat, right? (Since sound cannot be produced.) Will there be any other energy conversion? The ball does not bounce back, nor does the wall move. So, how is the momentum being conserved? Energy can be conserved as thermal energy is produced, but since momentum is a vector, how do we know that the random motion of the particles of the ball and the wall all adds up vectorially to conserve momentum?
{ "source": [ "https://physics.stackexchange.com/questions/363185", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/165920/" ] }
363,211
By "strange" I mean 'Is there a reason for this, or is it something we accept as a peculiarity of our universe?' I see no reason why if magnetic field is in the $+x$ direction and a charge's velocity is in the $+y$ direction, that the force experienced by the charge can't be in either the $+z$ or $-z$ direction, both are perpendicular to $x$ and $y$. The equation for Lorentz force just tells us that it goes in the $+z$ direction, but it seems equally valid that the force would be in the $-z$ direction (I mean there's nothing to distinguish $+z$ from $-z$ anyway, you can turn one into the other by swapping the handedness of your coordinate system). It seems as if the universe is preferentially selecting one direction over the other. As a practical example, if a current carrying wire is in a magnetic field and it experiences an upward force, why shouldn't it experience a downward force?
The universe is not preferentially selecting one direction over another. The fact that it appears that this is happening is an artifact of how we represent the magnetic field. It is well-known that the existence of magnetic forces can be inferred from a Lorentz-invariant theory involving electric forces. For example, see this answer . The magnetic force so derived necessarily has the property that parallel currents attract while antiparallel currents repel. The magnetic field can be thought of as being the field that needs to be introduced into the theory in order to give a local description of this attraction between parallel currents. It is therefore necessary for the Lorentz force law to be written in such a way so that it gives the correct direction for the magnetic force between two currents. Otherwise the law would violate the observed Lorentz invariance of our universe. A law itself does not determine what actually happens; that can only be determined by experiment. Because the direction of the magnetic field is assigned through a right-hand rule, a second application of the right-hand rule is needed in the Lorentz force law in order to get the correct direction for the actual force between the two currents. If the magnetic field direction were assigned through a left-hand rule, the Lorentz force law would also involve a left-hand rule. In neither case does the universe enforce an arbitrary choice of one over the other. We are simply describing the phenomenon in a way that requires us to put in the rule by hand in order to get the correct result. This contrasts with the situation with weak interactions, which really do violate parity symmetry.
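As a small numerical illustration of the point that the handedness convention enters twice, once when assigning a direction to B and once in F = qv x B, here is a sketch of my own computing the force per length between two parallel currents with both conventions (parameter values are arbitrary):

```python
import numpy as np

mu0, I1, I2, d, L = 4e-7 * np.pi, 5.0, 5.0, 0.1, 1.0   # SI, illustrative values

zhat = np.array([0.0, 0.0, 1.0])   # direction of both currents
rhat = np.array([1.0, 0.0, 0.0])   # from wire 1 towards wire 2

def force_on_wire2(hand):          # hand = +1 right-handed, -1 left-handed
    B = hand * np.cross(zhat, rhat) * mu0 * I1 / (2 * np.pi * d)
    return I2 * L * hand * np.cross(zhat, B)

print("right-hand convention:", force_on_wire2(+1))
print("left-hand convention: ", force_on_wire2(-1))
# Identical vectors, pointing from wire 2 back towards wire 1 (parallel
# currents attract): the handedness appears squared, so no physical
# preference for +z over -z is ever made.
```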
{ "source": [ "https://physics.stackexchange.com/questions/363211", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/108664/" ] }
363,306
Just recently, LIGO and Virgo successfully detected new signs of gravitational waves. This time, instead of colliding black holes, it is a pair of colliding neutron stars. This collision emits light and gravitational waves. I read in CNN: First-seen neutron star collision that this collision had a signal that lasted for 100 seconds. I read before that the first gravitational wave detection of two colliding black holes had a signal lasting for a split second, and this is also an indication of how long the merging takes place. Is this accurate? If so, if the colliding neutron star has signal lasting for 100s, does that mean that the merging takes a longer time? In addition, if the merging of two black holes happen almost instantly (short time), then why does the merging of two neutron star take up more amount of time?
It is not that the merger of two neutron stars takes longer; the inspiral and merger of a pair of neutron stars just spends a longer time in the frequency range where LIGO is most sensitive. Let me try to explain in more detail. LIGO is sensitive only to gravitational waves with frequencies between approx. 10 Hz and 10 kHz. (See LIGO sensitivity curve.) As has been much discussed in the press announcements, the gravitational waves from the merger of a compact binary follow a "chirp" pattern, increasing both in amplitude and frequency until it cuts off at the merger. The maximum frequency reached is inversely proportional to the "chirp mass", a rather arcane combination of the masses of the two components of the binary. The upshot of this is that heavier binaries have a lower maximum frequency than lighter binaries. For the first event, GW150914, which was very heavy, this meant that LIGO was only sensitive to the very last part of the inspiral (only the last few cycles). The lightest BH binary merger to date, GW151226, already spent a lot more cycles of its inspiral in LIGO's sensitivity range. Now neutron stars are obviously even lighter, allowing LIGO to see even more of the cycles of the inspiral before the merger (around 3000). In fact, for GW170817, the final merger happens in a frequency range where LIGO is no longer that sensitive. The most accurate data is obtained from the inspiral phase.
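To make the time-in-band argument concrete, here is a rough sketch using the leading-order (Newtonian quadrupole) chirp formula for the time left until merger once the signal reaches a given gravitational-wave frequency. The chirp-mass values (about 28 solar masses for a GW150914-like binary, about 1.19 for a GW170817-like one) are approximate figures assumed for illustration.

```python
# Time spent above a given GW frequency before merger, from the leading-order
# chirp formula  tau = (5/256) * (G*Mc/c^3)^(-5/3) * (pi*f)^(-8/3).
import math

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30

def time_to_merger(f_gw_hz, chirp_mass_msun):
    tm = G * chirp_mass_msun * Msun / c**3     # chirp mass expressed in seconds
    return (5.0 / 256.0) * tm**(-5.0/3.0) * (math.pi * f_gw_hz)**(-8.0/3.0)

f_low = 24.0    # Hz, roughly where the detectors become sensitive
print(f"heavy BH binary (Mc ~ 28 Msun):  {time_to_merger(f_low, 28.0):.2f} s left in band")
print(f"NS binary (Mc ~ 1.19 Msun):      {time_to_merger(f_low, 1.19):.0f} s left in band")
# The light neutron-star binary sweeps through the band for ~100 s,
# the heavy black-hole binary for a fraction of a second.
```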
{ "source": [ "https://physics.stackexchange.com/questions/363306", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/170555/" ] }
363,394
I was looking for examples of conservative and non conservative vector fields and stumbled on the question Is magnetic force non-conservative? and in the answer section, it is stated the magnetic force cannot be a vector field so it cannot possibly be considered conservative. As I am learning the maths for divergence and curl naturally I wanted to find some examples to help my intuition and that is what leads me to ask for an explanation of the explanation I found on Stack Exchange.
The magnetic force is not a vector field because vector fields are functions $\mathbf F:\mathbf r \mapsto \mathbf F(\mathbf r)$ that take a single vector value at each position, and the magnetic force $\mathbf F=q\mathbf v\times \mathbf B(\mathbf r)$ also depends on the velocity of the particle that's experiencing the force. You could ask, instead, for the magnetic force on a particle with a given velocity , which itself may or may not depend on the position, i.e. $\mathbf v=\mathbf v(\mathbf r)$ (where that dependence may just be a constant), in which case you'll get a map $$ \mathbf F:\mathbf r\mapsto \mathbf F(\mathbf r)=q\mathbf v(\mathbf r)\times \mathbf B(\mathbf r) $$ that does have a unique value at each position and which therefore does define a vector field, of which you can ask e.g. whether it's conservative or not. However, until you actually define what velocity dependence $\mathbf v(\mathbf r)$ you want to take, none of those terms are applicable.
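As a concrete illustration of the last point, here is a small symbolic check. The setup is an assumption chosen purely for illustration: a uniform field $\mathbf B = B_0\hat{\mathbf z}$ and two different choices of $\mathbf v(\mathbf r)$. One choice gives a conservative force field, the other does not, so nothing can be said until $\mathbf v(\mathbf r)$ is fixed.

```python
# F = q v x B only becomes a vector field once v(r) is specified, and whether that
# field is conservative depends on the choice.  Illustration with uniform B = B0 z_hat.
import sympy as sp

x, y, z, q, B0, k = sp.symbols('x y z q B0 k', real=True)
B = sp.Matrix([0, 0, B0])

def curl(F):
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

v_const = sp.Matrix([k, 0, 0])            # constant velocity field
F1 = q * v_const.cross(B)
print(curl(F1).T)                         # (0, 0, 0): conservative

v_radial = sp.Matrix([k*x, k*y, 0])       # radially expanding velocity field
F2 = q * v_radial.cross(B)
print(curl(F2).T)                         # nonzero z-component: not conservative
```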
{ "source": [ "https://physics.stackexchange.com/questions/363394", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
363,490
I've often heard it said that the motion of a double pendulum is non-periodic. (This may be related to the fact that it's a chaotic system, but I'm not sure about that.) But this does not seem possible to me, for the following reason. Let $\theta_1$ and $\theta_2$ be the angles of the two masses relative to the vertical. Then we can consider the two-dimensional phase space with a $\theta_1$ axis and a $\theta_2$ axis, and the motion of the double pendulum is a continuous curve $\gamma:[0,\infty) \rightarrow [0,2\pi]\times[0,2\pi]$. The thing is, I'm pretty sure such a curve must be self-intersecting. Because if it's not self-intersecting, then its graph would cover more and more of the codomain with time, and so I think you'd get a space-filling curve. And yet space-filling curves are always self-intersecting, so you'd get a contradiction. Thus $\gamma$ must be self-intersecting, and thus the motion of a double pendulum is always periodic. So what's wrong with my reasoning? Or is my reasoning correct, and is the motion of a double pendulum always periodic, just with such a long period that it looks non-periodic? If so, is there a formula for the period?
Short answer: No. General trajectories of the double pendulum are not periodic. You need to distinguish between two aspects: the trajectory in the spatial coordinate system and the trajectory in phase space. Your claim about $\gamma$ is about the first aspect and is thus false. It is perfectly okay for trajectories to intersect in real space, and this doesn't mean the solution is periodic. However, in phase space it is forbidden for different trajectories to intersect (because of the uniqueness of the solution of ODEs given initial conditions). And if they do, you are correct that the dynamics are periodic. Indeed, notice that the mass may travel through the same spatial point twice, but it can do so with different velocities. As @agemO suggested in a comment below, it is important to stress that although the solution is not periodic, it does come very close to being periodic (which is probably what confuses you). Suppose for example that the mass starts from a point $(x,y)$ in the XY plane with velocity vector $(v_{x},v_{y})$. Then according to the Poincaré Recurrence Theorem, after some time the mass will travel as close as you want to that point with a very similar velocity - but they are not guaranteed to be exactly the same. In other words, the motion comes as close as you like to being periodic, but it always just misses, and the resulting behavior is chaotic. There is another very interesting theorem that is worth stating in this case. It is called the Poincaré-Bendixson Theorem, and it states that a trapped trajectory in a 2D phase space must eventually repeat itself (given that the trapping region doesn't contain fixed points). But in this case the phase space is 4D and the theorem doesn't apply.
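The logical point, that a configuration-space curve can pass through (or arbitrarily close to) points it has visited before without the motion being periodic, can be illustrated with something much simpler than the double pendulum itself. The toy motion below, two oscillations with incommensurate frequencies, is only a stand-in chosen for clarity, not a solution of the double-pendulum equations.

```python
# Toy stand-in: theta1 = sin(t), theta2 = sin(sqrt(2)*t).  Because sqrt(2) is
# irrational the motion is NOT periodic, yet the curve in the (theta1, theta2)
# square keeps returning arbitrarily close to (0, 0), each time with different velocities.
import math

root2 = math.sqrt(2.0)
for k in range(1, 200):
    t = k * math.pi                      # theta1(t) is exactly 0 at these times
    th2 = math.sin(root2 * t)
    if abs(th2) < 0.05:                  # near-return to the configuration (0, 0)
        v1 = math.cos(t)                 # +1 or -1, never again the initial (+1, +sqrt(2))
        v2 = root2 * math.cos(root2 * t)
        print(f"t = {t:8.3f}   (th1, th2) ~ (0, {th2:+.3f})   velocities ({v1:+.2f}, {v2:+.2f})")
# The configuration revisits the neighbourhood of (0, 0) over and over, but the
# 4D phase-space point (theta1, theta2, velocity1, velocity2) never exactly repeats.
```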
{ "source": [ "https://physics.stackexchange.com/questions/363490", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/27396/" ] }
363,748
Could you please give an intuitive definition of chemical potential ? It seems that it is an extremely important notion of physics but definitions are really vague.
You say that the definitions are vague, but $\mu_i=\left(\frac{\partial U}{\partial N_i}\right)_{S,V,N_{j\neq i}}=\left(\frac{\partial G}{\partial N_i}\right)_{P,T,N_{j\neq i}}$ is precise. However, it may be helpful to use an analogy to obtain an intuitive definition. I'm sure you're familiar with the fact that systems tend to evolve to reduce gradients. Any such spontaneous change involves two conjugate parameters : a generalized force (corresponding to a gradient in some field, such as a pressure difference) and a generalized displacement (corresponding to the flow, such as a change in volume). The product of the two conjugate variables has units of energy. In heat transfer, for example, a temperature gradient causes a spontaneous flow of energy. The "stuff" that is transferred is entropy. Thus, we obtain the differential term $dU = T\,dS$. A pressure gradient drives a change in volume: $dU = -P\,dV$. What would cause spontaneous movement of matter? In this case, the driving force is a gradient in the chemical potential of a material $i$: $dU = \mu_i\,dN_i$. I'm sure you're also familiar with the concept that changes in concentration drive material transport or diffusion. This is only an approximation. It fails to explain why oil and water separate, for example. (Or why any two mixed materials would separate.) The chemical potential is like an augmented concentration that also incorporates bonding between materials (as well as concentration). It is the true arbiter of how matter will move.
{ "source": [ "https://physics.stackexchange.com/questions/363748", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/110669/" ] }
363,851
It is known that a shadow can travel faster than the speed of light and break no known laws of physics, because it is not an actual thing. A shadow is just the blockage of light. However in the following statement "Maximum speed is finite: No physical object, message or field line can travel faster than the speed of light in a vacuum.", my questions concerns the message portion of this statement. Why can't a shadow be a message? Batman gets his messages from shadows i.e. the bat signal, all the time. I'm not a scientist, obviously, so I know there is something I am missing here?
Good question! Imagine sweeping the bat signal really quickly across Mars. You could totally have it move faster than the speed of light across the surface! But now imagine that you are standing on the surface of Mars while the bat signal is being swept over your position. You would like to send a message to another point on Mars faster than the speed of light. How do you send the message? To do so, you would need to modify the bat signal somehow (according to some prearranged code, such as Morse) as it's going over you to send your desired message. But you can't! Since you didn't send the light, you have no way to modulate the shadow so that it appears different when it reaches the destination. No one in the path of the shadow can change the shadow to use it to send a faster-than-light signal. All you can do on the surface of Mars is watch the bat signal wash over your position and marvel that they have such a bright lamp down in Gotham. On the other hand, if someone in Gotham wanted to send Batman a signal up on Mars, they could do so, but the light would travel at the usual $300000000$ m/s, so the message would still take the full light-travel time to get there.
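Some illustrative numbers make the contrast vivid. The Earth-Mars distance below is an assumed round value near closest approach; only the orders of magnitude matter.

```python
# How slowly you can sweep a searchlight and still have its spot cross Mars
# faster than light, versus how long any actual message must take.
import math

c = 3.0e8            # m/s
D = 5.5e10           # m, assumed Earth-Mars distance (roughly closest approach)

omega_min = c / D    # angular sweep rate at which the distant spot moves at c
print(f"sweep rate for a light-speed spot: {omega_min:.1e} rad/s "
      f"(~{math.degrees(omega_min):.2f} degrees per second)")

print(f"one-way light travel time: {D / c:.0f} s (~{D / c / 60:.0f} minutes)")
# A leisurely ~0.3 deg/s sweep already makes the spot superluminal, but any
# modulated message still rides on light that takes ~3 minutes to arrive.
```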
{ "source": [ "https://physics.stackexchange.com/questions/363851", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/172756/" ] }
364,340
My question is, why use silicon-28 atoms to calculate the kilogram when you already have carbon-12 atoms defining the constant? Does the Avogadro Project intend to define the constant by replacing the idea of carbon-12 and putting silicon-28 in its place?
The idea is to create a sphere of about 1 kilogram and then both weight it and count the number of atoms in it. This is only possible by using crystalline matter, by taking advantage of the regular arrangement of the atoms. Diamond would be indeed a perfect candidate but machining diamond is a hell of a lot more difficult than machining a crystal of silicon, because of the huge difference in hardness, a problem which pales in comparison with the sheer impossibility of making a diamond mono-crystal weighing one kilogram! The world record is about 20 grams. The difficulty is the need to apply a pressure of the order of 100,000 atmospheres, which is only possible in too small a volume for the target weight of 1 kilogram, which would be a cube with 6.5 cm sides. We could imagine settling for a bunch of smaller diamonds of course but this would introduce an extra source of uncertainty. Since it is possible to make a monocrystal of silicon weighing one kilogram, by using refinements of the growth methods developed and refined by the electronic industry, it would not make sense to consider diamond. Why not graphite instead? Unlike diamond, it is possible to carefully make big enough mono-crystals. Unfortunately, graphite is made of a regular arrangement of carbon atoms strongly bonded in sheets, and those sheets do then stack up and keep together because of weaker forces between them: in particular they can easily shift with respect to each other. As a result, this makes graphite much less suitable for the described precision experiment where pinpointing the atomic arrangement is key. With graphite and diamond, we have exhausted the crystalline phases of carbon. Thus exit carbon!
{ "source": [ "https://physics.stackexchange.com/questions/364340", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/173003/" ] }
364,690
I find it annoying and a pain to climb up to the roof and adjust the TV antenna so that I can watch my favorite TV program without distortion. I am not using a satellite service, which requires a dish to capture the signal, and I know the antenna is designed to receive radio-wave signals from the nearest broadcast tower and my TV will be tuned to a specific frequency. My question is: why does the orientation of the TV antenna in situ affect the picture quality? The image is taken from Wirelesshack; please note that mine doesn't look exactly like this, but it is close.
Picture the radio waves from the TV transmitter as flat horizontal sine waves. You want the antenna to pick up the full width of this wave in the horizontal bars. So you need to point the antenna as directly as possible towards the TV transmitter. Cell phones send the signal in a vertical wave (apologies for the oversimplification). This doesn't allow you to pick up a clear signal from as far away, but does mean that you can simply hold the antenna vertically and be able to receive a signal from any direction. Cell phones also contain a lot of modern signal processing electronics to handle a weak signal, which weren't available when TV was designed. P.S. You presumably only need to adjust it if you want to get a signal from a different transmitter, or perhaps if the signal is really weak and the atmosphere slightly changes the direction the strongest signal is coming from.
{ "source": [ "https://physics.stackexchange.com/questions/364690", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/75502/" ] }
364,771
I've been told it is never seen in physics, and "bad taste" to have it in cases of being the argument of a logarithmic function or the function raised to $e$. I can't seem to understand why, although I suppose it would be weird to raise a dimensionless number to the power of something with a dimension.
It's not "bad taste", it's uncalculable to the point of meaninglessness. The whole point of dimensional analysis is that there are some quantities that are not comparable to each other: you can't decide whether one meter is bigger or smaller than ten amperes, and trying to add five volts to ten kelvin will only yield inoperable nonsense. (For details on why, see What justifies dimensional analysis? and its many linked duplicates on the sidebar on the right.) This is precisely what goes on with, say, the exponential function: if you wanted the exponential of one meter, then you'd need to be able to make sense of $$ \exp(1\:\rm m) = 1 + (1\:\rm m) + \frac12(1\:\rm m)^2 + \frac{1}{3!}(1\:\rm m)^3 + \cdots, $$ and that requires you to be able to add and compare lengths with areas, volumes, and other powers of position. You can try to just trim out the units and deal with it, but keep in mind that it needs to match, exactly , the equivalent $$ \exp(100\:\rm cm) = 1 + (100\:\rm cm) + \frac12(100\:\rm cm)^2 + \frac{1}{3!}(100\:\rm cm)^3 + \cdots, $$ and there's just no invariant way to do it. Now, to be clear, the issue is much deeper than that: the real problem with $\exp(1\:\rm m)$ is that there's simply no meaningful way to define it a way that will (i) be independent of the system of units, and (ii) keep a set of properties that will really earn it the name of an exponential. If what one wants is a simple clear-cut way to see it, a good angle is noting that, if one were to define $\exp(x)$ for $x$ with nontrivial dimension, then among other things you'd ask it to obey the property $$ \frac{\mathrm d}{\mathrm dx}\exp(x)=\exp(x), $$ which is dimensionally inconsistent if $x$ (and therefore $\mathrm d/\mathrm dx$) is not dimensionless. It's also been noted in the comments, and indeed in a published paper , that you can indeed have Taylor series over dimensional quantities, by simply setting $f(x) = \sum_{n=0}^\infty \frac{1}{n!} \frac{\mathrm d^nf}{\mathrm dx^n}(0)x^n$, and that's true enough. However, for the transcendental functions we don't want any old Taylor series, we want the canonical ones: they're often the definition of the functions to begin with, and if someone were to propose a definition of, say $\sin(x)$ for dimensionful $x$, then unless it can link back to the canonical Taylor series, it's simply not worth the name. And, as explained above, the canonical Taylor series have fundamental scaling problems that render them dead in the water. That said, for logarithms you can on certain very specific occasions talk about the logarithm of a dimensional quantity $q$, but there you're essentially taking some representative $q_0$ and calculating $$\log(q/q_0)=\log(q)-\log(q_0),$$ where in making sense of the latter you require that the two numerical values be in the same units ─ in which case the final answer is independent of the unit itself. If the situation also allows you to drop additive constants, or incorporate them into something else (such as when solving ODEs, for example, with a representative case being the electrostatic potential of an infinite line charge , or when doing plots in log scale) then you might get rid of the $\log(q_0)$ in the understanding that it will come out in the wash when you come back to dot the i's. However, just because it can be done in the specific case of the logarithm, which is unique in turning multiplicative constants into additive ones, doesn't mean you can use it in other contexts ─ and you can't.
{ "source": [ "https://physics.stackexchange.com/questions/364771", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/153135/" ] }
365,390
I've recently come across a strange result when comparing the Hamiltonian and Lagrangian formulations of classical mechanics. Suppose we are working in the regime where we can say the Hamiltonian $H$ is equal to the total energy $$H=T+V.\tag{1}$$ That is, the constraints are holonomic and time-independent, and the potential is $V=V(q)$ where $q$ a the generalized position vector $q=(q_1,q_2,\ldots,q_n)$. Let $$L=T-V\tag{2}$$ be the Lagrangian. Now, the Euler-Lagrange equations tell us $$\frac{d}{dt}\frac{\partial L}{\partial \dot{q_\sigma}} - \frac{\partial L}{\partial q_\sigma} = 0,\tag{3}$$ for the generalized coordinate $q_\sigma,$ with $\sigma\in\{1,\ldots,n\}$. We also know that the conjugate momenta are defined by $p_\sigma = \frac{\partial L}{\partial \dot{q_\sigma}}$. So this equation tells us $$\dot{p_\sigma} - \frac{\partial L}{\partial q_\sigma} = 0.\tag{4}$$ In the Hamiltonian formalism, we know that $$\dot{p_\sigma} = -\frac{\partial H}{\partial q_\sigma}.\tag{5}$$ Combining these gives $$\frac{\partial H}{\partial q_\sigma}=-\frac{\partial L}{\partial q_\sigma}.\tag{6}$$ Now, this seems very strange because in the regime we are considering, this implies that $$\frac{\partial (T+V)}{\partial q_\sigma}=-\frac{\partial (T-V)}{\partial q_\sigma}\Rightarrow \frac{\partial T}{\partial q_\sigma}=0. \tag{7}$$ Of course, there are many examples where this is not true. I.e., simply consider the free particle analyzed using polar coordinates. Then we have $$H = L = T = \frac{1}{2}m(\dot{r}^2 + r^2\dot{\theta}^2),\tag{8}$$ and so $$\frac{\partial T}{\partial r } \neq 0.\tag{9}$$ What is the explanation for this strange discrepancy? Am I making a silly mistake somewhere?
The problem is that the Lagrangian and the Hamiltonian are functions of different variables, so you must be exceedingly careful when comparing their partial derivatives. Consider the differential changes in $L$ and $H$ as you shift their arguments: $$dL = \left(\frac{\partial L}{\partial q}\right) dq + \left(\frac{\partial L}{\partial \dot q}\right) d\dot q$$ $$dH = \left(\frac{\partial H}{\partial q}\right) dq + \left( \frac{\partial H}{\partial p}\right) dp$$ Finding $\frac{\partial L}{\partial q}$ corresponds to wiggling $q$ while holding $\dot q$ fixed. On the other hand, finding $\frac{\partial H}{\partial q}$ corresponds to wiggling $q$ while holding $p$ fixed. If $p$ can be expressed a function of $\dot q$ only, then these two situations coincide - however, if it also depends on $q$, then they do not, and the two partial derivatives are referring to two different things. Explicitly, write $p = p(q,\dot q)$. Then using the chain rule, we find that $$dH = \left(\frac{\partial H}{\partial q}\right) dq + \left(\frac{\partial H}{\partial p}\right)\left[\frac{\partial p}{\partial q} dq + \frac{\partial p}{\partial \dot q} d\dot q\right]$$ So, if we shift $q$ but hold $\dot q$ fixed, we find that $$ dL = \left(\frac{\partial L}{\partial q} \right)dq$$ while $$ dH = \left[\left(\frac{\partial H}{\partial q} \right) + \left(\frac{\partial H}{\partial p}\right)\left(\frac{\partial p}{\partial q} \right)\right]dq$$ If $L(q,\dot q) = H(q,p(q,\dot q))$ as in the case of a free particle, then we would find that $$dL = dH$$ so $$\left(\frac{\partial L}{\partial q}\right)= \left(\frac{\partial H}{\partial q} \right) + \left(\frac{\partial H}{\partial p}\right)\left(\frac{\partial p}{\partial q} \right)$$ We can check this for the free particle in polar coordinates, where $$L = \frac{1}{2}m(\dot r^2 + r^2 \dot \theta^2)$$ $$ H = \frac{p_r^2}{2m} + \frac{p_\theta^2}{2mr^2}$$ $$ p_r = m\dot r \hspace{1 cm} p_\theta = mr^2 \dot \theta$$ for the left hand side, $$ \frac{\partial L}{\partial r} = mr \dot \theta^2$$ For the right hand side, $$ \frac{\partial H}{\partial r} = -\frac{p_\theta^2}{mr^3} = -mr\dot\theta^2$$ $$ \frac{\partial H}{\partial p_\theta} = \frac{p_\theta}{mr^2} = \dot \theta$$ $$ \frac{\partial p_\theta}{\partial r} = 2mr\dot \theta$$ so $$ \frac{\partial H}{\partial r} + \frac{\partial H}{\partial p_\theta} \frac{\partial p_\theta}{\partial r} = -mr\dot \theta^2 + (\dot \theta)(2mr\dot \theta) = mr\dot \theta^2$$ as expected. Your mistake was subtle but common. In thermodynamics, you will often find quantities written like this: $$ p = -\left(\frac{\partial U}{\partial V}\right)_{S,N}$$ which means The pressure $p$ is equal to minus the partial derivative of the internal energy $U$ with respect to the volume $V$, holding the entropy $S$ and particle number $N$ constant This reminds us precisely what variables are being held constant when we perform our differentiation, so we don't make mistakes.
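The bookkeeping in the polar-coordinate check above can also be verified symbolically. The following sketch (using sympy) just re-does the chain-rule identity for the free particle; the $p_r$ term drops out of the chain rule because $p_r = m\dot r$ does not depend on $r$.

```python
# Symbolic check of the identity derived above,
#   dL/dr |_(rdot, thetadot)  =  dH/dr |_(p)  +  (dH/dp_theta)(dp_theta/dr),
# for the free particle in polar coordinates.
import sympy as sp

m, r, rdot, thetadot = sp.symbols('m r rdot thetadot', positive=True)
p_r, p_theta = sp.symbols('p_r p_theta', real=True)

L = sp.Rational(1, 2) * m * (rdot**2 + r**2 * thetadot**2)
H = p_r**2 / (2*m) + p_theta**2 / (2*m*r**2)

# Conjugate momenta expressed through (r, rdot, thetadot)
p_r_expr = m * rdot                  # independent of r, so it drops out of the chain rule
p_theta_expr = m * r**2 * thetadot

lhs = sp.diff(L, r)                                              # holding rdot, thetadot fixed
rhs = sp.diff(H, r) + sp.diff(H, p_theta) * sp.diff(p_theta_expr, r)
rhs = rhs.subs({p_r: p_r_expr, p_theta: p_theta_expr})

print(sp.simplify(lhs - rhs))   # prints 0, confirming the bookkeeping above
```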
{ "source": [ "https://physics.stackexchange.com/questions/365390", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/166166/" ] }
366,089
I can't understand this, according to what I read here. The speed of a wave depends on its wavelength and its depth, through the relation $$ v=\sqrt{\frac{g\lambda}{2\pi}\tanh\left(2\pi \frac d\lambda\right)}, $$ where $\lambda$ is the wavelength, $d$ is the water depth, and $g$ is the acceleration due to gravity. Yet the speed of sound is constant at $343\:\rm m/s$. But sound is a product of an oscillation that provokes the molecules of the medium (atmosphere) to move up and down. An oceanic wave is again the product of an oscillation that provokes the molecules of the medium (water/ocean) to move up and down. Shouldn't both be constants, just of a different value, since water is a denser medium than air? I can understand the reasoning of the one, but when accepting it I cannot understand why it does not apply to the other, and vice versa :P I am really confused here...
I think that this question is really asking why sound waves are non-dispersive whereas gravity waves on the surface of water are dispersive and also depend on the depth of the water. In fact if the depth of the water is less than about half a wavelength, the speed of the gravity waves is $\sqrt{gd}$ and not dependent on the wavelength of the waves. The speed of gravity waves depending on the depth of the water is really no different from the speed of sound in air depending on the pressure, density etc. Also sound waves can show dispersion, as is illustrated in the article about the dispersion in concrete: "We find that at low ultrasonic frequencies the arrival velocity of ultrasonic pulse, in such a material, increases with the grain size. At the high ultrasonic frequencies a decrease of the pulse velocity with frequency and grain size is observed." In the chapter The Origin of the Refractive Index Feynman explains that electromagnetic waves interact with the bound electrons of a dielectric. The bound electrons undergo forced oscillations under the influence of the incoming electromagnetic waves. If the frequency of the electromagnetic wave is not close to that of a natural frequency of the material then the dispersion is very small, but near resonance the material will be highly dispersive. So what you must look at is the interaction of the wave with the medium and its surroundings. In the link from HyperPhysics that you quoted you will have noted the motion of the water in gravity waves. If the depth of water is restricted (shallow water waves) then you can imagine that the speed of the waves might well be affected. This dependence of velocity on depth is explained in the video Waves in Fluids (poor video quality but excellent content), which is one of a series of videos on fluid dynamics made by the National Committee for Fluid Mechanics Films. In deep water the gravity waves do become dispersive, as the phase velocity is $\sqrt{\dfrac{g\lambda}{2 \pi}}$ which depends on the wavelength. As is explained in the video, gravity waves are the result of a difference in hydrostatic pressure which causes horizontal forces resulting in wave propagation. I am afraid that I cannot simply explain by "hand waving" why it is that longer wavelength gravity waves travel faster than shorter wavelength waves, which is shown in the Ripples in a Pond video in which capillary waves are also described. So perhaps the answer to your question is that when one starts to study wave motion the examples used tend to be relatively simple and dispersion tends not to be mentioned, except in the splitting up of white light into its component colours by a prism. More advanced courses then show that the assumptions made in the less advanced course are not necessarily valid. The book by Willard Bascom, "Waves and Beaches", is available on free e-loan from Archive.org if you register with them.
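The wavelength and depth dependence in the relation quoted in the question is easy to see numerically; the sketch below just evaluates that formula for one deep and one shallow depth (values chosen arbitrarily for illustration).

```python
# Evaluating v = sqrt( g*lambda/(2*pi) * tanh(2*pi*d/lambda) ) for a few wavelengths.
import math

g = 9.81

def phase_speed(wavelength, depth):
    k = 2 * math.pi / wavelength          # wavenumber
    return math.sqrt((g / k) * math.tanh(k * depth))

for lam in (10.0, 50.0, 200.0):
    deep = phase_speed(lam, depth=1000.0)     # effectively deep water
    shallow = phase_speed(lam, depth=0.5)     # half a metre of water
    print(f"lambda = {lam:6.1f} m   deep: {deep:6.2f} m/s   shallow: {shallow:5.2f} m/s")

print(f"sqrt(g*d) for d = 0.5 m: {math.sqrt(g * 0.5):.2f} m/s")
# In deep water the speed grows with wavelength (dispersion); in shallow water all
# wavelengths converge on sqrt(g*d), so it is the depth, not the frequency, that
# sets the speed, unlike sound in air.
```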
{ "source": [ "https://physics.stackexchange.com/questions/366089", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/116624/" ] }
366,103
So I was reading about the special properties of radio frequencies. Specifically I read this: "In contrast, RF current can be blocked by a coil of wire, or even a single turn or bend in a wire. This is because the inductive reactance of a circuit increases with frequency." from here: https://en.wikipedia.org/wiki/Radio_frequency#Special_properties_of_RF_current And it occurred to me that tesla coils had something to do with coils and radio frequency. So I looked up this: https://en.wikipedia.org/wiki/Tesla_coil#Operation In the tesla coil operation section we immediately find this: "A Tesla coil is a radio frequency oscillator" So basically tesla coils resonate at very high radio frequencies, and if you look at one they are all coil, which is supposed to block RF. As a last ditch effort to solving my own questions I tried to read the tesla coil Wikipedia article, but there wasn't much about coils blocking RF. However like half way through or something I found this: "The supply transformer (T) secondary winding is connected across the primary tuned circuit. It might seem that the transformer would be a leakage path for the RF current, damping the oscillations. However its large inductance gives it a very high impedance at the resonant frequency, so it acts as an open circuit to the oscillating current. If the supply transformer has inadequate leakage inductance, radio frequency chokes are placed in its secondary leads to block the RF current." So like one of the coils is blocking the RF but not the others? Weird.
{ "source": [ "https://physics.stackexchange.com/questions/366103", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/99567/" ] }
366,769
I have bought a handmade rug of size 1.5 $\times$ 2m. About 1-2 weeks ago I noticed the rug was not in the center of my room and it had moved a bit. I thought maybe because I walked on it, it has moved. I put it in its place and it happened again and again. It moves about 1 millimeter each day; I placed a marker on the floor so I can calculate the distance it moves. I avoid walking on it so my steps don't move it. What reason can be causing this rug to move? Can a magnetic field be the reason? It is a handmade rug made of animal fur, like wool. I used my smartphone to find if any magnetic field exists near the rug and I see $40\text{-}50\:\mu\mathrm{T}$ which is close to the Earth's magnetic field. I used an app with an orientation sensor and the floor is level, so that Earth's gravity can't be moving the rug. I have no pet, no maid, no room mate. Nobody can prank me or move this rug.
It is probably creeping. This is often because of thermal (or humidity) cycling. Essentially, the rug and the floor are going through some thermal or humidity cycle (over the course of the day usually). During this cycle their sizes change slightly relative to each other. And different bits of the rug stick to the floor at different parts of the cycle, which causes it to creep along the floor. So imagine a cycle where the rug gets larger and then returns to its original size. Now imagine that, while the rug is growing, its left-hand edge (viewed from some direction) doesn't move with respect to the floor (it is the sticky bit of the rug in this part of the cycle). So when it is at its largest, the left hand edge is where it was, but the right hand edge has moved right. Now, when it shrinks, suppose that its right hand edge now sticks to the floor: it will then drag its left hand edge rightwards. At the end of the cycle it has moved right. This changing in relative size can either be through differing thermal expansion constants, differing temperatures (the rug will see temperature changes sooner than the floor, and indeed will insulate the floor from them) or through one or both of the objects having differing responses to humidity changes. In particular if the rug is hand-made as you say it is, it is quite likely also made from wool or cotton, both of which experience quite significant size changes with humidity. And probably you are only in the room for part of the day so there's a whacking great humidity cycle going on because of your breath. The differential-slipping thing can be caused by things like the structure of the cloth: if there are lots of hairs sticking in one direction under the rug then it will 'want to' move one way, and will creep. You can often check this by just trying to slide it around. See user40292's answer for more on this mechanism. The solution is some kind of underlay: something which has enough friction between both the rug and the floor to stop it moving. You can buy this stuff, as this is quite a common problem.
{ "source": [ "https://physics.stackexchange.com/questions/366769", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/174254/" ] }
367,059
Light from celestial objects is old. In the case of galaxies, it's millions of years old. It seems plausible to me that light might show signs of its age. I was surprised that a Google search only turned up one study in this area: Measurement of the speed of light from extraterrestrial sources . It looked at the speed of light from several bright stars: Aldebaran, Capella, and Vega. The results showed that the speeds were different! My question is, have there been other studies by physicists that looked at old light versus new? It would be so interesting to view in an interferometer light that is a million years old. I can think of many other tests, and I'm sure physicists could think of more. Why hasn't this been done or has it? Could we probably find an age marker by looking closely at the old light?
Light does not "experience" time, the concept "age" does not apply to light in a meaningful way (with respect to human experience). [As background; recall clocks slow for objects as they near the speed of "light" reaching a theoretical 0 if full light speed were attainable.] A thought experiment clock on a photon would therefore stand still. A photon's source does have an "age" in the traditional (human experience) sense, and it is standard that we say the light is as old as it's source. That "age" does not then carry with it the traditional effect of aging. While the light source ages in a traditional fashion and may in fact be completely burned out though we can observe it today from our distant position in space, any photon from an object no matter how old the source is in no way different than a newly created photon presuming it is the same wavelength. As I view it you could not discern the "age" of light without knowledge of its source, because light is in reality timeless.
{ "source": [ "https://physics.stackexchange.com/questions/367059", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/129433/" ] }
367,089
I had a sprain in my leg a few days back. The doctor recommended dipping my foot alternately in ice-cold and hot water to aid blood circulation. It is here that I discovered something interesting. The picture above shows the piece of ice that was put in the bucket. The above picture shows the ice cube from above. If you look at the large piece of cube from the side (see below), you can see that the upper part of the cube, that was near the open surface of the container in which the ice froze, seems to be almost transparent and has a crystalline appearance. The lower part does not have this appearance, and it is white and opaque. Why is there a difference in the layers of ice in the large cube? Is it because the water was from tap and not completely pure? The water was put in the refrigerator for a period greater than 12 hours, so the ice has frozen properly. Can anyone explain this unique structure of ice? I've never seen this before. Update: This update is to simply demonstrate the bubble formation in the ice,which causes the cloudiness. Out of the two answers, I had accepted the one by @IliaSmilga . Today, the ice formed demonstrated this idea clearly. Given below are the pictures in which the bubbles of dissolved gas are clearly visible. The last two pictures are the best.
As @pr1268 explained, tap water is not pure: it contains dissolved gases (basically air) and dissolved minerals. However, I do not think stratification causes this phenomenon: I think that as long as water remains unfrozen, the dissolved gas concentration remains approximately constant throughout the solution (basically equal to the saturation concentration). Here is an alternative explanation. When ice crystals start to form, they naturally tend to exclude the impurities; so the impurities are "squeezed out" into the part of the water that remains liquid. Eventually the impurity concentration exceeds the saturation threshold, and they start to precipitate out of the solution. But by this time, the ice has already formed an airtight enclosure around the liquid water which prevents gas bubbles from escaping to the surface; and the mineral crystals stay no matter what. For this reason, on a big lake where only a small portion of the total water freezes, you can get crystal clear ice (unless the water surface has snow, slush or other impurities to begin with).
{ "source": [ "https://physics.stackexchange.com/questions/367089", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/149663/" ] }
367,105
After reading the book Das Dunkle Universum by Adalbert Pauldrach ( which btw is very well written) questions remain regarding the Higgs field and it's role in the early universe. According to the hypothesis by Tulin and Servant, initially Higgs and anti-Higgs Bosons formed. Since the former interacted and produced mass particles, anti-Higgs Bosons had no more partners to interact and are sitting around as dark matter. Question: Can anti-Higgs particles have formed before any baryonic matter existed or is the Higgs boson always its own anti-particle ? Is it plausible that a particle with a half-life of $1.56 × 10^{−22}s$ can stay around for 14 billion years because it has no reaction partners ? Regarding the Higgs field: does it have the size of the universe or are there innumerable individual fields in the universe?
{ "source": [ "https://physics.stackexchange.com/questions/367105", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/174422/" ] }
367,622
I hear the word "measurement" thrown around a lot in quantum mechanics, and I have yet to hear a scientific definition that makes sense. How do we define it?
Until we have an accepted solution of the Measurement Problem there is no definitive definition of quantum measurement, since we don't know exactly what happens at measurement. In the meanwhile, measurement is simply defined as part of the postulates and recipe associated with the notion of a quantum observable. Mostly an observable is thought of as an Hermitian operator, but I rather like to think of it as such an operator indivisibly linked with a recipe for how to interpret its predictions when the quantum state $\psi$ prevails, namely, that: (1) the probability distribution of the measurement modelled by the observable has $n^{th}$ moment $\langle \psi|\hat{A}^n|\psi\rangle$, whence, with all the moments calculated thus, we can derive the distribution itself; (2) immediately after the measurement, the quantum state $\psi$ is an eigenvector $\psi_{A,\,j}$ of $\hat{A}$, the measurement outcome is the corresponding eigenvalue, and the "choice" of eigenvector is "random", with the probability of its being $\psi_{A,\,j}$ given by the squared magnitude $|\langle \psi | \psi_{A,\,j}\rangle|^2$ of the projection of the state $\psi$ before the measurement onto the eigenvector $\psi_{A,\,j}$ in question. The sequence of events in point (2) is what we postulate the most stripped-down, simplest measurement to be. How the quantum state arrives in the eigenvector is as yet unknown; this "how" is the essence of the quantum measurement problem. Real measurements will of course deviate from the idealizations above. But we postulate that the above is the bare minimum. User Donnydm makes the pertinent comment: "I think 'immediately' in 2 is not correct; according to the decoherence program, measurement is done with a rate which decays the state to some preferred basis." And indeed this comment is probably correct, depending on what mechanism is finally accepted to resolve the measurement problem. One would say that "immediately" in my answer above is to be read as "immediately after the defined measurement process", where, by the above definition, the measurement is not over until the system winds up in one of the said eigenstates. Donnydm's comment of course is about probing what happens during this unknown process. Quite aside from my answer is the answer to the question of why my definition is a useful model of measurement at all, i.e. a solution of the measurement problem. The decoherence program Donnydm is referring to is a number of similar theories in progress whereby one tries to explain measurement through the unitary evolution of a larger system comprising the quantum system in question together with the measurement system. If a quantum system is allowed to "decohere" by interacting fleetingly with the measurement system then, given various "reasonable" assumptions (for example that the interaction Hamiltonian decomposes as the tensor product $X_{\rm sys}\otimes O_{\rm meas}$ of two operators, the first $X_{\rm sys}$ acting on only the system under scrutiny, the second $O_{\rm meas}$ acting on only the measurement system), the whole-system unitary evolution that happens through the interaction tends most probably to bring the system under scrutiny into one of the eigenstates of $X_{\rm sys}$, with the "probabilities" of the respective eigenstates being given by the Born rule. See, for example, Daniel Sank's answer here for further details. So if this kind of unitary evolution does indeed explain measurement, then such evolution always takes nonzero time, just as Donnydm says.
See, for example, my answer here , which shows in principle how to calculate this nonzero time through Wigner-Weisskopf theory (see also the reference I link in my other answer).
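A tiny numerical sketch of points (1) and (2), for the simplest possible case of a spin-1/2 particle measured with $\sigma_x$, may help make the recipe concrete:

```python
# Minimal illustration of the two-point recipe for a spin-1/2 system.
import numpy as np

sigma_x = np.array([[0.0, 1.0],
                    [1.0, 0.0]])                 # the observable (Hermitian)

psi = np.array([1.0, 0.0], dtype=complex)        # "spin up along z", normalised

# Point (1): moments <psi| A^n |psi> of the outcome distribution
moments = [float(np.real(psi.conj() @ np.linalg.matrix_power(sigma_x, n) @ psi))
           for n in range(1, 4)]
print("first three moments:", moments)           # [0.0, 1.0, 0.0] -> outcomes +/-1, equally likely

# Point (2): Born probabilities and post-measurement states from the eigendecomposition
eigvals, eigvecs = np.linalg.eigh(sigma_x)
for lam, vec in zip(eigvals, eigvecs.T):
    prob = abs(np.vdot(vec, psi))**2             # |<psi | psi_{A,j}>|^2
    print(f"outcome {lam:+.0f} with probability {prob:.2f}; "
          f"state collapses onto {np.round(vec, 3)}")
```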
{ "source": [ "https://physics.stackexchange.com/questions/367622", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/143937/" ] }
367,792
Layman here. I'm not sure if this is the case or not, but my anecdotal evidence is that mobile phones, especially large screen phones, tend to fall face down when you drop them; much to the owner's dismay, this leads to cracked screens. I'm sure there is a scientific explanation for this, so I'd like to know: Why do mobile phones tend to fall and land face first (if so)? I have a feeling it's related to the way your toast always falls butter side down, or how the shuttlecock always turns toward the same direction, but I'd like to know the explanation.
A physicist working at Motorola actually did this experiment as part of a promotional push for shatter-proof screens. This same physicist had previously written a paper on the same question, applied to the classic "buttered toast" problem (does toast really land butter side down?). The short answer is: the way the phone lands depends on how it is oriented when it leaves your hand. People tend to hold their phones the same way: face up, at an angle, fingers on either side, slightly below the phone's center of gravity, at just about chest height. The phone also tends to "fall" the same way: it slips out of your hand and you fumble slightly trying to catch it. Given all those parameters, when the phone drops out of your hand, it typically flips over half a revolution by the time it contacts the ground. If you were holding the phone flat, or upside down, or lower to the ground, the result would be different. But given the relative uniformity of the way people hold their phones, there's a corresponding relative uniformity in the way they land when dropped.
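The half-revolution claim is consistent with some rough numbers. The drop height below is an assumption chosen for illustration, not a measurement from the Motorola study.

```python
# How much tumble is needed for half a revolution during a chest-height drop.
import math

g = 9.81
h = 1.2                                   # m, assumed drop height
t_fall = math.sqrt(2 * h / g)             # time to reach the floor
spin_for_half_turn = math.pi / t_fall     # rad/s giving exactly half a revolution

print(f"fall time: {t_fall:.2f} s")
print(f"required tumble: {spin_for_half_turn:.1f} rad/s "
      f"(~{spin_for_half_turn / (2 * math.pi):.1f} revolutions per second)")
# A gentle ~1 rev/s tumble, plausibly picked up while fumbling a face-up phone held
# below its centre of gravity, brings it around screen-first before it lands.
```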
{ "source": [ "https://physics.stackexchange.com/questions/367792", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/37085/" ] }
368,655
What is the connection between special and general relativity? As I understand it, general relativity does not need the assumption that the speed of light is constant. It is about the relation between mass, spacetime and gravity. Can general relativity be valid without special relativity?
Suppose we start by considering Galilean transformations , that is transformations between observers moving at different speeds where the speeds are well below the speed of light. Different observers will disagree about the speeds of objects, but there are some things they will agree on. Specifically, they will agree on the sizes of objects. Suppose I have a metal rod that in my coordinate system has one end at the point $(0,0,0)$ and the other end at the point $(dx,dy,dz)$. The length of this rod can be calculated using Pythagoras' theorem: $$ ds^2 = dx^2 + dy^2 + dz^2 \tag{1} $$ Now you may be moving relative to me, so we won't agree about the position and velocity of the rod, but we'll both agree on the length because, well, it's a chunk of metal - it doesn't change in size just because you are moving relative to me. So the length of the rod, $ds$, is an invariant i.e. it is something that all observers will agree on. OK, let's move onto Special Relativity. What Special Relativity does is treat space and time together so the distance between two points has to take the time difference between the points into account as well. So our equation (1) is modified to include time and it becomes: $$ ds^2 = -c^2dt^2 + dx^2 + dy^2 + dz^2 \tag{2} $$ Note that our new equation for the length $ds$ now includes time, but the time has a minus sign. We also multiply the time by a constant with the dimensions of a velocity to convert the time into a length. Just as before the quantity $ds$ is an invariant i.e. all observers agree on it no matter how they are moving relative to each other. In fact we give this spacetime length a special name - we call it the proper length (or sometimes the proper time ). By now you're probably wondering what on Earth I'm rambling about, but it turns out we can derive all the weird stuff in Special Relativity simply from the requirement that $ds$ be an invariant. If you're interested I go through this in How do I derive the Lorentz contraction from the invariant interval? . In fact the equation for $ds$ is so important in Special Relativity that it has its own name. It's called the Minkowski metric . And we can use this Minkowski metric to show that the speed of light must be the same for all observers. I do this in my answer to Special Relativity Second Postulate . So where we've got to is that the fact the speed of light is constant in SR is equivalent to the statement that the Minkowski metric determines an invariant quantity. What General Relativity does is to generalise the Minkowski metric, equation (2). Suppose we rewrite equation (2) as: $$ ds^2 = \sum_{\mu=0}^3 \sum_{\nu=0}^3 \,g_{\mu\nu}dx^\mu dx^\nu $$ where we are using the notation $dt=dx^0$, $dx=dx^1$, $dy=dx^2$ and $dz=dx^3$, and $g$ is the matrix: $$g=\left(\begin{matrix} -c^2 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{matrix}\right)$$ This matrix $g$ is called the metric tensor . Specifically the matrix I've written above is the metric tensor for flat spacetime i.e. Minkowski spacetime . In General Relativity this matrix can have different values for its entries, and indeed those elements can be functions of position rather than constants. 
For example the spacetime around a static uncharged black hole has a metric tensor called the Schwarzschild metric : $$g=\left(\begin{matrix} -c^2(1-\frac{r_s}{r}) & 0 & 0 & 0 \\ 0 & \frac{1}{1-\frac{r_s}{r}} & 0 & 0 \\ 0 & 0 & r^2 & 0 \\ 0 & 0 & 0 & r^2\sin^2\theta \end{matrix}\right)$$ (I mention this mostly for decoration - understanding how to work with the Schwarzschild metric needs you to do a course on GR) In GR the metric $g$ is related to the distribution of matter and energy, and it is obtained by solving the Einstein equations (which is not a task for the faint hearted :-). The Minkowski metric is the solution we get when there is no matter or energy present${}^1$. The point I'm getting at is that there is a simple sequence that takes use from everyday Newtonian mechanics to General Relativity. The first equation I wrote down, equation (1) i.e. Pythagoras' theorem, is also a metric - it's the metric for flat 3D space. Extending it to spacetime, equation (2), moves us on to Special Relativity, and extending equation (2) to a more general form for the metric tensor moves us on to general relativity. So Special Relativity is a subset of General Relativity, and Newtonian mechanics is a subset of Special Relativity. To end let's return to that question of the speed of light. The speed of light is constant in SR so is it constant in GR? And the answer is, well, sort of. I go through this in some detail in GR. Einstein's 1911 Paper: On the Influence of Gravitation on the Propagation of Light but you may find this a bit hard going. So I'll simply say that in GR the speed of light is always locally constant. That is, if I measure the speed of light at my location I will always get the result $c$. And if you measure the speed of light at your location you'll also get the result $c$. But, if I measure the speed of light at your location, and vice versa, we will in general not get the result $c$. ${}^1$ actually there are lots of solutions when no matter or energy is present. These are the vacuum solutions . The Minkowski metric is the solution with the lowest ADM energy .
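As a small check of the claim that $ds$ is something all observers agree on, the following sketch verifies symbolically that the Minkowski interval of equation (2) is unchanged by a standard Lorentz boost along $x$ (the boost formulas are quoted from the usual special-relativity transformation, not derived here).

```python
# Symbolic check that the Minkowski interval is invariant under a boost along x.
import sympy as sp

c, v, dt, dx, dy, dz = sp.symbols('c v dt dx dy dz', real=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

# Boosted coordinate differences (standard boost along x with speed v)
dt_p = gamma * (dt - v * dx / c**2)
dx_p = gamma * (dx - v * dt)

ds2 = -c**2 * dt**2 + dx**2 + dy**2 + dz**2
ds2_boosted = -c**2 * dt_p**2 + dx_p**2 + dy**2 + dz**2

print(sp.simplify(ds2_boosted - ds2))   # prints 0: the interval is invariant
```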
{ "source": [ "https://physics.stackexchange.com/questions/368655", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/78625/" ] }
369,162
I have this simple question, but I cannot find the answer. I saw this video about a plane getting hit by lightning. In it, Captain Joe explains why people do not get electrocuted. This has a simple explanation, due to the Faraday cage effect produced by the fuselage. But another question came to my mind at that moment: why does the aluminum of the fuselage, which acts as a Faraday cage, not melt because of the extreme currents carried by the lightning? After this, I thought about the following example: a thin metal (correctly grounded) lightning rod is almost intact after a strike, while a tree breaks in the middle and sometimes even burns. Clearly it has something to do with the resistivity of each material, much higher in the tree's wood. It is also said in this article that the only dangerous zone where a plane can get hit "is the radome (the nose cone), as it's the only part of a plane's shell that's not made of metal". So it clearly has something to do with the conductive properties of the fuselage. So my question is basically this: why does a tree break and burn when struck by lightning but a lightning rod does not? And, ultimately: why does a plane hit by lightning not melt with the hundreds of thousands of amperes going through the fuselage?
The amount of heat generated by current flowing through a resistor (whether from lightning or more ordinary sources) is directly related to the power dissipated by the resistor, which is $$ P = I^2 R.$$ $R$ is small for objects made from good conductors, which many metals are, and large for objects that are made from bad conductors like plastic or wood. Since a lightning strike has a very short duration, the total heat generated during such a strike is not enough to melt metal, but enough to set wood aflame or melt plastic. If you let the large currents from the lightning strike run through the metal for longer, it probably would also heat up gradually and eventually melt, but this would take longer than the time scale on which lightning occurs.
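Orders of magnitude make the contrast stark. The resistances and effective duration below are rough assumed values for illustration only, not measured properties of any particular aircraft or tree.

```python
# Energy deposited by a ~30 kA strike lasting ~100 microseconds, from P = I^2 R.
I = 30e3          # A, typical-ish peak lightning current (assumed)
t = 100e-6        # s, assumed effective duration of the main stroke

for name, R in [("aluminium fuselage path", 1e-3),   # ohms, assumed
                ("wet tree trunk",          1e4)]:   # ohms, assumed
    E = I**2 * R * t          # joules dissipated in that resistance
    print(f"{name:24s}  R = {R:g} ohm  ->  {E:.1e} J")
# With these illustrative numbers the metal path absorbs tens of joules, while the
# tree absorbs a gigajoule-scale burst, enough to flash-boil sap and split the trunk.
```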
{ "source": [ "https://physics.stackexchange.com/questions/369162", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/38231/" ] }
370,770
I am following a course about gauge theories in QFT and I have some questions about the physical meaning of what we are doing. This is what I understood: When we write a Lagrangian $\mathcal{L}(\phi)$, we are looking for its symmetries. Its symmetries are the transformation we apply on the fields that let the Lagrangian unchanged. It means we are acting with an operator $U$ on the field $\phi$ and we will have: $\mathcal{L}(\phi'=U \phi)=\mathcal{L}(\phi)$. And the operators $U$ belongs to a group. Symmetries are very important because according to Noether theorem we can find the current conserved by knowing the symmetries. In gauge theories, we allow the transformation $U$ to act "differently" on each point of the space. Then we have $U(x)$ (x dependance of the group element). Thus, in my class the teacher did the following: He remarked that this quantity: $$ \partial_{\mu} \phi $$ doesn't transform as: $$ \partial_{\mu} \phi'=U(x) \partial_{\mu} \phi $$ (because of the $x$ dependance of $U$). And then he said "we have a problem, let's introduce a covariant derivative $D_{\mu} \phi$ that will allow us to have: $$D_{\mu} \phi'=U(x)D_{\mu} \phi $$ My questions are the following: Why do we want to have this "good" law of transformation? I am not sure at all but this is what I understood and I would like to check. First question: please tell me if I am right in this following paragraph I think it is because we want to write the Lagrangian as invariant under gauge transformation. To do it we don't start from scratch: we start from a term that we know should be in the Lagrangian: $\partial_\mu \phi^{\dagger} \partial^{\mu} \phi$. We see that this term is not gauge invariant, so we try to modify it by "changing" the derivatives: $\partial_\mu \rightarrow D_\mu$. We see that if we have $D_{\mu} \phi'=U(x)D_{\mu} \phi$ we will have the good law of transformation. And finally, after some calculation we find the "good" $D_\mu$ that respect $D_{\mu} \phi'=U(x)D_{\mu} \phi $. So: Am I right in my explanation? Also: Why do we want a Lagrangian invariant under gauge transformations? Is there a reason behind it or it is just a postulate? I could understand that we want Lagrangian invariant under global transformation (if we assume the universe isotropic and homogenous it makes sense), but for me asking a local invariance is quite abstract. What is the motivation behind all this? I know that if we have lagrangian invariant under all local symmetries then it will be invariant under global symmetries, but this "all" is "problematic" for me. Next question in the following two lines: Why should the lagrangian be invariant under all local symmetries? It is a very strong assumption from my perspective. I would like a physical answer rather than too mathematical one.
We do not start from the assumption that the Lagrangian "should" be invariant under gauge transformations. This assumption is often made because global symmetries are seen as more natural than local symmetries and so writers try to motivate gauge theory by "making the global symmetry local", but this is actually nonsense. Why would we want a local symmetry just because there's a global one? Do we have some fetish for symmetries so that we want to make the most symmetrical theory possible? One can derive gauge theory this way but as a physical motivation, this is a red herring. The actual point is not that we "want" gauge symmetry, but that it is forced upon us when we want to describe massless vector bosons in quantum field theory. As I also allude to in this answer of mine , every massless vector boson is necessarily described by a gauge field. A Lagrangian gauge theory is equivalently a Hamiltonian constrained theory - either way, the number of independent degrees of freedom that are physically meaningful is less that the naive count, since we identify physical states related by gauge transformations. The true physical motivation for gauge theories is not "we want local symmetries because symmetries are neat". It's "we want to describe a world with photons in it and that can only covariantly be done with a gauge theory". A non-quantum motivation of gauge theory can also be given: If you write down the Lagrangian of free electromagnetism, motivated because its equations of motion are the Maxwell equations , not because we like gauge symmetry, then you find it comes naturally with a $\mathrm{U}(1)$ gauge symmetry, corresponding to the well-known fact that adding a gradient to the vector 4-potential is physically irrelevant. Now, if you want to couple other fields to this free electromagnetism, you need to make the additional terms gauge invariant, too, else the theory is no longer "electromagnetism coupled to something else" in any meaningful sense since suddenly adding gradients can change the physics. Once again, gauge symmetry is something one discovers after physically motivating the Lagrangian from something else, not some sort of a priori assumption we put in.
{ "source": [ "https://physics.stackexchange.com/questions/370770", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/75085/" ] }
370,899
According to Wikipedia the Sun's "power density" is "approximately 276.5 $W/m^3$, a value that more nearly approximates that of reptile metabolism or a compost pile than of a thermonuclear bomb." My question is, so why is the Sun's core so hot (15.7 million K)? Using a gardener's (not a physicist's intuition) it seems apparent that you can't keep on increasing the temperature of a compost heap just by making the heap larger.
Your (gardener's) intuition is wrong. If you increase the size of your compost heap to the size of a star, then its core would be as hot as that of the Sun. All other things being equal (though compost heaps are not hydrogen plus helium), the temperature of a spherical compost heap would just depend on its total mass divided by its radius$^{*}$. To support the weight of all the material above requires a large pressure gradient. This in turn requires that the interior pressure of the star is very large. But why this particular temperature/density combination? Nuclear reactions actually stop the core from getting hotter . Without them, the star would radiate from its surface and continue to contract and become even hotter in the centre. The nuclear reactions supply just enough energy to equal that radiated from the surface and thus prevent the need for further contraction. The nuclear reactions are initiated once the nuclei attain sufficient kinetic energy (governed by their temperature) to penetrate the Coulomb barrier between them. The strong temperature dependence of the nuclear reactions then acts like a core thermostat. If the reaction rate is raised, the star will expand and the core temperature will cool again. Conversely, a contraction leads to an increase in nuclear reaction rate and increased temperature and pressure that act against any compression. $*$ This relationship arises from the virial theorem , which says that for a fluid/gas that has reached mechanical equilibrium, that the sum of the (negative) gravitational potential energy and twice the internal kinetic energy will equal zero. $$ \Omega + 2K = 0$$ The internal kinetic energy can be approximated as $3k_BT/2$ per particle (for a monatomic ideal gas) and the gravitational potential energy as $-\alpha GM^2/R$, where $M$ is the mass, $R$ the radius and $\alpha$ is a numerical factor of order unity that depends on the exact density profile. The virial theorem then becomes total kinetic energy $$ \alpha G\left(\frac{M^2}{R}\right) \simeq 2\left(\frac{3k_BT}{2}\right) \frac{M}{\mu}\ ,$$ where $\mu$ is the mass per particle. From this, we can see that $$ T \simeq \frac{\alpha G\mu}{3k_B}\left( \frac{M}{R}\right)$$
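A minimal numerical check of the final formula, using the Sun's mass and radius. The mean particle mass ($\mu \approx 0.6\,m_p$ for ionized hydrogen/helium) and $\alpha \approx 1$ are assumptions I supply for the estimate.

```python
# Virial estimate of the interior temperature: T ~ alpha * G * mu * M / (3 * k_B * R)
G    = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
k_B  = 1.381e-23    # Boltzmann constant, J/K
m_p  = 1.673e-27    # proton mass, kg

M_sun = 1.989e30    # kg
R_sun = 6.957e8     # m
mu    = 0.6 * m_p   # mean mass per particle for fully ionized H/He (assumed)
alpha = 1.0         # order-unity structure factor (assumed)

T = alpha * G * mu * M_sun / (3 * k_B * R_sun)
print(f"virial estimate of the interior temperature: {T:.1e} K")
# Roughly 5 million K, the right order of magnitude compared with the actual
# ~1.6e7 K core temperature. Only the ratio M/R enters, so a compost heap
# scaled up to the same M/R would get comparably hot inside.
```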
{ "source": [ "https://physics.stackexchange.com/questions/370899", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/4075/" ] }
370,901
Can a particle in an inverse square potential $$V(r)=-1/r^{2}$$ in $d=3$ spatial dimensions be solved exactly? Also, please explain to me the physical significance of this potential in comparison with the Coulomb potential. That problem was about a positive, repulsive potential, and what I am looking for is an attractive potential.
{ "source": [ "https://physics.stackexchange.com/questions/370901", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/58147/" ] }
371,332
I have a question about Optics and how this links to burning fuels in a combustion reaction. If I have hexane, the following reaction occurs: hexane + oxygen $\rightarrow$ carbon dioxide + water vapour Now, I have a question. Why don't we tend to see any water being formed when we burn methane on a gas cooker? This is only because the same equation can also be applied to methane in our cookers: methane + oxygen $\rightarrow$ carbon dioxide + water vapour I tried this earlier while preparing my food and it turns out I don't see any, even though I am burning the fuel.
The air in the kitchen is warm enough and dry enough that the water vapor isn't condensing. If you want to see it, take a pan and fill it will ice water, then put it over the flame. You'll see the water vapor condensing on the outside of the pot.
{ "source": [ "https://physics.stackexchange.com/questions/371332", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/176808/" ] }
371,615
If you have 2 mirrors over for each other placed exactly so they face each other perfectly, and then use a laser light pen as the source into one of the mirrors so it bounces to the other mirror and back again: Would the laser light line continue to exist between the 2 mirrors if the source of the laser stopped? If so, how long could the laser light continue to be between the mirrors without a source? Would it just continue existing between the mirrors using itself as the source or is this not possible for light? Note: It doesn't have to be only 2 mirrors. It could be any amount of mirrors if that would change the outcome.
Yes, it would continue. But not forever, for two reasons. One is that no mirror is perfect, so a bit of light is lost at each bounce. The other is that no beam of light is perfectly parallel ("collimated"), so that the light spreads out over time, and light eventually falls outside the mirror. Edits after comments Spherical mirrors will help, but they will not eliminate the problem at hand, which is diffraction. A perfectly collimated beam is not possible, but as a corollary to that it is also not possible to produce a beam with a finite cross section. Some portion of the beam always falls outside of the next mirror. Using beams that are approximately Gaussian (perfect Gaussian beams are impossible) and spherical mirrors the amount of energy that misses the next mirror is small, but not zero. The OP's question #3 is new. I don't quite know what you mean by "itself as the source". If you can somehow get the light going, it will continue for a while after you turn off the source. The decay, as others have pointed out is (approximately) exponential as roughly the same fraction is lost on each bounce. It's approximately exponential because the beam shape will change as bits are diffracted away, so the fraction lost changes slightly from bounce to bounce. How long the light will persist depends entirely on the quality of the set up. The surface quality of the mirrors, their surface figure, the atmosphere, the materials used, the rigidity of the mounts ... It's probably possible to estimate the longest possible persistence time taking into account effects that others have mentioned in other answers: scattering, heating, momentum transfer, ... I don't know what the "theoretical" maximum would be, or the practical limit. A quick search on Fabry-Perot interferometers finds finesse values in the $10^6$ ball park, which would imply a persistence time for a 30 cm cavity of about 1 ms, but that's a very rough estimate.
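A small sketch of the final estimate. The finesse of $10^6$ is the same rough figure quoted above, and the relation used (loss per bounce of roughly $\pi/F$, hence about $F/\pi$ bounces before the stored light decays by $1/e$) is the standard Fabry-Perot bookkeeping; the numbers are order-of-magnitude only.

```python
import math

c = 3.0e8          # speed of light, m/s
L = 0.30           # mirror separation, m
finesse = 1e6      # assumed cavity finesse, as in the rough figure above

loss_per_bounce = math.pi / finesse      # fractional intensity loss at each reflection
n_bounces = 1.0 / loss_per_bounce        # bounces before the intensity drops by 1/e
t_per_bounce = L / c                     # time for one trip between the mirrors
lifetime = n_bounces * t_per_bounce      # 1/e photon storage time

print(f"loss per bounce ~ {loss_per_bounce:.1e}")
print(f"~{n_bounces:.1e} bounces, storage time ~ {lifetime*1e3:.2f} ms")
# A few tenths of a millisecond, the same ballpark as the ~1 ms figure above.
```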
{ "source": [ "https://physics.stackexchange.com/questions/371615", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/176976/" ] }
371,617
If two dissimilar metals are connected and their two junctions are kept at different temperatures, an EMF is produced between the two ends. Is it possible to produce a large amount of current this way?
{ "source": [ "https://physics.stackexchange.com/questions/371617", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/176977/" ] }
371,626
In this textbook problem, which was assigned for homework, I found a flaw in my understanding of torque. A $2.4\ \mathrm{kg}$ block rests on a slope and is attached by a string of negligible mass to a solid drum of mass $0.85\ \mathrm{kg}$ and radius $5.0\ \mathrm{cm}$. When released, the block accelerates down the slope at $1.6\ \mathrm{m/s^2}$. Find the coefficient of friction between the block and slope. My initial thought process is to consider the torque, because one of the applied forces contributes to torque, and that force makes the "drum" rotate. The applied forces that cause rotation, from my point of view, are the tension force and the force of gravity. However, upon looking at the solution, I noticed that the force of gravity isn't included in the calculation for torque. Why is that? I understand that the drum is "attached" to this triangular platform, which might be why the force of gravity isn't considered, but when I was trying to answer this question and making my free body diagram, I included the force of gravity.
{ "source": [ "https://physics.stackexchange.com/questions/371626", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/112387/" ] }
371,656
How could it be that the entropy of gas is not infinite? If we use the formula $S=k\ln(\Omega)$ we get infinity because there are an infinite number of possible situations (because there are infinite possibilities of position and momentum for each molecule).
The problem you are thinking about is known as the question of thermodynamic coarse graining . This will hopefully give you a phrase that you can search on to find out more. Sometimes possible states of ensemble members are obviously discrete, as they are in a collection of quantum harmonic oscillators. Much of quantum mechanics depends on the underlying Hilbert state space's being separable ( i.e. has a countable dense subset) and for Hilbert spaces this is equivalent to the assertion that the vector basis itself is countable. Thus, even if if an observable such as momentum or position has a continuous spectrum ( i.e. can give a continuous random variable as a measurement), the underlying state space is often discrete. In the OPs particular example, you can model the gas as a system of particles in a 3D box (hat tip to user knzhou for reminding me of this point), so that the state space of ensemble members is clearly discrete. As we raise the volume of our box, the density of states (discussed more in Chris's answer ) increases in proportion with the box's spatial volume, and therefore so does the entropy. In the limit of a very large gas volume, the entropy per unit volume is a well defined, finite limit. In cases where the state space is not obviously discrete, one must resort to either the use of coarse graining or relative entropy. Coarse graining is the somewhat arbitrary partitioning of ensemble members' state space into discrete subsets, with states belonging to a given partition then being deemed to be the same. Thus a continuous state space is clumped into a discrete approximation. Many conclusions of statistical mechanics are insensitive to such clumping. Relative entropy , in the information theoretic sense, is defined for a continuous random variable as an roughly the entropy change relative to some "standard" continuous random variable, such as one governed by a Gaussian distribution. We see the problem you are dealing with if we try naïvely to work out the Shannon entropy of a continuous random variable with probability distribution $p(x)$ as the limit of a discrete sum: $$S \approx -\sum\limits_i p(x_i)\,\Delta x \,\log(p(x_i)\,\Delta x) = -\log(\Delta x)\,\sum\limits_i p(x_i)\,\Delta x - \sum\limits_i p(x_i)\,\Delta x \,\log(p(x_i))\tag{1}$$ The two sums in the rightmost expression converge OK but we are thwarted by the factor $\log(\Delta x)$, which of course diverges. However, if we take the difference between the entropy for our $p(x)$ and that of a "standard" distribution, our calculation gives: $$\Delta S \approx -\log(\Delta x)\,\left(\sum\limits_i p(x_i)\,\Delta x-\sum\limits_i q(x_i)\,\Delta x\right) - \sum\limits_i \left(p(x_i)\,\log(p(x_i))-q(x_i)\,\log(q(x_i))\right)\,\Delta x\tag{2} $$ a quantity which does converge to $\int\left(p\log p - q\,\log q\right)\mathrm{d}x$. The usual relative entropy is not quite the same as this definition (see articles - the definition is modified to make the measure independent of reparameterization) but this is the the basic idea. Often the constants in the limit of (2) are dropped and one sees the quantity $-\int\,p\,\log p\,\mathrm{d}x$ defined as the unqualified (relative) entropy of the distribution $p(x)$. Coarse graining, in this calculation would be simply choosing a constant $\Delta x$ in (1). (1) is then approximately the relative entropy $-\int \,p\,\log p\,\mathrm{d}x$ offset by the constant $-\log(\Delta x)$. 
Therefore, as long as: We stick with a constant $\Delta x$ in a given discussion; $\Delta x$ is small enough relative to the variations in the probability density so that $\sum\limits_i p(x_i)\,\Delta x \,\log(p(x_i))\approx \int p\,\log p\,\mathrm{d} p$; Our calculations and physical predictions are to do only with differences between entropies (as is mostly the case) then the approaches of coarse graining and relative entropies give identical physical predictions, independently of the exact $\Delta x$ chosen. A good review of these ideas, with historical discussion, is to be found in: Katinka Ridderbos, "The coarse-graining approach to statistical mechanics: how blissful is our ignorance?", Studies in History and Philosophy of Modern Physics , 33 , pp65-77, 2002
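To make equation (1) above concrete, here is a small numerical sketch of my own (not from the cited paper), using a Gaussian for $p(x)$ and natural logarithms: the coarse-grained sum equals the differential entropy $\tfrac12\ln(2\pi e\sigma^2)$ offset by the divergent constant $-\ln\Delta x$, and that offset cancels in entropy differences.

```python
import numpy as np

def coarse_grained_entropy(sigma, dx):
    """Shannon entropy of a Gaussian binned into cells of width dx (natural log)."""
    x = np.arange(-12 * sigma, 12 * sigma, dx)
    p = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    w = p * dx                      # probability assigned to each cell
    w = w[w > 0]                    # avoid log(0) for numerically empty cells
    return -np.sum(w * np.log(w))

sigma = 1.0
S_diff = 0.5 * np.log(2 * np.pi * np.e * sigma**2)   # differential entropy of N(0, sigma^2)
for dx in (0.1, 0.01, 0.001):
    S = coarse_grained_entropy(sigma, dx)
    print(f"dx = {dx:<6}: S = {S:.4f},  S_diff - ln(dx) = {S_diff - np.log(dx):.4f}")

# The dx-dependent constant drops out of differences between two distributions:
dS = coarse_grained_entropy(2.0, 0.01) - coarse_grained_entropy(1.0, 0.01)
print(f"S(sigma=2) - S(sigma=1) = {dS:.4f}   (analytic: ln 2 = {np.log(2):.4f})")
```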
{ "source": [ "https://physics.stackexchange.com/questions/371656", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/139691/" ] }
372,380
"We have focused our discussion on one-dimensional motion. It is natural to assume that for three-dimensional motion, force, like acceleration, behaves like a vector. "- (Introduction to Mechanics) Kleppner and Kolenkow We learn it very early in the course of our study that Force is vector; But, if I were the physicist defining the the Newton's second law (experimentally) and analysing the result F=ma, how would I determine whether Force is vector or scalar(especially in 3-D). Actually, when I read the aforementioned sentences from the book, I wanted to know why do the authors expect it to be natural for us to think that in 3-D "Force" behaves like a vector. I know a (acceleration ) is vector and mass a scalar and scalar times vector gives a new vector but is there another explanation for this?
Uhm ... you start with an object at rest and notice that if you push on it in different directions it moves in different directions? Then notice that you can arrange more than two (three for planar geometries and four for full 3D geometries) non-colinear forces to cancel each other out (hopefully you did a force-table exercise in your class and have done this yourself). The demonstration on an object already in motion is slightly less obvious but you can take the ideas here and generalize them. In a sense this is so obvious that it's hard to answer because almost anything you do with forces makes use of their vector nature.
{ "source": [ "https://physics.stackexchange.com/questions/372380", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/174412/" ] }
372,382
This might be hard to understand, but: let's say you measure a particle and it has a clockwise spin. If you stop measuring it and then remeasure it, would it have a 50/50 random chance of changing its spin? Or would it continue to have the clockwise spin?
{ "source": [ "https://physics.stackexchange.com/questions/372382", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/177480/" ] }
372,653
My book defines: The closest distance for which the lens can focus light on the retina is called the least distance of distinct vision or the near point. The standard value (for normal vision) taken here is $25\, \text{cm}$ (the near point is given the symbol $D$). However, in normal everyday life, I've always observed that I can still see objects clearly and distinctly at distances even around $10\,\text{cm}$, which is much less than the value $D=25\, \text{cm}$. Yes, it does strain my eye to be looking at objects as close as $10 \,\text{cm}$, but I can still see them anyway, distinctly and clearly. The Wikipedia article on LDDV is a stub. I couldn't find any other useful information elsewhere. Can anyone please resolve this apparent contradiction I've arrived at? Thanks!
The focusing of the eye is the result of two things: the curvature of the cornea (which is responsible for the majority of the refraction of light into the eye), and the state of the lens. When you are young, the lens is very pliable and it allows you to change the focus over a wide range of distances. To go from a focus of "infinity" to 25 cm, you need to be able to change the refractive power of the eye by 4 diopters. That's typical for a healthy young eye - but there are two things that can change where you comfortably focus. The first of these is the shape of the cornea - if your cornea has "greater than average" curvature, this means that light from closer up will be in focus while the lens is relaxed: now add 4 diopters, and the final focal distance you can achieve may well be less than 25 cm. You would be considered "near sighted", and might need glasses (with negative power) to see properly in the distance. If your eye is insufficiently curved, a healthy lens may not be able to get you to focus close up; in that case you might need a corrective lens with a positive power. Finally, as you get older, the lens's ability to adjust diminishes; and sooner than you would like, you will need corrective lenses (maybe even bifocals or even more complex lenses) to cover the full range of distances. Perhaps it's just computer glasses so your eye is more relaxed while looking at a screen (our eyes did not evolve to be focused at a short range for long periods of time), but eventually it's one pair for driving, one for reading, one for... UPDATE From the comments, it is clear that you have "defective eyesight" which is corrected with a diverging lens (negative power). When you take the glasses off, your eye is naturally focused closer than "average". This is the explanation why in your case you can see objects in focus at a distance of 10 cm.
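A short illustration of the diopter bookkeeping behind this (the particular amounts of myopia and accommodation below are example values I made up, not measurements of anyone's eye): focusing at a distance $d$ requires $1/d$ diopters on top of the relaxed "focused at infinity" state, so the near point sits at $1/(P+A)$ metres for $P$ diopters of excess (myopic) power and $A$ diopters of accommodation.

```python
def near_point_cm(excess_power_D, accommodation_D):
    """Near point in cm for a given excess (myopic) power and accommodation range, in diopters."""
    return 100.0 / (excess_power_D + accommodation_D)

# "Standard" eye: no excess power, ~4 D of accommodation -> the textbook 25 cm near point.
print(f"0 D excess + 4 D accommodation: near point = {near_point_cm(0, 4):.0f} cm")

# Example combinations (assumed numbers) that put the near point around 10 cm:
print(f"6 D excess + 4 D accommodation: near point = {near_point_cm(6, 4):.0f} cm")
print(f"2 D excess + 8 D accommodation: near point = {near_point_cm(2, 8):.0f} cm")
```

Either a somewhat near-sighted eye or a young eye with a larger accommodation range (or a bit of both) comfortably explains a 10 cm near point.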
{ "source": [ "https://physics.stackexchange.com/questions/372653", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/28359/" ] }
373,250
Discuss whether this statement is correct: “In the absence of air resistance, the trajectory of a projectile thrown near the earth’s surface is an ellipse, not a parabola.” Is the above statement right? To the best of my knowledge, a particle projected from the earth's surface follows a parabolic trajectory under constant acceleration (of course an approximation) One of my friends pointed out that in case of variable acceleration, one which follows the inverse square law; the path is an ellipse. So, what is correct? If it is indeed an ellipse, I'm having trouble deriving the equation of its trajectory. Could someone please post a solution, or method to derive the actual trajectory's equation? P.S. If the parabolic trajectory is an approximation, then making appropriate changes in the equation obtained should yield a parabola's equation, right?
A parabola and an ellipse are both conic sections , which can be constructed in a plane as all the points where the distances from some reference point (the "focus") and some reference line (the "directrix") have some ratio $e$ (the "eccentricity"). An ellipse has $0<e<1$; a parabola has $e=1$. In a typical intro physics "Billy throws a baseball"-type problem, the distance between the focus and the directrix for the "parabolic" trajectory might be a few meters. If the trajectory is secretly an ellipse due to Earth's gravity, Kepler's Laws predict that the other focus of the ellipse is the Earth's center of mass, and symmetry requires that the path go only a few meters from that focus as well. That means we can estimate the eccentricity directly. Using the standard notation [figure: an ellipse with semimajor axis $a$, semiminor axis $b$ and center-to-focus distance $c$; by Klaas van Aarsen, GFDL or CC BY-SA 3.0, via Wikimedia Commons], we have semimajor axis $a$ about half of Earth's radius, $\rm 10^{6.5}\,m$, the distance from the focus to the end of the ellipse $a-c$ of order a few meters, and eccentricity $$ e = \sqrt{1-\frac{b^2}{a^2}} = \frac ca \approx 1 - \mathcal O\left(10^{-6}\right). $$ That's a very good approximation of a parabola. That also suggests that if you wanted to worry about the difference between a parabolic path and an elliptical path at the part-per-thousand level, you'd start to worry about paths where the distance between the path and the focus (or equivalently, for scaling purposes, the distance between the launching and landing points for your projectile) is a few kilometers or tens of kilometers. Which is, in fact, where you start to hear about people taking Earth's curvature into account in engineering projects --- for example a very long suspension bridge, where the towers cannot be both "all vertical" and "all parallel."
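The same estimate in a few lines of Python (the "few meters" apex-to-focus distance is taken as 5 m here purely for illustration):

```python
R_earth = 6.37e6            # m
a = R_earth / 2             # semimajor axis of the near-surface "ellipse", m
apex_to_focus = 5.0         # a - c, assumed to be a few meters

c = a - apex_to_focus
e = c / a
print(f"eccentricity e = {e:.8f}   (1 - e = {1 - e:.1e})")
# e falls short of the parabolic value e = 1 by about one part per million,
# which is why the parabola is an excellent approximation for thrown objects.
```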
{ "source": [ "https://physics.stackexchange.com/questions/373250", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/131341/" ] }
373,276
What would seem to be a silly question actually does have some depth to it. I was trying to scoop out some of my favorite soft name-brand ice cream when I noticed it was frozen solid, rather than its usual creamy consistency. After leaving it out for 10 minutes, it was nice and creamy again. Notably, that amount of time wasn't long enough for it to "melt", as it would still not flow were the container flipped upside down, just long enough to "soften". So why is it that ice cream becomes stiffer/harder when cooled and softer when allowed to warm slightly? My Hypothesis: I think the "hardness" of the ice cream is largely determined by the properties of the ice crystals within, which change somehow with temperature. Now while I don't know how they do change, I'm fairly certain of how they don't . You could view the temperature of the ice cream as infinitely many concentric layers, from a layer of maximum temperature on the outside of the ice cream given the ice cream warms from the outside in, to the point of minimum temperature at roughly the center of the ice cream. Therefore, it's fair to assume that partial melting and therefore decrease in size of each ice crystal is unlikely, as that would require a very thick "band" of ice cream layers to be at a transition temperature between completely crystalline and completely molten, which would only be likely with very slow cooling.
A couple of decades ago I was peripherally involved with some research on the properties of ice cream being done by the company Walls in the UK . The work was on relating the consistency of the ice cream to the microstructure, so it was quite closely related to your question. Anyhow, ice cream has a surprisingly complicated microstructure. It contains ice crystals, liquid sugar solution, fat globules and air bubbles (the proportions of these change with the type and quality of the ice cream). At temperatures from zero down to typical domestic freezer temperatures it is not frozen solid because the sugar depresses the freezing point of water and the concentrated sugar solution remains liquid. The amount of the liquid phase present decreases with decreasing temperature. If you imagine starting at zero C then as you lower the temperature crystals of ice form, which pulls water out of the fluid phase and increases the sugar concentration in the fluid phase. This depresses the freezing point until it matches the freezer temperature at which point the system is in equilibrium. Lower the temperature further and this forms more ice crystals, increases the sugar concentration still further and depresses the freezing point of the liquid phase still further. And so on. The liquid phase doesn't disappear completely until you get down to very, very low temperatures at which point the remaining sugar solution freezes as a glass. It's this change in the amount of the liquid phase present that is causing the changes you have observed. As you warm the initially very cold ice cream you melt some of the ice crystals and get more fluid phase, plus the viscosity of the fluid phase decreases as it gets more dilute. Both of these soften the ice cream. I should emphasise that this is a very simplified account of a very complicated rheological behaviour, but it should give you a basic idea of what is going on. The details are endlessly fascinating if you're a colloid scientist (or just like ice cream). For example sugar poisons the surface of the ice crystals and changes their morphology. In ice cream the crystals tend to be rounded blobs rather than the jagged crystals ice usually forms. This also affects the rheology since the rounded crystals flow over each other more easily.
{ "source": [ "https://physics.stackexchange.com/questions/373276", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/28942/" ] }
373,298
Does a room warm up faster if the heater's thermostat is set to a higher temperature? I would say it should warm up faster, because the difference in temperature between the room and the heater is higher. Edit: I am talking about a convection heater.
Because you're only changing the temperature at which the heater is supposed to stop working. It is always working at the same power, regardless of the temperature difference. But for a higher set temperature it will have to heat at this same power for a longer time. So in short: you don't change the difference in temperature between the heater and the room.
{ "source": [ "https://physics.stackexchange.com/questions/373298", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
373,357
The WP article on the density matrix has this remark: It is now generally accepted that the description of quantum mechanics in which all self-adjoint operators represent observables is untenable.[17][18] The first footnote is to the appendix in Mackey, Mathematical Foundations of Quantum Mechanics. I have a copy of Mackey, but I am unable to connect the material in the appendix to this statement. I also don't know anything about C* algebras -- and Mackey doesn't seem to mention them either. Can anyone explain this at an elementary level, maybe with a concrete example of a hermitian operator that wouldn't qualify as an observable? I assume this issue only arises in infinite-dimensional spaces...? Intuitively, I don't see how it could be an issue in a finite-dimensional space.
The appendix of Mackey talks about superselection rules, and indeed superselection is the phenomenon where there are self-adjoint operators that are not observables. Whether this is obvious or not depends on how one defines "superselection". The standard definition would be that the Hilbert space $H$ splits into the direct sum $H_1\oplus H_2$ such that for all $\lvert\psi\rangle\in H_1, \lvert \phi\rangle\in H_2$ and all observables $A$ we have that $\langle \psi \vert A\vert \phi \rangle = 0$, which implies that $AH_1 \subset H_1,AH_2\subset H_2$ for all observables, but which is clearly not true for all self-adjoint operators on $H_1\oplus H_2$. An easy (albeit somewhat artificial) example of a superselected system is when we take $H_1$ to be the state space of a boson and $H_2$ the state space of a fermion, see also this answer of mine . Other examples can arise from theories with spontaneous symmetry breaking where states belonging to different VEVs cannot interact with each other and form superselection sectors.
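A finite-dimensional toy version of this definition (my own illustration, not taken from Mackey): let $H = H_1 \oplus H_2$ with each sector two-dimensional. Observables must be block diagonal with respect to the sectors, so a Hermitian matrix with a nonzero off-diagonal block is self-adjoint but not an observable.

```python
import numpy as np

# Basis: the first two vectors span H_1, the last two span H_2.
psi = np.array([1, 0, 0, 0], dtype=complex)   # a state in H_1
phi = np.array([0, 0, 1, 0], dtype=complex)   # a state in H_2

# Block-diagonal self-adjoint operator: an allowed observable in the superselected theory.
A_obs = np.zeros((4, 4), dtype=complex)
A_obs[:2, :2] = [[1, 2], [2, -1]]
A_obs[2:, 2:] = [[0, 1j], [-1j, 3]]

# Self-adjoint operator mixing the sectors: Hermitian, but not an observable,
# since it has nonzero matrix elements between H_1 and H_2.
A_bad = np.zeros((4, 4), dtype=complex)
A_bad[0, 2] = A_bad[2, 0] = 1.0

for name, A in (("block-diagonal", A_obs), ("sector-mixing", A_bad)):
    is_hermitian = np.allclose(A, A.conj().T)
    cross = psi.conj() @ A @ phi
    print(f"{name:>14}: self-adjoint = {is_hermitian}, <psi|A|phi> = {cross}")
```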
{ "source": [ "https://physics.stackexchange.com/questions/373357", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/-1/" ] }
373,449
From my knowledge of magnetism, if a magnet is heated to a certain temperature, it loses its ability to generate a magnetic field. If this is indeed the case, then why does the Earth's core, which is at a whopping 6000 °C — as hot as the sun's surface, generate a strong magnetic field?
The core of the Earth isn't a giant bar magnet in the sense that the underlying principles are different. A bar magnet gets its magnetic field from ferromagnetism while Earth's magnetic field is due to the presence of electric currents in the core. Since the temperature of the core is so hot, the metal atoms are unable to hold on to their electrons and hence are in the form of ions. These ions and electrons are in motion in the core which forms current loops. The individual currents produce magnetic fields which add up to form the magnetic field around the Earth.
{ "source": [ "https://physics.stackexchange.com/questions/373449", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/170325/" ] }
373,780
According to Stoke's law, the retarding force acting on a body falling in a viscous medium is given by $$F=kηrv$$ where $k=6π$ . As far as I know, the $6π$ factor is determined experimentally. In that case, how is writing exactly $6π$ correct since we obviously cannot experimentally determine the value of the constant with infinite precision?
It is not determined experimentally; it is an analytical result, which is then verified experimentally. As @Mick described, it is possible to derive the velocity and pressure field of a flow around a sphere in the Stokes flow limit for small Reynolds numbers from the Navier-Stokes equations if the flow is further assumed to be incompressible and irrotational. Once the flow field is determined, the stress at the surface of the sphere can be evaluated: $$\left.\boldsymbol{\sigma}\right|_w = \left[p\boldsymbol{I}-\mu\boldsymbol{\nabla}\boldsymbol{v}\right]_w$$ from which the drag force follows as: $$\left.\boldsymbol{F}\right|_w = \int_\boldsymbol{A}\left.\boldsymbol{\sigma}\right|_w\cdot d\boldsymbol{A}$$ From this it follows that the normal contribution to the drag force (form drag) is $2\pi\mu R u_\infty$, while the tangential contribution (friction drag) is $4\pi\mu R u_\infty$, where $u_\infty$ is the free-stream velocity measured far from the sphere. The combined effect of these contributions, $6\pi\mu R u_\infty$, is the total drag force. This result is also found by evaluating the kinetic force, equating the rate of doing work on the sphere (force times velocity) to the rate of viscous dissipation within the fluid. This shows nicely that there are often many roads to the same answer in science and engineering. For details I suggest you look at Chapters 2.6 and 4.2 of Transport Phenomena by Bird, Stewart & Lightfoot.
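As a small numerical companion (the viscosity, sphere radius and velocity below are example values I chose, roughly a fine grain settling slowly in water):

```python
import math

mu = 1.0e-3     # dynamic viscosity of water, Pa*s (example value)
R  = 50e-6      # sphere radius, m (example value)
u  = 1.0e-3     # free-stream / settling velocity, m/s (example value)

form_drag     = 2 * math.pi * mu * R * u    # normal (pressure) contribution
friction_drag = 4 * math.pi * mu * R * u    # tangential (viscous) contribution
total_drag    = 6 * math.pi * mu * R * u    # Stokes' law

print(f"form drag     = {form_drag:.3e} N")
print(f"friction drag = {friction_drag:.3e} N")
print(f"total         = {total_drag:.3e} N  (sum of the two: {form_drag + friction_drag:.3e} N)")

# Check that the creeping-flow assumption behind the result holds here:
rho = 1000.0                        # fluid density, kg/m^3
Re = rho * u * (2 * R) / mu         # Reynolds number based on the diameter
print(f"Reynolds number ~ {Re:.2f}  (Stokes' law requires Re << 1)")
```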
{ "source": [ "https://physics.stackexchange.com/questions/373780", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/157583/" ] }
373,788
I am going through E. Hecht's "Optics", and I am currently trying to solve some problems from the book. However, I need help with one of them. The equation for a driven damped oscillator is $m_e\frac{d^2x}{dt^2} + m_e\gamma\frac{dx}{dt} + m_ew_0^2x = qE(t)$ (c) Derive an expression for the phase lag, $\alpha$, and discuss how $\alpha$ varies as $w$ goes from $w<<w_0$ to $w=w_0$ to $w>>w_0$. I got the following result for the phase lag: $\alpha = \arctan\left(\frac{\gamma w}{w_0^2 - w^2}\right)$ I have the solution manual, and according to it this result is correct. However, I do not understand how $\alpha$ should vary. According to the solution manual, $\alpha$ ranges continuously from 0 to $\pi/2$ to $\pi$. But the $\arctan$ function ranges from $-\pi/2$ to $\pi/2$. How can $\alpha$ then be continuous from 0 to $\pi/2$ to $\pi$? For the $w=w_0$ case we actually have division by 0. So, again, how can it then be continuous? How do I derive the exact values for $\alpha$? Thanks.
{ "source": [ "https://physics.stackexchange.com/questions/373788", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/177180/" ] }
373,909
At night, I went outside. I had a box with two slits in it. I directed torch light towards it, but I saw only two bands of light on the wall and shadow from the rest of the box. Why did it not produce interference like a double slit experiment should?
In order to see the interference fringes, four conditions must be fulfilled: Your light source either has to be point-like or very far away from the slits, Your light source must be monochromatic* (i.e. emit only at a single wavelength), The slits must be very close together, and The slits must be very thin. Failing to meet any of these will generate enough noise to completely obscure any signal you hope to measure. Probably the first one to fix in your case is to switch from a torch (which is neither point-like nor monochromatic) to a laser (which is much closer to being both of those things). *In order to see the usual interference fringes, you must use monochromatic light. There is a similar effect you can do with a point source of white light, though, as shown in the comments.
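To put rough numbers on conditions 3 and 4, here is a quick estimate using the standard small-angle fringe-spacing formula $\Delta y = \lambda L/d$; the wavelength, wall distance and slit separations are values I picked for illustration.

```python
wavelength = 650e-9     # red laser pointer, m (assumed)
L = 2.0                 # slit-to-wall distance, m (assumed)

for d in (0.2e-3, 1e-3, 5e-3):          # slit separations: 0.2 mm, 1 mm, 5 mm
    dy = wavelength * L / d             # fringe spacing on the wall
    print(f"slit separation {d*1e3:4.1f} mm -> fringe spacing {dy*1e3:5.2f} mm")

# Hand-cut slits several millimetres apart give fringes a fraction of a millimetre
# wide, which an extended white-light source such as a torch washes out completely;
# with sub-millimetre slits and a laser the pattern becomes easy to see.
```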
{ "source": [ "https://physics.stackexchange.com/questions/373909", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/177210/" ] }
373,916
I'm trying to program an active ragdoll animation system for my game, and I've been stuck on this question for a while. Let's imagine a body falling backwards, as shown in the scheme below. My question is: purely mechanically, what prevents the butt muscle there from providing enough torque to make the body stand upright again? Or even to slow down the fall? I'm asking because right now I'm applying the same constant amount of torque to make my ragdoll stand upright. And when the body starts to fall, this torque acts unnaturally, making it stand under forces that should destabilize it, or slowing down the fall when it actually gets destabilized.
{ "source": [ "https://physics.stackexchange.com/questions/373916", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/178364/" ] }
375,010
This is probably a dumb question. I guess what I'm trying to ask is if radio waves travel the same speed as gamma rays, how do gamma photons carry more energy than radio photons? Do they spin faster? What other energy sources could they carry if they are moving the same speed through space?
Higher energy photons have shorter wavelengths. This means they are higher frequency. We can look at the equations, like $E=h\nu$ (equivalently $E=hc/\lambda$), and see directly that shorter wavelengths have more energy, but I think you're going to want a more intuitive example. Let's haul out the ropes! Battle ropes are an exercise tool. You try to set up waves that propagate down the ropes. If we visualize ourselves pumping these ropes, we see that if we want to create higher frequencies and shorter wavelengths, we have to put more energy into the system. We have to accelerate the ropes up and down at higher rates, and that requires more energy. This is true even if we keep the amplitude of the ropes the same. Photons don't move up and down like this, but they do create oscillating electric and magnetic fields (which are often visualized in a form similar to battle ropes). Oscillating this field more rapidly involves more energy, in the same way as the higher frequency battle ropes did. Like with the battle ropes, the light waves travel at the same speed, regardless of whether they are high frequency (high energy) or low frequency (low energy). The energy is seen in how rapidly the rope changes position (or the fields change strength).
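For concreteness, a quick comparison of photon energies across the spectrum (the three frequencies are representative values I chose):

```python
h  = 6.626e-34     # Planck constant, J*s
c  = 2.998e8       # speed of light, m/s
eV = 1.602e-19     # joules per electronvolt

examples = {
    "FM radio (100 MHz)":      100e6,
    "visible light (600 THz)": 600e12,
    "gamma ray (2.4e20 Hz)":   2.4e20,
}

for name, f in examples.items():
    E = h * f          # photon energy, E = h * f
    lam = c / f        # wavelength
    print(f"{name:26s} wavelength = {lam:9.3e} m   energy = {E/eV:10.3e} eV")

# All three photons travel at c; the gamma photon carries about 10^12 times the
# energy of the radio photon simply because its fields oscillate 10^12 times faster.
```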
{ "source": [ "https://physics.stackexchange.com/questions/375010", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/178986/" ] }
375,027
I imagine that with a big enough telescope, I would be able to zoom in and see the Mars rover in enough detail to make out the details (like the wheels, cameras, etc.). How large would the telescope have to be? (or how can I calculate this value?)
Telescope resolution is all about apparent angles. From the sounds of it, the lowest resolution you'd settle for would be something capable of resolving about $1 \operatorname{cm}$ objects, right? Well, the distance between the Earth and Mars varies, depending on the time of year, from around $0.5\operatorname{AU}$ to $2.5\operatorname{AU}$ ($7.5\times 10^{10} \operatorname{m}$ to $3.7\times 10^{11} \operatorname{m}$). At those distances, a $1$ centimeter object subtends an angle of $$\theta = \frac{s}{d},$$ which is $1.3\times 10^{-13}\operatorname{rad}$ to $2.7\times 10^{-14}\operatorname{rad}$. The resolution of a circular telescope is given by the formula $$\theta = \frac{1.22\lambda}{D}.$$ So, assuming you're using visible light, with $\lambda \approx 500\operatorname{nm}$, to resolve those $1$ centimeter objects it would require telescopes with a diameter of $D=4.6\times 10^6\operatorname{m}$ to $2.3\times 10^7\operatorname{m}$. For reference, the diameter of Earth is about $1.3\times 10^7\operatorname{m}$. Note that the sheer size is only one of the challenges. In order to achieve this theoretical resolution you would need the surface of the mirror to have the correct shape everywhere to within about a wavelength of light. In other words, this Earth-sized mirror could not have any imperfections larger than about $500\operatorname{nm}$. To see some of the information related to getting ordinary lenses and mirrors correct to this level see the Wikipedia article on optically flat .
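The same arithmetic, reproduced in a few lines of Python with the answer's wavelength and target size:

```python
wavelength = 500e-9     # visible light, m
feature    = 0.01       # detail to resolve on the rover, m
AU         = 1.496e11   # astronomical unit, m
D_earth    = 1.27e7     # Earth's diameter, m, for comparison

for dist_au in (0.5, 2.5):                  # closest and farthest Earth-Mars distances
    d = dist_au * AU
    theta = feature / d                     # angle subtended by a 1 cm feature
    D = 1.22 * wavelength / theta           # required aperture from the Rayleigh criterion
    print(f"{dist_au} AU: theta = {theta:.1e} rad, "
          f"aperture D = {D:.1e} m  ({D / D_earth:.1f} Earth diameters)")
```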
{ "source": [ "https://physics.stackexchange.com/questions/375027", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/169706/" ] }
375,209
Once, Dirac said the following about renormalization in Quantum Field Theory (look here , for example): Renormalization is just a stop-gap procedure. There must be some fundamental change in our ideas, probably a change just as fundamental as the passage from Bohr's orbit theory to quantum mechanics. When you get a number turning out to be infinite which ought to be finite, you should admit that there is something wrong with your equations, and not hope that you can get a good theory just by doctoring up that number. Has this fundamental change come along afterward, and if so, what is the nature of this "fundamental" change? Is it an attempt to unify quantum mechanics with general relativity (of which the two main streams are String Theory and Loop Quantum Gravity, and of which I don't think they correspond with reality, but that aside)? Is there something more exotic? Or was Dirac just wrong by assuming that the procedure is just a "stop-gap" procedure?
There are a lot of projects going on, and I'll try to sum them up with pithy one-liners that are as accurate as my own (admittedly limited) understanding of them. The solutions include: Classical renormalization: it's the predictions that matter, and renormalization is just the only (admittedly complicated) way of taking the continuum limit we have. Wilsonian renormalization: it's simply not possible to construct a non-trivial theory that is not a low energy effective theory, and the non-renormalizable constants are those that don't affect low energy effective theories. String theory: this whole 4-d space-time is an illusion that is built from the interaction of interacting 2-d space-times (strings). Because all interactions are renormalizable in 2-d, the problems go away (though there are many compactified space-like dimensions that we have yet to see). Loop quantum gravity: the problem comes from taking the continuum limit in space-time, so let's just throw out the idea of a continuum altogether. I don't find any of these approaches particularly satisfying. My own inclination is to favor the "more derivatives" approach because it involves the fewest technical changes, but it requires an enormous philosophical change. The cause of that philosophical change comes about from the requirement that the theory be Lorentz invariant; it would, in principle, be possible to make theories not just renormalizable, but UV finite, by adding some more spatial derivatives. Because of Lorentz invariance, though, adding more space derivatives necessarily entails adding more time derivatives. Ostrogradsky showed in classical physics alone that more than two derivatives necessarily entails the Hamiltonian no longer having a lower bound (a good technical overview is given in Woodard (2007) and Woodard (2015) ). It is generally considered so important that the Hamiltonian serves as the thing that constrains the theory to a finite volume of phase space that it is half of one of the axioms that goes in to QFT ; in sum: there exists an operator that corresponds to the Hamiltonian that serves as the generator of time translations (and to the Noether charge conserved due to the time invariance of the laws of physics), and the eigenvalues of the generator of time translations are positive semi-definite (or, have a lower bound). The content of the Källen — Lehmann representation ( Wikipedia link , also covered in section 10.7 of Weinberg's "The Quantum Theory of Fields", Vol. I ) is that the above postulate, combined with Lorentz invariance, necessarily implies no more than two derivatives in the inverse of the propagator. The combination of Ostrogradsky and Källen—Lehmann seems insuperable, but only if you're insistent on maintaining that "Hamiltonian = energy" (here, I use "Hamiltonian" as shorthand for the generator of time translations, and "energy" as shorthand for "that conserved charge that has a lower bound and confines the fields in phase space"). I suspect that if you're willing to split those two jobs up that the difficulties in higher derivative theories disappear. The new version of the energy/time translation postulate would be something like: the generators of space-time translations are conserved (Hamiltonian, 4-momentum), there exists a conserved 4-vector operator that takes on values in the forward light cone, and The operators in 1 and 2 coincide for low frequency (classical physics correspondence). 
A key paper in this direction is Kaparulin, Lyakhovich, and Sharapov (2014) "Classical and quantum stability of higher-derivative dynamics" (and the papers that cite it, especially by the same authors), which shows that the instability only becomes a problem for the Pais—Uhlenbeck oscillator when you couple the higher derivative sector to other sectors in certain ways, and it's stable when you limit the couplings to other ways. All of that said, more derivatives wouldn't be a panacea. If you try to remove the divergences in a gauge theory by adding more derivatives, for instance, you'll always add interaction terms with more derivatives in such a way as to keep the theory as divergent as it was in the beginning. Note, that "more derivatives" is mathematically equivalent to Pauli—Villars regularization (PV) by partial fraction decomposition of the Fourier transform of the propagator. PV is known to not play well with gauge theory precisely because of this issue, although it's usually worded as violating gauge invariance because the higher order couplings with more derivatives required to keep gauge invariance are left out.
{ "source": [ "https://physics.stackexchange.com/questions/375209", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/98822/" ] }
375,210
In Wikipedia's description of the Observer Effect wrt particle physics, we have this:- For an electron to become detectable, a photon must first interact with it, and this interaction will inevitably change the path of that electron. Surely a photon is not the only way to detect an electron. Photon emission means that we might actually see the electron's presence (with a magnifying glass or something). But electrons have charge and interact with other charged thingies, so they could be detected indirectly through other means couldn't they? And that's not even getting into detection via the electron's mass or momentum. Is this just an example of sloppy language, or are photons always involved even if electrons interact with other +/- charges?
There are a lot of projects going on, and I'll try to sum them up with pithy one-liners that are as accurate as my own (admittedly limited) understanding of them. The solutions include: Classical renormalization: it's the predictions that matter, and renormalization is just the only (admittedly complicated) way of taking the continuum limit we have. Wilsonian renormalization: it's simply not possible to construct a non-trivial theory that is not a low energy effective theory, and the non-renormalizable constants are those that don't affect low energy effective theories. String theory: this whole 4-d space-time is an illusion that is built from the interaction of interacting 2-d space-times (strings). Because all interactions are renormalizable in 2-d, the problems go away (though there are many compactified space-like dimensions that we have yet to see). Loop quantum gravity: the problem comes from taking the continuum limit in space-time, so let's just throw out the idea of a continuum altogether. I don't find any of these approaches particularly satisfying. My own inclination is to favor the "more derivatives" approach because it involves the fewest technical changes, but it requires an enormous philosophical change. The cause of that philosophical change comes about from the requirement that the theory be Lorentz invariant; it would, in principle, be possible to make theories not just renormalizable, but UV finite, by adding some more spatial derivatives. Because of Lorentz invariance, though, adding more space derivatives necessarily entails adding more time derivatives. Ostrogradsky showed in classical physics alone that more than two derivatives necessarily entails the Hamiltonian no longer having a lower bound (a good technical overview is given in Woodard (2007) and Woodard (2015) ). It is generally considered so important that the Hamiltonian serves as the thing that constrains the theory to a finite volume of phase space that it is half of one of the axioms that goes in to QFT ; in sum: there exists an operator that corresponds to the Hamiltonian that serves as the generator of time translations (and to the Noether charge conserved due to the time invariance of the laws of physics), and the eigenvalues of the generator of time translations are positive semi-definite (or, have a lower bound). The content of the Källen — Lehmann representation ( Wikipedia link , also covered in section 10.7 of Weinberg's "The Quantum Theory of Fields", Vol. I ) is that the above postulate, combined with Lorentz invariance, necessarily implies no more than two derivatives in the inverse of the propagator. The combination of Ostrogradsky and Källen—Lehmann seems insuperable, but only if you're insistent on maintaining that "Hamiltonian = energy" (here, I use "Hamiltonian" as shorthand for the generator of time translations, and "energy" as shorthand for "that conserved charge that has a lower bound and confines the fields in phase space"). I suspect that if you're willing to split those two jobs up that the difficulties in higher derivative theories disappear. The new version of the energy/time translation postulate would be something like: the generators of space-time translations are conserved (Hamiltonian, 4-momentum), there exists a conserved 4-vector operator that takes on values in the forward light cone, and The operators in 1 and 2 coincide for low frequency (classical physics correspondence). 
A key paper in this direction is Kaparulin, Lyakhovich, and Sharapov (2014), "Classical and quantum stability of higher-derivative dynamics" (and the papers that cite it, especially by the same authors), which shows that the instability only becomes a problem for the Pais-Uhlenbeck oscillator when you couple the higher-derivative sector to other sectors in certain ways, and that it is stable when you restrict the couplings to other forms. All of that said, more derivatives wouldn't be a panacea. If you try to remove the divergences in a gauge theory by adding more derivatives, for instance, you'll always add interaction terms with more derivatives in such a way as to keep the theory as divergent as it was in the beginning. Note that "more derivatives" is mathematically equivalent to Pauli-Villars regularization (PV) by partial fraction decomposition of the Fourier transform of the propagator. PV is known not to play well with gauge theory precisely because of this issue, although it's usually worded as PV violating gauge invariance because the higher-order couplings with more derivatives required to keep gauge invariance are left out.
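To make the partial-fraction point concrete, here is a schematic example (the heavy scale $\Lambda$ is just an illustrative regulator mass, not something taken from the papers above): a scalar propagator whose inverse is quartic in momentum splits as $$\frac{1}{(k^2-m^2)(k^2-\Lambda^2)}=\frac{1}{m^2-\Lambda^2}\left(\frac{1}{k^2-m^2}-\frac{1}{k^2-\Lambda^2}\right),$$ i.e. the ordinary propagator minus a wrong-sign heavy one, which is exactly the Pauli-Villars combination; that relative minus sign is essentially the same feature that the Ostrogradsky argument turns into a Hamiltonian with no lower bound.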
{ "source": [ "https://physics.stackexchange.com/questions/375210", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/173064/" ] }
375,463
I'm a psychotherapist by training so go easy on me here. I would like to know, in simple terms if possible, the basic mechanics of how Hubble can see back in time. I pretty much understand, in this case, that light has to travel extremely long distances to be captured by a lens. I think the key point I'm missing is how long did, for instance, the light from the deep field shot take to reach Hubble's lens? Really appreciate any help. And please keep the answer at a beginner level.
You seem to already know the answer. You "see back in time" exactly the same way you can "hear back in time" during a thunderstorm... You know how they tell you to start counting seconds when you see the lightning, stop counting when you hear the thunder, then divide your count by five, and that's how many miles away the storm is? So the lightning arrives almost instantaneously, while the thunder travels much more slowly. So when you finally hear that thunder, you're hearing what happened in the past. Indeed, five seconds in the past for every mile away the storm is. And that's simply because it takes thunder (i.e., sound) five seconds to travel one mile. Exactly the same thing for light. One year in the past for every six trillion miles away the star (or whatever you're looking at) is. And that's simply because it takes light one year to travel six trillion miles. (And, by the way, they colorfully name six trillion miles "one light-year").
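The same arithmetic in a few lines of Python (rounded, illustrative numbers; a light-year is closer to 5.9 trillion miles, and the distance below is only rough):

```python
SOUND_MILES_PER_SECOND = 1 / 5        # thunder: roughly one mile every five seconds
LIGHT_MILES_PER_YEAR = 6e12           # light: roughly six trillion miles per year

def storm_distance_miles(seconds_counted):
    """Distance to the lightning from the flash-to-thunder delay."""
    return seconds_counted * SOUND_MILES_PER_SECOND

def lookback_years(distance_miles):
    """How far in the past you are seeing something at this distance."""
    return distance_miles / LIGHT_MILES_PER_YEAR

print(storm_distance_miles(10))       # 2.0 miles away; the thunder you hear is 10 s "old"
print(lookback_years(1.5e19))         # ~2.5 million years (roughly the Andromeda galaxy)
```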
{ "source": [ "https://physics.stackexchange.com/questions/375463", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/179204/" ] }
376,161
In nuclear fusion, the goal is to create and sustain (usually with magnetic fields) a high-temperature and high-pressure environment enough to output more energy than put in. Tokamaks (donut shape) have been the topology of choice for many years. However, it is very difficult to keep the plasma confined within the walls because of its high surface area (especially in the inner rings). Why hasn't anyone used spherical magnetic confinement instead (to mimic a star's topology due to gravity)? - Apart from General Fusion E.g. injecting Hydrogen into a magnetically confined spherical space and letting out the fused energy once a critical stage has been reached?
Not an expert, but I believe the answer lies in the hairy ball theorem. You see, for a magnetic field to turn charged particles back from a surface, the field must be parallel to the surface, which means that to have a fully confining geometry you must have a smooth, everywhere non-zero, and continuous vector field mapped onto a surface. But the theorem says that you can't do that on a sphere (or any topologically equivalent shape).
{ "source": [ "https://physics.stackexchange.com/questions/376161", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/179567/" ] }
376,658
Today I saw the phenomenon in the picture below. It was not raining (at least not near me). What can it be? What is the technical explanation? Edit: I just saw another circumhorizon arc today in Southern Brazil, this time together with a halo. Really beautiful.
This is not a "rainbow". Quoting from the linked site, "not all colored patches in the sky are rainbows". It is instead a circumhorizon arc. Rainbows are caused by internal refraction and reflection in water droplets. Halos such as the circumhorizon arc portrayed in the question are caused by internal refraction and reflection in ice crystals.
{ "source": [ "https://physics.stackexchange.com/questions/376658", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/63097/" ] }
376,866
While reading the article The Lessons of Leonardo: How to Be a Creative Genius in the Wall Street Journal, I came across a centuries-old question that Leonardo da Vinci wrote in his notebook. The question was: Why is the fish in the water swifter than the bird in the air when it ought to be the contrary, since the water is heavier and thicker than the air? I pondered a while on this question. Indeed, water is denser than air. What I thought was that the fish might have travelled along the water currents to move about, but can't the same apply to birds (travelling with the winds)? Does anyone have a more detailed understanding of this question?
The speed of any object is a balance between the drag force on the object and the thrust the object can create. Attaining high speeds, or possibly more relevant in this case, high acceleration requires making the thrust as high as possible while keeping the drag as low as possible. Water is a lot denser than air, but this affects the drag mostly when the flow is turbulent and the drag is dominated by inertial forces. In this regime the drag is effectively due to having to push the medium out of the way, and it's easier to push aside low density air than high density water. However, if you can keep the flow laminar the density isn't as big a factor and the drag is dominated by the viscosity of the medium. Water is a lot more viscous than air (as well as denser) but for streamlined objects the drag due to viscosity can be kept remarkably low. Where water wins is that it's much easier to develop a high thrust in water than in air. In a fluid medium, where there is nothing solid to push against, you produce thrust in basically the same way that a rocket does. If you push away some mass of water $m$ with a velocity $v$ then the momentum of the water changes by $mv$, which means that your momentum changes by $-mv$. So you push the water in one direction and you accelerate in the other direction. The thrust you generate is simply the rate of change of momentum of the water. And it should now be clear why it's easier to generate a high thrust in water than in air. Because air is low density you can't push a high mass of it (unless you're very large) so it's hard to change its momentum by very much. So to summarise: in water the drag is high but it's easy to generate a high thrust; in air the drag is low but it's hard to generate a high thrust. How the speeds in water and air compare depends on the exact tradeoff between drag and thrust. Sailfish can reach speeds of 68 mph, but they do this mainly by being very streamlined so they can keep the drag as low as possible while exploiting the high thrust they can get from water. Birds generally don't reach speeds this high because although the drag in air is small they simply can't generate the thrust required for high speeds. Peregrine falcons can reach speeds of 200 mph, far faster than a sailfish, but they do it only in dives where gravity provides the thrust.
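To put rough numbers on the thrust argument (a back-of-the-envelope sketch: the densities are standard values, but the swept volume and ejection speed are made-up, identical figures for both animals):

```python
# Thrust from pushing fluid backwards: F = (mass pushed per second) * (speed given to it)
#                                        = rho * Q * v
RHO_WATER = 1000.0   # kg/m^3
RHO_AIR = 1.2        # kg/m^3

def thrust_newtons(rho, swept_volume_per_second, ejection_speed):
    """Rate of change of the fluid's momentum, i.e. the available thrust."""
    return rho * swept_volume_per_second * ejection_speed

Q, v = 0.05, 3.0     # m^3/s of fluid pushed, m/s given to it (illustrative only)
print(thrust_newtons(RHO_WATER, Q, v))   # 150 N for this stroke in water
print(thrust_newtons(RHO_AIR, Q, v))     # 0.18 N for the same stroke in air
print(RHO_WATER / RHO_AIR)               # ~833: the density ratio does all the work
```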
{ "source": [ "https://physics.stackexchange.com/questions/376866", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/179737/" ] }
377,212
Recently I read up on spacecraft entering Earth's atmosphere using a heat shield. However, when exiting the Earth's atmosphere, a spacecraft does not heat up, so it does not need a heat shield at that point. Why is this so? I know that when entering Earth's atmosphere, the spacecraft will heat up due to effects like gravity, drag and friction acting upon it. This is the reason why a spacecraft entering Earth's atmosphere would need a heat shield. Why wouldn't an exiting spacecraft experience this too? Any help would be appreciated.
Aerodynamic heating depends on how dense the atmosphere is and how fast you are moving through it; dense air and high speed mean more heating. When the rocket is launched, it starts from zero velocity in that portion of the atmosphere which is densest and accelerates into progressively less dense air; so during the launch profile the amount of atmospheric heating is small. Upon re-entry, it is descending into the atmosphere starting not at zero velocity but at its orbital velocity, and as it falls towards the earth it is picking up speed as the radius of its orbit decreases. By the time it runs into air dense enough to cause heating it is moving at tremendous speed and it gets very, very hot.
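A crude way to see the asymmetry with numbers (a rough sketch: the exponential atmosphere and the density-times-speed-cubed heating proxy are simplifications, and the altitude/speed pairs below are only representative):

```python
import math

# Convective heating rate scales roughly as (air density) * (speed)^3.
# Simple exponential atmosphere: rho(h) = rho0 * exp(-h / H), scale height H ~ 8 km.
RHO0, H = 1.225, 8000.0        # sea-level density (kg/m^3), scale height (m)

def heating_proxy(altitude_m, speed_m_per_s):
    rho = RHO0 * math.exp(-altitude_m / H)
    return rho * speed_m_per_s**3

ascent = heating_proxy(10_000, 400)    # launch: still only ~0.4 km/s where the air is dense
entry = heating_proxy(60_000, 7500)    # re-entry: ~7.5 km/s already at ~60 km altitude

print(entry / ascent)   # ~13 even in this crude model, and it keeps getting worse as the
                        # capsule keeps falling into denser air while still moving very fast
```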
{ "source": [ "https://physics.stackexchange.com/questions/377212", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/179737/" ] }
377,352
From the principle of least (or stationary) action, we get that a classical system will evolve according to the Euler-Lagrange equations: $$\frac{d}{dt}\bigg (\frac{\partial L}{\partial \dot{q_i}}\bigg) = \frac{\partial L}{\partial q_i} .$$ I have often read and heard from physicists that this differential equation encapsulates all of classical mechanics. A glorious reformulation of Newton's laws that is more general, compact and much more efficient. I get that if you plug in the value of the Lagrangian, you re-obtain Newton's second law. But Newtonian mechanics is based on 3 laws, is it not? The law of inertia is a special consequence of the second law, so we don't need that, but what about the third law, namely that forces act in pairs; action equals minus reaction? My question is, can we obtain Newton's third law from this form of the Euler-Lagrange equation? I understand that Newton's third law for an isolated $2$-body system follows from total momentum conservation, but what about a system with $N\geq 3$ particles? If not, why do people say that it's all of classical mechanics in a nutshell?
Newton's third law is that for every action, there is an equal and opposite reaction. This is a statement of momentum conservation. In the Euler-Lagrange equation, the last term $$ \frac{\partial\mathcal{L}}{\partial q_i} $$ is a generalized force. Similarly, the generalized momentum is $$ \frac{\partial\mathcal{L}}{\partial \dot q_i}. $$ If the generalized force is zero, then $$ \frac{d}{dt} \frac{\partial\mathcal{L}}{\partial \dot q_i} = 0. $$ Mathematically, this means that the generalized momentum is constant over time, i.e. it is conserved, which is Newton's third law. We don't even need the Lagrangian to summarize all of Newton's laws. As you likely know, Newton's second law $F=ma$ is a special case of $F = \frac{dp}{dt}$. This generally accounts for all of Newton's laws:

- Newton's 1st law - an object will persist in a state of uniform motion unless compelled by an external force: if $F = 0$, then $\frac{dp}{dt} = 0$, and thus $p$ is constant.
- Newton's 2nd law - $F = ma$: if $m$ is constant, then $$ F = \frac{dp}{dt} = \frac{d(mv)}{dt} = m\frac{dv}{dt} = ma.$$
- Newton's 3rd law - see the derivation below.

So, generally speaking, Newton's three laws are slightly redundant in the sense that they can all be described by $$ \vec F = \frac{d\vec p}{dt} $$ (or by the Euler-Lagrange equation, as you argue).

Edit: Derivation of N3L from conservation of momentum. Consider a system with total momentum $\vec p_{\rm tot}$ and two particles with momenta $\vec p_1$ and $\vec p_2$ such that $\vec p_{\rm tot} = \vec p_1 + \vec p_2$. If the system is closed, then the total momentum is conserved, so $$ \frac{d\vec p_{\rm tot}}{dt} = 0.$$ Differentiating both sides of $\vec p_{\rm tot} = \vec p_1 + \vec p_2$, you get $$ \frac{d}{dt}(\vec p_{\rm tot}) = \frac{d}{dt}(\vec p_1 + \vec p_2)$$ $$ 0 = \frac{d\vec p_1}{dt} + \frac{d\vec p_2}{dt} = \vec{F_1} + \vec{F_2}$$ $$ \vec F_1 = -\vec F_2 $$
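A quick symbolic check of the two-particle case (a sketch in SymPy; the choice of a Lagrangian whose interaction depends only on the separation $q_1-q_2$ is my own illustration, and differentiating with respect to $q$ and $\dot q$ this way needs a reasonably recent SymPy):

```python
import sympy as sp

t, m1, m2 = sp.symbols('t m1 m2', positive=True)
q1, q2 = sp.Function('q1')(t), sp.Function('q2')(t)
V = sp.Function('V')   # interaction potential, a function of the separation only

# Two particles in one dimension, interacting through V(q1 - q2)
L = (m1 * q1.diff(t)**2 + m2 * q2.diff(t)**2) / 2 - V(q1 - q2)

F1 = sp.diff(L, q1)    # generalized force on particle 1
F2 = sp.diff(L, q2)    # generalized force on particle 2
print(sp.simplify(F1 + F2))   # 0, i.e. F1 = -F2: Newton's third law

# Total generalized momentum: conserved precisely because the forces cancel
p_tot = sp.diff(L, q1.diff(t)) + sp.diff(L, q2.diff(t))
print(p_tot)                  # m1*Derivative(q1(t), t) + m2*Derivative(q2(t), t)
```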
{ "source": [ "https://physics.stackexchange.com/questions/377352", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/94020/" ] }
377,546
Why do we use capacitors in defibrillators and not batteries? I know that capacitors are used to store electrical energy but isn't the function of a battery just the same? Moreover, I know that batteries are used to make capacitors work in a defibrillator, but isn't a battery just enough to make it work? Why is a capacitor so fundamental in a defibrillator? And the last thing that makes my doubts stronger is that a battery normally has a much higher voltage compared to a capacitor.
Batteries usually use electro-chemical reactions to store energy. These reactions have a limit to how fast they can transfer that energy. For example, you can only draw energy out of a typical lead acid car battery so fast; past a certain point it begins to break down, producing hydrogen gas which can then ignite with free oxygen in the air. An analogy would be a gravity battery, like a large dam of water at a higher gravitational energy level. Opening a door would allow water to flow and could maybe run a circuit at some voltage for a month straight. However, it could never deliver much more than that voltage level, because there is no way to harness all of the stored energy at once - that would be like the dam opening completely. So there are clear limits to the rate at which it can be discharged. Capacitors can better store large potential differences; however, they often cannot sustain those voltages for extended periods of time. This is because capacitors simply use an electric field and various geometry to store energy. So if you need only a short burst of energy, you can reduce the size of battery required by using a capacitor. Basically the capacitor is charged up to a higher voltage than the battery terminals provide, and then releases it. A much larger battery would otherwise be required, but with the larger battery you would get a more sustained voltage than with a capacitor. Look up the "amp hours" of a battery. The battery contains more energy than the capacitor, yet the capacitor can output a higher voltage. Also see the specific energy or energy density of various types of batteries, and then of capacitors. Also, the capacitor's limited energy perhaps prevents the possibility of some kind of stuck circuit where energy is allowed to flow continuously. More complex circuitry might be needed with a battery to get a short voltage spike, closing then opening quickly, and you can get sparks and noise, etc. With the capacitor, once the circuit is closed it can be left closed and the capacitor will just dump its potential and that's it.
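For a rough sense of scale (illustrative numbers only, not the spec of any real device): the energy stored in a capacitor is $E = \frac{1}{2}CV^2$, so a modest capacitance charged to a couple of kilovolts already holds the few hundred joules a defibrillation shock delivers, and it can release it in milliseconds.

```python
# Energy stored in a capacitor: E = 0.5 * C * V^2   (illustrative numbers only)
C = 150e-6        # farads: ~150 microfarads, a plausible order of magnitude
V = 2000.0        # volts: charged up from a low-voltage battery by a converter circuit

energy_J = 0.5 * C * V**2
print(energy_J)                   # 300 J -- the right ballpark for a defibrillator shock

pulse_time_s = 5e-3               # delivered over a few milliseconds...
print(energy_J / pulse_time_s)    # ...so the average power is ~60,000 W during the pulse
```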
{ "source": [ "https://physics.stackexchange.com/questions/377546", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/172394/" ] }
377,599
If I take a periodic wavefunction $\psi\left(\vec{r}\right)$ and then take the Fourier transform of the wavefunction as defined below $$ \psi(\vec{k})=\iiint_{-\infty}^{+\infty}\psi\left(\vec{r}\right)e^{-i\vec{k}\cdot\vec{r}}\mathrm{d}^3\vec{r} $$ Is there a reason for calling $\psi(\vec{k})$ the momentum space representation of the wavefunction? (I understand the fact that the vector $\vec{k}$ gets quantized in accordance with the condition $\vec{k}\cdot\vec{R}=2\pi n$ for integer $n$, where $\vec{R}$ is the lattice translation vector of the periodicity of $\psi(\vec{r})$ in a crystal lattice), but is there some other reason for calling it momentum space?
As per my username, I feel it is partially my responsibility to address this question. I said it before and I'll say it again: The Fourier Transform is not an accident. There are countless reasons it has the precise form it has. Let $F[f]$ denote the Fourier Transform of $f$, and let $\boldsymbol P=-i\boldsymbol \partial$ denote the momentum operator. We have $$ F[\boldsymbol Pf]=\boldsymbol p F[f]\tag1 $$ so that $F$ diagonalises $\boldsymbol P$. Indeed, the plane-wave basis $\mathrm e^{i\boldsymbol p\cdot \boldsymbol x}$ satisfies $$ \boldsymbol P\,\mathrm e^{i\boldsymbol p\cdot \boldsymbol x}=\boldsymbol p\,\mathrm e^{i\boldsymbol p\cdot \boldsymbol x} \tag2 $$ which automatically implies $(1)$, as claimed. From this we learn that any operation that includes $\boldsymbol P$ becomes trivial if we work with $F[f]$ instead of with $f$ -- if we work in Fourier space. Thus, Fourier space is known as momentum space. Convenient, right? In a nutshell, $F[\psi]$ is to momentum what $\psi$ is to position. This is a direct consequence of the (formal) fact that $$ F[\langle \boldsymbol x|]=\langle \boldsymbol p|\tag3 $$ which means that both sides agree when they act on $|\psi\rangle$.
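A quick numerical illustration of equation (1) (a sketch with NumPy; the test wavefunction is an arbitrary periodic function and $\hbar$ is set to 1): the Fourier transform turns the momentum operator $P=-i\,\partial_x$ into plain multiplication by $k$.

```python
import numpy as np

N, L = 512, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)

psi = np.cos(2 * x) + 1j * np.sin(5 * x)          # an arbitrary smooth periodic "wavefunction"
dpsi = -2 * np.sin(2 * x) + 5j * np.cos(5 * x)    # its exact derivative d(psi)/dx
P_psi = -1j * dpsi                                 # momentum operator P = -i d/dx (hbar = 1)

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)         # wavenumbers conjugate to x

# F[P psi] equals k * F[psi]: the transform diagonalises the momentum operator
print(np.allclose(np.fft.fft(P_psi), k * np.fft.fft(psi)))   # True
```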
{ "source": [ "https://physics.stackexchange.com/questions/377599", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/64360/" ] }
378,647
When I turn on a heater, it's supposed to be roughly 100% efficient. So it converts electricity to heat with great efficiency, but why can't we do the reverse: generate electricity by absorbing heat? I have been searching the internet and from what I have read it seems completely pointless because it is so inefficient, like ridiculously inefficient, as in 10% efficient. So why can't we do the reverse? I get that energy is lost when converting from one form of energy to another but how can we get such great efficiency going from one form but have horrid efficiency going back? I also read online that one way to cool the earth down could be to radiate the heat off the planet. Anyways, sorry about my mini debate, can anyone answer how we could potentially cool the earth, because to me it would seem funny if we couldn't, and if we could then global warming wouldn't be as bad of a thing as it is now, would it?
tl;dr - Current technology absorbs temperature gradients, not heat. As temperature gradients become arbitrarily large, their information content nearly approaches the heat's information content, such that the apparent thermal efficiency, $$ {\eta}_{\text{Carnot~efficiency}}~~{\equiv}~~\frac{E_{\text{useful}}}{E_{\text{heat}}}~~{\approx}~~1-\frac{T_{\text{cold}}}{T_{\text{hot}}} \,,$$ nearly approaches unity, showing that we can almost absorb heat when a temperature gradient is sufficiently large.

Hypothetical/future technology: Absorbing heat for energy

You could harness heat with near-perfect efficiency! It just requires finding Maxwell's demon. Maxwell's demon can be tough to find, but Laplace's demon could tell ya where it's at. The fun thing about Maxwell's demon is that it likes to separate stuff out based on its highly precise perception and movement. So, you basically tell Maxwell's demon to let out high-speed particles when they're at nearly-tangential velocities to power a dynamo. And, bam! Electricity. One trouble with this scheme is that we don't really know what heat is. I mean, we get the gist that particles are bouncing around and such, but we don't know all of the exact locations and velocities and such for all of the particles. And given that ignorance, we're basically unable to do anything with heat. Except, of course, when our ignorance isn't complete. At the macroscopic level, we can appreciate stuff like temperature gradients; the larger the temperature gradient, the more information we have about the relative motion of the particles at different temperatures. And we can exploit this information, up to the point at which we've drained it away. For example, we can use heat to boil water, producing steam and thus raising pressure, using that pressure to turn a turbine. As the steam turns the turbine by going from a region of higher pressure to lower pressure, we again lose discriminating information about the system until our ignorance is again complete; but we get useful energy out of the deal. Conceptually, it's all about information. Whenever we have information about something, we may be able to turn that information into effect until the point at which we cease having information. Though we might say that we don't necessarily lose all of the information, as what we get out of the deal isn't so much "energy" as a system that we have relatively more information about, and thus can exploit more readily. Maxwell's demon and Laplace's demon are powerful critters because they have tons of information. By always having information, they can always construct systems that they can exploit for the extraction of energy. By contrast, humans tend to be limited in what information we have. And that's the problem with just arbitrarily absorbing "heat": heat is a vague description of stuff moving around. In fact, even knowing a temperature is fairly useless information by itself; rather, we need temperature gradients, i.e. discriminating information, to knowingly construct a system that behaves how we want it to, e.g. a power generator. In real life, there's interest in creating molecular machines, as seen in the classical example of ATP synthase, as a future technology. As @J... pointed out, Maxwell's demon in the above is acting as a thermal rectifier, which is an active area of research (example).

Current technology: Absorbing temperature gradients, not heat

Why is it so inefficient to generate electricity by absorbing heat?
The above describes a system for generating electricity from heat. However, current technology never does this. With current technology, we absorb temperature gradients. This may sound pedantic, but the fact that we're absorbing gradients and not heat itself is precisely why we can't get energy equal to the heat out of the process. Since we absorb the gradients, the Carnot efficiency tends to increase with the size of the gradient, $$ {\eta}_{\text{Carnot~efficiency}}~~{\approx}~~1-\frac{T_{\text{cold}}}{T_{\text{hot}}}. $$ Conceptually, the reason for this is that, as the temperature gradient $$ {\Delta}T~~{\equiv}~~T_{\text{hot}}-T_{\text{cold}} $$ becomes arbitrarily large, the information contained in knowing the temperature gradient approaches the information that Laplace's demon would know, at which point efficiency would approach unity: $$ \lim_{{\Delta}T{\rightarrow}\infty}{\left(1-\frac{T_{\text{cold}}}{T_{\text{cold}}+{\Delta}T}\right)}~~{\rightarrow}~~1, $$ i.e. 100% efficiency. That is: sure, you wouldn't know the exact velocities of all of the particles, but what you don't know is dwarfed by what you do know, i.e. the extreme relative temperature gradient.
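To attach a few numbers to that formula (the temperatures are purely illustrative):

```python
def carnot_efficiency(t_cold_kelvin, t_hot_kelvin):
    """Upper bound on the fraction of absorbed heat convertible to work."""
    return 1 - t_cold_kelvin / t_hot_kelvin

# The larger the gradient being "absorbed", the closer you get to absorbing the heat itself:
print(carnot_efficiency(300, 310))     # ~0.03: tiny gradient, almost no usable work
print(carnot_efficiency(300, 800))     # ~0.63: a steam-plant-like gradient
print(carnot_efficiency(300, 30000))   # ~0.99: an enormous gradient, efficiency -> 1
```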
{ "source": [ "https://physics.stackexchange.com/questions/378647", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/180891/" ] }
378,837
The term 'pomeron' was apparently important in the early stages of QCD. I can't find any reference to it in modern QFT books, but older resources sometimes refer to it offhand, and I've yet to find any explanation of what it actually is. Old theoretical sources such as this throw up a wall of math and seem to say that a pomeron is a purely mathematical object, whose meaning is not clear to me: The formal definition of Reggeon is the pole in the partial wave in t-channel of the scattering process. [...] Pomeron is a Reggeon with the intercept close to 1. [...] “Hard” Pomeron is a substitute for the following sentence: the asymptotic for the cross section at high energy for the “hard” processes which occur at small distances of the order of $1/Q$ where $Q$ is the largest transverse momentum scale in the process. But old experimental sources say that the pomeron is a particle, and that its exchange explains some features of hadron scattering cross sections. That mismatch confuses me, but Wikipedia goes even further and says the pomeron has been found: By the 1990s, the existence of the pomeron as well as some of its properties were experimentally well established, notably at Fermilab and DESY. The pomeron carries no charges. The absence of electric charge implies that pomeron exchange does not lead to the usual shower of Cherenkov radiation, while the absence of color charge implies that such events do not radiate pions. This makes me really confused. If the pomeron has been found, how come no modern sources ever talk about it? Is it some other particle, a glueball or a meson, maybe, under a different name? Or have pomerons been ruled out? Are the cross sections they were invented to explain now well-understood? If not, why does nobody talk about pomerons anymore? Edit: after searching around some more, I'm getting the impression that the pomeron is an 'effective' particle, the result of the exchange of one of a whole infinite family of particles that lie on a particular Regge trajectory. But what really mystifies me is that every source steadfastly refuses to say what those particles are, i.e. their quark and gluon content. This is apparently part of the spirit of the bootstrap program, where such questions are just not allowed to be asked, but shouldn't we be able to understand this in conventional QCD?
Before the quark model became the standard model for particle physics, the prevailing model for elementary particle scattering was the theory of Regge poles. At the time (the 1960s) electromagnetic interactions/scatterings could be described very well with Feynman diagrams, exchanging virtual photons. The study of strong interactions tried to reproduce this successful use of Feynman diagrams; for example there was the vector meson dominance model: In particular, the hadronic components of the physical photon consist of the lightest vector mesons, ρ, ω and ϕ. Therefore, interactions between photons and hadronic matter occur by the exchange of a hadron between the dressed photon and the hadronic target. The Regge pole theory used the complex plane and Regge trajectories to fit scattering cross sections, the poles corresponding to resonances with specific spins at the mass of the resonance but arbitrary values away from it. The exchange of Regge poles (instead of single particles) was fitted to scattering cross section data. See this plot for some of the "fits". At the time, when it seemed that the Regge pole model would be the model for hadronic interactions, it was necessary to include elastic scattering, i.e. when nothing happened except some energy exchange. The Regge trajectory used for that was called the Pomeron trajectory; the particles on this trajectory have the quantum numbers of the vacuum. If you really want to delve into the subject, here is a reference. With the successes of the standard model, Regge theory was no longer mainstream and came to be considered old fashioned. This abstract, for The Pomeron and Gauge/String Duality, revisits the pomeron. The emergence of string theories, though, revived the interest in Regge theory and particularly the Veneziano model, which describes the Regge poles and considers the resonances as excitations of a string.
{ "source": [ "https://physics.stackexchange.com/questions/378837", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/83398/" ] }
379,193
From what I can tell, if you pick a color near the extreme of the visible light spectrum, let's say red, and trace a path across the spectrum until you are outside of the visible range, at some point the red color will begin to darken and dim until it's invisible (i.e. black), indicating that you are now outside the range. If this is the case, does that mean that if you were to observe a powerful enough light source emitting solely that dim frequency, it could be blinding to your eyes? By blinding, I don't think I mean literally blocking the rest of your vision, but more as in painful or overly-stimulating, the same way a bright white or bright blue light can affect a person's sight. It is hard to imagine being blinded by a dim light, because usually when you increase the intensity of the light, the saturation of the color will increase, until it appears bright. In this special case though, the color is already fully saturated to begin with, so no matter how high you increased the intensity, it will always appear 'dim'. Is this right? EDIT: Comments have shown that the word I was searching for is dazzled, not blinded. There are some great answers here explaining the harmful effects of this type of light, and this is a legitimate interpretation of what I am looking for. The essence of the question, however, is to understand if a near-IR or near-UV light could have a dazzling effect on the observer's eyes.
Yes indeed, infrared light (the wavelengths beyond those of red light) can be very harmful to your eyes even though you can't see it. The same applies to ultraviolet light (the wavelengths beyond those of violet light). You can read more under the topic of laser eye safety. People that work with lasers need to use safety glasses if these lasers fall within certain categories. These include infrared and ultraviolet lasers.
{ "source": [ "https://physics.stackexchange.com/questions/379193", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/126412/" ] }
379,280
Is there anything preventing the following experiment from being done right now? Imagine that a human ran from point 'a' to point 'b' while light particles that reflected off a clock moved through a special medium from point 'a' to point 'b' as well. Could a human arrive at point 'b' before light? (As for the special medium, I'm imagining something like a complex maze of mirrors submerged in a very dense material.) If so, if the human waited at the finish line to view light arriving in second place, would they see the time that the race began on the clock?
No physical laws are being broken in this thought experiment. If you are concerned with the relativistic requirement "nothing can go faster than the speed of light", that only applies to the speed light goes in a vacuum: $c = 3 \times 10^8$ m/s. The reference to light in that relativity postulate makes it sound like if you could only find a situation where you slowed light down, you could break the laws of physics; not so. A better statement of the postulate would be "nothing can go faster than $3 \times 10^8$ m/s, which happens to also be the speed light travels at in a vacuum." I don't see anyone going faster than $3 \times 10^8$ m/s in this thought experiment, so no physics violations. As for what the human at the end of the race sees: He sees a blinding blue light from all the Cherenkov radiation from even the slightest charged particle passing through the medium. And perhaps the time at the start of the race. It's exactly what you would imagine since we are talking non-relativistic speeds. What an anti-climactic answer, eh?
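For a sense of just how long the "maze of mirrors" would have to be (a back-of-the-envelope sketch; the sprint time and refractive index are made-up round numbers):

```python
C = 3e8               # m/s, speed of light in vacuum
n = 1.5               # refractive index of a dense glass, roughly
v_light = C / n       # ~2e8 m/s inside the medium -- still absurdly fast

race_length_m = 100.0 # straight-line distance from a to b
runner_time_s = 10.0  # a fast human over 100 m

# Folded path length the light must be forced to travel for the runner to merely tie:
needed_path_m = v_light * runner_time_s
print(needed_path_m)                    # 2e9 m, i.e. about 2 million km of mirror maze
print(needed_path_m / race_length_m)    # the path has to be folded ~20 million times over
```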
{ "source": [ "https://physics.stackexchange.com/questions/379280", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/169706/" ] }
379,892
I'm going through physics with my 5th grade child. There is a question and answer that indicates that an airborne ball at the top of the trajectory does not have kinetic energy. The diagram below shows the path taken by a ball after it was kicked. The ball hit the ground initially at D and eventually stopped moving at E. At which position(s) did the ball have no kinetic energy?

1. B only
2. A and E only
3. B and E only
4. B, D, and E only

This is the explanation given in the book: Answer: 3. B and E only. At A and C, the ball had both kinetic energy and (gravitational) potential energy. At the maximum height at B, the ball had only (gravitational) potential energy but no kinetic energy. At D, the ball had kinetic energy but no (gravitational) potential energy as it was at the ground level. At E, the ball stopped moving, so it had no kinetic energy. The ball also had no (gravitational) potential energy as it was at ground level. Ignoring the "complicated" fact that anything with heat has kinetic energy internally, is there some reason the ball wouldn't continue to have kinetic energy? There is no longer vertical motion, but it is still in forward motion.
The answer is wrong. The book's author has confused the situation in which the ball moves only vertically (or a graph of height as a function of time) with this case, where there is horizontal motion. The horizontal component of the velocity is constant in a ballistic trajectory: it is the same at points A, B, and C. The kinetic energy is zero only when the ball is stationary, and the ball is stationary only at E: so this is the only point where the kinetic energy is zero. So... do not trust this book.
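A quick check with numbers (an illustrative kick, not taken from the book's diagram): at the top of the arc the vertical velocity is zero, but the horizontal component is unchanged, so the kinetic energy at B is well above zero.

```python
import math

m = 0.43                    # kg, roughly a football (illustrative)
v0, angle_deg = 20.0, 45.0  # launch speed (m/s) and angle (degrees), illustrative

vx = v0 * math.cos(math.radians(angle_deg))   # horizontal speed: constant during flight

ke_launch = 0.5 * m * v0**2   # kinetic energy just after the kick
ke_apex = 0.5 * m * vx**2     # kinetic energy at the top of the arc (point B)

print(ke_launch)   # 86 J
print(ke_apex)     # ~43 J at 45 degrees -- half the launch value, certainly not zero
```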
{ "source": [ "https://physics.stackexchange.com/questions/379892", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/181529/" ] }
380,155
In a children's museum, I ran across this fountain. You can adjust the flow rate with a valve, visible at the bottom. At low flow rates, the sheet of water does more or less what you'd expect: it curves downward, eventually falling more or less vertically. When you increase the flow rate, a surprising thing happens: the sheet of water curls back in toward the center. What causes this? Whatever it is, it seems that it must overcome the additional outward momentum of the water and then some. It is hard for me to believe that it is a result of surface tension for this reason--why should surface tension be so much stronger at the higher flow rate? Is it an aerodynamic effect? I did a brief internet search and didn't find anything.
This is probably caused by a ring-shaped whirl of air (similar to a "smoke ring") under the water sheet which is driven by the speed of the water. This whirl flow produces a lower air pressure at the inner side of the falling water sheet that sucks it to the inside.
{ "source": [ "https://physics.stackexchange.com/questions/380155", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/180385/" ] }
380,415
Quantum mechanics is near-universally considered one of the most difficult concepts to grasp, but what were the persistently unintuitive, conceptually challenging fields physicists had to grasp before the emergence of quantum mechanics? The aim of asking this is, mainly, to gain insight on how one may approach a subject as unintuitive as quantum mechanics.
Influence at a distance. This is commonly how gravity in Newtonian gravitation was understood. Newton himself described it as anti-intuitive and could not see how any man trained in natural philosophy (the physics of his day) could accept it. In fact, this notion goes much further back. Over two millennia ago, Aristotle described external force as that which can cause a change in the natural motion of another, and this by contact. (He also had a concept of internal force. This causes internal growth and change, but there is no notion of contact here. This is implicit in Dylan Thomas's poem 'The force that drives the flower...', but I very much doubt he was referring to Aristotle's theory explicitly.) Yet Newton's theory had exactly this. It was accepted because of the success of his theory a posteriori in explaining many, many things. Its very success hid its conceptual problems, but there were people who struggled over it, looking for a mechanism that would explain gravity in a local manner. This is at the root of that forgotten concept called the aether, and there is, in fact, a long history of such attempts if one cares to look into it. The solution to this anti-intuitive mechanism was finally found three centuries later. First Faraday discovered the field concept, and this was applied to electromagnetism by Maxwell. Then it was taken up by Einstein in his theory of gravity. He identified the field of gravity with spacetime itself and this, understood correctly, finally made sense of that stillborn concept, the aether. In fact, where physicists had gone wrong with the aether was to try and conceptualise it mechanically. The field was not mechanical, and was much more flexible. It remains a pervasive concept in modern physics and is much elaborated upon. It's chastening to think a similar time-scale might apply in sorting out the puzzles associated with QM, in which case there are two centuries to go! And one is very much aware of a tangled history of attempts to make conceptual sense of QM, to turn it from an operational theory into an ontic one: we would very much like to know what is there and not merely operate an efficient machine that tells us the answers to our questions. Another example might be imaginary numbers. This, however, was the discovery of mathematicians rather than physicists.
{ "source": [ "https://physics.stackexchange.com/questions/380415", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/165685/" ] }
380,748
When I am heating water on a gas stove, it begins to boil after some time and bubbles of air can be seen escaping out. However, as soon as I increase the amount of heat in the stove, the rate of escape of air bubbles increases immediately, and as soon as I turn off the stove, the air bubbles stop coming out right then. In this case, I am boiling water in a steel utensil. Steel is a good conductor of heat. Then why does this change take place so quickly with the change in heat?
In large part because under normal circumstances water doesn't get hotter than boiling - at that point it becomes steam, as you know. You can add heat and boil it away faster, but the water can only get so hot. When you remove the source of heat the water will quickly drop below this threshold. You're right on the knife edge of temperature.
{ "source": [ "https://physics.stackexchange.com/questions/380748", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/181963/" ] }
381,828
Towels (and coats) are often stored on hooks, like this: To the untrained eye, it looks like the towel will slide off from its own weight. The hook usually angles upwards slightly, but a towel does not have any "handle" to string around and hang on to the hook -- this makes it seem like it will simply slide off. Yet these hooks hold towels well, even heavy bath towels. Why? I have three ideas: There is sufficient friction between the towel and the hook to counteract the force of the towel pulling down. The hook is angled such that the force is directed into the hook, not directed to slide the towel off of it. The center of mass of the towel ends up below the hook, since the towel is hanging against the wall. Which of these ideas are likely correct? I am also happy with an answer based purely on theoretical analysis of the forces involved.
There is some contribution from the friction of the various surfaces, but the main factor is the balancing of weight. It's important to note that the hook is set slightly away from the wall, which allows almost the entire weight of the towel to move alongside or behind the front of the hook tip. The manner in which the towel is cast over the tip of the hook creates "wings" that droop down the sides and behind the tip of the hook. Weight in the wings that is supported by fabric on either side of the hook tip does not contribute to sliding off (provided the towel is hooked in its middle and the amount of weight on each side is balanced). Therefore, the weight of the fabric forced into the "throat" of the hook (and the wings which hang from it) need only offset the weight of the fabric that remains on the front side of the hook, which is only a very small fraction of the overall weight of the towel (and therefore only a very small amount of fabric in the throat of the hook is needed to offset it). Incidentally, even silk fabric on a smooth hook can be hooked in this manner - the reduced friction simply requires more fabric to be accumulated in the throat, whereas rough fabrics on rough hooks can get away with relying less on balance and more on friction.
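One way to put toy numbers on this (my own rough model, not from the answer above: treat the fabric draped over the hook tip like a rope over a curved peg, so the capstan relation $W_{\text{front}} \leq W_{\text{back}}\, e^{\mu\theta}$ decides whether the front flap can drag the rest off):

```python
import math

def stays_on_hook(w_front_N, w_back_N, mu, wrap_angle_rad):
    """Toy capstan-style check: can the fabric behind/beside the hook tip,
    helped by friction over the tip, hold back the flap hanging in front?"""
    return w_front_N <= w_back_N * math.exp(mu * wrap_angle_rad)

towel_weight = 5.0                 # N, roughly a 0.5 kg towel (illustrative)
front = 0.1 * towel_weight         # small flap hanging in front of the hook tip
back = 0.9 * towel_weight          # fabric in the "throat" plus the wings

print(stays_on_hook(front, back, mu=0.3, wrap_angle_rad=math.pi / 2))  # True
print(stays_on_hook(front, back, mu=0.0, wrap_angle_rad=0.0))          # True: balance alone suffices
print(stays_on_hook(back, front, mu=0.3, wrap_angle_rad=math.pi / 2))  # False: hooked by a corner, it slips
```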
{ "source": [ "https://physics.stackexchange.com/questions/381828", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/37643/" ] }