Gradient of the potential originating from two similar magnetic vector potentials is not the same The magnetic vector potential $\textbf{A}$ is defined only up to the gradient of a scalar field. Adding or subtracting such a gradient should not change the physics of the problem, and the same reasoning applies to the time derivative of $\textbf{A}$. Imagine a simple vector potential such that $$\frac{\partial\textbf{A}}{\partial t} = (0, -z, y)$$ Inside a finite conductor (e.g., a cube), such a field produces a charge accumulation at the boundaries, which creates a total electric field $$\textbf{E} = -\nabla \phi -\frac{\partial\textbf{A}}{\partial t} $$ where the gradient of the potential is due to the charge accumulation. So: when $\textbf{A}$ is shifted, nothing should change, but what I see in FEM simulations is that $\phi$ and $\nabla \phi$ change radically, producing the same total electric field in the two cases. A second vector potential is such that: $$\frac{\partial\textbf{A}}{\partial t} = (0, -z-1, y-1)$$ Looking at the electric field equation, that makes sense. But if the gradient of the potential is due to the charge accumulation, then there are two different charge densities in the two presented situations. Since I believe the physics is right, there must be an error in my reasoning. Can someone please point out where?
Hint: OP's gauge transformation of the magnetic vector potential $$\vec{A}^{\prime}~=~\vec{A} +\nabla \Lambda$$ can e.g. be described by a time-dependent gauge parameter $$\Lambda~=~ -(y+z)t. $$ But this means that the electric scalar potential $$\phi^{\prime}~=~\phi -\partial_t \Lambda$$ transforms as well.
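A quick symbolic check of this hint (an illustrative sketch, not part of the original answer; the variable names are mine):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)

# Time derivative of the original vector potential from the question
dA_dt = sp.Matrix([0, -z, y])

# Gauge parameter from the answer
Lam = -(y + z) * t

# A' = A + grad(Lambda)  =>  dA'/dt = dA/dt + grad(dLambda/dt)
grad = lambda f: sp.Matrix([sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)])
dA_dt_shifted = dA_dt + grad(sp.diff(Lam, t))
print(dA_dt_shifted.T)  # Matrix([[0, -z - 1, y - 1]])

# phi' = phi - dLambda/dt, so the shift in -grad(phi) exactly cancels
# the shift in dA/dt, leaving the total field E unchanged:
shift_in_E = -grad(-sp.diff(Lam, t)) - grad(sp.diff(Lam, t))
print(shift_in_E.T)  # Matrix([[0, 0, 0]])
```

The shift in $-\nabla\phi$ cancels the shift in $\partial_t\textbf{A}$, which is why the FEM solutions show very different $\phi$ but the same total electric field.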
{ "language": "en", "url": "https://physics.stackexchange.com/questions/80974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Math / Physics Help - Barrel Pressure and Velocity Back in 1993 I derived the following equations to calculate projectile velocity and barrel pressure. Recently, I have noticed that I need to double the calculated results in order to obtain real-world results. I need some help to find out why I have to double the derived results to make the calculation work. The doubled results appear at the bottom of the image. A Microsoft Excel spreadsheet using the doubled (2x) results can be downloaded here: question.xls (removed) Thanks in advance.
The values you used for barrel pressure (55 kpsi for .50 BMG, 60 kpsi for 7.62 mm) are all PEAK chamber pressures, not $\bar P$. $$v_p=\sqrt{\frac{2\bar P A L}{m}}$$ This is the last equation you wrote that was correct. I looked at your Excel sheet and found that you computed $\frac{1}{12}\sqrt{\frac{2\bar P A L}{m}}$ to obtain $v_p$ in ft/s. This is a mistake: the length dimensions in $P$, $A$ and $L$ must all be converted from inches to feet for $v_p$ to come out in ft/s. So if you want to enter pressure, area and length in inches, you must compute $\sqrt{\frac{2\bar P A L}{12m}}$ for the velocity in ft/s. Once you apply this correction, you will obtain $v_p\approx 5000$ ft/s; then it will be clear that using 55,000 psi as the average pressure is wrong--it is definitely lower than 55 kpsi from your graph. A rough estimate of 30 kpsi would give $v_p\approx 3700$ ft/s. Still too high compared with the real-world value of 2900. Here enter more facts:

* when the combusted gas expands, it pushes the gun and the bullet in opposite directions, thereby doing work on the gun (and whatever is holding on to the gun)
* the bullet is forced to rotate because of the rifling, thereby gaining rotational kinetic energy from the work done by the gas

These are two unaccounted-for forms of energy, so the work done by the expanding gas cannot all go into the translational KE of the bullet. This is why, when we make that assumption in $v_p=\sqrt{\frac{2\bar P A L}{m}}$, the velocity is much higher than what is observed in real life--even if you used the correct $\bar P$.
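The corrected unit handling can be sketched in a few lines (the load numbers below are hypothetical stand-ins, not taken from the question's spreadsheet):

```python
import math

def muzzle_velocity_fps(p_avg_psi, area_in2, length_in, mass_lb):
    """Energy-balance estimate v = sqrt(2*Pbar*A*L / m).  The /12 converts
    the work 2*Pbar*A*L from lbf*in to lbf*ft so the result comes out in
    ft/s; mass is entered in pounds and converted to slugs (divide by 32.174)."""
    mass_slug = mass_lb / 32.174
    return math.sqrt(2.0 * p_avg_psi * area_in2 * length_in / (12.0 * mass_slug))

# Hypothetical .50 BMG-like numbers: 30 kpsi average pressure,
# 0.196 in^2 bore area, 45 in barrel, 660 gr bullet (7000 gr = 1 lb).
v = muzzle_velocity_fps(30_000, 0.196, 45.0, 660 / 7000.0)
print(round(v))  # a rough upper bound: ignores recoil, rifling, and friction losses
```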
{ "language": "en", "url": "https://physics.stackexchange.com/questions/81038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How accurate is Newtonian Gravity? I know that really fast moving things need Relativity rather than Newtonian physics. I also know the quirk of Mercury's orbit. But just how much more accurate is General Relativity than Newton's Law of Gravitation for predicting, say, the orbit of Earth or Neptune? Can the "slingshot" effect, where we use another planet's gravity to accelerate a space probe, be done with Newton, or does that require General Relativity? Is the speed of Jupiter (18 km/s, I think) fast enough to make a difference in the accuracy of GR vs. Newton's Law of Gravity?
The anomalous perihelion shift of Earth is 3.84 arc-seconds per century, or about one tenth the size of Mercury's shift. The anomalous perihelion shift of Jupiter is 0.0622 arc-seconds per century, or about one thousandth the size of Mercury's shift. I can't find calculations for the effect of GR on slingshots, but I believe it to be negligible. The only significant deviation that has been found is the flyby anomaly, and GR does not explain this.
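These shifts can be reproduced from the leading-order GR formula $\Delta\varphi = 6\pi GM/[c^2 a(1-e^2)]$ per orbit; a quick check using standard published orbital elements (a sketch, not from the original answer):

```python
import math

GM_SUN = 1.32712440018e20   # m^3/s^2
C = 299792458.0             # m/s
ARCSEC = 180 * 3600 / math.pi  # radians -> arc-seconds

def gr_shift_arcsec_per_century(a_m, e, period_days):
    """Leading-order GR perihelion advance: 6*pi*G*M / (c^2 a (1 - e^2)) per orbit."""
    per_orbit = 6 * math.pi * GM_SUN / (C**2 * a_m * (1 - e**2))
    orbits_per_century = 36525.0 / period_days
    return per_orbit * orbits_per_century * ARCSEC

mercury = gr_shift_arcsec_per_century(5.791e10, 0.2056, 87.969)
earth   = gr_shift_arcsec_per_century(1.496e11, 0.0167, 365.256)
print(round(mercury, 1), round(earth, 2))  # ~43.0 and ~3.84 arcsec/century
```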
{ "language": "en", "url": "https://physics.stackexchange.com/questions/81110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why does Newton's third law exist even in non-inertial reference frames? While reviewing Newton's laws of motion, I came across the statement that Newton's laws hold only in inertial reference frames, except for the third one. Why is that?
This is a tricky question. I will show a counterexample. Suppose only one thing exists in the world, a ball, and a frame. The frame accelerates with a constant $a$ of our choosing, since it is just something imaginary, and you see the ball has an acceleration as well, and conclude there must be a force acting on the ball: $$F=ma$$ and you remember Newton's III law. But where is the reaction force? The law states something like "forces must appear in pairs". There is only one ball in the whole world! Therefore, Newton's III law does not have to hold in simple accelerating frames.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/81191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 4 }
Diffeomorphism Invariance of General Relativity I'm sorry I know this has been asked before, but I'm still a bit confused. I understand that an active diffeomorphism $\varphi:M\to M$ can be equivalently viewed as a coordinate transformation so that since the equations of general relativity are tensorial $\varphi^*g$ will be a solution to Einstein's equations if $g$ is. However I don't see how that same reasoning doesn't imply that other physical theories are diffeomorphism invariant. What's the difference between general relativity and other physical theories, like classical mechanics? Why can't diffeomorphisms be viewed as coordinate transformations in both (or am I confused?).
The diffeomorphism invariance of GR means we're operating in the category of natural fiber bundles, where for any bundle $Y\to X$ of geometric objects that appear in the theory, we have a monomorphism $$ \mathrm{Diff} X \hookrightarrow \mathrm{Aut} Y $$ Any diffeomorphism of space-time $X$ needs to lift to a general covariant transformation of $Y$; these are not mere coordinate transformations. They play the role of the gauge transformations of GR, but are different from the gauge transformations of Yang-Mills theory: the latter are related to the inner automorphisms of the group and are vertical, i.e. they leave space-time alone. I'm not sure about the naturalness of the various geometric formulations of classical mechanics - I'd be interested in that as well (but am too lazy to look into it right now).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/81435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 0 }
Oscillation of a Bose-Einstein condensate in a harmonic trap We were asked to try to make a theoretical description of the following phenomenon: Imagine a 2D Bose-Einstein condensate in equilibrium in a harmonic trap with frequency $\omega$. Suddenly the trap is shifted over a distance $a$ along the x-axis. The condensate is no longer in the center of the trap and will start oscillating in the trap. First I thought about using a 2D trial wavefunction in the Gross-Pitaevskii equation or the hydrodynamic equations for condensates, but then we were told that we should actually look at how the energy of the condensate depends on certain parameters (position, width, ...) and use the fact that, for small deviations of such a parameter, a second-order expansion can be made, which will introduce a restoring force. This makes sense for classical motion, but in this case it confused me, because I don't know whether the energy meant here is the original potential energy of the harmonic trap or the Gross-Pitaevskii energy calculated with the GP energy functional. The latter, which was calculated in an earlier exercise for a variational Gaussian wave function, turned out to be $E = \hbar \omega \sqrt{1+Na_s}$ (with $a_s$ the scattering length for the interaction energy), and so it doesn't even depend on the position. Does anyone have an idea of how I should start or approach this theoretical description?
As long as you consider the BEC without inter-particle interactions (because they are negligible, for instance), you can simply use the Schrödinger equation. However, if you want to take interactions into account, you may want to consider the Thomas-Fermi approximation. This approximation works when interactions dominate the dynamics of the system and the kinetic energy is small.
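A hedged sketch of the energy-expansion argument from the question: whichever energy functional is used, expanding it to second order in the centre-of-mass displacement gives $E(x)\approx E_0+\frac{1}{2}Nm\omega^2(x-a)^2$, so the centre of mass sees a restoring force and oscillates at the bare trap frequency (the Kohn mode). A minimal numerical illustration, with hypothetical values for $\omega$, the shift $a$, and the time step:

```python
import math

# Centre-of-mass motion after a sudden trap shift by a:
# x'' = -w^2 (x - a), starting at rest at the old trap centre x = 0.
w, a, dt = 2.0, 0.5, 1e-4     # hypothetical trap frequency, shift, time step
x, v = 0.0, 0.0

for step in range(int(2 * math.pi / w / dt)):   # integrate one trap period
    # symplectic Euler update of x'' = -w^2 (x - a)
    v += -w * w * (x - a) * dt
    x += v * dt

print(round(x, 3))  # back near the starting point after one full period
```

Note that the oscillation frequency here is the trap frequency $\omega$ itself, independent of the interaction strength, which is why the variational energy's interaction dependence does not enter the centre-of-mass motion.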
{ "language": "en", "url": "https://physics.stackexchange.com/questions/81494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Infinite Wells and Delta Functions In considering a delta potential barrier in an infinite well, I can just enforce continuity at the potential barrier - the wavefunction doesn't have to go to zero there. Why then does it need to go to zero at the walls of the infinite well? These two cases seem very similar to me; I even feel like the well wall is equivalent to a summation of delta functions... Where is my logic faulty?
Why then does it need to go to zero at the walls of the infinite well? Because the proper way to find $\psi$ is to solve the Schrödinger equation for a finite potential well first and find how $\psi$ depends on the parameters of the potential. Then take the limit to the infinite potential well and look at what happens to the $\psi$ function. One cannot solve the Schrödinger equation for something like an "infinite potential" directly, because "infinite potential" is not a valid function. Due to the requirement of normalizability of $\psi$, in the case of a finite well the $\psi$ function decays to zero for large $x$, and the limiting procedure leads to a $\psi$ that is continuous even in the limit, and to the boundary condition that $\psi$ is zero outside the well. I even feel like the well wall is equivalent to a summation of delta functions... True, the constant potential $V_0$ outside the finite well can be written as $$ V(x) = V_0 \int_{(-\infty,-a)\cup(a,\infty)} \delta(x-x_0)dx_0. $$ However, in general the solution of the Schrödinger equation, $\psi$, is not given by a linear operator acting on the potential $V(x)$ figuring in the Schrödinger equation. There is no reason to expect that the $\psi$ function for $V(x)$ will be a sum of functions $\psi_{x_0}$ that are solutions of the Schrödinger equation with delta potential $V_0 \delta(x-x_0)$.
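The limiting procedure can be illustrated numerically (a sketch with $\hbar=m=1$ and half-width $a=1$; the bisection solver is mine, not from the answer):

```python
import math

# Ground state of a finite square well of half-width 1 (hbar = m = 1):
# psi ~ cos(kx) inside, exponential decay outside; matching at the wall
# gives k*tan(k) = kappa with k^2 + kappa^2 = 2*V0.  The ratio
# |psi(wall)/psi(0)| = cos(k) measures how much of psi survives at the wall.
def bisect(f, lo, hi, iters=200):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def psi_at_wall(V0):
    f = lambda k: k * math.tan(k) - math.sqrt(2 * V0 - k * k)
    k = bisect(f, 1e-9, math.pi / 2 - 1e-9)  # ground-state k lies below pi/2
    return math.cos(k)

for V0 in (10.0, 100.0, 1000.0):
    print(V0, round(psi_at_wall(V0), 4))  # the wall value shrinks as V0 grows
```

As $V_0$ grows, the value of $\psi$ at the wall tends to zero, which is exactly the boundary condition the infinite well imposes.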
{ "language": "en", "url": "https://physics.stackexchange.com/questions/81574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 4 }
How can a block which is not receiving the direct force have a greater acceleration? I solved it like this: $$F(\text{st max})=5\text{ N}$$ For the top block, $$\begin{align} 6\text{ N} - 5\text{ N} &= 1a \\ a &= 1\ \mathrm{m/s^2} \end{align}$$ For the lower block, the driving force will be the frictional force, so $$\begin{align} 2a &= 5\text{ N} \\ a &= \frac{5}{2} = 2.5\ \mathrm{m/s^2} \end{align}$$ I am confused as to how the lower block could have a greater acceleration than the upper block, since the force is acting on the top block.
Your calculations are wrong. The basic assumption, friction $= \mu N$ ($\mu$ = coefficient of friction, $N$ = normal force, in this case the weight of the top block), is valid only if there is relative motion between the two blocks, i.e. a case of sliding. But before we assume that sliding occurs, we should verify whether the blocks are actually moving relative to each other, i.e. check the static-friction case. The maximum value static friction can reach is $\mu N$ (the sliding/kinetic friction), but it can also be less than that. Taking that into account, assume the friction is $f$ (a variable) and that there is no relative motion between the blocks. No relative motion means both blocks have the same acceleration. Calculations: $6 - f = 1\cdot a$ (for the small block) and $f = 2a$ (for the big block). Substituting $f=2a$ into the first equation gives $6 - 2a = a$, so $6 = 3a$ and $a = 2\ \mathrm{m/s^2}$. Both blocks have the same acceleration, hence no relative motion. The value of the friction in this condition is $2 \times 2 = 4\text{ N}$, which is less than $\mu N = 5\text{ N}$, so the static assumption is consistent.
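The consistency check can be written out as a small script (using the values implied by the problem: $F_{\text{st,max}}=5\ \mathrm{N}$, a 1 kg top block, a 2 kg lower block, and a 6 N applied force):

```python
MU, G = 0.5, 10.0          # values implied by F_st,max = 5 N with a 1 kg top block
m_top, m_bottom, F = 1.0, 2.0, 6.0

# Step 1: assume the blocks move together (static friction, no sliding).
a_common = F / (m_top + m_bottom)    # common acceleration of the pair
f_needed = m_bottom * a_common       # friction that must act on the lower block

# Step 2: check the assumption against the static limit mu*N.
f_max = MU * m_top * G               # 5 N
if f_needed <= f_max:
    print("blocks move together, a =", a_common, "m/s^2, friction =", f_needed, "N")
else:
    print("assumption fails: the blocks slide relative to each other")
```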
{ "language": "en", "url": "https://physics.stackexchange.com/questions/81700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Block and inclined plane (INPhO Problem) The figure shows two blocks of mass 1 kg each on an inclined plane. The coefficient of static as well as kinetic friction is $0.6$ and the angle of inclination is $30^\circ$. Find the acceleration of the system (string and pulley are ideal). Take $g=10\ \mathrm{m/s^2}$. [This is not the real image given in the question, and they have not specified the direction of acceleration.] I attempted the question like any other block-and-inclined-plane problem, assuming block $m_2$ to be moving down, and got the answer $-5.1\ \mathrm{m/s^2}$. But the answer says that if you take the other block to be moving down, then you get an acceleration of $-0.1\ \mathrm{m/s^2}$. Since there are two negative values for the acceleration, we can conclude that the acceleration is zero. That doesn't make any sense to me. Please explain this. The answer is given in such a way that static friction has no role. Why is that?
The answer lies in how frictional forces work: "Frictional forces always try to oppose relative motion between surfaces (and, if possible, they will completely eliminate the motion between the surfaces)." We can write the equations of motion (assuming $a$ as stated in the figure) as: $m_1g-T=m_1a$ $T-m_2g\sin(\theta)-f=m_2a$ where $f$ is the frictional force directed down the incline. Solving this for $a$ we get $$a=-\frac{f}{m_1+m_2}+\frac{m_1-m_2\sin\theta}{m_1+m_2}g$$ Therefore $a$ is a linear function of $f$: $a=mf+c$. The static frictional force acting on an object is always less than or equal to the limiting frictional force $\mu N$ (which here also happens to be the kinetic frictional force). Also, the direction of the friction is unknown, which is what leads to negative values of $f$ (http://en.wikipedia.org/wiki/Friction#Dry_friction). Therefore we have $$-\mu mg \cos(\theta)\leq f \leq \mu mg \cos(\theta)$$ Using the two extreme values for $f$, we get that at one extreme the block moves up and at the other it moves down, so there must be a value of $f$ for which $a=0$, since the graph of acceleration vs. $f$ is a straight line. Put simply, the extreme values of $a$ imply that the line of $a$ vs. $f$ has an x-intercept, which in turn implies that the friction will self-adjust to give zero acceleration. (That is exactly what the frictional force wants - no relative motion.)
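The self-adjustment argument can be checked numerically (a sketch using the problem's values $m_1=m_2=1\ \mathrm{kg}$, $\mu=0.6$, $\theta=30^\circ$, $g=10\ \mathrm{m/s^2}$):

```python
import math

m1, m2, g, mu, theta = 1.0, 1.0, 10.0, 0.6, math.radians(30)

def accel(f):
    # a = [m1*g - m2*g*sin(theta) - f] / (m1 + m2), with f positive down the incline
    return (m1 * g - m2 * g * math.sin(theta) - f) / (m1 + m2)

f_limit = mu * m2 * g * math.cos(theta)      # limiting static friction, ~5.2 N
a_lo, a_hi = accel(+f_limit), accel(-f_limit)
print(round(a_lo, 2), round(a_hi, 2))        # the two extremes have opposite signs

# The a(f) line crosses zero inside the allowed friction band, so static
# friction can pick the f that makes a = 0:
f_static = m1 * g - m2 * g * math.sin(theta)  # the value of f giving a = 0
print(abs(f_static) <= f_limit)               # True
```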
{ "language": "en", "url": "https://physics.stackexchange.com/questions/81762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Have there been more distinctive names suggested for neutrino mass states $\nu_1, \nu_2, \nu_3$? The different mass states of neutrinos are generally named $\nu_1, \nu_2, \nu_3$. By comparison, the names of quark mass states (up, down, strange, and so on) or the names of mass states of charged leptons (electron, muon, tau(on)) appear more distinctive, or whimsical. Have there been perhaps any suggestions of correspondingly less generic "proper names" for each of the (three distinct) neutrino mass states?
No. The data is analyzed with $\nu_1, \nu_2,\nu_3 $, well defined by their mixing matrices in the PDG, but, before the resolution of the hierarchy, they cannot be identified firmly with the classroom poster placeholder names $\nu_L, \nu_M,\nu_H $, Lightest, Middle, Heaviest. In the normal hierarchy, the two sets identify ordinally; ultimately, after the resolution of the hierarchy, somebody will think of good names--but not as good as my linked answer's: Huey, Dewey, and Ratatouille, needless to say... The PDG poster above is a vast advance w.r.t. the older version of that poster featuring the oxymoronic weak-charged-current "eigen"states $\nu_e,\nu_\mu,\nu_\tau$, regrettably still featured in dark corners of WP, and often referred to as "lepton flavor eigenstates", an absurd and confusing name indicating lepton flavor is thereby ipso facto violated!!
{ "language": "en", "url": "https://physics.stackexchange.com/questions/81837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Intuitive understanding of the entropy equation In thermodynamics, entropy is defined as $ d S = \dfrac{\delta q_{\rm }}{T}$. This definition guarantees that heat will transfer from hot to cold, which is the second law of thermodynamics. But why do we define entropy as $\dfrac{\delta q_{\rm }}{T}$ rather than $\dfrac{\delta q_{\rm }}{T^2}$, $\dfrac{\delta q_{\rm }}{e^T}$, or something else? Is there an intuitive explanation for this $\dfrac{\delta q_{\rm }}{T}$?
The answer is plain and simple differential calculus: $$ds = \left(\frac{\partial s}{\partial e}\right)_v de + \left(\frac{\partial s}{\partial v}\right)_e dv$$ What does the differential change $$ds=\frac{\delta q}{T}$$ have to do with the first equation? For starters, an important question on everyone's mind should be: what is temperature? Is it a physical quantity that we have intuition about? In certain circumstances perhaps, but in general we have no intuition about what temperature truly represents. So what is temperature? It is simply defined by $$T=\left(\frac{\partial e}{\partial s}\right)_v$$ If you want to go start your own country and define temperature some other way, feel free to do so, but no one is going to follow you. The first equation is thus $$ds = \frac{de}{T} + \left(\frac{\partial s}{\partial v}\right)_e dv$$ So how do you get to the second equation? Two simple assumptions: no volume change occurred, in other words no physical work, and the internal energy change was strictly due to heat transfer ($dv=0$ and $de=\delta q$). Voila!
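As a sanity check of this definition, one can apply it to a case where the entropy function is known: a monatomic ideal gas (a sketch, with additive constants dropped and $k_B=1$):

```python
import sympy as sp

e, v = sp.symbols('e v', positive=True)

# Per-particle entropy of a monatomic ideal gas, up to additive constants
# (k_B = 1): s(e, v) = (3/2) ln e + ln v.
s = sp.Rational(3, 2) * sp.log(e) + sp.log(v)

# Temperature as defined in the answer: T = (de/ds)|_v = 1 / (ds/de)|_v
T = 1 / sp.diff(s, e)
print(sp.simplify(T))        # 2*e/3, i.e. e = (3/2) T, the equipartition result

# And ds = de/T at constant volume:
print(sp.simplify(sp.diff(s, e) - 1 / T))   # 0
```

So the familiar $e=\frac{3}{2}T$ falls straight out of $T=\left(\frac{\partial e}{\partial s}\right)_v$, which is some evidence that the definition picks out the quantity we intuitively call temperature.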
{ "language": "en", "url": "https://physics.stackexchange.com/questions/81891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 3, "answer_id": 2 }
Correct way to do a Thomas-Fermi approximation for cold gases I have calculated the total Gross-Pitaevskii energy for a 2D Bose-Einstein condensate in a harmonic trap, using a variational Gaussian wave function with a variational parameter $b$. Now I want to compare the variational energy to the Thomas-Fermi result. I know that the Thomas-Fermi approximation means that you neglect the total kinetic energy in comparison to the interaction energy, but I was wondering how to do it specifically in this case. Namely, I have three different possibilities in mind: 1) Just remove the kinetic energy term from the energy expression I found with the variational wave function, and keep the value of the variational parameter $b$ as it was before. 2) Remove the kinetic energy term from the energy expression I found with the variational wave function, and calculate a new value of the variational parameter $b$ for this specific case. 3) Use the Thomas-Fermi approximation in the GP equation to find a new expression for the wave function (instead of the one I used before) and use this one to calculate the energy. I can't seem to decide which of these three is the right one. Can anyone give a convincing argument as to which method I should use?
You simply take $|\psi|^2=\frac{1}{g}\left[\mu-V(x)\right]$. This is because the time-independent GPE, with the kinetic term dropped, reduces to $\mu\psi=(V+g|\psi|^2)\psi$.
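A small numerical check of this profile in a 2D harmonic trap (a sketch with $\hbar=m=1$ and hypothetical values for $\omega$, $g$, $N$; normalizing $\int n\,d^2r = N$ over the region where $n>0$ gives $\mu=\omega\sqrt{Ng/\pi}$):

```python
import math

# Thomas-Fermi density in a 2D harmonic trap (hbar = m = 1):
# n(r) = (mu - V(r)) / g for r < R, zero beyond, with V(r) = (1/2) w^2 r^2.
w, g, N = 1.0, 0.1, 1000.0
mu = w * math.sqrt(N * g / math.pi)   # from normalizing the TF profile to N

# Numerical check of the normalization by radial integration:
R = math.sqrt(2 * mu) / w             # TF radius, where the density vanishes
dr, total, r = R / 100000, 0.0, 0.0
while r < R:
    n = (mu - 0.5 * w * w * r * r) / g
    total += n * 2 * math.pi * r * dr
    r += dr
print(round(mu, 3), round(total))     # total comes back ~ N
```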
{ "language": "en", "url": "https://physics.stackexchange.com/questions/81949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Purposes of QEC stabilizers I am going through the idea of the stabilizer formalism. Having defined the Pauli group $P_n$ and its properties, we describe a stabilizer set $S$ as: $$S\subset P_n$$ The stabilizer set establishes valid codewords for a state if the equation $$s\left|\psi\right\rangle=\left|\psi\right\rangle,\;\;\;\forall s \in S \;\;\;\;\; (1)$$ is satisfied. That means $\left|\psi\right\rangle$ is a +1 eigenstate of every $s$. Each valid codeword belongs to $V$, the set of states stabilized by $S$. Therefore, if $(1)$ is satisfied, then $\left|\psi\right\rangle \in V$. Let's consider the Steane code of 7 qubits. The following are the stabilizer generators for this code: $$ K^1 = IIIXXXX $$ $$ K^2 = XIXIXIX $$ $$ K^3 = IXXIIXX $$ $$ K^4 = IIIZZZZ $$ $$ K^5 = ZIZIZIZ $$ $$ K^6 = IZZIIZZ $$ These reduce the $2^7$-dimensional Hilbert space to a two-dimensional subspace. These stabilizers generate the valid codewords for the Steane code: $$ \left|0\right\rangle_L \equiv \frac{1}{\sqrt{8}}(\left|0000000\right\rangle + \left|1010101\right\rangle + \left|0110011\right\rangle + \left|1100110\right\rangle + \left|0001111\right\rangle + \left|1011010\right\rangle + \left|0111100\right\rangle + \left|1101001\right\rangle) $$ $$ \left|1\right\rangle_L \equiv \frac{1}{\sqrt{8}}(\left|1111111\right\rangle + \left|0101010\right\rangle + \left|1001100\right\rangle + \left|0011001\right\rangle + \left|1110000\right\rangle + \left|0100101\right\rangle + \left|1000011\right\rangle + \left|0010110\right\rangle) $$ Here my doubt comes: each stabilizer is used as a "filter" of the input, so if an input, to which one or more of these stabilizers is applied, does not satisfy equation $(1)$ (i.e. $\left|\psi\right\rangle$ is a $-1$ eigenstate of $s$?), then we can say that an error occurred. Through syndrome measurement we can identify where the error occurred and correct it. Another issue: does verifying $(1)$ mean, for example, $\;K^1 \left|1010101\right\rangle = \left|1011010\right\rangle$?
Since both $\left|1010101\right\rangle$ and $\left|1011010\right\rangle$ are components of $\left|0\right\rangle_L$, do we say that $(1)$ is satisfied? Finally: $\;K^4 \left|1010101\right\rangle = ?$ Thank you. Added One last issue: the state of the system is represented by: $$\left|\psi\right\rangle_F={1\over 2}(\left|\psi\right\rangle_I+U\left|\psi\right\rangle_I)\left|0\right\rangle + {1\over 2}(\left|\psi\right\rangle_I-U\left|\psi\right\rangle_I)\left|1\right\rangle$$ We apply $K^1,K^2,K^3$ to the input and we measure the ancilla qubits to verify the integrity of the input (whether $\left|\psi\right\rangle_I$ is a +1 eigenstate of $K^1,K^2,K^3$). If equation $(1)$ is not satisfied, then the corrupted qubit is corrected with a $Z$ gate addressed by the syndrome measurement of the ancilla qubits. Is this how the system works?
1) If there is an error $E_j$, the new states $E_j|0\rangle_L$ and $E_j|1\rangle_L$ are eigenvectors, with eigenvalue $-1$, of all the stabilizers $s_j$ belonging to some subset $S_j$ of $S$ (the elements of $S_j$ anticommute with $E_j$). This subset $S_j$ identifies the error $E_j$ uniquely. 2) $|0\rangle_L$ and $|1\rangle_L$ are eigenvectors, with eigenvalue $1$, of all the stabilizers $s$ belonging to $S$ (this is not true for the "components" of $|0\rangle_L$ and $|1\rangle_L$, like, for instance, $|1010101\rangle$). For a stabilizer $s$, you just calculate $s|0\rangle_L$ and $s|1\rangle_L$, and you check that the result is $|0\rangle_L$ or $|1\rangle_L$. For instance: $K^1\left|0\right\rangle_L = (IIIXXXX) \\\frac{1}{\sqrt{8}}(\left|0000000\right\rangle + \left|1010101\right\rangle + \left|0110011\right\rangle + \left|1100110\right\rangle + \left|0001111\right\rangle + \left|1011010\right\rangle + \left|0111100\right\rangle + \left|1101001\right\rangle)= \\ \frac{1}{\sqrt{8}}(\left|0001111\right\rangle + \left|1011010\right\rangle + \left|0111100\right\rangle + \left|1101001\right\rangle + \left|0000000\right\rangle + \left|1010101\right\rangle + \left|0110011\right\rangle + \left|1100110\right\rangle)\\ =\left|0\right\rangle_L$ 3) $K^4 \left|1010101\right\rangle = IIIZZZZ |1010101\rangle$. With $Z |0\rangle = |0\rangle$ and $Z |1\rangle = -|1\rangle$, you get: $K^4 \left|1010101\right\rangle = |1010101\rangle$
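Points 2) and 3) can be verified mechanically (a sketch; states are stored as bitstring-to-amplitude maps, a representation chosen here just for brevity):

```python
# Each 7-qubit basis state is a bit-string; a superposition is a dict
# {bitstring: amplitude}.  X on qubit i flips bit i; Z multiplies by (-1)^bit.
ZERO_L = ["0000000", "1010101", "0110011", "1100110",
          "0001111", "1011010", "0111100", "1101001"]

def apply(stabilizer, state):
    out = {}
    for bits, amp in state.items():
        b, phase = list(bits), 1
        for i, op in enumerate(stabilizer):
            if op == "X":
                b[i] = "1" if b[i] == "0" else "0"
            elif op == "Z" and b[i] == "1":
                phase = -phase
        key = "".join(b)
        out[key] = out.get(key, 0) + phase * amp
    return out

zero_l = {bits: 1 for bits in ZERO_L}    # unnormalized |0>_L
print(apply("IIIXXXX", zero_l) == zero_l)   # True: K1 permutes the components
print(apply("IIIZZZZ", {"1010101": 1}))     # {'1010101': 1}: eigenvalue +1 here
print(apply("IIIXXXX", {"1010101": 1}))     # {'1011010': 1}: maps onto another component
```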
{ "language": "en", "url": "https://physics.stackexchange.com/questions/82014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does a heavy body move with the slightest force on a frictionless surface? If I apply a horizontal force to a body resting on the ground, my force will be opposed by the frictional force, and the body will accelerate once my force exceeds the force of friction $= \mu N$ ($N$ being the normal force and $\mu$ the coefficient of friction). In this case, the threshold value will be $\mu mg$, where $m$ is the mass of the resting body, since $N = mg$. Is the following statement then true: regardless of the mass/weight of the body, if the body is placed on a frictionless surface, the body will move with the slightest force?
(Classical Physics only) Any massive body has a property known as inertia, thus even a body floating in outer space requires some kind of force to be accelerated. Using Newton's second law, you find $$\tag{NII} \sum \vec{F} = \frac{\mathrm{d}}{\mathrm{d}t}\vec{p},$$ which for constant mass and one-dimensional motion simplifies to $$\tag{NII'} F = m a,$$ where $F$ is the force mentioned by the OP and $a$ is the acceleration of the center of mass of the body ($m$ is its mass). For example, suppose the body has a huge mass of $10^{10}$ kg and that you push it with a force of 1 N. This gentle (gentle is relative here, of course) force would then give the body an acceleration of $$\frac{1}{10^{10}}\ \mathrm{m/s^2} = 10^{-10}\ \mathrm{m/s^2}. $$ Integrating this, you get the velocity (as a function of time) of the gigantic body to be $$v(t) = 10^{-10}\cdot t,$$ where $t$ is time (we've taken the starting velocity to be $v(0) = 0$ m/s). Now, according to this site a garden snail can move at a speed of 0.03 mph, or in m/s $$v_{snail} = 0.0134112 \mathrm{~m/s}, $$ so in order for the huge body to move at the speed of a snail, you would have to apply a constant force of 1 N for a time period given by $$T = \frac{0.0134112}{10^{-10}}\mathrm{~s} \approx 1.3\times 10^{8}\mathrm{~s} \approx 51 \text{ months}. $$ As for the frictionless surface: since the force of gravity on earth is perpendicular to the surface, the above analysis applies in the horizontal direction (if the force of gravity were not perpendicular to the frictionless surface, the huge body would move due to the pull of gravity anyway).
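The arithmetic in this answer packs into a few lines (same numbers as above):

```python
# Force of 1 N on a 10^10 kg body: how long until it reaches snail speed?
F, m = 1.0, 1e10          # N, kg
a = F / m                 # 1e-10 m/s^2
v_snail = 0.0134112       # m/s (0.03 mph)
T = v_snail / a           # seconds of constant pushing needed
months = T / (3600 * 24 * 30)
print(round(months, 1))   # ~51 months, matching the estimate above
```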
{ "language": "en", "url": "https://physics.stackexchange.com/questions/82071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Simple QM question about the $S_y$ matrix Given a spin-1/2 particle in state $|\alpha\rangle=\begin{bmatrix}a \\b\end{bmatrix}$, what is the probability of it being measured in the $S_{y+}$ state? Is this equivalent to asking: if $S_y$ is measured on this particle, what is the probability of the result being $\hbar/2$? I think it's supposed to be $|\langle S_{y+}|\alpha\rangle|^2$, right? And $|S_{y\pm}\rangle=\frac{1}{\sqrt{2}} \begin{bmatrix}1 \\\pm i\end{bmatrix}$ But then I get $a^2/2 +b^2/2$ for both. That doesn't make sense. What am I doing wrong?
Figured out what I was doing wrong. I had to write it out explicitly, treating $a$, $b$ as complex. Since $\langle S_{y+}| = \frac{1}{\sqrt{2}}\begin{bmatrix}1 & -i\end{bmatrix}$, the answer is $$\frac{1}{2}\left(|a|^2 + |b|^2 + i(ab^* - a^*b)\right)$$ Does anyone know a more succinct way of writing that last term? It's definitely real, so no worries about that. (In fact the whole expression simplifies to $\frac{1}{2}(|a|^2+|b|^2) + \operatorname{Im}(a^*b)$.)
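A numerical check (a sketch; with the question's convention $|S_{y+}\rangle=\frac{1}{\sqrt{2}}(1, i)^T$, the bra carries $-i$, and the probability comes out as $\frac{1}{2}(|a|^2+|b|^2)+\operatorname{Im}(a^*b)$):

```python
import random

def prob_sy_plus(a, b):
    # |<S_y+|alpha>|^2 with |S_y+> = (1, i)/sqrt(2), so <S_y+| = (1, -i)/sqrt(2)
    amp = (a - 1j * b) / 2**0.5
    return abs(amp) ** 2

def formula(a, b):
    # closed form: (1/2)(|a|^2 + |b|^2) + Im(a* b)
    return 0.5 * (abs(a)**2 + abs(b)**2) + (a.conjugate() * b).imag

random.seed(0)
for _ in range(5):
    a = complex(random.random(), random.random())
    b = complex(random.random(), random.random())
    assert abs(prob_sy_plus(a, b) - formula(a, b)) < 1e-12

# sanity check: |alpha> = |S_y+> itself gives probability 1
print(round(prob_sy_plus(1 / 2**0.5, 1j / 2**0.5), 12))  # 1.0
```

For real $a$, $b$ the imaginary part vanishes, which is exactly why the naive calculation gives $\frac{1}{2}$ for both outcomes.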
{ "language": "en", "url": "https://physics.stackexchange.com/questions/82106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does light have an unending journey? When we shine a torch in a room, its light travels to the back of the room. What happens to the light of a star? I don't suppose we can say it continues to travel to the back of the universe, as the universe has no back. Does the light continue its journey forever? Or does it somehow end up in black holes, which might work as light sinks?
Light beams are electromagnetic radiation; they stop when they meet some material that can absorb them. In space they generally continue to propagate as long as nothing absorbs them.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/82296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Practical example of stabilizer codes Given the Steane code $$ \left|0\right\rangle_L \equiv \frac{1}{\sqrt{8}}(\left|0000000\right\rangle + \left|1010101\right\rangle + \left|0110011\right\rangle + \left|1100110\right\rangle + \left|0001111\right\rangle + \left|1011010\right\rangle + \left|0111100\right\rangle + \left|1101001\right\rangle) $$ $$ \left|1\right\rangle_L \equiv \frac{1}{\sqrt{8}}(\left|1111111\right\rangle + \left|0101010\right\rangle + \left|1001100\right\rangle + \left|0011001\right\rangle + \left|1110000\right\rangle + \left|0100101\right\rangle + \left|1000011\right\rangle + \left|0010110\right\rangle) $$ and its stabilizers: $$ K^1 = IIIXXXX $$ $$ K^2 = XIXIXIX $$ $$ K^3 = IXXIIXX $$ $$ K^4 = IIIZZZZ $$ $$ K^5 = ZIZIZIZ $$ $$ K^6 = IZZIIZZ $$ The stabilizer set establishes valid codewords for a state if the equation $$s\left|\psi\right\rangle=\left|\psi\right\rangle,\;\;\;\forall s \in S \;\;\;\;\; (1)$$ is satisfied. That means $\left|\psi\right\rangle$ is a +1 eigenstate of every $s$. We then consider a practical example of the usage of these stabilizers. The state of the system is represented by: $$\left|\psi\right\rangle_F={1\over 2}(\left|\psi\right\rangle_I+U\left|\psi\right\rangle_I)\left|0\right\rangle + {1\over 2}(\left|\psi\right\rangle_I-U\left|\psi\right\rangle_I)\left|1\right\rangle$$ where $U \in \left\lbrace K^1,K^2,K^3\right\rbrace$. We apply $U$ to the input state and we measure the ancilla qubits (syndrome measurement) to verify the integrity of the input (whether $\left|\psi\right\rangle_I$ is a +1 eigenstate of $K^1,K^2,K^3$). If equation $(1)$ is not satisfied, then the corrupted qubit is corrected with a $Z$ gate addressed by the syndrome measurement. Is this how the system works?
It is correct. We may summarize all the operations: 1) Encoding one logical qubit as $n$ physical qubits (a codeword), $\alpha|0\rangle + \beta |1\rangle \to \alpha|0\rangle_L + \beta |1\rangle_L$; here $n = 7$ for the Steane code. 2) Preparing $m$ ancilla qubits, here $m = 3$, in your schema allowing the detection of phase-flip errors $Z_i$. 3) During transmission of the codeword, there is exposure to a noisy environment, and the codeword may suffer errors (bit-flip $X_i$, bit-phase flip $Y_i$, phase-flip $Z_i$). In your schema, we are only interested in phase-flip errors $Z_i$. 4) With the help of the generators, we compute the syndrome and store it in the ancilla qubits without altering the $n$-qubit world. 5) Use the information provided by the error syndrome to locate the error on any one of the $n$ qubits of the codeword. In your case, the generators $K_1, K_2, K_3$ are the only ones needed to check any error $Z_i$, and each syndrome $xyz$ identifies precisely the qubit $i$ corresponding to the error $Z_i$. Here the syndromes for the $Z_i$ are $010,001,011,100,110,101,111$. 6) Correct the error. Ref: Marinescu/Marinescu, Classical and Quantum Information, Elsevier, p. $462$, p. $509$
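Step 5) can be made concrete: the syndrome of $Z_i$ against $K^1,K^2,K^3$ is just the column of X's at position $i$, since $Z_i$ anticommutes with a generator exactly where that generator has an X (a sketch, not from the cited reference):

```python
# Syndrome of a phase-flip Z_i against the X-type generators K1, K2, K3:
# the syndrome bit for each generator is "does it have an X at position i".
K1, K2, K3 = "IIIXXXX", "XIXIXIX", "IXXIIXX"

syndromes = {}
for i in range(7):                        # error Z_{i+1}
    s = "".join("1" if K[i] == "X" else "0" for K in (K1, K2, K3))
    syndromes[f"Z{i+1}"] = s
print(syndromes)

# Each error gets a distinct, nonzero syndrome, so it can be located uniquely.
assert len(set(syndromes.values())) == 7 and "000" not in syndromes.values()
```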
{ "language": "en", "url": "https://physics.stackexchange.com/questions/82507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
momentum conservation and gluons The process is the following: $$e^-e^+ \rightarrow \text{photon} \rightarrow \text{quark} + \text{antiquark}$$ Regarding the momentum conservation law, how come we have a photon of spin 1 and at the end some meson with spin 0? Are gluons "behind this"? If this is correct, at which point are they radiated? From the quarks, or somewhere else? Is this photon a virtual photon or not? I'm a bit confused here.
Ignore the gluon for the moment. Regarding the momentum conservation law, how come we have a photon of spin 1 and at the end some meson with spin 0? First of all, spin is angular momentum, not momentum. Secondly, the two quarks each have spin 1/2, which will add to either 0 or 1, and 1 conserves the angular momentum at the vertex. All intermediate lines in Feynman diagrams describe a virtual particle; in this case the line describes a particle with the quantum numbers of a photon, but not its mass. It is off mass shell and by definition virtual. Are gluons "behind this"? No. Quarks couple to electromagnetic fields because they are charged: the vertex is electromagnetic. Quarks couple to gluons because they have color charge, and in fact no free quarks exist because the attraction created by the virtual gluons ties quarks up into mesons and baryons. Gluon radiation can happen at a later stage, with a radiated gluon ending in another quark-antiquark line to make maybe a rho meson (spin one; this used to be called vector meson dominance).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/82659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Drag Force per point in the body Let us consider the common equation for drag force for any body. $F_D = \frac{1}{2}\rho v^2C_dA$ Here $A$ is the representative area, which is the so-called cross-sectional area of the body for most shapes under conditions of stable velocity (that is, the angle of attack, velocity and viscosity of the medium are not too large). Now my question is about the distribution of these forces. (1) Given an extended body, how will the drag force be distributed across the points on the surface of the body? That is, given say a sphere, how is this force distributed throughout the surface of the body? Say there are 60 points uniformly distributed on the sphere. The entire sphere is moving forward with a velocity of $v$, so each point on the sphere has a velocity of $v$. In that case, will the drag force at each point be equal to $F_D$? (2) Next, if we consider an extended body where the velocities at each point are not the same, then can the same equation be applied to calculate the drag force at each point? Because my view of the drag force is kind of like a "whole body thing", and this contradicts that notion. I will post more clarification if required. EDIT(1): Posted more clarity on the questions.
Each point on the surface will have a pressure (force normal to surface) and drag (force tangential to surface). By integrating over the entire surface you get the overall effect which is sometimes expressed in force/moments as $$ F_D = \frac{1}{2} \rho v_{body}^2 A_{body} C_D \\ M_D = \frac{1}{2} \rho v_{body}^2 \ell A_{body} C_M $$ The $v_{body}$ used is just a convenient scaling factor to get things in the right units. The same with $A_{body}$. You will not use these equations to get the forces on a small area of a body. They work only for the entire body. To get into the details of the forces in each infinitesimal surface area patch ${\rm d}A$ you will need to solve the fluid dynamics equations for continuity and momentum which will give you the velocity vector and pressure at each location.
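To make the "entire body" point concrete, here is a minimal numeric sketch of the integrated drag force (the sphere size, speed, air density and $C_D$ below are illustrative assumptions, not values from the question):

```python
import math

rho = 1.225    # air density at sea level, kg/m^3 (assumed)
v   = 10.0     # body speed, m/s (assumed)
Cd  = 0.47     # typical subcritical drag coefficient of a smooth sphere (assumed)
r   = 0.1      # sphere radius, m (assumed)

A  = math.pi * r**2                 # reference (cross-sectional) area of the body
Fd = 0.5 * rho * v**2 * Cd * A      # total drag on the whole body, N
print(round(Fd, 3))                 # 0.904
```

This $F_D$ is a single number for the whole body; recovering the force on each surface patch ${\rm d}A$ requires solving the flow field, as described above.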
{ "language": "en", "url": "https://physics.stackexchange.com/questions/82714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are we living in the $q$ part of the phase space? In Hamiltonian mechanics and quantum mechanics, $p$ and $q$ are almost symmetric. But in the real world, the $p$ space isn't as intuitive as the $q$ space. For example, we can uniquely identify a person by their position, but not by their momentum. Two fermions can easily have the same momentum while they cannot hold the same position. Are there particles that cannot have the same momentum while they can hold the same position? What causes the breaking of the symmetry between $p$ and $q$?
We are not living in the $q$ part of phase space: we indeed live in the full phase space, since we're definitely not fixed-momentum objects. However, we give more importance to our position than to our velocity/momentum; then the question gets out of physics and into psychology. In my opinion, part of it may be because of the way we gather knowledge: we fix it on fixed-$q$ supports (books, paper, ...), and we like to stop things in order to study them (e.g. drawing the curve of a motion).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/82778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Book Recommendation: Classical Relativistic Fields My bare bookshelves are crying out for the addition of a new family member, more specifically a book: (1) discussing the classical Klein-Gordon field, spinor fields, gauge fields and all other matter fields in a generally covariant fashion; (2) discussing the Schrödinger (non-relativistic scalar) field; (3) detailing the application of fields to things like inflation, dark matter, condensed matter etc.; (4) possessing nice, thorough derivations (like the single-particle, relativistic Lagrangians from the complex scalar field) and other such items of interest which show how single-particle mechanics follows from classical fields (discussion of conformal symmetries and of first-class and second-class constraints is also desired); (5) with some discussion of field quantization. I have possession of some papers covering these topics and some books (like Landau's Classical Theory of Fields), but they are outdated, restricted to EM fields and very often bypass all discussion of classical fields to quantize them right away. Since I asked for a generally covariant approach, there should be extensive coordinate-free representations.
Some of the topics you mentioned seem to be discussed in this book: Mark Burgess, Classical Covariant Fields, Cambridge University Press, 2005 http://www.cambridge.org/us/academic/subjects/physics/theoretical-physics-and-mathematical-physics/classical-covariant-fields#contentsTabAnchor
{ "language": "en", "url": "https://physics.stackexchange.com/questions/82859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does this perpetuum mobile not work? (Gases and Densities) I recently came up with the following concept. It is very simple, and may have been thought of before. A picture says more than a thousand words, so here it is explained in a picture: Note that water was used to make the example easier to understand. Another gas (denser than Gas A and B) could be used instead, resulting in less friction than when using water. At first glance, it seems that this machine could run for an indefinite amount of time. But I do not deem it possible to break the law of energy conservation. However, I have a hard time finding out what kind of force would cause this machine to slow down and stop.
In addition to Bernhard's answer: just because three gases (Gas A, B and air, which is itself a mixture of nitrogen, oxygen, and other gases) have different densities, it does not mean they will remain separated when in a container. In fact, as the entropy of the system increases over time, Gas A, Gas B and the air will form an essentially uniform mixture.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/82934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 0 }
Explain reflection laws at the atomic level The "equal angles" law of refection on a flat mirror is a macroscopic phenomenon. To put it in anthropomorphic terms, how do individual photons know the orientation of the mirror so as to bounce off in the correct direction?
There are a few ways of approaching this. Visible light is about 500 nm, while typical atomic diameters are on the order of 0.5 nm, to be generous (a quoted value for carbon is 0.2 nm). So from this point of view the rough properties of the surface can't be resolved. However, each individual atom will absorb and re-radiate depending on the electrons that surround it, and the energy levels that the electrons occupy are heavily dependent on the material (its band structure). For example, glass lets through a lot of visible light because there aren't any available energy levels for electrons to go into when they absorb visible light, but it may block UV light because those energy levels are available. Furthermore, as we get to X-rays the wavelength is short enough that individual atoms can be resolved. Because photons from neighbouring atoms are significantly out of phase, they interfere, and instead of a nice specular reflection like that from a mirror, you get strong diffraction minima and maxima. This is the basis of X-ray crystallography.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/83105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 4, "answer_id": 2 }
What is the relationship between Maxwell–Boltzmann statistics and the grand canonical ensemble? In the grand canonical ensemble one derives the expectation value $\langle \hat n_r\rangle^{\pm}$ for fermions and bosons of sort $r$: $$ \langle \hat n_r\rangle^{\pm} \ \propto \ \frac{1}{\mathrm{exp}[(\varepsilon_r-\mu)/k_B T] \mp 1} . $$ For $(\varepsilon_r-\mu) / k_B T\gg 0$, we find $$ \langle \hat n_r\rangle^{\pm} \ \approx \ \frac{1}{\mathrm{exp}[(\varepsilon_r-\mu)/k_B T]} \ \propto \mathrm{exp}[-(\varepsilon_r-\mu)/k_B T].$$ The same motivation seems to be found in this Wikipedia article. However, on the same page, right at the beginning, that intuitive statement is made: In statistical mechanics, Maxwell–Boltzmann statistics describes the average distribution of non-interacting material particles over various energy states in thermal equilibrium, and is applicable when the temperature is high enough or the particle density is low enough to render quantum effects negligible. Now from my derivation above, it seems that "temperature is high enough" does the opposite of helping $(\varepsilon_r-\mu) / k_B T\gg 0$ to be fulfilled. What is going on?
The introductory paragraph you quote with horror says temperature ''high enough'' to avoid quantum effects. (It did not say anything like ''arbitrarily large''.) If the temperature is too low, things like Bose--Einstein condensation can occur, which invalidate Maxwell--Boltzmann statistics. The temperature should be high enough that a quantum effect is unlikely, but not so high that pair production occurs (yet another quantum effect). These conditions have nothing to do with your analysis of the validity of dropping the plus or minus one in the denominator, which is yet another condition for the validity of the Maxwell--Boltzmann statistics. Another condition is that the interaction between the particles should be weak: these are all independent conditions. Boltzmann's constant is rather small by macroscopic standards, $1.3806488 \times 10^{-23} \,\mathrm{m^2\,kg\,s^{-2}\,K^{-1}}$, so you can see that $T$ would have to be enormous before it would make the quantity you are worried about, $(\epsilon-\mu)/k_B T$, much less than 22.
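A small numerical illustration of the point: what matters for dropping the $\mp 1$ is the size of $x=(\epsilon-\mu)/k_B T$, and the quantum corrections to the Maxwell--Boltzmann occupancy die off like $e^{-x}$ (the values of $x$ below are illustrative):

```python
import math

def occupancy(x, eta):
    """Mean occupation for x = (eps - mu)/(kB*T).
    eta = +1: Fermi-Dirac, eta = -1: Bose-Einstein, eta = 0: Maxwell-Boltzmann."""
    return 1.0 / (math.exp(x) + eta)

for x in (0.1, 1.0, 10.0):
    mb, fd, be = occupancy(x, 0), occupancy(x, +1), occupancy(x, -1)
    # relative deviation of the quantum distributions from Maxwell-Boltzmann
    print(x, abs(fd - mb) / mb, abs(be - mb) / mb)
```

At $x = 10$ both quantum occupancies agree with the classical one to a few parts in $10^5$, while at $x = 0.1$ they differ wildly; the classical limit is controlled by this ratio, not by the temperature alone.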
{ "language": "en", "url": "https://physics.stackexchange.com/questions/83173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 2 }
Does antimatter curve spacetime in the opposite direction as matter? According to the Dirac equation, antimatter is the negative energy solution to the following relation: $$E^2 = p^2 c^2 + m^2 c^4.$$ And according to general relativity, the Einstein tensor (which roughly represents the curvature of spacetime) is linearly dependent on (and I assume would then have the same mathematical sign as) the stress-energy tensor: $$G_{\mu \nu} = \frac{8 \pi G}{c^4}T_{\mu \nu}.$$ For antimatter, the sign of the stress-energy tensor would change, as the sign of the energy changes. Would this change the sign of the Einstein tensor, causing spacetime to be curved in the opposite direction as it would be curved if normal matter with positive energy were in its place? Or does adding in the cosmological constant change things here?
The sign of the stress-energy tensor does not change for antimatter. There are various energy conditions (ANEC, WEC, etc.) that stipulate various bounds on the stress-energy tensor, but the only things that violate them are small-scale quantum effects such as the Casimir force, the scalar inflaton field, and dark energy (which we don't yet know what it is, but it could be, for example, the cosmological constant). The ALPHA experiment demonstrates that antimatter (in this case, antihydrogen) behaves the same as matter in a gravitational field: http://www.nature.com/ncomms/journal/v4/n4/full/ncomms2787.html
{ "language": "en", "url": "https://physics.stackexchange.com/questions/83307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 10, "answer_id": 4 }
How does the Boltzmann distribution interact with entropy? In an ideal gas, the Boltzmann distribution predicts a distribution of particle energies $E_i$ proportional to $ge^{-E_i/k_bT}$. But doesn't entropy dictate that the system will always progress towards a state of maximum disorder? In other words, the system evolves towards a macro-state which contains the maximum possible number of indistinguishable micro-states. This happens when all particles have the same energy, which seems to contradict the Boltzmann distribution. I'm pretty sure I've misinterpreted entropy here, but I'd be pleased if someone could explain how!
What you say is not true. The macroscopic variables of a system will evolve towards, and fluctuate around, equilibrium values which maximize/minimize the thermodynamic potential corresponding to the constraint imposed on your system. For an isolated system, this corresponds to maximum-entropy states, while for a system in contact with a thermostat it corresponds to minimizing the free energy. It is easy to see from the Boltzmann distribution you wrote: $p(E) = \frac{ge^{-\beta E}}{Q}$ where $g$ can be written as $g=e^{S(E)/k_B}$. This gives in the end $p(E) = \frac{e^{-\beta( E - T S(E))}}{Q}$ and hence the most probable energy is the one that minimizes the free energy. For a finite system, the energy per particle fluctuates around this most probable value until, eventually, in the thermodynamic limit, the magnitude of these fluctuations vanishes.
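As a toy illustration of "the most probable energy minimizes the free energy": take the assumed ideal-gas-like entropy $S(E) = \frac{3}{2} N k_B \ln E$, for which the minimum of $F(E) = E - TS(E)$ sits at $E^* = \frac{3}{2} N k_B T$. A quick sketch:

```python
import math

N, kB, T = 100, 1.0, 1.0           # illustrative values, in units where kB = 1

def S(E):                           # toy ideal-gas entropy (assumed form)
    return 1.5 * N * kB * math.log(E)

def F(E):                           # free energy F(E) = E - T S(E); p(E) ~ exp(-F/(kB T))
    return E - T * S(E)

# brute-force minimisation of F over a grid of energies
Es = [0.01 * i for i in range(1, 40001)]
E_star = min(Es, key=F)
print(E_star)                       # close to (3/2) N kB T = 150
```

The grid minimum lands at $E^* \approx 150 = \frac{3}{2} N k_B T$, exactly the equipartition value, even though states of both higher and lower energy exist.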
{ "language": "en", "url": "https://physics.stackexchange.com/questions/83386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Does time expand with space? (or contract) Einstein's big revelation was that time and space are inseparable components of the same fabric. Physical observation tells us that distant galaxies are moving away from us at an accelerated rate, and because of the difficulty (impossibility?) of defining a coordinate system where things have well defined coordinates while also moving away from each other without changing the metric on the space, we interpret this to mean that space itself is expanding. Because space and time are so directly intertwined is it possible that time too is expanding? Or perhaps it could be contracting?
The simple answer is that no, time is not expanding or contracting. The complicated answer is that when we're describing the universe we start with the assumption that time isn't expanding or contracting. That is, we choose our coordinate system to make the time dimension non-changing. You don't say whether you're at school or college or whatever, but I'm guessing you've heard of Pythagoras' theorem for calculating the distance, $s$, between two points $(0, 0, 0)$ and $(x, y, z)$: $$ s^2 = x^2 + y^2 + z^2 $$ Well, in special relativity we have to include time in the equation to get a spacetime distance: $$ ds^2 = -dt^2 + dx^2 + dy^2 + dz^2 $$ and in general relativity the equation becomes even more complicated because we have to multiply the $dt^2$, $dx^2$, etc. by factors determined by a quantity called the metric, usually denoted by $g$: $$ ds^2 = g_{00}dt^2 + g_{11}dx^2 + g_{22}dy^2 + \dots $$ where the omitted terms can include cross terms like $g_{01}dtdx$, so it can all get very hairy. To be able to do the calculations we normally look for ways to simplify the expression, and in the particular case of the expanding universe we assume that the equation has the form: $$ ds^2 = -dt^2 + a(t)^2 d\Sigma^2 $$ where $d\Sigma$ includes all the spatial terms. The function $a(t)$ is a scale factor, i.e. it scales up or down the contribution from $dx$, $dy$ and $dz$, and it's a function of time, so the scale factor changes with time. And this is where we get the expanding universe. It's because when you solve the Einstein equations for a homogeneous isotropic universe you can calculate $a(t)$, and you find it increases with time, and that's what we mean by the expansion. However the $dt$ term is not scaled, so time is not expanding (or contracting).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/83531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 4, "answer_id": 0 }
Does the moon affect the Earth's climate? So, this morning I was talking to a friend about astronomical observations, and he told me that lately there has only been good weather when there was a full moon in the sky, which was a shame. I jokingly said: 'maybe there's a correlation!', but then I started thinking: wait, if the moon can affect the oceans, why shouldn't it also make an impact on the atmosphere, which is just another fluid. So... are there atmospheric tides? Does the moon affect the weather or the climate in a significant way?
I think the moon and sun do affect the weather more than we are led to believe. The moon is getting farther away and the sun is getting larger, and both of these must have an impact on Earth. The Sun's solar flares and coronal mass ejections also play a part. I believe small changes in the Moon's orbit and distance and the Sun's swelling have a bigger impact on the weather than anything man can achieve. Scientists are getting too hung up on the man-made climate change idea and need to broaden their thinking and not be afraid to speak out.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/83574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 5, "answer_id": 3 }
A simple conjecture on the Chern number of a 2-level Hamiltonian $H(\mathbf{k})$? For example, let's consider a quadratic fermionic Hamiltonian on a 2D lattice with translation symmetry, and assume that the Fourier transformed Hamiltonian is described by a $2\times2$ Hermitian matrix $H(\mathbf{k})=a(\mathbf{k})\sigma_x+b(\mathbf{k})\sigma_y+c(\mathbf{k})\sigma_z $ and has a finite energy gap, then the Chern number $N$ can be determined. If $H(-\mathbf{k})=H(\mathbf{k})$ holds for all $\mathbf{k}\in BZ$, then the Chern number $N$ is always an even number, am I right? This seems to be true from the geometrical interpretation of Chern number as a winding number covering a unit sphere, but I have not yet found a rigorous mathematical proof. Remark: The necessary condition finite energy gap ($\Leftrightarrow$ The map $(a(\mathbf{k}),b(\mathbf{k}),c(\mathbf{k}))/\sqrt{a(\mathbf{k})^2+b(\mathbf{k})^2+c(\mathbf{k})^2}$ from BZ(2D torus) to the unit sphere is well defined) is to ensure that the Chern number/winding number is well defined.
I agree with your argument, but I thought I would just rephrase the same logic in a slightly different way, similar to how one would prove it in an algebraic topology course. (I would have done this as a comment, but it's a bit too big for that.) Basically the Chern number measures the topology of the map $\newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{ \ \ #1\ \ }\phantom{}\kern-1.5ex}T^2 \;\ra{n} \; \;S^2$, where $n(\mathbf k) = (a(\mathbf k),b(\mathbf k),c(\mathbf k))$. More exactly the map $n$ induces a map of the homology groups $\newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{ \ \ #1\ \ }\phantom{}\kern-1.5ex}H_2(T^2) \; \;\ra{n} \; \;H_2(S^2)$ and the Chern number is given by $n ([T^2]) = C_1 *[S^2]$, where $[X]$ is the generator of the group $H_2(X)$. Now because of the property of $n$, we can write down the following commutative diagram: $\newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{ \ \ #1\ \ }\phantom{}\kern-1.5ex} \begin{array}{ccccc} T^2 && \ra{\qquad\quad n \qquad\quad} & &S^2 \\ &\searrow_g&&\nearrow_h& \\ && (T^2/\sim) \;\simeq S^2& \end{array}$ where $g$ is the map that identifies the points $\mathbf k$ and $-\mathbf k$ on the torus. As you point out, this quotient space is homotopically equivalent to $S^2$. The above also implies a commutative diagram for the homology groups, such that $n([T^2]) = h \circ g ([T^2])$, but it is clear that $g ([T^2]) = 2 [S^2]$. (One can justify that rigorously using the fact that every point in $S^2$ has two pre-images in $T^2$.) Hence we have proven that $C_1 [S^2] = 2 \; h([S^2])$, i.e. $C_1$ is even.
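The evenness can also be checked numerically. The sketch below (assuming numpy is available) uses the illustrative even model $n(\mathbf k) = (\cos k_x, \cos k_y, \sin k_x \sin k_y)$, which satisfies $H(-\mathbf k) = H(\mathbf k)$ and is gapped because $|n|^2 = 1 + \cos^2 k_x \cos^2 k_y \geq 1$, and evaluates the Chern number with the standard Fukui-Hatsugai-Suzuki plaquette method; the result comes out at $\pm 2$, even as claimed:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(kx, ky):
    # illustrative even model: H(-k) = H(k), gapped since |n|^2 >= 1 everywhere
    return np.cos(kx)*sx + np.cos(ky)*sy + np.sin(kx)*np.sin(ky)*sz

N = 60
ks = 2*np.pi*np.arange(N)/N
u = np.empty((N, N, 2), dtype=complex)
for i in range(N):
    for j in range(N):
        u[i, j] = np.linalg.eigh(H(ks[i], ks[j]))[1][:, 0]   # lower-band eigenvector

def link(i, j, di, dj):
    # U(1) link variable between neighbouring grid points
    return np.vdot(u[i, j], u[(i+di) % N, (j+dj) % N])

flux = 0.0
for i in range(N):
    for j in range(N):
        # gauge-invariant plaquette phase around each small square
        plaq = (link(i, j, 1, 0) * link((i+1) % N, j, 0, 1)
                / (link(i, (j+1) % N, 1, 0) * link(i, j, 0, 1)))
        flux += np.angle(plaq)
chern = flux / (2*np.pi)
print(chern)
```

The lattice method returns an exact integer for a gapped band once the grid resolves the Berry curvature, and here it is $|C_1| = 2$, consistent with the degree-2 factoring through $T^2/\sim$.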
{ "language": "en", "url": "https://physics.stackexchange.com/questions/83650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Why does $[xp_{y},x]$ commute? I'm looking at a solution in my book that says $[xp_{y},x]$ commutes. Does bracket notation imply $[A,B]=AB-BA$, so that $[xp_{y},x]=xp_{y}x-xxp_{y}$? Taking the comment from Max Graves and solving a slightly different commutation relation: \begin{align} -[yp_{x},x]f &= yi\hbar\frac{\partial}{\partial x}(xf)-xyi\hbar\frac{\partial f}{\partial x} \\ &=i\hbar y \bigg( \Big(x\frac{\partial f}{\partial x} +\frac{\partial x}{\partial x}f\Big)-x\frac{\partial f}{\partial x} \bigg) \notag\\ &=yi\hbar \bigg( x\frac{\partial f}{\partial x}+ f-x\frac{\partial f}{\partial x} \bigg) \notag\\ &=yi\hbar f\ \Rightarrow\ -[yp_{x},x] = yi\hbar \end{align} Does this look correct? Do the first and last terms cancel even though the order is not exactly the same?
You need not even bother with a test function here. This problem is so easy you can work it all out just using the properties of the commutator. $$[xp_y,x]=x[p_y,x]+[x,x]p_y$$ Now $[p_y,x]$ vanishes because of the fundamental commutation relation between $p_i$ and $x_j$, which is $$[p_i, x_j]= -i\hbar \delta_{ij}$$ On the other hand, $[x,x]=0$ because anything commutes with itself.
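If one does want a concrete check, both commutators can be verified numerically by acting on an arbitrary smooth test function and approximating $p = -i\hbar\,\partial$ with central differences (a sketch with $\hbar = 1$; the test function and evaluation point are arbitrary choices):

```python
import math

hbar, h = 1.0, 1e-5                     # units with hbar = 1; finite-difference step

def f(x, y):                            # arbitrary smooth test function
    return math.sin(1.3 * x) * math.exp(0.7 * y)

def d_dx(g, x, y):
    return (g(x + h, y) - g(x - h, y)) / (2 * h)

def d_dy(g, x, y):
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

xf = lambda x, y: x * f(x, y)           # the multiplication operator x applied to f
x0, y0 = 0.8, -0.4

# [x p_y, x] f = x p_y(x f) - x x p_y f  -> should vanish
c1 = x0 * (-1j * hbar) * d_dy(xf, x0, y0) - x0 * x0 * (-1j * hbar) * d_dy(f, x0, y0)

# [y p_x, x] f = y p_x(x f) - x y p_x f  -> should equal  -i*hbar*y*f
c2 = y0 * (-1j * hbar) * d_dx(xf, x0, y0) - x0 * y0 * (-1j * hbar) * d_dx(f, x0, y0)
```

Here `c1` is zero to machine precision, while `c2` reproduces $-i\hbar\, y\, f(x_0,y_0)$, i.e. $[yp_x, x] = -i\hbar y$ (equivalently $-[yp_x, x] = i\hbar y$, as derived in the question).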
{ "language": "en", "url": "https://physics.stackexchange.com/questions/83754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is Planck mass much larger than the smallest mass that we actually know about? The three fundamental constants $h$, $c$ and $G$ are manipulated and rearranged in different ways to get the Planck time, Planck mass etc. The Planck time is said to be the smallest time possible and the Planck length the smallest length (if I'm not mistaken). But why doesn't the Planck mass fit this list?
These things don't have to be 'smallest' or 'largest'. They are simply (what especially high-energy physicists would agree to be) the most natural units in which to carry out calculations when doing fundamental research. The crux is realizing that things like a 'second', a 'meter' or a 'kilogram' are purely invented because they are convenient in everyday-life situations for humans. This nice convention is, however, ridiculous when you're working with very tiny or very large things. Therefore, the question naturally arises: "What can we use as units to measure physical quantities, independent of our (essentially) arbitrary vantage point as humans?" The answer is: use the units that you find to be unity when you set all fundamental natural constants to unity. Thus, the prescription for finding natural units is: set all fundamental constants to 1, and rearrange them in different ways to get all kinds of derived units. This does not say anything about whether they are the smallest, largest, or whatever-est quantity. EDIT: the link in Qmechanic's comment has a nice explanation by Ron Maimon on the particular case of the Planck mass.
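For concreteness, the "rearranging" is plain dimensional analysis on $\hbar$, $c$ and $G$; a quick sketch with CODATA-style values (using $\hbar$ rather than $h$ in the usual combinations):

```python
import math

hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2

m_planck = math.sqrt(hbar * c / G)       # ~ 2.18e-8  kg
l_planck = math.sqrt(hbar * G / c**3)    # ~ 1.62e-35 m
t_planck = math.sqrt(hbar * G / c**5)    # ~ 5.39e-44 s
print(m_planck, l_planck, t_planck)
```

Nothing in the construction marks any of these as a smallest or largest possible value; they are just the unique combinations with the right dimensions once $\hbar = c = G = 1$, and the Planck mass happens to land at a macroscopically unremarkable $\sim 2\times10^{-8}$ kg.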
{ "language": "en", "url": "https://physics.stackexchange.com/questions/86052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Understanding the Eötvös experiment The aim of the Eötvös experiment was to "prove" that for every (massive) particle, the quotient $\frac{m_g}{m_i}$ is constant, where $m_g$ is the gravitational mass and $m_i$ is the inertial mass. The experiment: Consider two objects with coordinates $x(t)$, $y(t)$ and with masses $M_i$, $M_g$, $m_i$, $m_g$ (on the earth, where the gravitational field $\mathbf g$ can be considered constant), connected by a rod of length $r$ and suspended in a horizontal orientation by a fine wire. Newton's second law says that $$\ddot {\mathbf{x}}(t)=-\frac{M_g}{M_i}\mathbf g$$ $$\ddot {\mathbf{y}}(t)=-\frac{m_g}{m_i}\mathbf g$$ so if we find experimentally that the quantity $\eta:=\frac{2|\ddot {\mathbf x}(t)-\ddot{\mathbf y}(t)|} {|\ddot {\mathbf x}(t)+\ddot{\mathbf y}(t)|}$ is very small, then we can conclude that $\frac{M_g}{M_i}=\frac{m_g}{m_i}$ and so we are done. Now textbooks say that if $\ddot {\mathbf{x}}(t)\neq\ddot {\mathbf{y}}(t)$ then we will have a torque $$N=\eta\, r(\mathbf g\times \mathbf{e_2})\cdot \mathbf{e_1}$$ and by measuring this torque we can give an estimate of $\eta$. My problem: Even if the two accelerations are different, I don't understand where the torque is; in my opinion, the point ${\mathbf{x}}(t)$ will move down and ${\mathbf{y}}(t)$ will move up. The angular momentum is along $\mathbf{e_3}$, so the rotation is in the plane $\left<\mathbf{e_1},\mathbf{e_2}\right>$.
The torque about $\bf{e}_3$ is zero when the masses' weights are balanced on the rod. For example, the distance from $M$ to the support must be equal to $c = \frac{m_g}{M_g+m_g} r$ There is a torque about $\bf{e}_2$ if the centrifugal forces are not proportional to the weights: $$ \tau_2 = M_i c \ddot{x}_3 - m_i (r-c) \ddot{y}_3 $$ where $\ddot{x}_3$, $\ddot{y}_3$ are the observed accelerations in the $\bf{e}_3$ direction. Combining the torque with the balance equation above gives $$ \tau_2 = r \left( \frac{M_i m_g}{M_g +m_g} \ddot{x}_3 - \frac{M_g m_i}{M_g+m_g} \ddot{y}_3 \right) $$ which is obviously zero when $\boxed{\frac{M_i}{M_g} = \frac{m_i}{m_g}} $ and $\ddot{x}_3 = \frac{M_g}{M_i} g$, $\ddot{y}_3 = \frac{m_g}{m_i} g$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/86219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
What is the meaning of $U''(x)=0$? Most potentials with a minimum can be described approximately as a harmonic oscillator. So the procedure is to Taylor expand $U(x)$: $$U(x)=U(0)+U'(0)x+\frac{1}{2}U''(0)x^2 +...$$ If we suppose that the potential is zero at the origin and it has a minimum there, we get: $$U(x)=\frac{1}{2}U''(0)x^2$$ We take $U''(0)$ to be the spring constant $k$. So the angular frequency is given by $\omega=\sqrt{\frac{k}{m}}$ But what if $U''(0)=0$ and there is still a minimum at zero, like for a potential $U(x)=x^4$? In this case, if you blindly apply the formula you get zero frequency, which is false. Does it just mean that to a small approximation a body will not oscillate?
Does it just mean that to a small approximation a body will not oscillate? It means that you must always remember the context in which a formula is valid and not "blindly" apply it. Where does the formula come from? Consider the homogeneous differential equation for the harmonic oscillator: $$\ddot x + \dfrac{k}{m}x = 0$$ with solutions $$x(t) = Ae^{i\omega t} + Be^{-i\omega t}$$ where $$\omega = \sqrt{\dfrac{k}{m}} $$ But, for a quartic potential, the force on the mass is $$F = -k'x^3 $$ thus, the differential equation is non-linear: $$\ddot x + \dfrac{k'}{m}x^3 = 0 $$ and so one should not expect the motion to be a pure (single frequency) sinusoid. And, since there is no linear term in $x$, there is no linear approximation and thus no context in which to apply the frequency formula for the harmonic oscillator.
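This can be made quantitative numerically: for $U \propto x^n$ the period scales as $T \propto A^{1-n/2}$, so for the quartic well $T \propto 1/A$ and no single $\omega$ describes the motion. A minimal sketch integrating $\ddot x = -4x^3$ (i.e. $m=1$, $U=x^4$) with a hand-rolled RK4:

```python
import math

def quarter_period(amplitude, dt=1e-4):
    """Integrate x'' = -4 x^3 (the force from U = x^4, m = 1) with RK4,
    starting from rest at x = amplitude, until x first crosses zero."""
    acc = lambda x: -4.0 * x**3
    x, v, t = amplitude, 0.0, 0.0
    while True:
        k1x, k1v = v, acc(x)
        k2x, k2v = v + 0.5*dt*k1v, acc(x + 0.5*dt*k1x)
        k3x, k3v = v + 0.5*dt*k2v, acc(x + 0.5*dt*k2x)
        k4x, k4v = v + dt*k3v, acc(x + dt*k3x)
        x_new = x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6
        v_new = v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        if x_new <= 0.0:                          # interpolate to the zero crossing
            return t + dt * x / (x - x_new)
        x, v, t = x_new, v_new, t + dt

T1 = 4 * quarter_period(1.0)   # full period at amplitude 1
T2 = 4 * quarter_period(2.0)   # full period at amplitude 2
print(T1 / T2)                 # close to 2: doubling the amplitude halves the period
```

The amplitude dependence of the period is exactly what a harmonic-oscillator formula, which assigns one frequency to the well regardless of amplitude, cannot capture.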
{ "language": "en", "url": "https://physics.stackexchange.com/questions/86396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Expanding Universe Balloon Analogy - Anything Similar for Time? It is difficult to imagine the infiniteness of space and how it itself is expanding rather than the universe expanding into something else. A helpful analogy is that of drawing little dots (representing galaxies or some other sub-universal structure) onto a deflated balloon and then blowing it up. The surface expands in all directions, with each dot moving away from every other dot. Although the analogous surface (the outside of the balloon) is effectively 2 dimensional, it's possible to imagine its translation into 3 dimensions. As for time, though, I have a hard time picturing its "before / during / after" states, and I realize those words aren't even accurate. Time supposedly began at the Big Bang and may end at the Big Crunch. But I'm wondering if anyone knows of an analogy for time, similar to the balloon analogy that applies to space. Is there a way to imagine time in some comprehensible way?
Time will disappear in the same way it appeared. I don't know how to explain it, but put simply: the end is just the beginning. It will look like a cycle; as it began, it will end.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/86445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do all planets have an electric charge? Do all planets have an electric charge? If yes, is it positive or negative? And what is its magnitude? I have read some articles which really confused me. Some of these articles said that all planets have a negative charge and the sun has a positive charge. Some other articles said the exact opposite.
In general, I would think that planets would not have a net electric charge at all. The reason is that planets are constantly being struck by various charged particles (protons & electrons with some metal[1] ions). If a planet had a net negative charge, it would repel the electrons and attract the protons & ions; if it had a net positive charge, it would repel the protons & ions and attract electrons. This process would continue until the charge was balanced. Maybe there could be some minor oscillations between net negative and net positive, but for the most part it ought to be neutral. [1] Astronomers consider any element heavier than Helium to be a metal
{ "language": "en", "url": "https://physics.stackexchange.com/questions/86529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What is the correct Hamiltonian for a system of coupled quantum oscillators? The Hamiltonian (see Eqn. 1 in Appendix 2 of this paper) for a system of coupled quantum oscillators is given as $$H=\frac{1}{2}\sum_{i}p^{2}_{i}+\frac{1}{2}\sum_{j,k}A_{jk}q_{j}q_{k}$$ Yet, in my QM course, the Hamiltonian for such a system was given as $$H=\frac{1}{2m}\sum_{i}p^{2}_{i}+\frac{k}{2}\sum_{i}x^{2}_{i}+\frac{K}{2}\sum_{i}(x_{i}-x_{i+1})^{2}$$ where the third term represents the coupling between the oscillators. Why aren't these equations equivalent?
They almost are. Clearly you replace $q_j$ by $x_j$, since the canonical commutation relationships with the $p_j$ are the same for both. Your QM course equation is then a special case of the one in the paper: if you expand the last term in your QM course equation, you have equivalence if $A_{11} = A_{NN} = k+K$, $A_{jj} = k+2 K,\;j\neq 1, N$, and $A_{j,k} = 0$ if $|j-k|>1$ but $A_{j,j+1} = A_{j+1,j} = -K$ for $j=1,2,\cdots, N-1$ (if you have $N$ oscillators indexed by $1,2,\cdots,N$). That the equation in the paper is "correct" follows from the fact that the system of oscillators can be diagonalised into an equivalent system of uncoupled quantum harmonic oscillators. Note that it is stated in section 2 of the paper that $A_{jk}=A_{kj}$, a condition which must hold if the Hamiltonian is to be an Hermitian operator. This assumption is also crucial to the diagonalisation of the oscillator system into an equivalent uncoupled oscillator system. Your QM course equation is for $N$ identical oscillators "in a row" such that there is only nearest neighbour coupling between the oscillators and furthermore such that the nearest neighbour coupling strength is the same for each pair of neighbours.
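As a quick numerical sanity check (a sketch with illustrative values of $k$, $K$, and $N$, not taken from the paper), one can verify that the nearest-neighbour Hamiltonian's potential term is the quadratic form $\tfrac{1}{2}x^{T}Ax$ for a symmetric tridiagonal matrix $A = kI + K L$, where $L$ is the path-graph Laplacian:

```python
import numpy as np

N, k, K = 6, 1.0, 0.5          # illustrative values (not from the post)
rng = np.random.default_rng(0)
x = rng.normal(size=N)

# Potential energy as written in the QM course Hamiltonian
V_course = 0.5 * k * np.sum(x**2) + 0.5 * K * np.sum((x[:-1] - x[1:])**2)

# Same energy written as (1/2) x^T A x with a tridiagonal coupling matrix:
# A = k*I + K*(path-graph Laplacian)
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L[0, 0] = L[-1, -1] = 1        # free ends: only one neighbour each
A = k * np.eye(N) + K * L

V_matrix = 0.5 * x @ A @ x
assert np.isclose(V_course, V_matrix)
assert np.allclose(A, A.T)     # A is symmetric, as required for a Hermitian H
```

The same check works for any $N$, $k$, $K$, which is the sense in which the course Hamiltonian is a special case of the paper's.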
{ "language": "en", "url": "https://physics.stackexchange.com/questions/86622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the symmetry group of two spin 1/2 particles $SU(2) \times SU(2)$ or $SU(4)$? This is a simple question. Please forgive me, as I am a lowly experimentalist. Suppose we have two free spin 1/2 particles, i.e. a 4-fold degenerate system. What is the set of symmetry operations on this system? Is it $SU(2) \times SU(2)$, $SU(4)$, or something else? Or am I misunderstanding all of this group jargon entirely? My current understanding is that $SU(2)$ rotates a single spin 1/2 particle, and $SU(2) \times SU(2)$ rotates both particles (but not necessarily with the same axis and angles). Furthermore, when we do this addition of angular momentum magic, we are taking $SU(2) \times SU(2)$ and decomposing it into irreducible representations of $SO(3)$ because we want to rotate the spins together (with the same axis and angle). Am I wrong about any of this? I ask this because people in the graphene field say that a "fourfold spin–valley degeneracy lead[s] to an approximate SU(4) isospin symmetry." This was confusing to me because I previously thought that two spin 1/2 degrees of freedom led to $SU(2) \times SU(2)$ symmetry. However, now I'm led to believe that $SU(4)$ describes the symmetries of a 4-fold degenerate system, and that $SU(2) \times SU(2) \subset SU(4)$ with some entangled states not represented by rotating two spins (i.e., if I prepare two spin up particles, I can't get every possible state by simply rotating them).
My answer is in two parts. First part. $SU(2)$ has representations of any dimension $2j+1$ with integer or half-integer $j$. The direct product of two $j=1/2$ representations is reducible to a direct sum of $j=0$ (singlet) and $j=1$ (triplet). All remain representations of $SU(2)$, which defines the spin in the first place. Second part. Now, if you have energy degeneracy in a 4-dimensional space, then it remains invariant under a much wider class of transformations -- it is an $SU(4)$ "singlet". Since in the quoted case the two $SU(2)$'s are different (real spin and valley pseudospin), extra symmetries $\subset SU(4)$ and $\not \subset SU(2) \otimes SU(2)$ are possible, which makes the buzzword of "approximate SU(4)" not necessarily empty.
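The dimension counting can be made concrete with a small numerical sketch (illustrative code, just counting generators): the six operators $\sigma_a\otimes I$ and $I\otimes\sigma_a$ that rotate the two spins independently are traceless Hermitian $4\times 4$ matrices, so they sit inside the 15-dimensional algebra $su(4)$ with room to spare:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Generators of SU(2) x SU(2) on the 4-dim two-spin space:
# rotate spin 1 and spin 2 independently
gens = [np.kron(s, I2) for s in (sx, sy, sz)] + \
       [np.kron(I2, s) for s in (sx, sy, sz)]

for g in gens:
    assert np.allclose(g, g.conj().T)      # Hermitian
    assert abs(np.trace(g)) < 1e-12        # traceless -> lies in su(4)

# su(4) is 15-dimensional; su(2) + su(2) is only 6-dimensional:
print(len(gens), 4**2 - 1)                 # 6 15
```

The nine missing generators correspond to transformations (including ones creating entanglement between the two degrees of freedom) that an $SU(4)$-symmetric system admits but independent spin rotations do not.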
{ "language": "en", "url": "https://physics.stackexchange.com/questions/86769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Hydrostatic pressure? What what I understand, hydrostatic pressure is the "weight" of the water pushing against objects. But if this is true, why is hydrostatic pressure perpendicular to the surface it acts on instead of always going down? For example, if you placed a book on a desk, the book's weight would push against the desk, but gravity is pulling it "down". But if you put another book beside this book the first book wouldn't apply any force on it. The weight exerted from the book always goes down. Similarly, why doesn't hydrostatic pressure always go down? I understand that if you put a plate on the seabed, the weight of the water would push down on it, but if you just had a vertical plate standing on the bed, why would force push it from the sides? To sum up, why does hydrostatic pressure act perpendicular to the object instead of always down?
Hydrostatic pressure concerns the pressure in perfect fluids in equilibrium. A perfect fluid is slippery, devoid of viscosity: when it is in equilibrium, it can neither exert nor resist shear (tangential force). Therefore, on the walls of a vessel sustaining a perfect fluid at rest, the normal component of the force is solely responsible for resisting the weight of the liquid.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/86895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the use of this formula 1 Tesla = 1 Newton/Ampere/Meter? What does Newton/Ampere/Meter stand for? From this formula, 1 Tesla = 1 Newton/Ampere/meter, what can it be used for? To do what? Is Ampere/meter the same as the unit of the field intensity H? Or what is it? Edit by public: How can this equation be used with regards to finding the dimensions of different variables?
From Wikipedia: A particle carrying a charge of 1 coulomb and passing through a magnetic field of 1 tesla at a speed of 1 meter per second perpendicular to said field experiences a force with magnitude 1 newton, according to the Lorentz force law. So 1 Tesla = 1N / (1C . 1m/s), and one Coulomb per second is one Ampere giving us 1 Tesla = 1N / (1A . 1m).
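The bookkeeping can be made explicit by tracking SI base-unit exponents by hand (a minimal sketch; the helper `combine` is just an illustrative name): both the definition T = N/(A·m) and the Lorentz-law form T = N/(C·(m/s)) reduce to the same base units kg·s⁻²·A⁻¹.

```python
def combine(*terms):
    """Multiply dimensional formulas given as (dimension_dict, power) pairs."""
    out = {}
    for d, p in terms:
        for unit, e in d.items():
            out[unit] = out.get(unit, 0) + e * p
    return {u: e for u, e in out.items() if e != 0}

kg, m, s, A = ({u: 1} for u in ("kg", "m", "s", "A"))
newton = combine((kg, 1), (m, 1), (s, -2))              # kg m / s^2
tesla_def = combine((newton, 1), (A, -1), (m, -1))      # N / (A m)
coulomb = combine((A, 1), (s, 1))                       # A s
tesla_lorentz = combine((newton, 1), (coulomb, -1),     # N / (C * (m/s))
                        (m, -1), (s, 1))
assert tesla_def == tesla_lorentz == {"kg": 1, "s": -2, "A": -1}
```

This is exactly the kind of dimensional check the "Edit by public" asks about: given any relation between quantities, you can solve for the unknown dimension the same way.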
{ "language": "en", "url": "https://physics.stackexchange.com/questions/86976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why don't metals bond when touched together? It is my understanding that metals are a crystal lattice of ions, held together by delocalized electrons, which move freely through the lattice (and conduct electricity, heat, etc.). If two pieces of the same metal are touched together, why don't they bond? It seems to me the delocalized electrons would move from one metal to the other, and extend the bond, holding the two pieces together. If the electrons don't move freely from one piece to the other, why would this not happen when a current is applied (through the two pieces)?
While simple contact between metals isn't enough for most metals to bond, relative motion will achieve fusion between the metals (at small contacts). A common occurrence is the seizing up of mechanical devices due to insufficient lubrication. I don't think screws stick due to metal-metal bonding; it's mostly simple distortion, particularly of the threads and body of the screw. Damage a screw and insert it in a tight space and you won't get that screw out again.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/87107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "297", "answer_count": 10, "answer_id": 6 }
how long do large hadron collider experiments take? This travel stackexchange answer has kinda got me wondering... how long do experiments involving the large hadron collider usually take? I'd expect you run it for a few seconds and bam - higgs boson detected or whatever. Maybe it'd take a few months to set the experiment up but once it's setup it doesn't seem like it'd take that much time at all to run the experiment? I mean, maybe you'd want to run it a few times to verify your results but if each run takes just a few seconds it seems like you could still be done with your multiple runs even in a single day. Any ideas?
This document (NB it's a pdf) contains details of the beam operation. Here's a key graph nabbed from the presentation: At the end of an experimental run the beam is dumped, and it takes about an hour and a half to get the beam back up to full energy and intensity. Once the beam is at full strength the LHC generates data continuously for somewhere between 10 and 20 hours before the beam intensity is too low and the beam needs to be dumped again. Note that the LHC isn't an experiment that runs once and generates one result, then repeated to generate a second result and so on. Once the beam is live it generates data continuously and this data builds up for days and months. Because signals like the Higgs are so weak you need months and months worth of data, i.e. months and months of beam time, to get enough data to see these small signals. CERN have made an animation showing how the Higgs signal built up over time, which you can see here.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/87213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Lorentz covariance of the Noether charge The invariance under translation leads to the conserved energy-momentum tensor $\Theta_{\mu\nu}$ satisfying $\partial^\mu\Theta_{\mu\nu}=0$, from which we get the conserved quantity$$P^\nu=\int d^3\mathbf x\Theta^{0\nu}(x)$$But I cannot see explicitly how this quantity is a four-vector covariant under Lorentz transformation, since $d^3\mathbf x$ is part of the invariant $d^4x$, $\Theta^{0\nu}(x)$ is part of the covariant tensor $\Theta^{\mu\nu}(x)$, neither of which transforms covariantly. So can someone show me how this becomes correct? And generally, how to show that a Noether charge $Q$ corresponding to the Noether current $j^\mu$, $$Q=\int d^3\mathbf x j^0(x)$$ , is a Lorentz scalar?
You may use the following notation for hypersurfaces in four dimensions: $d\sigma_\mu = \epsilon_{\mu\alpha\beta\gamma}dx^\alpha dx^\beta dx^\gamma$ For instance $d\sigma_0= d^3x$ The expression of the energy-momentum is then: $P_\nu = \int d\sigma^\mu \Theta_{\mu\nu}$ The same kind of expression may be used for the charge: $Q = \int d\sigma^\mu j_{\mu}$ [EDIT] How do we make the connection with the OP's formulae? One may adopt the following point of view: take for instance the formula for the charge $\tilde Q = \int d\sigma^\mu j_{\mu}$; this means : $ \tilde Q = \int d\sigma^0 j_{0} + \int d\sigma^1 j_{1} + \int d\sigma^2 j_{2}+ \int d\sigma^3 j_{3} \\ =\int dx~ dy~ dz ~j_{0}+\int dy~ dz~ dt ~j_{1}+\int dz~ dt~ dx ~j_{2} + \int dt~ dx~ dy ~j_{3} \\=Q + \int dy~ dz~ dt ~j_{1}+\int dz~ dt~ dx ~j_{2} + \int dt~ dx~ dy ~j_{3}$ Now, take one of the residual integrals, for instance $I_1=\int dy~ dz~ dt ~j_{1}$; it is an integral at constant $x$, and one may choose $x=\pm\infty$. At infinity, we may suppose that the current is zero: $j_1(\pm \infty)=0$. So, assuming a zero current $j_1$ at spatial infinity in $x$, we get $I_1=0$, and the same demonstration applies to the other two integrals. So, finally, with the hypothesis of taking the slicings of the residual integrals at spatial infinity, and vanishing currents at spatial infinity, we have $\tilde Q = Q$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/87292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Absorption of radiation due to temperature I was wondering if the temperature of an object affects the amount of radiation it absorbs. For example, if I have a box that is hotter, will it absorb more energy as compared to the same cooler box?
The rules of Black Body radiation say: no. Assuming the composition of the box doesn't change, its absorptivity is the same regardless of temperature. What does change is the amount of energy it'll radiate, which is a direct function of temperature (black body again). This often confuses folks, as the spectral absorption curve (i.e. percent of photons at a given wavelength) has the same shape as the spectral emission curve. However, the actual absorption depends on the incident radiation, while the actual emission depends on the temperature of the object. For example, a cold black object will absorb all the visible radiation but emit primarily in the long-infrared. edit: to respond to the comment, here's a quote from wiki. "A gray body is one where α, ρ and τ are uniform for all wavelengths. This term also is used to mean a body for which α is temperature and wavelength independent." A gray body is not 100% emissive, and a 'color body' will have a nonuniform emission curve, but in general these characteristics are most definitely not temperature dependent.
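A minimal numerical sketch of this point (illustrative numbers; `emitted_power` and `absorbed_power` are hypothetical helper names): absorption depends only on the incident radiation and the material's absorptivity, while emission scales as $T^4$ via the Stefan-Boltzmann law.

```python
import math

SIGMA = 5.670374419e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

def emitted_power(T, area=1.0, emissivity=1.0):
    """Radiated power: depends strongly on the body's own temperature."""
    return emissivity * SIGMA * area * T**4

def absorbed_power(incident_w, absorptivity=1.0):
    """Absorbed power: depends on the incident radiation, not on T."""
    return absorptivity * incident_w

cold, hot = 300.0, 600.0        # two identical black boxes, different temperatures
# Same illumination -> same absorption, regardless of temperature:
assert absorbed_power(1000.0) == absorbed_power(1000.0)
# But the hot box radiates (600/300)^4 = 16 times more:
assert math.isclose(emitted_power(hot), 16 * emitted_power(cold))
```

So the hotter box does not absorb more; it only radiates more, which is why its net energy balance differs.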
{ "language": "en", "url": "https://physics.stackexchange.com/questions/87351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is negative Energy/Exotic Energy? So I have been researching around a little as I am highly interested in Astrophysics and I came across an energy I have never heard of before; negative energy also commonly known as exotic energy. Now I started to research this however I found the concept rather hard to grasp due to a simple lack on information around on the Internet. Could somebody kindly explain (if possible using real life analogies) what exactly negative energy is or at least the whole concept/theory behind it.
In general relativity, the energy content of a region is given in terms of a stress-energy tensor. The elements of this tensor are not given by general relativity itself and can differ depending on what matter and fields are present. To try to draw general conclusions about what is allowed and forbidden in general relativity, physicists have tried to place restrictions called energy conditions on the properties of the stress-energy tensor. These energy conditions take the form of requiring certain quantities derived from the stress-energy tensor to be positive; such restrictions rule out things like traversable wormholes, singularities visible outside black holes, and that sort of thing. Fields and matter that violate such restrictions are said to have negative energy. There are lots of subtle mathematical results on the extent to which violations of the energy conditions might be possible. See http://arxiv.org/abs/1302.2859 http://arxiv.org/abs/gr-qc/0205066 and references therein.
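As a concrete illustration (a sketch, not a full GR computation): for a perfect fluid with energy density $\rho$ and pressure $p$, the weak energy condition $T_{\mu\nu}u^\mu u^\nu \ge 0$ for all timelike $u^\mu$ reduces to two simple inequalities, so "negative energy" or exotic matter is matter that fails a check like this one:

```python
def satisfies_wec(rho, p):
    """Weak energy condition for a perfect fluid with energy density rho
    and pressure p: rho >= 0 and rho + p >= 0."""
    return rho >= 0 and rho + p >= 0

assert satisfies_wec(rho=1.0, p=0.0)       # ordinary pressureless dust
assert satisfies_wec(rho=1.0, p=-1.0)      # cosmological constant (edge case)
assert not satisfies_wec(rho=1.0, p=-2.0)  # exotic / "negative energy" matter
```

The other energy conditions (null, strong, dominant) give analogous inequalities; the wormhole results cited above concern matter violating them.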
{ "language": "en", "url": "https://physics.stackexchange.com/questions/87425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Why do we must know the Weyl tensor for 4-dimensional space-time? I heard that we must know the Weyl tensor for fully describing the curvature of the 4-dimensional space-time (in space-time with less dimensions it vanishes, so I don't interesting in cases of less dimensions). So I have the question: what is physical (or geometrical) sense of the Weyl tensor and why don't we need only Riemann tensor for describing the curvature? Does it connected with gravitational waves directly?
The Riemann tensor encapsulates all information about the 4-dimensional space-time. This information can generally be divided into two sectors: * *Information about the curvature of space-time due to the existence of matter. This is given by the Ricci tensor according to the Einstein equation $$ R_{\mu\nu} - \frac{1}{2} g_{\mu\nu} R = 8 \pi G T_{\mu\nu} $$ *Information about the structure of gravitational waves in the space-time. This is given by the trace-free part of the Riemann tensor, namely the Weyl tensor. Often, we are not interested in the exact structure of the space-time, but only in whether gravitational waves can exist, or in their structure. In these cases, one studies the Weyl tensor rather than the Ricci tensor. For example, in the setup of quantum gravity, one needs to study the asymptotic structure of spacetime. In these theories, a good understanding of the Weyl tensor is more important.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/87502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Is two cars colliding at 25 mph the same as one car colliding into a wall at 50 mph in reference to injuries? This question has been asked using 50 & 100 mph reference, see this Phys.SE post and links therein. However, I am interested in the potential injuries to occupants of the autos. As the one going into the wall has occupants going 50 miles per hour. The 2 cars have occupants in each car only going 25 mph at time of crash. Would the occupants of the 2 cars not have as much damage as the car with the wall as the 2 cars would decelerate based on the crushing of the cars. Knowing that the 2 cars would be absorbing the energy more than the 1 into the wall at 50 mph.
Severity of injury is going to be proportional to the rate of change of momentum. Two cars colliding head-on will have a lower value of rate of change of momentum than one car striking a typical wall. Reason: a typical wall will not cushion an impact as well as a typical car. More cushioning means the actual collision will take place over a longer time span. Time taken is higher and rate of change of velocity lower.
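A back-of-the-envelope sketch of why the stopping distance matters (the crush distances below are assumptions for illustration, not measured values): for a uniform stop from speed $v$ over distance $d$, the average deceleration is $a = v^2/2d$, so tripling the cushioning distance cuts the deceleration, and hence the forces on the occupants, by a factor of three.

```python
import math

def avg_deceleration(speed_mps, stop_distance_m):
    """Uniform stop: v^2 = 2*a*d  =>  a = v^2 / (2*d)."""
    return speed_mps**2 / (2 * stop_distance_m)

g = 9.81
v = 25 * 0.44704                   # 25 mph in m/s
soft = avg_deceleration(v, 0.6)    # generous crumple zone (assumed 0.6 m)
hard = avg_deceleration(v, 0.2)    # stiff wall, little give (assumed 0.2 m)
assert math.isclose(hard, 3 * soft)  # 3x shorter stop, 3x harsher deceleration
print(f"{soft / g:.1f} g vs {hard / g:.1f} g")
```

The same speed change spread over a longer collision is exactly the "cushioning" effect described above.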
{ "language": "en", "url": "https://physics.stackexchange.com/questions/87671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Do low frequency sounds really carry longer distances? It is a common belief that low frequencies travel longer distances. Indeed, the bass is really what you hear when the neighbor plays his HiFi loud (Woom Woom). Try asking people around, a lot of them believe that low sounds carry longer distances. But my experience isn't as straightforward. In particular: * *When I stand near someone who's listening loud music in headphones, it is the high pitched sounds that I hear (tchts tchts), not the bass. *When I sit next to an unamplified turntable (the disc is spinning but the volume is turned off), I hear high pitched sounds (tchts tchts), not the bass. So with very weak sounds, high frequencies seem to travel further? This makes me think that perhaps low frequencies do not carry longer distances, but the very high amplitude of the bass in my neighbor's speakers compensates for that. Perhaps also the low frequencies resonate with the walls of the building? Probably also the medium the sound travels through makes a difference? Or perhaps high frequencies are reflected more by walls than low frequencies? I found this rather cute high school experiment online, which seems to conclude that low and high frequencies travel as far, but aren't there laws that physicist wrote centuries ago about this?
Another thing that happens that can lead you to think that low frequency sounds attenuate quicker is that if you record yourself one time being close to the microphone and another time being farther away, you'll notice that the farther you are the more the lowest frequencies are picked up. This is due to the proximity effect and not to the low frequency sounds being attenuated. http://en.wikipedia.org/wiki/Proximity_effect_(audio)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/87751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "74", "answer_count": 5, "answer_id": 1 }
Will the heat flow of Joule heat be different, if the Joule heat is dissipated in a material that has a temperature gradient beforehand? Let us assume one dimensional heat transfer, for example a finite length wire starting at point $0$ and ending at point $\ell$. If the current passes the wire, the Joule heat $I^{2}R$ will be generated and dissipated into the wire and its thermal surroundings. Had the wire had a constant temperature $T$, the half of the power $I^{2}R / 2$ will be passing the left end, the other half will be passing the right end. Will the situation change if the non-zero temperature gradient $\nabla T $ is present before the Joule heating starts? I cannot grasp, which principle has "higher priority" in this case - be it either principle of dissipation of heat which should be considered "a random walk" or the second thermodynamic principle which states that on average more heat will flow from colder to hotter parts. Motivation for this question are heat transfer equations in thermoelectricity. Thank you in advance for any answer of insightful comments!
Had the wire had a constant temperature T, the half of the power $I^2R/2$ will be passing the left end, the other half will be passing the right end. I think this is a wrong statement. This is a common assumption used in thermoelectric circuit theory to derive the equations. I would argue that this is valid in the case where the properties of the material are uniform, or assumed to be, and has nothing to do with the temperature gradient. After all, in a thermoelectric material, you would not have current flow to begin with if you did not have a temperature gradient, as that is how the Seebeck effect works. So how can you now turn around and assume that the wire has a constant temperature? Also, the Joule heat is not something that is generated locally; it is generated over the entire volume. If you model the equations accurately, you can see it is far more complicated than you mention. I would encourage you to read Callen, Thermodynamics and an Introduction to Thermostatistics, ISBN-10: 0471862568
{ "language": "en", "url": "https://physics.stackexchange.com/questions/87825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Reference request for exactly solved models in statistical mechanics Can someone recommend a textbook or review article that covers exactly solved models in statistical mechanics, such as the six- or eight-vertex models? If there is literature at the undergraduate level, that would be ideal. I'm only familiar with Baxter's classic text on the subject, but this is tough reading for an undergraduate student.
I would like to add that it is very important to know what you can find out without knowing the exact solution. Kardar's book "Statistical Physics of Fields" teaches that in a very engaging way. To get started quickly, I suggest the topics of scaling theory and real-space renormalization in the 1-d Ising model. Your motivation to keep up with Baxter will grow thanks to this book.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/87917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Photons from stars--how do they fill in such large angular distances? It would seem that far-away stars are at such a distance that I should be able to take a step to the side and not have the star's photons hit my eye. How do stars release so many photons to fill in such great angular distances?
So starlight propagates spherically and each human eyeball creates localized photons just at the intersection of wavefront and retina. No matter where you are in relation to the star some part of this wavefront will reveal the photon stream. Some kind of sensor that could image the path of all the photons/wave functions as they were emitted would reveal a solid hemisphere of light expanding away from the star...
{ "language": "en", "url": "https://physics.stackexchange.com/questions/87986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58", "answer_count": 7, "answer_id": 6 }
What applications would room temperature super conductors have? I've heard that a room temperature super conducting material would be a major discovery. How likely is this within the next century and if discovered what would be possible?
As said in the comments, this is a very broad question, so instead of writing a very long post, I point you to a good article titled "Superconductivity and the environment: a Roadmap": http://iopscience.iop.org/0953-2048/26/11/113001 . The article lists a lot of emerging technologies that make use of superconductors. The applications of room temperature superconductors would be the same as the applications of normal superconductors, but these applications would just be much easier to realize if cryogenic environment is not needed. Many items listed in the article would become preferred over non-superconducting way of doing things if an easy-to-use material with room temperature superconductivity was found. Since there is no complete theory as for what causes superconductivity in high temperatures, it is impossible to guess when (if ever) a RTS is found. Finding these materials is basically educated guessing an a lot of trial-and-error. It could be that someone stumbles upon such material tomorrow or it could be that room temperature superconductors don't even exist. There is no way to know.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/88074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why are rockets so big? I'm curious why rockets are so big in their size. Since both the gravitational potential one need to overcome in order to put thing into orbit, and the chemical energy burned from the fuel, are proportional to the mass, so if we shrink the rocket size, it would seem to be fine to launch satellites. So why not build small rocket say the size of human? I can imagine small rocket would be easier to manufacture in large quantities and easier to transport. And maybe someone can make a business out of small rocket, carrying one's own satellite.
Consider the problem in the form of a ratio: what is the ratio of the mass used to lift the rocket (fuel) to the mass finally put into orbit (cockpit)? That proportion will be much the same for smaller objects that must be put into orbit. If you use the same ratio or proportion to calculate the needed fuel mass for a small craft, you will find you can't even carry the device holding your fuel. This is also why rockets use stages. The type of fuel used also has an impact, but those are details that need a new question.
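This scale-independence of the fuel proportion is exactly what the Tsiolkovsky rocket equation formalizes. A minimal sketch (the delta-v and exhaust-velocity numbers are rough assumptions for illustration, not mission data):

```python
import math

def fuel_fraction(delta_v, exhaust_velocity):
    """Tsiolkovsky: delta_v = v_e * ln(m0 / mf), so the propellant mass
    fraction (m0 - mf) / m0 = 1 - exp(-delta_v / v_e).
    Note it depends only on the ratio delta_v / v_e, not on vehicle size."""
    return 1 - math.exp(-delta_v / exhaust_velocity)

dv = 9400.0   # rough delta-v to low Earth orbit incl. losses (m/s) -- assumption
ve = 3500.0   # typical kerosene/LOX exhaust velocity (m/s) -- assumption
f = fuel_fraction(dv, ve)
print(f"propellant must be about {f:.0%} of lift-off mass, at any scale")
```

Since roughly nine tenths of the lift-off mass must be propellant regardless of scale, shrinking the rocket shrinks the payload in proportion, while tanks, engines, and guidance do not shrink as gracefully.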
{ "language": "en", "url": "https://physics.stackexchange.com/questions/88145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "60", "answer_count": 6, "answer_id": 1 }
Is there a way for an astronaut to rotate? We know that if an imaginary astronaut is in the intergalactic (no external forces) and has an initial velocity zero, then he has is no way to change the position of his center of mass. The law of momentum conservation says: $$ 0=\overrightarrow{F}_{ext}=\frac{d\overrightarrow{p}}{dt}=m\frac{d\overrightarrow{v}_{c.m.}}{dt}$$ But I don't see an immediate proof, that the astronaut can't change his orientation in the space. The proof is immediate for a rigid body (from the law of conservation of angular momentum). But the astronaut is not a rigid body. The question is: can the astronaut after a certain sequence of motions come back to the initial position but be oriented differently (change "his angle")? If yes, then how?
For those that are cat-challenged, here's an alternative explanation and demonstration you can try at home! This demonstration was taught to me by my math lecturer. All you will need is: A swivel chair and a heavy object (e.g. a big textbook) Stand on the seat of the chair (watch your balance now) holding the heavy object. Extend your arms forward with the object. From top-down, you look something like this (please excuse my poor drawing skills): (the triangle thing is your nose; it shows which direction you are facing) Holding the object, swivel your arms to the left. Notice that your body (and the chair) rotates clockwise in response to this motion. Then pull the object towards yourself. Still holding the object close to you, move it to your right. Notice that your body and chair rotate anti-clockwise in response, but not nearly as much as when you had your arms extended. You can continue repeating these motions... Congratulations! You are now freely rotating in the swivel chair, without any bracing. Whilst this is a very inefficient way of rotating yourself, the principle is exactly the same as the cat rotation example.
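The bookkeeping behind this trick is just conservation of angular momentum, and can be sketched numerically (the moments of inertia below are made-up illustrative numbers): with total angular momentum zero, the body counter-rotates by $(I_{\text{arm}}/I_{\text{body}})$ times the arm sweep, and because $I_{\text{arm}}$ is smaller on the return stroke (object pulled in), each cycle leaves a net rotation.

```python
def body_rotation(arm_inertia, body_inertia, arm_sweep_rad):
    """With L_total = 0: I_arm*w_arm + I_body*w_body = 0, so over a sweep
    the body turns by -(I_arm / I_body) * (arm sweep angle)."""
    return -(arm_inertia / body_inertia) * arm_sweep_rad

I_body = 4.0                                 # person + chair (made-up value)
extended = body_rotation(2.0, I_body, +1.0)  # arms out: sweep object left
tucked   = body_rotation(0.5, I_body, -1.0)  # object pulled in: sweep back
net = extended + tucked
assert net < 0            # net clockwise rotation accumulates each cycle
print(net)                # -0.375 rad per cycle
```

Repeating the cycle accumulates rotation, even though the astronaut (or chair-sitter) returns to the same pose each time.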
{ "language": "en", "url": "https://physics.stackexchange.com/questions/88322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73", "answer_count": 6, "answer_id": 3 }
Electromagnetism: Conductors Even though the thermal velocity of electrons in a conductor is comparatively high, the thermal velocity is not responsible for current flow. Why is this the case?
For something like a metallic crystal, if you apply an electric field then the (Bloch) electrons just keep accelerating until they reach the end of the Brillouin zone (the momentum space box that they occupy), and then "wrap around" to the opposite end so that their average momentum is zero (Bloch oscillations). So a perfect crystal at 0 temperature would not conduct. What is needed is scattering by lattice defects or phonons (lattice vibrations) to break the symmetry and allow conductivity. What mostly affects the conductivity is the mobility of the electrons.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/88392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Commutator evolution operator and position operator Let $H= \frac{p^2}{2m}$, then I am supposed to calculate $[x,e^{-iHt}]$. My idea was to use $[x,p^n]=i \hbar n p^{n-1}$, and using the series for the exponential function I ended up with $\frac{t \hbar}{m}\, p\, e^{-iHt}$. Could anybody tell me whether this result is correct?
Yes. Also note that in the momentum representation, $x = i\hbar \frac{d}{dp}$, which is what your commutation relation proved as a special case. You could use this shortcut right off the bat.
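The shortcut is easy to verify symbolically (a sketch using sympy; note the exponent follows the question's convention $e^{-iHt}$ without an explicit $\hbar$): with $x = i\hbar\,d/dp$, the commutator with any function of $p$ is $[x, f(p)] = i\hbar f'(p)$, which for $f = e^{-ip^2t/2m}$ gives a result proportional to $p\,e^{-iHt}$.

```python
import sympy as sp

p, t, m, hbar = sp.symbols('p t m hbar', positive=True)
U = sp.exp(-sp.I * p**2 * t / (2 * m))   # e^{-iHt} with H = p^2/(2m), p-representation

# [x, f(p)] = i*hbar*f'(p): the finite form of [x, p^n] = i*hbar*n*p^(n-1)
commutator = sp.I * hbar * sp.diff(U, p)

# This evaluates to (hbar*t*p/m) * e^{-iHt}:
assert sp.simplify(commutator - (hbar * t * p / m) * U) == 0
```

The same answer follows term by term from the exponential series, since $[x, H^n] = n\,[x,H]\,H^{n-1}$ with $[x,H] = i\hbar p/m$.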
{ "language": "en", "url": "https://physics.stackexchange.com/questions/88459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Relationship between power and max. speed I'm talking about the maximum speed if let's say I have a car with the power $P = 1000 \text{W}$ and a force of friction of $5 \mbox{N}$ acting in the opposite direction. After some googling I found that the maximum speed is given by $P=Fv$, where $P$ is the power, $F$ is the force, and $v$ is the velocity. I understand that $W= Fs$ and that $P = W/t$ and $s/t$ is $v$, so yes I understand where the equation comes from, however wouldn't this be the average speed and not the maximum speed? And the force of friction is not the force that's doing the work, so why is it used in the equation? I hope I've made my question clear enough, thank you in advance!
Consider your simple example: a car with a fixed power output of $1000 \text{ W}$ accelerating against a constant frictional force of $5\text{ Newtons}$. (Even better, assume that there is no friction, but the car is climbing a very gentle slope, such that gravity exerts $5\text{ Newtons}$ of force back along the slope.) Assume that it has reached a stable top speed, $V\text{ m/s}$. In one second, the car will travel $V$ metres. In doing this, it must exert a constant force of $5\text{ Newtons}$, since there is no acceleration, and thus the friction force must be exactly balanced. Exerting this force through this distance, the car does $(V\times 5) \text{ joules}$ of work. But in $1$ second, at a power output of $1000 \text{ W}$, the car produces $1000\text{ joules}$ of energy. This energy goes into driving the car, so $$(V\times 5) \text{ joules}=1000\text{ joules}$$More generally:$$W=F\times d$$Dividing by $t$, time:$$\frac{W}{t}=\frac{F\times d}{t}$$or:$$P=F\times v$$
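With the numbers from the question, the calculation is a one-liner (a minimal sketch):

```python
def top_speed(power_w, drag_force_n):
    """At top speed the acceleration is zero, so all of the engine's power
    goes into balancing the resistive force: P = F * v  =>  v = P / F."""
    return power_w / drag_force_n

v_max = top_speed(1000.0, 5.0)
assert v_max == 200.0   # m/s -- very fast, because 5 N is a tiny drag force
```

This also answers why the friction force appears in the formula: at top speed the driving force exactly equals the resistive force, so either one can be used in $P = Fv$.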
{ "language": "en", "url": "https://physics.stackexchange.com/questions/88737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What is the physics of a spinning coin? When we spin a coin on a table, we observe 2 things: * *It slows down and stops after some time. *It does not stay at just one point on the table but its point of contact with the table changes with time. I was trying to explain this quantitatively, but I am stuck on how to take frictional torques into account. Any help will be appreciated.
There is no easy way to model a spinning coin and calculate these observations. It slows down mostly because of air resistance and friction (here you must include velocity-dependent friction, angular-velocity-dependent in your case), and it moves due to the combination of the torque of gravity (a.k.a. precession) and friction. Velocity-dependent friction generally gives you non-linear differential equations, which are often very hard to handle. When you write the Hamiltonian and the canonical equations you will probably get coupled non-linear partial differential equations, which are the worst combination to solve. Moreover, after it slows enough, the contact point (on the coin) will start to move, and after that you should consider rolling friction.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/88965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 1 }
Are coherent states of light 'classical' or 'quantum'? Coherent states of light, defined as $$|\alpha\rangle=e^{-\frac{|\alpha|^2}{2}}\sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}}|n\rangle $$ for a given complex number $\alpha$ and where $|n\rangle$ is a Fock state with $n$ photons, are usually referred to as the most classical states of light. On the other hand, many quantum protocols with no classical analog such as quantum key distribution and quantum computing can be implemented with coherent states. In what sense or in what regime should we think of coherent states as being 'classical' or 'quantum'?
Coherent states are quantum states, but they have properties that mirror classical states in a sense that can be made precise. To be concrete, let's consider coherent states in the context of the simple harmonic quantum oscillator which have precisely the expression you wrote in the question. One can demonstrate the following two facts (which I highly encourage you to prove to yourself); * *The expectation value of the position operator in a coherent state is \begin{align} \langle\alpha|\hat x|\alpha\rangle = \sqrt{\frac{\hbar}{2m\omega}}(\alpha + \alpha^*) \end{align} *The time evolution of a coherent state is obtained by simply time evolving its eigenvalue by a phase; \begin{align} e^{-it \hat H/\hbar}|\alpha\rangle = |\alpha(t)\rangle, \qquad \alpha(t):=e^{-i\omega t}\alpha. \end{align} In other words, if the system is in a coherent state, then it remains in a coherent state! If you put these two facts together, then you find that the expectation value of the position operator has the following time-evolution behavior in a coherent state: \begin{align} \langle\hat x\rangle_t:=\langle\alpha(t)|\hat x|\alpha(t)\rangle = \sqrt{\frac{\hbar}{2m\omega}}(e^{-i\omega t}\alpha + e^{i\omega t}\alpha^*) \end{align} but now simply write the complex number $\alpha$ in polar form $\alpha = \rho e^{i\phi}$ to obtain \begin{align} \langle \hat x\rangle = \sqrt{\frac{\hbar}{2m\omega}}2\rho\cos(\omega t-\phi) \end{align} In other words, we have shown the main fact indicating that coherent states behave "classically": * *The expectation value of the position of the system oscillates like the position of a classical simple harmonic oscillator. This is one sense in which the coherent state is classical. 
Another fact is that * *Coherent states minimize quantum uncertainty in the sense that they saturate the Heisenberg uncertainty bound; \begin{align} \sigma_x\sigma_p = \frac{\hbar}{2} \end{align} To the extent that uncertainty is a purely quantum effect, minimization of this effect can be interpreted as maximizing "classicalness."
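The oscillation of $\langle\hat x\rangle$ can be checked numerically in a truncated Fock basis (a Python sketch with $\hbar=m=\omega=1$; the truncation size $N=60$ is an arbitrary choice, large enough for $\alpha = 2$):

```python
import numpy as np
from math import factorial

# Truncated Fock basis; hbar = m = omega = 1, so <x>(t) should equal sqrt(2)*alpha*cos(t).
N = 60
alpha = 2.0

# Coherent-state amplitudes c_n = exp(-|alpha|^2/2) * alpha^n / sqrt(n!)
n = np.arange(N)
c = np.exp(-abs(alpha)**2 / 2) * alpha**n / np.sqrt([float(factorial(k)) for k in n])

# Position operator x = (a + a^dagger)/sqrt(2) in the Fock basis
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
x = (a + a.T) / np.sqrt(2)

def x_expect(t):
    # Time evolution just multiplies c_n by exp(-i n t) (global phase dropped)
    ct = c * np.exp(-1j * n * t)
    return np.real(ct.conj() @ x @ ct)

for t in [0.0, np.pi / 2, np.pi]:
    print(t, x_expect(t))  # ~ sqrt(2)*alpha*cos(t): 2.828..., ~0, -2.828...
```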
{ "language": "en", "url": "https://physics.stackexchange.com/questions/89018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 7, "answer_id": 3 }
Asteroid collision debris calculation I wonder how to determine the directions in which the collision debris is launched when 2 asteroids collide. I am aware of: $$m_1 v_1 + m_2 v_2 = m v = m_3 v_3 + m_4 v_4 + m_5 v_5 + \dots$$ and this works just fine for the masses and velocities; however, I find it difficult to determine the boundaries of the directions, and under what circumstances debris will be produced or the asteroids will just "merge". All info is appreciated :)
I think that this problem doesn't have an exact answer. Some time ago, I talked about this with the astrophysicist Paolicchi (this is the asteroid named after him), who works in the field. The conclusion is that debris are produced at random and you can only impose some ("few") constraints globally, say on big branches of the asteroid belt or of a planetary ring. There, debris "thermalize" after a large number of collisions and end up at rest with respect to each other. In the case of just one collision the physics is complicated... I list just some points: * *the asteroids are typically non-self-gravitating objects, that is, they're mainly held together by (local) electric forces. Hence, they are not round and, typically, they don't merge, since matter globally is electrically neutral. The same is true for artificial asteroids; *since they spin and have complex shapes, their collisions are very difficult to model. You can apply the conservation of momenta but you have no bounds on the velocities of each single fragment; *for some purposes, it could be useful to approximate the production of debris as proportional to the energy in the center of mass of the two colliding asteroids: $E_{\text{cm}}\propto n$, the number of fragments. Then assume an $n$-body decay, each fragment with the same mass. Even in the case of $n\gtrsim 3$, you only have a phase space for these debris and some probability density functions for their production angles.
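A toy sketch of the point about constraints (Python; the masses, velocities and spread below are made-up illustrative numbers): momentum conservation pins down only the *sum* of the fragment momenta, leaving the individual directions and speeds essentially free.

```python
import numpy as np

rng = np.random.default_rng(0)

# Total momentum of the two colliding asteroids (2D for simplicity; illustrative values)
m1, v1 = 2.0e9, np.array([3.0, 0.0])    # kg, km/s
m2, v2 = 5.0e8, np.array([-1.0, 2.0])
p_total = m1 * v1 + m2 * v2

# Draw n-1 fragment momenta at random and let the last fragment absorb the remainder:
# momentum conservation is satisfied, but nothing fixes the individual directions.
n = 6
p_frag = rng.normal(scale=1e9, size=(n - 1, 2))
p_frag = np.vstack([p_frag, p_total - p_frag.sum(axis=0)])

print(np.allclose(p_frag.sum(axis=0), p_total))  # True: the only hard constraint holds
```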
{ "language": "en", "url": "https://physics.stackexchange.com/questions/89067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is Dark Matter evenly spread out in the universe? Is Dark Matter evenly spread out? If not, could we ever find a correlation between the amount of dark matter and matter in a specific place?
The existence of dark matter comes primarily from gravitational evidence - in other words, we predict some behavior because of the force of gravity, but we don't observe that behavior and infer that dark matter is to blame. Thus, most predictions for dark matter come from locations in the universe where there are massive objects which we can observe - for instance, galaxies. Thus, the observational evidence is that dark matter is lumped in galaxies, galaxy clusters, and other conglomerations of matter. The distribution of dark matter in galaxies is also not uniform - dark matter is generally denser in the center of galaxies than at the outer edges. So also in this sense dark matter is not evenly distributed. If you mean on cosmic scales, then yes, there is not "more dark matter" in one large region of the universe than in any other. It is evenly distributed over the universe but in little globs where galaxies are. Think about sprinkling salt evenly across a table - the salt is evenly distributed but if you look closer there are regions where there is salt, and regions where there is none. So in short, dark matter and luminous matter are generally found in the same location.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/89122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Momentum conservation problem Let a plastic ball of mass $m$ collide with a steel block. After the collision the ball comes back with half its initial speed. If the steel doesn't move, how can I interpret this? Let the initial speed of the ball be $u_1$ and its mass $m_1$, the mass of the steel $m_2$, and the speed of the steel before and after the collision $0$. Then we can write, according to the conservation of momentum, $$m_1 u_1 +m_2 u_2 = m_1v_1 +m_2 v_2$$ $$m_1 u_1 = m_1v_1 $$ $$ u_1 = v_1 $$ I have assumed $u_2 = v_2 = 0$. Therefore the speed is the same, so how can the speed of the ball be halved after the collision? Is this because the collision is inelastic?
If $m_2 < \infty$ this is impossible. If $m_2 = \infty$ it is possible, because $\infty \times 0$ can be any number: in the limit $m_2 \to \infty$ the steel's recoil velocity $v_2$ goes to zero while its momentum $m_2 v_2$ stays finite, so the steel can absorb the ball's momentum change without visibly moving.
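To see why a finite $m_2$ alone cannot explain a half-speed rebound, compare with the standard 1D elastic-collision formula (a Python sketch, not in the original answer): as $m_2 \to \infty$ an *elastic* bounce returns the ball at full speed, so a half-speed return means kinetic energy was lost, i.e. the collision was inelastic.

```python
def elastic_v1(m1, u1, m2, u2=0.0):
    # Standard 1D elastic-collision result for the final velocity of body 1
    return ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)

m1, u1 = 1.0, 10.0
for m2 in [10.0, 1e3, 1e6, 1e9]:
    print(m2, elastic_v1(m1, u1, m2))  # tends to -u1 = -10.0 as m2 grows

# Rebounding at half speed therefore means the collision was NOT elastic:
# the ball keeps only (1/2)^2 = 25% of its kinetic energy.
```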
{ "language": "en", "url": "https://physics.stackexchange.com/questions/89193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Mixed quantum states and "complete knowledge of the system" I ran across this statement in a professor's notes and I think it's just a typo, but I wanted to take the opportunity to check my understanding. So in his notes it says: even if we have complete knowledge of quantum systems, they still can be in the[a] mixed state As far as I understand, a mixed state is simply a classical mixture of quantum states. If we inherit the definition of maximum information from the von neumann entropy, then we should define maximum information to correspond to an entropy of ln(1) = 0. But since the density matrix is not a rank 1 projector in a mixed state, this condition is not met. Is there some other notion of maximum information that I am not aware of, do I have some misunderstanding, or is this just a typo?
The other thing I can think of is when you are not interested in some part of your system (i.e. the environment), so you trace it out. If the environment is not separable from the rest of the system, which is usually the case, what you are left with (the reduced state) is a mixed state. Note that in this case: $$\rho_{AB}\ne \text{Tr}_B(\rho_{AB})\otimes \text{Tr}_A(\rho_{AB})$$ What might also be interesting is that the reverse of this procedure works as well. Namely, purification holds: every mixed state acting on a finite-dimensional Hilbert space can be viewed as the reduced state of some pure state.
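A concrete illustration (a Python sketch with a hypothetical two-qubit example): even with *complete* knowledge of a global pure state, the reduced state of one subsystem can be maximally mixed.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2): a pure, non-separable two-qubit state
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_AB = np.outer(psi, psi.conj())

# Trace out subsystem B: reshape to [i, j, k, l] and contract the B indices (j = l)
rho_A = rho_AB.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_A)  # identity/2: maximally mixed, although rho_AB is known completely

# Purity Tr(rho_A^2) = 1/2 < 1 confirms the reduced state is mixed
print(np.trace(rho_A @ rho_A))
```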
{ "language": "en", "url": "https://physics.stackexchange.com/questions/89348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Centrality and collision parameter b Can someone tell me what 20-30% collision centrality means in terms of the impact parameter $b$?
The most general relationship is $$c(b) = \frac{\int_0^b \frac{\mathrm{d}\sigma}{\mathrm{d}b}\mathrm{d}b}{\int_0^\infty \frac{\mathrm{d}\sigma}{\mathrm{d}b}\mathrm{d}b} = \frac{1}{\sigma_\text{inel}}\int_0^b \frac{\mathrm{d}\sigma}{\mathrm{d}b}\mathrm{d}b\tag{1}$$ (source, one of many). In practice, we usually use the Glauber model to describe heavy ion collisions, and this model predicts an impact parameter dependence of the differential cross section which can be (very roughly) approximated as $$\frac{\mathrm{d}\sigma}{\mathrm{d}b} \approx \begin{cases}2\pi b, & b \le b_\text{max} \\ 0, & b > b_\text{max}\end{cases}$$ where $\pi b_\text{max}^2 = \sigma_\text{inel}$. That reduces equation (1) to $$c(b) = \frac{\pi b^2}{\sigma_\text{inel}}$$ for $b < b_\text{max}$. You do have to be careful because sometimes (rarely) a different definition is used, $c(b) = 1 - \pi b^2/\sigma_\text{inel}$. Just pay attention to whether large centrality values correspond to peripheral (the former definition) or central (the latter) collisions. In practice, this is all somewhat approximate anyway, because you can't definitively identify the centrality of a collision from the information collected by a detector. All you can do is estimate the centrality based on how many particles come out and how strongly they are scattered. If you get a lot of particles coming out roughly perpendicular to the beamline (pseudorapidity $\eta\sim 0$), then that means a lot of nucleons were involved in the collision, and thus it is characterized as central. If there are few particles coming out perpendicular to the beamline, then few nucleons were scattered, meaning the collision was peripheral.
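Putting numbers into $c(b) = \pi b^2/\sigma_\text{inel}$, a centrality class $[c_1, c_2]$ maps to $b \in [b_\text{max}\sqrt{c_1},\, b_\text{max}\sqrt{c_2}]$. A Python sketch (the $\sigma_\text{inel}$ value below is an illustrative Pb-Pb figure of roughly $7.7$ b, an assumption rather than something from the answer above):

```python
import numpy as np

# Hard-sphere approximation: c(b) = pi*b^2 / sigma_inel, so b(c) = b_max*sqrt(c)
sigma_inel_fm2 = 767.0            # illustrative: ~7.7 barn for Pb-Pb at the LHC
b_max = np.sqrt(sigma_inel_fm2 / np.pi)

def b_of_c(c):
    return b_max * np.sqrt(c)

print(b_of_c(0.20), b_of_c(0.30))  # ~7.0 fm to ~8.6 fm for the 20-30% class
```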
{ "language": "en", "url": "https://physics.stackexchange.com/questions/89582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can one see that the Hydrogen atom has $SO(4)$ symmetry? * *For solving hydrogen atom energy level by $SO(4)$ symmetry, where does the symmetry come from? *How can one see it directly from the Hamiltonian?
I wanted to complement the answers above. For (1): $so(4) = so(3) \oplus so(3)$; one $so(3)$ comes from the geometric 3D rotational symmetry of the Hamiltonian, and the other $so(3)$ comes from the $k/r$ form of the potential. For (2): the second $so(3)$ is a dynamical symmetry and only holds when the potential is inversely proportional to $r$. One has to do the calculation to find it.
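Concretely, the generators of the hidden, dynamical $so(3)$ come from the Laplace-Runge-Lenz vector; the standard textbook construction (not spelled out above) is:

```latex
% Conserved generators of the hidden symmetry for H = p^2/2m - k/r:
\vec{L} = \vec{r} \times \vec{p}, \qquad
\vec{A} = \frac{1}{2m}\left(\vec{p}\times\vec{L} - \vec{L}\times\vec{p}\right) - k\,\frac{\vec{r}}{r}

% On the bound-state subspace (E < 0), rescale A:
\vec{A}' = \sqrt{\frac{m}{-2E}}\;\vec{A}

% Then L and A' close into so(4), which splits as so(3) \oplus so(3):
[L_i, L_j] = i\hbar\,\epsilon_{ijk} L_k, \qquad
[L_i, A'_j] = i\hbar\,\epsilon_{ijk} A'_k, \qquad
[A'_i, A'_j] = i\hbar\,\epsilon_{ijk} L_k
```

The combinations $\frac{1}{2}(\vec{L} \pm \vec{A}')$ then generate the two commuting $so(3)$'s, and $\vec{A}$ is only conserved because the potential is exactly $\propto 1/r$, which is the sense in which the symmetry is "dynamical".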
{ "language": "en", "url": "https://physics.stackexchange.com/questions/89654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 3, "answer_id": 1 }
How would I explain Ohm's Law in terms of Electrical Fields and Force? In terms of current, resistance, and voltage, it's easy: Ohm's Law is the relationship between current, voltage, and resistance of a circuit. Boom, simple as that. How could I put this in terms of $E$ and $F$? I can sort of see a way to do it by relating the formulas $E=F/q$ and $I=q/t$ to Ohm's Law, $V=IR$, but I'm not entirely sure how I could explain this in words.
Imagine that at the face of the resistor $N$ electrons, each with charge $q$, are collected and move along it with a constant average drift velocity $v$. So in time $dt$ a charge $dQ = Nqv\,dt$ moves past any point; that is, the current is $I = \frac{dQ}{dt} = Nqv$. Start with Peltio's first equation above, assuming steady state ($v' = 0$, a reasonable approximation), so that $v = qE/(m\gamma)$, where $m$ is the mass of the electron and $\gamma$ the collision rate. So $I$ and $v$ are shown to be proportional. Now we finish the argument by showing that the voltage $V$ across the resistor and $v$ are also proportional, as follows. The force on a single electron is $F = qE$, and the work $W$ to move it along a distance $s$, the length of the resistor, is $W = Fs = qEs = m\gamma v s$ (using $qE = m\gamma v$ from above). So the voltage is $$V = \frac{W}{q} = \frac{m\gamma v s}{q} = \frac{m\gamma s}{Nq^2}\,I,$$ where we substituted $v = I/(Nq)$. Finally, $$V = I\left[\frac{m\gamma s}{Nq^2}\right] = IR,$$ where $R$ is the bracketed expression.
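This is Drude-model physics: the same argument per unit volume gives a resistivity $\rho = m\gamma/(ne^2)$ with $n$ the conduction-electron density. A quick sanity check (Python sketch; the copper numbers are standard textbook values, not from the answer above) inverts this for the collision rate $\gamma$:

```python
# Drude picture: rho = m*gamma / (n e^2). Invert for copper's measured
# room-temperature resistivity to get the electron collision rate gamma.
m_e = 9.109e-31      # electron mass, kg
e = 1.602e-19        # elementary charge, C
n = 8.5e28           # conduction electrons per m^3 in copper
rho = 1.68e-8        # resistivity of copper, ohm*m

gamma = rho * n * e**2 / m_e
print(gamma, 1 / gamma)  # ~4e13 collisions per second, ~2.5e-14 s between collisions
```

The ~$10^{-14}\,\text{s}$ relaxation time that comes out is the usual order of magnitude quoted for metals, which is a reassuring check on the derivation.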
{ "language": "en", "url": "https://physics.stackexchange.com/questions/89706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
Why do solar cells have a window layer on top of the absorber layer and not below it? In solar cells there is a p-n junction. P-type semiconductor (for example CdTe) is often absorber layer because of its carrier lifetime and mobilities. In case of CdS/CdTe,* CdS is n-type window layer and everywhere it is said that it should be very thin and has large band gap – not to absorb any light and let it go through to the p-type absorber (that is why it is called a window layer). But why should it be on top of the absorber layer and not below it? If n-type layer is below, the light can hit the p-type absorber directly. I have some ideas that it is related to the distance between the place of absorption and p-n junction, but I am not sure. Image by Alfred Hicks/NREL (source). *A similar design is used in CIGS, CZTS and other thin film solar cell designs; this question applies to all of them - solar cells with a p-type absorber and an n-type window layer
I can't speak directly to this design, but can offer two general reasons for an overlayer. First, it may be necessary to protect or passivate the junction material. Second, a layer of appropriate thickness and index of refraction will reduce the overall reflectance, thus improving the collection efficiency of the device (solar cells are essentially a specialized type of solid-state detector, same as in a digital camera).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/89763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 2 }
If time isn't continuous, what is the best-known upper bound on the length of time intervals? There have been several questions about whether time is continuous or not and it seems like the answer isn't currently known. I know quantum mechanics treats time as continuous and any mathematics that involves integrating over some time interval treats time as continuous too. Surely though there are experiments that are quite sensitive to discrete time with large intervals. It seems the shortest laser pulse so far is only 67 attoseconds ($67 \times 10^{-18}\: \mathrm{s}$) but wouldn't this experiment actually constrain time intervals to much less than that? Are short laser pulses even a good experiment to determine if time is discrete or not? So, assuming time isn't continuous, what is the best-known upper-bound on time intervals? Also, which experiments have done the best to constrain how non-continuous time could be?
The lifetimes of the W and Z bosons and the top quark are each on the order of $10 ^ {-25}\,\rm{s}$. The Z-boson lifetime is $2.64 \times 10^{-25}\,\rm{s}$, from a decay width of $2.495 \pm 0.0023 \, \rm{GeV}$. The decay width of the W-boson is $2.085 \pm 0.042\, \rm{GeV}$. If time came in intervals larger than this order of magnitude ($10 ^ {-25}\,\rm{s}$), I would expect a narrowing of the line width (i.e. an apparently longer lifetime) and maybe a distortion of the line shape.
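The lifetimes follow from the energy-time uncertainty relation $\tau = \hbar/\Gamma$; a quick check of the quoted numbers (Python, with $\hbar = 6.582\times 10^{-25}\,\text{GeV}\cdot\text{s}$):

```python
hbar_GeV_s = 6.582e-25  # reduced Planck constant in GeV*s

def lifetime(width_GeV):
    # A resonance's mean lifetime from its decay width: tau = hbar / Gamma
    return hbar_GeV_s / width_GeV

print(lifetime(2.495))  # Z boson: ~2.64e-25 s
print(lifetime(2.085))  # W boson: ~3.16e-25 s
```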
{ "language": "en", "url": "https://physics.stackexchange.com/questions/89975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 1, "answer_id": 0 }
What is the symmetry associated with the local particle number conservation law for fluid? According to Noether's theorem, every continuous symmetry (of the action) yields a conservation law. In fluid, there is a local particle number conservation law, which is $$\partial{\rho}/\partial{t}+\nabla \cdot \vec{j} ~=~0,$$ where $\rho$ and $\vec{j}$ is the density and current respectively. I just wonder is there any symmetry associated with this conservation law?
Noether's theorem in its usual form assumes that the system (in this case a fluid) is governed by an action principle. We assume for simplicity that the fluid consists of just one type of fluid particles. I) In the Lagrangian fluid picture, the (local) conservation of fluid particles is manifest from the onset, since the dynamical variables are the labels ${\bf a}$ of the fluid particles. We will assume that the labels are chosen such that the mass density in label ${\bf a}$-space (as opposed to position ${\bf r}$-space) is a constant. Then particle conservation is the same as mass conservation $$\tag{1} \frac{D\rho }{Dt} +\rho {\bf \nabla} \cdot {\bf u} ~\equiv~ \frac{\partial \rho }{\partial t} + {\bf \nabla} \cdot (\rho {\bf u})~=~ 0.$$ II) In the Eulerian fluid picture, the mass density $\rho$ is a dynamical field. The mass conservation (1) is imposed by the Euler-Lagrange equation for the unpaired variable $\phi$ in the Clebsch velocity potential $$\tag{2} {\bf u}~=~{\bf \nabla}\phi +\ldots. $$ The corresponding global symmetry is $\phi \to \phi+ \text{const}.$ References: * *R. Salmon, Hamiltonian Fluid Mechanics, Ann. Rev Fluid. Mech. (1988) 225. The pdf file can be downloaded from the author's webpage.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/90119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Would a black hole created on the surface burrow through the crust? If scientists created a microscopic black hole with an initial mass of one ton on the surface of the earth, would the gravitational attraction to the center be enough for it to "burrow" until it eats its way through the crust? It seems like there would be a bad outcome. How dense would rock have to be to form a barrier?
As dmckee said in his comment, the black hole would fall towards the center of the Earth. To specifically answer this portion of your question: How dense would rock have to be to form a barrier? There is absolutely no density of rock or anything else that would stop or even slow it down. Even if you created this microscopic black hole on the surface of a neutron star, it would fall right through to the center. It would oscillate back and forth through the center of the Earth, almost certainly evaporating faster than it consumed material. The evaporation rate would eventually be high enough that the Hawking radiation coming off of it would be extremely damaging to surrounding material. There is a lot of energy in a one ton microscopic black hole! I wonder though... There may be a density of material high enough (like a neutron star) that the microscopic black hole would consume material at a fast enough rate that instead of evaporating, it would grow and completely consume the surrounding planet / neutron star. I would be interested in seeing a calculation that tries to answer that!
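For a sense of scale, the standard Hawking evaporation estimate $t \approx 5120\pi G^2 M^3/(\hbar c^4)$ (a textbook formula not quoted in the answer, ignoring accretion and extra particle species) makes the "lot of energy" point vivid (Python sketch):

```python
from math import pi

# Hawking evaporation time of a Schwarzschild black hole (no accretion):
# t = 5120*pi*G^2*M^3 / (hbar*c^4)
G = 6.674e-11       # m^3 kg^-1 s^-2
hbar = 1.0546e-34   # J*s
c = 2.998e8         # m/s

M = 1000.0          # the one-ton black hole from the question, kg
t_evap = 5120 * pi * G**2 * M**3 / (hbar * c**4)
E = M * c**2        # total rest energy released as radiation

print(t_evap)  # ~8.4e-8 s: it evaporates essentially instantly
print(E)       # ~9e19 J, roughly twenty gigatons of TNT
```

On this estimate a one-ton hole never gets a chance to fall anywhere: it radiates its entire rest energy in under a microsecond, which is why the interesting question is whether some much denser environment could feed it faster than it evaporates.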
{ "language": "en", "url": "https://physics.stackexchange.com/questions/90175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Resonance Raman spectroscopy vs fluorescence In resonance Raman spectroscopy we often want to avoid fluorescence. But what is the problem with fluorescence? What we want is a shift between the exciting line and the emitted radiation, and both effects can show the vibrational energy difference. Also, from "What is the difference between Raman scattering and fluorescence?" I have learned that the difference between Raman and fluorescence lies in the lifetime of the molecule in the excited state; but how does a molecule "know" that it is being studied by Raman spectroscopy, so that its lifetime in the excited state is short, rather than by fluorescence?
Let's make one thing clear first: for Raman scattering there is no excited state at all; the light just bounces off a molecule. If the photon has the right energy, it can bring the molecule to an excited state. Different things can happen to a molecule in this state: in most cases the energy will be dissipated through collisions, but in a rare case the molecule will fall back and emit a photon, i.e. it will fluoresce. The energy of the emitted photon depends on the configuration of the surroundings of the molecule, so different molecules will emit photons with somewhat different energies and the lines in the emission spectrum will broaden. Raman scattering does not involve an intermediate state, so it doesn't produce the side effects associated with one.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/90273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
an Abelian complex statistical phase from exchanging non-Abelian anyons? We have some discussions in Phys.SE about the braiding statistics of anyons from a non-Abelian Chern-Simons theory, or non-Abelian anyons in general. May I ask: under what (physical or mathematical) conditions does the full wave function of the system acquire only a complex phase, i.e. only $\exp[i\theta]$ (instead of a braiding matrix), when we exchange non-Abelian anyons in 2+1D, or fully wind a non-Abelian anyon around another set of non-Abelian anyons of the system? Your answer on the conditions can be freely formulated in either physical or mathematical statements. This may be a pretty silly question, but I wonder whether these conditions have any significant meaning... Could this be anyon-basis dependent or anyon-basis independent? Or is there a subset, subgroup or sub-category concept inside the full set of anyons implied by the conditions?
If you put a non-Abelian anyon and its anti particle on a sphere, then moving the non-Abelian anyon around its anti particle only induces an Abelian phase. Also, twisting a non-Abelian anyon by 360$^\circ$ only induces an Abelian phase as well, which define the (fractional) spin of the non-Abelian anyon.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/90326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why Inox Steel doesn't interact with magnets? My dad has a HUGE magnet in his workshop. I love magnets, and when I saw it, I asked him what it was for. His reply was: "I don't know why, but inox steel bolts don't get attracted to it, so I use it to identify them." That made me curious: why doesn't a magnet attract inox steel bolts? Steel, even an inox variation, is still mostly iron, no?
There are various crystal forms that iron and steel can adopt, the common ones being ferritic, martensitic and austenitic. The ferritic and martensitic forms are ferromagnetic (or just magnetic in everyday terms) while the austenitic form is not. So it isn't simply that iron is magnetic and steel isn't; it is specifically austenitic iron and steel that isn't magnetic. However, things may not be as simple as this, since lumps of steel may well contain grains of more than one crystal type, so they may be partially magnetic. Now you're going to ask me why the austenitic crystal form isn't magnetic, and I don't think there is a good answer for that. Ferromagnetism is actually quite a subtle effect dependent on exactly how the electrons in the iron atoms interact. It's due to a quantum mechanical phenomenon called the exchange interaction. In the ferritic and martensitic crystals this effect is large enough to make the spins line up and generate macroscopically ordered domains. In the austenitic crystal it isn't. I don't know of an intuitive way to explain why.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/90400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
What kind of energy does superfluidity use? Liquid helium (and other similar fluids) can "climb up" the walls of their containers. Who does the work in this case, and what kind of energy does it use? I'm sure we can't make a perpetuum mobile out of this, so I guess some kind of energy must somehow be expended to make the fluid "climb up" the wall.
Courtesy of the book Carl found we have an answer! Consider the element of the liquid helium at a height $h$ above the fluid surface and distance $y$ from the wall. To raise that element above the fluid surface costs an energy $mgh$, but because there is a Van der Waals attraction between the helium atoms and the wall you get back an energy $E_{VdW}$. Dzyaloshinskii et al give the energy change per unit mass as: $$ \Delta E = gh - \frac{\alpha}{y^n} $$ where $\alpha$ is a constant giving the strength of the Van der Waals attraction and $n$ is in the range 3 - 4 depending on the film thickness. So it is energetically favourable to lift the fluid up the wall if the Van der Waals attraction outweighs the gravitational potential energy, making $\Delta E$ negative. Since $y$ can be taken arbitrarily small (well, at least down to a few times the He atom size) $\Delta E$ will be negative for all heights $h$ and the film covers the whole wall. The resulting equation for the film thickness $d$ as a function of height is given (without derivation) as: $$ d \approx \left( \frac{\alpha}{gh} \right)^{1/n} $$ Since the liquid film will have a non-zero thickness at the top of the container wall it can flow over the wall and then down the outside. Even though the film thicknesses work out to be only a few tens of nanometres the zero viscosity of the superfluid helium allows an appreciable flow rate. Indeed, later in the book flow velocities of 30 cm/s are mentioned. In principle this would apply to all fluids, however for normal fluids the flow rate in a film a few tens of nanometres thick would be infinitesimally small so the climbing is never observed. A few comments of my own: I note that this derivation ignores the interfacial tensions of the helium/air, helium/wall and air/wall interfaces. I have no figures for what these would be for superfluid helium and possibly they are negligible. The predictions of the Dzyaloshinskii theory are claimed to agree well with experiment.
Also you should note that one of the references provided by Carl challenges the above explanation, though without coming to any firm conclusions.
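A rough evaluation of the film-thickness formula above (Python; the value of $\alpha$ below is an *assumed* illustrative number, chosen only to land in the tens-of-nanometres range mentioned in the answer — the real constant depends on the wall material):

```python
# Film thickness d = (alpha/(g*h))^(1/n) from the Dzyaloshinskii result above.
# alpha is an ILLUSTRATIVE value, not a measured constant.
g = 9.81          # m/s^2
alpha = 2.6e-24   # m^5/s^2, assumed for illustration
n = 3             # low end of the quoted 3-4 range

def thickness(h):
    return (alpha / (g * h)) ** (1 / n)

for h in [0.01, 0.1, 1.0]:   # heights in metres
    print(h, thickness(h))   # ~30 nm at 1 cm, thinning slowly with height
```

Note how weakly $d$ depends on $h$ (only as $h^{-1/3}$ here): the film thins by less than a factor of five while the height grows a hundredfold, which is why it can coat the whole wall.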
{ "language": "en", "url": "https://physics.stackexchange.com/questions/90475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 0 }
Quantum mechanics problem? I had a test on Quantum mechanics a few days ago, and there was a problem which I had no clue how to solve. Could you please explain me? The problem is: Let's look at the $\hat H=E_0[|1 \rangle \langle 2| + |2 \rangle \langle1|]$ two-state quantum system, where $E_0$ is a constant, and $\langle i|j \rangle=\delta_{ij}$ $(i,j=1,2)$. \begin{equation} \hat O= \Omega_0 [3 |1 \rangle \langle1|- |2 \rangle \langle2|] \end{equation} is an observable quantity, and its expectation value at $t=0$ is: $\langle \hat O \rangle =-\Omega_0$, where $\Omega_o$ is a constant. What is the $|\psi(0) \rangle$ state of the system at $t=0$, and what is the minimum $t>0$ time, that is needed for the system to be in the state: $|\psi(t) \rangle =|1 \rangle$? I never came across a problem like this, I tried to construct the time evolution operator, $\hat U$, but I couldn't, and I have no idea how to start.
Part 1 The state vector can be written in terms of the two states at time $t$ as $$ \left|\psi\left(t\right)\right> = c_1\left(t\right) \left|1\right> + c_2\left(t\right) \left|2\right> $$ and at time $t=0$ as $$ \left|\psi\left(0\right)\right> = c_1\left(0\right) \left|1\right> + c_2\left(0\right) \left|2\right>. $$ We know $$ \begin{align} -\Omega_0 = \left<\hat{O}\right> &= \left<\psi\left(0\right)\right| \hat{O} \left|\psi\left(0\right)\right> \\ &= \Omega_0 \left(c^*_1\left(0\right) \left<1\right| + c^*_2\left(0\right) \left<2\right|\right) \left(3 \left|1\right>\left<1\right|-\left|2\right>\left<2\right|\right) \left(c_1\left(0\right) \left|1\right> + c_2\left(0\right) \left|2\right> \right) \\ &= \Omega_0 \left(c^*_1\left(0\right) \left<1\right| + c^*_2\left(0\right) \left<2\right|\right)\left(3c_1\left(0\right) \left|1\right> - c_2\left(0\right) \left|2\right> \right) \\ &= \Omega_0 \left(3 \left|c_1\left(0\right)\right|^2 - \left|c_2\left(0\right)\right|^2 \right), \end{align} $$ so $$ 3 \left|c_1\left(0\right)\right|^2 - \left|c_2\left(0\right)\right|^2 = -1. $$ Since the state vector must be normalized, $$ \left|c_1\left(0\right)\right|^2 + \left|c_2\left(0\right)\right|^2 = 1. $$ You can finish this part. Part 2 The Schrödinger equation tells us $$ i \hbar \frac{d}{dt} \left|\psi\left(t\right)\right> = \hat{H} \left|\psi\left(t\right)\right>, $$ or $$ i \hbar \left({\dot{c}}_1\left(t\right) \left|1\right> + {\dot{c}}_2\left(t\right) \left|2\right>\right) = E_0 \left(\left|1\right>\left<2\right| + \left|2\right>\left<1\right|\right) \left(c_1\left(t\right) \left|1\right> + c_2\left(t\right) \left|2\right>\right). $$ I'll let you take it from here.
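The finish left to the reader can be checked numerically (a Python sketch in units $\hbar = E_0 = 1$, my choice for the numerics): Part 1 gives $4|c_1(0)|^2 = 0$, so up to a phase $|\psi(0)\rangle = |2\rangle$, and Part 2 gives the first arrival at $|1\rangle$ at $t = \pi\hbar/(2E_0)$.

```python
import numpy as np

# Basis {|1>, |2>}; work in units hbar = E0 = 1.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # H = E0(|1><2| + |2><1|)
O = np.array([[3.0, 0.0],
              [0.0, -1.0]])  # O / Omega_0

# Part 1: <O> = -Omega_0 plus normalization forces |c1|^2 = 0, so |psi(0)> = |2>
psi0 = np.array([0.0, 1.0])
assert np.isclose(psi0.conj() @ O @ psi0, -1.0)

# Part 2: evolve with U(t) = exp(-iHt) via the eigen-decomposition of H
w, V = np.linalg.eigh(H)
def psi(t):
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))

t_min = np.pi / 2                 # i.e. t = pi*hbar/(2*E0)
print(abs(psi(t_min)[0]))         # 1: the system is entirely in |1> (up to a phase)
```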
{ "language": "en", "url": "https://physics.stackexchange.com/questions/90624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Gravity on the International Space Station - General Relativity perspective My question is an extension to this one: Gravity on the International Space Station. If all the outside views of the ISS were sealed, then the crew inside would not be able to tell whether they were in orbit around the earth at orbital speed or free floating in space beyond the orbit of Neptune, right? How would time dilation due to gravitational fields be affected? Supposing you have three atomic clocks: 1 - One on the surface of the Earth, at sea level, 2 - One in the ISS, 3 - One in deep space beyond the orbit of Neptune. At what rate would each clock run compared to the other two?
Not only the position in the gravitational field is important, but also the velocity. Consider the Schwarzschild metric $$ \text{d}\tau^2 = \left(1 - \frac{2GM}{rc^2}\right)\text{d}t^2 - \frac{1}{c^2}\left(1 - \frac{2GM}{rc^2}\right)^{-1}\left(\text{d}x^2 + \text{d}y^2 +\text{d}z^2\right), $$ where $\text{d}\tau$ is the time measured by a moving clock at radius $r$, and $\text{d}t$ is the coordinate time measured by a hypothetical stationary clock infinitely far from the gravitational field. We get $$ \frac{\text{d}\tau}{\text{d}t} = \sqrt{ \left(1 - \frac{2GM}{rc^2}\right) - \left(1 - \frac{2GM}{rc^2}\right)^{-1}\frac{v^2}{c^2}}, $$ with $$v = \sqrt{\frac{\text{d}x^2}{\text{d}t^2} + \frac{\text{d}y^2}{\text{d}t^2} + \frac{\text{d}z^2}{\text{d}t^2}}$$ the orbital speed of the clock in the gravitational field (assuming a circular orbit, so that $r$ remains constant). For Earth, $GM=398600\;\text{km}^3/\text{s}^2$ (see wiki). Let us first calculate the time dilation experienced by someone standing on the equator. We have $r_\text{eq}=6371\,\text{km}$ and an orbital speed (due to the Earth's rotation) of $v_\text{eq}=0.465\,\text{km/s}$. Plugging in the numbers, we find $$ \frac{\text{d}\tau_\text{eq}}{\text{d}t} = \sqrt{ \left(1 - \frac{2GM}{r_\text{eq}\,c^2}\right) - \left(1 - \frac{2GM}{r_\text{eq}\,c^2}\right)^{-1}\frac{v_\text{eq}^2}{c^2}} = 0.99999999930267, $$ so 1 second outside Earth's gravity corresponds with 0.99999999930267 seconds on the equator. The ISS orbits the Earth at an altitude of $410\,\text{km}$, so that $r_\text{ISS}=6781\,\text{km}$, and it orbits the Earth with a speed of $v_\text{ISS}=7.7\,\text{km/s}$, and we get $$ \frac{\text{d}\tau_\text{ISS}}{\text{d}t} = \sqrt{ \left(1 - \frac{2GM}{r_\text{ISS}\,c^2}\right) - \left(1 - \frac{2GM}{r_\text{ISS}\,c^2}\right)^{-1}\frac{v_\text{ISS}^2}{c^2}} = 0.999999999016118. 
$$ The relative time dilation between someone on the equator and someone in the ISS is thus $$ \frac{\text{d}\tau_\text{eq}}{\text{d}\tau_\text{ISS}} = \frac{0.99999999930267}{0.999999999016118} = 1.00000000028655, $$ so 1 second in the ISS corresponds with 1.00000000028655 seconds on Earth. In other words, ISS astronauts age slightly less than people on Earth.
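These numbers are easy to reproduce; a small sketch using the constants quoted above:

```python
import math

GM = 398600.0      # km^3/s^2, as quoted above
c = 299792.458     # km/s

def clock_rate(r_km, v_kms):
    """d(tau)/dt from the expression above, for a circular orbit of radius r and speed v."""
    f = 1 - 2 * GM / (r_km * c**2)
    return math.sqrt(f - v_kms**2 / (f * c**2))

eq = clock_rate(6371.0, 0.465)    # clock on the equator
iss = clock_rate(6781.0, 7.7)     # clock on the ISS

assert abs(eq - 0.99999999930267) < 1e-12
assert abs(iss - 0.999999999016118) < 1e-12
assert abs(eq / iss - 1.00000000028655) < 1e-12
```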
{ "language": "en", "url": "https://physics.stackexchange.com/questions/90745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Estimating the number of atoms/nuclei in a single quantum dot I often read in introductions about quantum dots that depending on the fabrication method, a single dot contains about 100 - 100000 atoms. Assuming a self-assembled dot of lens or pyramid (cone) shape, I did some rough estimation by considering the volume (or the area) of a dot divided by the volume (or area of a cell) of a single atom, taken to be $10^{-3}\,m^{-3}$. The estimation falls within the range of $100-100000$, which is not surprising given how wide this range is. Is there a typical and rigorous method that people use to estimate the number of atoms in a single dot?
The most accurate way to assess this number is by counting the relevant lattice sites with TEM; (from Chamard et al. Phys. Rev. B 69 (2004) 125327.) But it is ambiguous which sites at the edge of the dot to include, so the number of atoms is not well defined, at least in these self assembled dots. One could perhaps ask the question - is the number of atoms the important thing? The key thing that makes a quantum dot is the existence of a strong confining potential. In a gate defined quantum dot in, say, a GaAs/AlGaAs heterostructure (as located at the red dots in the SEM below), the confining potential is (very roughly) parabolic, so where is the edge of the dot? (From LMU-Munich)
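As a concrete version of the volume estimate the question describes, here is a sketch; the dot geometry (a spherical-cap "lens", 10 nm base radius, 5 nm height) and the InAs zincblende numbers (lattice constant about 0.6058 nm, 8 atoms per conventional cell) are illustrative assumptions of mine, not values from the answer:

```python
import math

# Illustrative, assumed numbers for a lens-shaped (spherical-cap) InAs dot:
base_radius_nm = 10.0
height_nm = 5.0
a_nm = 0.6058        # InAs lattice constant
atoms_per_cell = 8   # zincblende conventional cell

# Spherical-cap volume: V = (pi h / 6)(3 a^2 + h^2)
cap_volume = math.pi * height_nm / 6 * (3 * base_radius_nm**2 + height_nm**2)  # nm^3
atom_density = atoms_per_cell / a_nm**3                                        # atoms / nm^3
n_atoms = cap_volume * atom_density

assert 1e4 < n_atoms < 1e5   # lands in the often-quoted 100-100000 range
```

As the answer stresses, the result depends on which edge sites one counts, so any such number is order-of-magnitude only.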
{ "language": "en", "url": "https://physics.stackexchange.com/questions/90842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Photons traveling backwards in time? Imagine that two widely separated charged particles $A$ and $B$ exchange a photon. Because they are far apart one can imagine that there is a major contribution to the photon propagator that travels at the speed of light from $A$ at a time $T_0$ to $B$ at a time $T_1$ where $T_1 > T_0$. But in that case is there also a major contribution to the photon propagator that travels backwards in time at the speed of light from $B$ at time $T_1$ to $A$ at time $T_0$? The forwards-in-time photon imparts momentum to particle $B$ whereas the backwards-in-time photon imparts a reaction momentum back to particle $A$.
It's impossible for a photon to travel backwards in time, since it keeps disappearing: it keeps giving up its energy, either part of it or all of it, to other particles like electrons, which means it won't have enough energy to warp spacetime, or even enough energy to create a closed timelike curve and travel backwards in time. Here's how it works: when a photon is absorbed by an electron, it is completely destroyed. All its energy is imparted to the electron, which instantly jumps to a new energy level. The photon itself ceases to be.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/90953", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
The origin of contact noise? I was trying to measure the noise of a device with metal probes. I was not sure whether I should trust the results because I was told contact noise might contribute to some degree. I am a little confused about the notion of "contact noise". Is it because of the contact resistance (every resistor is a noise source)? Or is it something related to other factors such as probing materials or surface? Could anyone make a brief explanation? I am eager to know the origin of this "contact noise", and how I can evaluate such noise.
The noise observed at a contact is known as chattering. A stronger mechanical contact should impart a steadier electrical contact resistance (ECR) [1]. However, the structure and cleanliness of the surface should also be considered in the design of this setup, to minimise the presence of varying passivating layers. It should be noted that time plays a further role in stabilizing contact resistance.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/91015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
The geodesic line on Poincare half plane I was calculating the geodesic lines on Poincare half plane but I found I somehow missed a parameter. It would be really helpful if someone could help me find out where my mistake is. My calculation is the following: Let $ds^2=\frac{a^2}{y^2}(dx^2+dy^2)$, then we could calculate the nonvanishing Christoffel symbols which are $\Gamma^x_{xy}=\Gamma^x_{yx}=-\frac{1}{y}, \Gamma^y_{xx}=\frac{1}{y}, \Gamma^y_{yy}=-\frac{1}{y}$. From these and geodesic equations, we have $$\ddot{x}-y^{-1}\dot{x}\dot{y}=0$$ $$\ddot{y}+y^{-1}\dot{x}^2=0$$ $$\ddot{y}-y^{-1}\dot{y}^2=0$$ From the last equation, it's straightforward that $y=Ce^{\omega\lambda}$, where $C$ and $\lambda$ are integral constants. Then substitute the derivative of $y$ into the first equation, we have, $$\ddot{x}-\omega\dot{x}=0$$ Therefore we have $x=De^{\omega\lambda}+x_0$ where $D, x_0$ are integral constants. However, by the second equation, we have, assuming $C$ is nonzero, $$C^2+D^2=0$$ And this leads to a weird result which is $$(x-x_0)^2+y^2=0$$ But the actual result should be $(x-x_0)^2+y^2=l^2$, where $l$ is another constant.
You say $C,\lambda$ are constants of integration, but that gives $\ddot{x}-\lambda\dot{x} = 0$ instead. Since your followup would be inconsistent, I will assume you meant $C,\omega$ are constants of integration. You should not have three components to the geodesic equation, but rather two: $$\ddot{y} + \Gamma^y_{xx}\dot{x}\dot{x} + \Gamma^y_{xy}\dot{x}\dot{y} + \Gamma^y_{yx}\dot{y}\dot{x} +\Gamma^y_{yy}\dot{y}\dot{y} = 0\text{.}$$ You are also missing a factor of $2$ for your $\ddot{x}$ equation. I will give you a further hint to say that since the $x$-coordinate is cyclic, $\dot{x} = Ey^2$ for some constant $E$. If you're not familiar with Killing vector fields, you can see this from the Euler-Lagrange equations on $L = \frac{1}{2}g(u,u)$, where $u^\mu = (\dot{x},\dot{y})$, which is also a nice way to find geodesics.
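The Christoffel symbols quoted in the question can be verified symbolically; a small sympy sketch (the index convention 0 = x, 1 = y is my choice):

```python
import sympy as sp

x, y, a = sp.symbols('x y a', positive=True)
coords = [x, y]
g = sp.diag(a**2 / y**2, a**2 / y**2)   # ds^2 = a^2/y^2 (dx^2 + dy^2)
ginv = g.inv()

def christoffel(l, m, n):
    # Gamma^l_{mn} = 1/2 g^{lk} (d_n g_{km} + d_m g_{kn} - d_k g_{mn})
    return sp.simplify(sp.Rational(1, 2) * sum(
        ginv[l, k] * (sp.diff(g[k, m], coords[n])
                      + sp.diff(g[k, n], coords[m])
                      - sp.diff(g[m, n], coords[k]))
        for k in range(2)))

# index 0 = x, 1 = y
assert sp.simplify(christoffel(0, 0, 1) + 1/y) == 0   # Gamma^x_{xy} = -1/y
assert sp.simplify(christoffel(1, 0, 0) - 1/y) == 0   # Gamma^y_{xx} = +1/y
assert sp.simplify(christoffel(1, 1, 1) + 1/y) == 0   # Gamma^y_{yy} = -1/y
```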
{ "language": "en", "url": "https://physics.stackexchange.com/questions/91113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 0 }
Festive physics: gold flake vodka I have a bottle of vodka that has a load of gold flakes suspended in it. It has been sat still for over 24 hours and the flakes are all still suspended within the liquid: they have not risen to the surface or sunk to the bottom. Any ideas as to the physics behind this?
I'd imagine the viscosity of the vodka is pretty high, and this is why the gold flakes are not rising or sinking within the bottle. Moreover, the viscosity of vodka has no absolute numerical value as brands vary, but it's pretty high. Here's a useful link on the viscosity of alcohols.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/91243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 0 }
Is the spin 1/2 rotation matrix taken to be counterclockwise? The spin 1/2 rotation matrix around the $z$-axis I worked out to be $$ e^{i\theta S_z}=\begin{pmatrix} \exp\frac{i\theta}{2}&0\\ 0&\exp\frac{-i\theta}{2}\\ \end{pmatrix} $$ Is this taken to be anti-clockwise around the $z$-axis?
The three generators of right-handed spinor rotations are given by $\left\{- i\sigma_x,-i\sigma_y,-i\sigma_z\right\}$, see for instance Peskin & Schroeder page 44, and the rotation matrix for a spinor rotation over an angle $\phi$ around a unit vector $\hat{s}$ is given by: $R~=~ \exp\left(-i\frac{\phi}{2}~\hat{s}\cdot\vec{\sigma}\right) ~=~ I\cos\frac{\phi}{2}+\left(-i\,\hat{s}\cdot\vec{\sigma}\right)\sin\frac{\phi}{2}$ where $\vec{\sigma}=\{\sigma_x,\sigma_y,\sigma_z\}$ and $I$ is the unit matrix, which is the same as $\sigma_0$. We can explicitly write the generators of (right-handed) rotation as follows, starting from the definition of the Pauli matrices: \begin{align} \sigma_x = \begin{pmatrix} 0&1\\ 1&0 \end{pmatrix} && \sigma_y = \begin{pmatrix} 0&-i\\ i&0 \end{pmatrix} && \sigma_z = \begin{pmatrix} 1&0\\ 0&-1 \end{pmatrix} \end{align} \begin{align} j_x = \begin{pmatrix} 0&-i\\ -i&0 \end{pmatrix} && j_y = \begin{pmatrix} 0&-1\\ 1&0 \end{pmatrix} && j_z = \begin{pmatrix} -i&0\\ 0&i \end{pmatrix} \end{align} The specific rotation matrix as given in the question above is a left-handed rotation, since the right-handed rotation matrix is defined by: $R~=~ \exp\left(\frac{\phi}{2} j_z\right) ~=~ I\cos\frac{\phi}{2}~+~j_z\sin\frac{\phi}{2} ~=~ \begin{pmatrix} \exp\left(-i\frac{\phi}{2}\right)&0\\ 0&\exp\left(i\frac{\phi}{2}\right) \end{pmatrix}$ Counter-clockwise is right-handed if the rotation axis points towards you, but it is left-handed if the rotation axis points away from you. It's up to your choice...
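As a quick numerical cross-check of the last expression, here is a sketch using scipy's matrix exponential (the test angle is arbitrary):

```python
import numpy as np
from scipy.linalg import expm

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
phi = 0.7  # arbitrary test angle

# R = exp(-i phi sigma_z / 2) = exp((phi/2) j_z), with j_z = -i sigma_z
R = expm(-1j * phi / 2 * sigma_z)
expected = np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])
assert np.allclose(R, expected)

# and a 2*pi rotation gives -I, as it should for spin 1/2:
assert np.allclose(expm(-1j * 2 * np.pi / 2 * sigma_z), -np.eye(2))
```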
{ "language": "en", "url": "https://physics.stackexchange.com/questions/91483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is there is a reason for Pauli's Exclusion Principle? As a starting quantum physicist I am very interested in reasons why does Pauli's Exclusion Principle works. I mean standard explanations are not quite satisfying. Of course we can say that is because of fermionic nature of electrons - but it is just the different way to say the same thing. We can say that we need to antisymmetrize the quantum wavefunction for many electrons - well, another different way to say the same. We can say that it is because spin 1/2 of electron - but the hell, fermions has by the definition half-integral spin so it doesn't explain anything. Is the Exclusion Principle something deeper, for example in Dirac's Equation, like spin of the electron? I think it would be satisfying.
I think that while these "explanations" are all dancing around the same pole, they aren't created equal. I think the meat is in the fact that nature has a local Lorentz symmetry, so we expect to be able to decompose things into representations of the group $SO(3,1)$. It's a mathematical fact that this group (or it's algebra, rather) has integer and half-integer representations. Once you have this structure, then a few meagre assumptions about causality and unitarity lead to the Spin-statistics theorem. In order to understand the proof you'll need to first dig deeper into the representations of the Lorentz group, and how they label single-particle states.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/91691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 0 }
General relativity in terms of differential forms Is there a formulation of general relativity in terms of differential forms instead of tensors with indices and sub-indices? If yes, where can I find it and what are the advantages of each method? If not, why is it not possible?
It is not sensible to write any theory - GR included - in terms of differential forms. Differential forms are just completely antisymmetric tensors. The antisymmetric tensors are just one kind of irreducible representation of the general linear group GL(m,C); the completely symmetric tensors are another irrep and so are all the irreps that are labelled by Young's diagrams. Since physical quantities are irreps of groups, it is not sensible to set up any physical theory in a restrictive mathematical arena which only supports one irrep of GL(m,C).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/91867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
Why did they used to make the mill chimneys so tall? Why did they used to make the mill chimneys so tall? This question was asked in an Engineering Interview at Cambridge University.
Two reasons - which matters more will depend on the context.

1. Making the chimney taller increases the flow through it due to the stack effect. This may be useful if you need to get rid of a lot of exhaust gases quickly, as it avoids the cost of having to pump the exhaust gases.
2. If the exhaust is environmentally unpleasant, then injecting it into the atmosphere as high as possible will reduce the chances of turbulence carrying it back down to ground level and poisoning people. It will probably also increase the dispersal rate, as the wind speed is likely to be higher well above the ground.
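The first point can be made quantitative with the usual stack-effect estimate: the available draft pressure is roughly the weight difference between the column of hot gas inside the chimney and the equivalent column of outside air, $\Delta P \approx g H (\rho_\text{out} - \rho_\text{in})$, so it grows linearly with the height $H$. A sketch with assumed, illustrative temperatures and densities:

```python
g = 9.81          # m/s^2
rho_out = 1.25    # kg/m^3, ambient air (assumed)
T_out = 283.0     # K, ambient temperature (assumed)
T_in = 500.0      # K, flue-gas temperature (assumed)

def draft_pressure(height_m):
    """Stack-effect draft: weight difference of the inside vs outside gas columns."""
    rho_in = rho_out * T_out / T_in   # ideal gas at roughly the same pressure
    return g * height_m * (rho_out - rho_in)

# Doubling the chimney height doubles the available draft pressure:
assert abs(draft_pressure(100.0) / draft_pressure(50.0) - 2.0) < 1e-12
```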
{ "language": "en", "url": "https://physics.stackexchange.com/questions/91980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
How can space and time arise from nothing? Lawrence Krauss said this on an Australian Q&A programme. "...when you apply quantum mechanics to gravity, space itself can arise from nothing as can time..." Can you elaborate on this please? It's hard to search for!
Even going back to Newton, space and time are the consequences of measurement. From the Principia's Scholium Relative time is a measure of duration by the means of motion; Relative space is a measure of the absolute spaces determined by the senses. So they came from measurement. Not from nothing.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/92037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 7, "answer_id": 6 }
Does a sound at 50dB at 1m have the same intensity of a sound of 51dB at 10m? Does a sound at 50dB at 1m have the same intensity of a sound of 51dB at 10m, and also the same intensity of a 52dB sound at 100m?
This depends on things like the shape of the pressure wave. You're probably thinking of a point source with an expanding spherical pressure wave, in which case the equations for energy per unit area as a function of radius are pretty straightforward (but keep in mind the log function involved in dB). As presented at Wikipedia, [Edit: apologies for the errors induced when copy-pasting markup. It's correct now] when sound level $L_{p1}$ is measured at a distance $r_1$, the sound level $L_{p2}$ at the distance $r_2$ is $$L_{p2} = L_{p1} + 20\log_{10}(r_1/r_2)$$ However, if for example there were an infinite plane wave, there would be no decrease in sound pressure as it propagates. PS did you test it on a molpy?
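Plugging the question's numbers into that point-source formula, a short sketch:

```python
import math

def spl_at(lp1_db, r1_m, r2_m):
    """Sound pressure level at r2, given lp1_db measured at r1 (point source, free field)."""
    return lp1_db + 20 * math.log10(r1_m / r2_m)

# A source reading 50 dB at 1 m has dropped to 30 dB by 10 m (20 dB per decade of distance),
assert abs(spl_at(50, 1, 10) - 30) < 1e-9
# so a 51 dB reading at 10 m implies a considerably louder source than 50 dB at 1 m.
```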
{ "language": "en", "url": "https://physics.stackexchange.com/questions/92088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Lax-Pair for principal chiral model This question concerns Eq. (2.10) of the paper https://arxiv.org/abs/hep-th/0305116 by Bena, Polchinski and Roiban. In section 2.1 they are showing that the infinite number of conserved quantities for the principal chiral model \begin{equation} L = \frac{1}{2\alpha_0} \mathrm{Tr}(\partial_\mu g^{-1}\partial_\mu g) \end{equation} are given by the fixed-time Wilson lines $U(\infty,t;-\infty,t)$ where \begin{equation} U(x;x_0) = \mathrm{P}\, e^{-\int_{\mathcal{C}}a} \end{equation} and $a$ is a 1-parameter family of flat connections given by Eq. (2.3). My question is what becomes of the last two terms (i.e. $-a_0a_1 +a_1a_0$) in the second line of Eq. (2.10). Do they cancel? I don't see why the should because the $a$'s are non-commuting (Lie algebra-valued).
Defining the Wilson loop without the minus sign in the exponent gives \begin{align} \partial_t U(y,t;z,t) & = \partial_t \mathrm{P} \, e^{\int_{(z,t)}^{(y,t)} dx^\mu a_\mu} \\ & = \partial_t \mathrm{P} \, e^{\int_z^y dx a_1} \\ & = \int_z^y dx \, U(x,t;z,t)\dot{a}_1(x,t)U(y,t;x,t) \\ & = \int_z^y dx \, U(x,t;z,t)[a_0' - a_0a_1 + a_1a_0]_{(x,t)}U(y,t;x,t) \\ & = \int_z^y dx \partial_x \left[ U(x,t;z,t) a_0(x,t)U(y,t;x,t) \right] \\ \end{align}
{ "language": "en", "url": "https://physics.stackexchange.com/questions/92316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Without apparatus can we say that the system is measured(decohered) by the environment? "Einselection" and "tridecompositional uniqueness theorem" seem to resolve the preferred basis problem. But the premise is that there are three parts in discussion.(system, apparatus, environment) However, it seems that in many situations we don't have the role of apparatus, and thus there are just system and environment. For instance, we usually say that the system is monitored by its environment and thus in a state with determinate classical physical value. In these situations, in which there are just two parts(system and environment) and the Schmidt form of the total system is not unique, can we say that the system is measured(decohered) by the environment?
At least to me, it is unclear what it means to be "measured by the environment". As far as decoherence is concerned the situation is however quite clear. Already the original "einselection" framework of Zurek is applicable to bipartite system/environment scenarios. Let $(| p\rangle)_p$ be a "pointer basis" for the system. Then any Hamiltonian of the form $$ H = \sum_p |p\rangle\langle p| \otimes H^{(p)} , $$ with $(H^{(p)})_p$ being some Hamiltonians of the environment, leads to a time evolution that, if the joint system starts in a product state, leaves the diagonal elements of the density matrix of the system invariant and "suppresses" the off-diagonal elements, in the sense that they are during the whole evolution never larger than they were initially and usually for most times very small. A similar phenomenon can also be shown for more generic Hamiltonians under the assumption that the coupling between the system and the environment is sufficiently weak (see for example http://arxiv.org/abs/0908.2921 and the references therein).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/92453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How do I show that the eigenstates of a Hamiltonian can be made orthonormal? I've been tearing my hair out over this all evening. It should be simple but I must be missing something somewhere. Can someone show me how to prove that the eigenstates of a Hamiltonian can be made orthonormal, please?
1. We first prove orthogonality of non-degenerate eigenvectors of the Hamiltonian. Consider the braket and act with the Hamiltonian in both directions, $ \left\langle\alpha | H |\beta\right\rangle = E_\alpha\left\langle\alpha |\beta\right\rangle = E_\beta\left\langle\alpha |\beta\right\rangle $ If the states are not orthogonal ($\left\langle\alpha |\beta\right\rangle \neq 0 $) then we would get a contradiction, since we assume the states are non-degenerate ($E_\alpha\neq E_\beta $). So we must have $\left\langle\alpha |\beta\right\rangle = 0 $ for distinct states.
2. Now we need to prove that the braket of an eigenstate with itself is equal to $1$. Consider the braket: $ \left\langle\alpha |\alpha\right\rangle = \sum_n \left\langle\alpha |n\right\rangle \left\langle n |\alpha\right\rangle = \left\langle\alpha |\alpha\right\rangle \left\langle\alpha |\alpha\right\rangle $ where we have inserted a sum over the states of the Hamiltonian and then used the orthogonality relation that we proved above. Now we can divide both sides by $\left\langle\alpha |\alpha\right\rangle $ to get $\left\langle\alpha |\alpha\right\rangle = 1 $
3. Thus far we have only considered non-degenerate eigenvectors. Degenerate eigenvectors can't be distinguished and they don't need to be orthogonal to each other. However, for any set of linearly independent vectors (all wavefunctions of a Hamiltonian are linearly independent) there exist linear combinations of them that are orthogonal, which can be found through the Gram–Schmidt procedure. Thus one can choose the vectors to be orthogonal.
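Numerically this is exactly what a Hermitian eigensolver gives you; a small sketch with an arbitrary random Hermitian matrix standing in for the Hamiltonian:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2          # any Hermitian matrix will do

# eigh returns the eigenvectors of a Hermitian matrix as an orthonormal set;
# within a degenerate subspace it simply picks mutually orthogonal combinations,
# in the spirit of the Gram-Schmidt remark above.
vals, vecs = np.linalg.eigh(H)
assert np.allclose(vecs.conj().T @ vecs, np.eye(4))
# and each column is an eigenvector: H v_k = E_k v_k
assert np.allclose(H @ vecs, vecs @ np.diag(vals))
```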
{ "language": "en", "url": "https://physics.stackexchange.com/questions/92625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The famous drop of $c$ In this (in my opinion) intriguing speech, Rupert Sheldrake tells the story of the drop in the measured value of $c$ between 1928 and 1945. When he goes to visit the Head of Metrology of the Physics Lab in Teddington, he says (I summarize): $c$ cannot change, it is a constant! We explain the drop you are talking about with "intellectual phase locking". Anyway, we have now solved the problem. We fixed the speed of light by definition in 1972. It might still change, but since we define the meter from $c$, we would never know. Is that true? If $c$ changed, would we be able to see it? And how does science explain the famous drop in the measured value of $c$?
1. The speed of light was defined at its present value in 1983, not 1972.
2. We could know that $c$ changed, because $\alpha\propto1/c$ (fine structure constant) and we have better ways of determining $\alpha$ than $c$.
   - Not actually true: we cannot determine if physical constants change, cf. this Physics.SE Q&A.
3. "Official" science uses error bars when measuring things; "unofficial" scientists ignore these crucial components (based on data from Wikipedia and Henrion & Fischhoff 1986 (NB: PDF)).

The relevant section of Henrion & Fischhoff reads, A related measure [to the chi-squared statistic], the Birge ratio, $R_B$, assesses the compatibility of a set of measurements by comparing the variability among experiments to the reported uncertainties. It may be defined as the standard deviation of the normalized residuals: $$R_B^2=\sum_i h_i^2/(N-1)$$ Alternatively, the Birge ratio may be seen as a measure of the appropriateness of the reported uncertainties... If $R_B$ is much greater than one, then one or more of the experiments has underestimated its uncertainty and may contain unrecognized systematic errors... If $R_B$ is much less than one, then either the uncertainties have, in the aggregate, been overestimated or the errors are correlated.

According to Henrion & Fischhoff, the Birge ratio in the range 1875-1941 was 1.47 while the range 1947-1958 had a ratio of 1.32; the combined ranges give $R_B = 1.42$. This means that pretty much all the data taken prior to the 1960's was not accounting for error correctly.
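The Birge ratio defined in the quoted passage is easy to compute from a set of measurements and their quoted uncertainties; in this sketch I take the normalized residuals $h_i$ about the inverse-variance weighted mean, which is one common reading of the definition:

```python
import numpy as np

def birge_ratio(values, sigmas):
    """R_B = sqrt(sum_i h_i^2 / (N-1)), h_i = (x_i - x_mean)/sigma_i,
    with x_mean the inverse-variance weighted mean (my reading of the definition)."""
    values = np.asarray(values, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    w = 1.0 / sigmas**2
    mean = np.sum(w * values) / np.sum(w)
    h = (values - mean) / sigmas
    return np.sqrt(np.sum(h**2) / (len(values) - 1))

# Two measurements whose difference is 2 sigma give R_B = sqrt(2) > 1:
# the quoted errors look underestimated.
assert abs(birge_ratio([1.0, 3.0], [1.0, 1.0]) - np.sqrt(2)) < 1e-12
# Identical measurements give R_B = 0: errors overestimated (or correlated).
assert birge_ratio([2.0, 2.0, 2.0], [0.5, 0.5, 0.5]) == 0.0
```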
{ "language": "en", "url": "https://physics.stackexchange.com/questions/92695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 1 }
How to treat differentials and infinitesimals? In my Calculus class, my math teacher said that differentials such as $dx$ are not numbers, and should not be treated as such. In my physics class, it seems like we treat differentials exactly like numbers, and my physics teacher even said that they are in essence very small numbers. Can someone give me an explanation which satisfies both classes, or do I just have to accept that the differentials are treated differently in different courses? P.S. I took Calculus 2 so please try to keep the answers around that level. P.S.S. Feel free to edit the tags if you think it is appropriate.
With the objective of keeping complexity to a minimum, the best "unifying" solution is to think of differentials, infinitesimals, numbers, etc. as mathematical symbols to which certain characteristics, properties, and mathematical operations (rules) are applicable. Since not all rules are applicable to all symbols, you need to learn which rules are applicable to a particular set of symbols. Whether you are learning fractions, decimals, differentials, etc., just learn the symbols and their particular rules and operations, and that will be sufficient for 99% of the time.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/92925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "78", "answer_count": 9, "answer_id": 6 }
Negative sign in the Dirac term from the SUSY Kahler potential I want to calculate the Dirac term from the canonical Kahler potential, \begin{equation} K = \Phi ^\ast \Phi \tag{1} \end{equation} but I'm coming across a pesky negative sign in the final result. I am finding (see derivation below), \begin{equation} - i \bar{\psi} \bar{\sigma} ^\mu \partial _\mu \psi \tag{2} \end{equation} This agrees with lecture notes by Matteo Bertolini (pg. 87) but is off by sign from lecture notes by Fernando Quevedo (pg. 50), while they both supposedly use the same conventions. I would be okay with deciding Quevedo's notes have an error but my result also seems to contradict the regular Dirac Lagrangian, which in four-vector notation is \begin{equation} {\cal L} _D = + i\bar{\Psi} \gamma ^\mu \partial _\mu \Psi \tag{3} \end{equation} Any ideas where this problem is coming from? Here are my steps: The chiral and antichiral superfields take the form (I am using the (+---) metric), \begin{align} & \Phi = \phi + \sqrt{2} \theta \psi + \theta ^2 F + i \theta \sigma ^\mu \bar{\theta} \partial _\mu \phi - \frac{ i }{ \sqrt{2} } \theta ^2 \partial _\mu \psi \sigma ^\mu \bar{\theta} - \frac{1}{4} \theta ^2 \bar{\theta} ^2 \Box \phi \tag{4}\\ & \Phi ^\ast = \phi ^\ast + \sqrt{2} \bar{\psi} \bar{\theta} + \bar{\theta} ^2 F ^\ast - i \theta \sigma ^\mu \bar{\theta} \partial _\mu \phi ^\ast + \frac{ i }{ \sqrt{2} } \bar{\theta} ^2 \theta \sigma ^\mu \partial _\mu \bar{\psi} - \frac{1}{4} \theta ^2 \bar{\theta}^2 \Box \phi ^\ast \tag{5} \end{align} Calculating the Dirac term involves the product of the second term of $ \Phi ^\ast $ and the fifth term of $ \Phi $ and vice versa. 
I find: \begin{align} - i \bar{\psi} \bar{\theta} \theta ^2 \partial _\mu \psi \sigma ^\mu \bar{\theta} & = - i \partial _\mu \psi ^\alpha \sigma ^\mu _{ \alpha \dot{\alpha} } \bar{\psi} _{\dot{\beta}} \bar{\theta} ^{\dot{\beta}} \bar{\theta} ^{\dot{\alpha}} \theta ^2 \tag{6}\\ & = \frac{ i }{ 2} \partial _\mu \psi ^\alpha \sigma ^\mu _{ \alpha \dot{\alpha} } \bar{\psi} ^{\dot{\alpha}} \bar{\theta} ^2 \theta ^2 \tag{7}\\ & = - \frac{ i }{ 2} \bar{\psi} \bar{\sigma} ^\mu \partial _\mu \psi \bar{\theta} ^2 \theta ^2 \tag{8} \end{align} where in the last step I used the spinor identity, $ \psi \sigma ^\mu \bar{\chi} = - \bar{\chi} \bar{\sigma} ^\mu \psi $. Repeating the calculation for the product of the fifth term of $ \Phi ^\ast $ and the second term of $ \Phi $ and summing the two results gives: \begin{equation} - i \bar{\psi} \bar{\sigma} ^\mu \partial _\mu \psi \bar{\theta} ^2 \theta ^2 \tag{9} \end{equation} which after stripping off the $\theta,\bar{\theta}$ is the result I quote above.
The final sign $- i \bar{\psi} \bar{\sigma} ^\mu \partial _\mu \psi$ seems correct if we look at Matteo Bertolini, formula $5.2$, page $72$, just notice that the order of $\psi$ and $\bar \psi$ is inverted in the formula and apply $\frac{i}{2}\partial_\mu \psi \sigma ^\mu \bar{\psi} = -\frac{i}{2} \bar{\psi} \bar{\sigma} ^\mu \partial_\mu \psi$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/92998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Would a three wheeled vehicle be faster than a four wheeled vehicle of the same weight? If I have a four wheeled vehicle (small wooden block with metal nail axles and plastic wheels) and apply a force X to it, would it be made faster by keeping one wheel off the ground in order to reduce friction? My thought is that the remaining three wheels would then have more weight on them, and thus more friction -- but is this added force more than offset by the loss of friction in the missing wheel? Update: After some back and forth with Ruben I think I have gathered the following -- The friction per wheel exists in both contact with ground and, to a much greater degree, contact of axle to wheel. There is a small and most likely negligible wind resistance component.

| | 4 wheeling | 3 wheeling |
| --- | --- | --- |
| Friction per wheel | $F$ | $\frac{4}{3}F$ |
| Wind resistance | $W$ | $W$ |
Friction is what keeps the wheels from spinning (i.e. traction); the friction that you want to reduce in order to gain speed is air resistance. Removing a wheel adds more strain (friction) to the axles, probably making the car slower (keep in mind that the fourth wheel still has some friction even though it is removed).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
Different definition of SL(2,R) algebra? I'm looking into the $SL(2,\mathbb{R})$ group and its algebra. I found online that the $sl(2,\mathbb{R})$ algebra is given by the two by two real matrices of trace zero. This Lie algebra has dimension three; a standard basis is given as $$X=\begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}, Y=\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}, Z=\begin{pmatrix}0 & 0\\ 1 & 0\end{pmatrix}$$ with commutation relations $[X,Y]=2Y$, $[X,Z]=-2Z$, $[Y,Z]=X$, and the Jacobi identity is satisfied (I calculated via Mathematica). Now, in an article about the Kerr/CFT correspondence, the Near-Horizon Extreme Kerr (NHEK) geometry has an enhanced $SL(2,\mathbb{R})\times U(1)$ isometry group, with Killing vectors that generate the $SL(2,\mathbb{R})$ group \begin{equation} \tilde{J}_0=2\partial_\tau \end{equation} \begin{equation} \tilde{J}_1=\frac{2r\sin\tau}{\sqrt{1+r^2}}\partial_\tau-2\sqrt{1+r^2}\cos\tau\partial_r+\frac{2\sin\tau}{\sqrt{1+r^2}}\partial_\varphi \end{equation} \begin{equation} \tilde{J}_2=-\frac{2r\cos\tau}{\sqrt{1+r^2}}\partial_\tau-2\sqrt{1+r^2}\sin\tau\partial_r-\frac{2\cos\tau}{\sqrt{1+r^2}}\partial_\varphi \end{equation} with an algebra that satisfies \begin{equation} [\tilde{J}_0,\tilde{J}_1]=-2\tilde{J}_2,\quad [\tilde{J}_0,\tilde{J}_2]=2\tilde{J}_1,\quad [\tilde{J}_1,\tilde{J}_2]=2\tilde{J}_0 \end{equation} and the Jacobi identity is also satisfied. Now, my question is, since this is the $SL(2,\mathbb{R})$ group, shouldn't the algebra be the same? That is, shouldn't the Lie brackets be identical in the first and second case? Why is there a difference? In one case I have $[X,Y]=2Y$, and in the other $[X,Y]=-2Z$ basically. Is this because of how we defined the generators? I'm kinda confused, because how do I know the latter are indeed generators of $SL(2,\mathbb{R})$? I mean, all I need for a Lie algebra is to have the basic axioms fulfilled, and that's it, right?
Given Lie algebras $\mathfrak{g}$ and $\mathfrak{g'}$, the structure constants need not be the same in order for $\mathfrak{g}$ and $\mathfrak{g'}$ to define the same Lie algebra. Since any Lie algebra is by definition a vector space with a product (the commutator) that satisfies certain properties, it is indeed a linear space, so everything is determined only up to a change of basis. Probably the two algebras you are facing are indeed the same, just written with different bases for the underlying vector space. My suggestion is that you find a map that brings you from one basis to the other. Basically, if you call $T_i$ the generators of $\mathfrak{g}$ and $\tilde{T}_i$ the generators of $\mathfrak{g'}$, then as long as you find a matrix $M$ such that $$MT_iM^{-1}=\tilde{T}_i,$$ $\mathfrak{g}$ and $\mathfrak{g'}$ define the same Lie algebra. More generally, any invertible linear map on the span of the generators that preserves the brackets (i.e. a Lie algebra isomorphism) will do — often the new generators are simply linear combinations of the old ones.
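For this particular pair, the change of basis can even be written down explicitly. Here is a quick numerical check (a sketch of my own; the specific combinations $J_0=Y-Z$, $J_1=X$, $J_2=Y+Z$ are my choice, not taken from the article) that simple linear combinations of $X$, $Y$, $Z$ already satisfy the second set of brackets:

```python
import numpy as np

# Standard sl(2,R) basis: traceless real 2x2 matrices.
X = np.array([[1.0, 0.0], [0.0, -1.0]])
Y = np.array([[0.0, 1.0], [0.0, 0.0]])
Z = np.array([[0.0, 0.0], [1.0, 0.0]])

def comm(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A @ B - B @ A

# First set of brackets: [X,Y] = 2Y, [X,Z] = -2Z, [Y,Z] = X.
assert np.allclose(comm(X, Y), 2 * Y)
assert np.allclose(comm(X, Z), -2 * Z)
assert np.allclose(comm(Y, Z), X)

# A change of basis by linear combinations (my own choice):
J0 = Y - Z
J1 = X
J2 = Y + Z

# These satisfy the second set of brackets:
# [J0,J1] = -2 J2, [J0,J2] = 2 J1, [J1,J2] = 2 J0.
assert np.allclose(comm(J0, J1), -2 * J2)
assert np.allclose(comm(J0, J2), 2 * J1)
assert np.allclose(comm(J1, J2), 2 * J0)
print("both sets of commutation relations hold")
```

So the NHEK Killing vectors generate the same abstract $sl(2,\mathbb{R})$ algebra, just expressed in a rotated basis.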
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
What is the difference between the words transparent and translucent? Merriam-Webster defines transparent as: Having the property of transmitting light without appreciable scattering so that bodies lying beyond are seen clearly. And translucent as: Transmitting and diffusing light so that objects beyond cannot be seen clearly. Now if you see any object through a lens or a bottle filled with water, most of the time whatever you see is not at all clear. What does this signify? Are these two transparent or translucent? Or is their behaviour conditional on how we see through them?
Lenses and glass bottles are transparent. As you quoted above, the difference has to do with diffusion. Here is an example of an image through a transparent object: Here is an example of a translucent object: This is an example of how diffusion causes translucency: As light passes through a translucent object, it either enters or exits a rough surface that causes light to reflect and refract at a bunch of different angles. This causes the image through the glass to be very blurry. When you look through a glass or lens and the object isn't clear, that's because it isn't focused, not because of diffusion. There are many reasons why images won't be focused, but most have to do with the lens not being shaped perfectly or with different behavior for different colors of light. See Wikipedia on optical aberrations for more information on this. Here is an example of a perfect lens (top) versus a lens with a spherical aberration (bottom): The word transparent is used in all cases where diffusion isn't involved, even if the lens is poor and causes images to not focus properly, as long as the issue is due to aberrations. The word translucent gets applied when there is significant diffusion of light, to the point where the object looks "cloudy" or "frosted" and a sharp image can never be formed. When you look through glasses with water and see an out-of-focus image, the glasses are still transparent.
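The diffusion mechanism can be sketched with a toy Snell's-law simulation (my own simplified model, not from the answer: a frosted surface is treated as a smooth glass–air interface whose local normal is randomly tilted). A smooth exit surface leaves all rays parallel, while a rough one fans them out over a range of angles, which is what smears the image:

```python
import numpy as np

rng = np.random.default_rng(1)
n_glass = 1.5  # refractive index of the glass

def exit_angles(roughness, n_rays=100_000):
    """Directions (w.r.t. the mean surface normal) of rays leaving the
    glass, for a toy roughness model: each ray meets a local normal
    tilted by a Gaussian random angle (a hypothetical simplification)."""
    tilt = rng.normal(0.0, roughness, n_rays)
    # Snell's law at the glass -> air interface, measured from the
    # local normal; rays arrive along the mean normal, so the local
    # incidence angle equals the tilt.
    sin_out = np.clip(n_glass * np.sin(tilt), -1.0, 1.0)
    return np.arcsin(sin_out) - tilt  # back in the mean-normal frame

smooth = exit_angles(0.0)    # polished surface: no angular spread
frosted = exit_angles(0.1)   # rough surface: rays fan out -> blur
print("smooth spread (rad):", np.std(smooth))
print("frosted spread (rad):", np.std(frosted))
assert np.std(smooth) < 1e-12
assert np.std(frosted) > 0.03
```

The rough surface scrambles ray directions, so no sharp image can form no matter how the observer focuses — exactly the translucent case.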
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Field of moving charge / Lorentz; Liénard-Wiechert First question here. I'm really confused at the moment. An electron moves at constant velocity, no acceleration. Wikipedia says here Lorentz: $$\mathbf E=\frac{q}{4\pi\epsilon_0}\frac{1-v^2/c^2}{\left(1-\frac{v^2}{c^2}\sin^2\theta\right)^{3/2}}\frac{\hat{\mathbf r}}{r^2},$$ which yields something like this: Whereas here, Wikipedia says this and this, $$ \frac{E'_y}{E'_x} = \frac{E_y}{E_x\sqrt{1-v^2/c^2}} = \frac{y'}{x'}, $$ which yields something like this: Which one is correct? If you could explain to me exactly why one of them is correct, I'll give you a big imaginary hug. Last question: in neither of those fields is there any radiated energy, since there is no acceleration, correct?
Both equations (for the instantaneous field of a charge moving with constant velocity $v$) are correct. (Well, maybe the primes should be swapped in the second equation, so that the unprimed frame is that in which the charge is moving.) The first figure is not an accurate representation of the first equation: as Jan Lalinsky stated, the field lines should be symmetric about $\theta=\pi/2$, the direction perpendicular to the velocity. The second equation just says that the field lines are still radial for the moving charge, although they're no longer isotropic. Again echoing Jan Lalinsky, the second figure (another plot of the first equation) looks fine. Finally, no acceleration does indeed mean no radiation.
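As a numerical aside (a sanity check of my own, not from either Wikipedia page): the first formula, although it squeezes the field toward $\theta=\pi/2$ at high speed, still satisfies Gauss's law — the total flux through a sphere around the charge is $q/\epsilon_0$ for every $v<c$:

```python
import numpy as np

def f(theta, beta):
    """Angular profile of the field of a uniformly moving charge,
    in units of q / (4 pi eps0 r^2)."""
    return (1 - beta**2) / (1 - beta**2 * np.sin(theta) ** 2) ** 1.5

theta = np.linspace(0.0, np.pi, 200_001)
for beta in (0.0, 0.5, 0.9, 0.99):
    g = f(theta, beta) * np.sin(theta)
    # Trapezoid rule for (1/2) * integral of g dtheta; the result is
    # the flux through a sphere in units of q/eps0, and should be 1.
    flux = 0.5 * np.sum((g[1:] + g[:-1]) / 2 * np.diff(theta))
    print(f"beta = {beta:4.2f}: flux / (q/eps0) = {flux:.6f}")
    assert abs(flux - 1.0) < 1e-4
```

The field lines are concentrated near the equatorial plane but none are created or destroyed, consistent with the charge being the only source.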
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Physical motivation for differentiation under the integral I am thinking about the mathematical process of "differentiating underneath the integral", i.e. applying the theorem $$\partial_s \int_{-\infty}^\infty f(x,s)\,dx=\int_{-\infty}^\infty \partial_s f(x,s)\,dx$$ given some regularity assumptions. I was trying to think of some relevant physical interpretations of this. One that I came up with (that I think is rather weak) is: the total force exerted by the walls of a chamber holding a gas is defined by an integral. We might want to ask how that function is changing with respect to some parameter of the gas, so we'd differentiate under the integral. Anyone have a better one?
For example, consider some water flow in space, in which the density $\rho(x,t)$ fluctuates in space and in time. You might be interested in how the mass inside some fixed volume $V$ changes over time. The mass is equal to $$ M(t)=\int_V{\rho(x,t)\mathrm{d}x}, $$ therefore the "mass flow rate", using the rule you mentioned, is equal to $$ \frac{\mathrm{d}}{\mathrm{d}t}M(t)=\int_V \partial_t\rho(x,t)\mathrm{d}x. $$ Other examples include energy, probability, momentum or state densities instead of mass density.
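Here is a small numerical sketch of the rule at work (my own toy density, a Gaussian blob drifting through a fixed interval): differentiating the discretized integral $M(t)$ by finite differences agrees with integrating $\partial_t\rho$ directly:

```python
import numpy as np

# Toy density on a fixed "volume" [0, 2]: a Gaussian blob whose
# center drifts with t (an arbitrary choice for illustration).
def rho(x, t):
    return np.exp(-(x - t) ** 2)

def drho_dt(x, t):
    # Analytic partial derivative of rho with respect to t.
    return 2 * (x - t) * np.exp(-(x - t) ** 2)

x = np.linspace(0.0, 2.0, 20_001)
dx = x[1] - x[0]
t, h = 0.3, 1e-5

# Left side: d/dt of the (discretized) integral, by central differences.
M = lambda tt: np.sum(rho(x, tt)) * dx
lhs = (M(t + h) - M(t - h)) / (2 * h)

# Right side: integral of the partial derivative.
rhs = np.sum(drho_dt(x, t)) * dx

print(lhs, rhs)
assert abs(lhs - rhs) < 1e-6
```

Both sides give the same nonzero mass flow rate, which is just the statement that differentiation passes through the (finite) sum approximating the integral.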
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Beam power and electric field after a beam splitter Consider a beam with power $P_1$ and electric field amplitude $E_{01}$. It is sent through a 50/50 beam splitter that produces beams with power $P_2=P_3=P_1/2$. What are the electric field amplitudes of the split beams, $E_{02}$ and $E_{03}$? From what I understand, $P=KE_0^2$ where $K$ is a constant. Therefore, $E_{02}= \sqrt{P_2/K}=\frac{1}{\sqrt{2}}\sqrt{P_1/K}$, $E_{03}= \sqrt{P_3/K}= \frac{1}{\sqrt{2}}\sqrt{P_1/K}$ Now say I can recombine the beams in phase and without any losses, then $E_{04}=E_{02}+E_{03}$. The power of the recombined beam is $P_4=KE_{04}^2 $ $= K[E_{02}^2 + E_{03}^2 + 2E_{02}E_{03}]$ $=\frac{1}{2}P_1+\frac{1}{2}P_1 + P_1$ $=2P_1$ So $P_4 >P_1$ and I've created power from nowhere! What is wrong with this picture?
Your error is in how the electric fields are combined by a 50/50 beam splitter. If you have two entry ports $a$ and $b$ with electric field amplitudes $E_a$ and $E_b$, and exit ports $c$ and $d$ with electric field amplitudes $E_c$ and $E_d$, then the correct way to combine them is $${E_c=\frac1{\sqrt2}(E_a+E_b),\\ E_d=\frac1{\sqrt2}(E_a-E_b).}$$ You know one instance of this already: if port $b$ is shut off, then each of the output ports should get $1/\sqrt2$ of the amplitude. Similarly, if port $a$ is shut off, the same thing should happen, so both $E_a$ and $E_b$ should have equal weights in the expressions for $E_c$ and $E_d$. (Here, of course, I'm invoking the principle of superposition to combine the solutions for different sets of sources.) The phases are a little trickier. The minus sign along the bottom can be imposed by saying the interferometer is aligned so that no output comes out of that port for equal input intensities. The sign of $E_a$ at the top can also be arbitrarily set, by adding an appropriate phase delay on port $c$. (A brief note for an intermission. The formulas above can of course be derived rigorously once you know how the beam splitter is implemented. However, it's not necessary to know the details to derive them, since they also follow from the general considerations I'm expounding here.) By now, your problem has gone away. Even if you had an arbitrary phase in the final coefficient, i.e. $E_c=\frac1{\sqrt2}(E_a+e^{i\theta}E_b)$ for some $\theta$, then you can't create energy: $$P_c=K|E_c|^2=\frac12K(E_a^2+2\cos\theta E_a E_b+E_b^2)\leq \frac12K(E_a+E_b)^2 = P_1.$$ In fact, in order not to destroy energy, you must have both components coming out in phase, with $\theta=0$ and a plus sign in the equation for $E_c$. So, what's the bottom line? In order to recombine beams in phase and without any losses, you must put them through a beam splitter and ensure that interference kills the other output port. 
To do that, though, the amplitude of each beam gets further reduced by $1/\sqrt2$, and this ensures that the total beam energy is conserved.
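This bookkeeping can be checked numerically (a minimal sketch with arbitrarily chosen input amplitudes): the beam-splitter relations above form a unitary — in fact orthogonal — matrix, so total power is conserved, and since the matrix squares to the identity, recombining the two halves through the same splitter returns the original beam rather than twice its power:

```python
import numpy as np

s = 1 / np.sqrt(2)
# 50/50 beam-splitter matrix acting on (E_a, E_b) -> (E_c, E_d).
BS = np.array([[s, s],
               [s, -s]])

rng = np.random.default_rng(0)
E_in = rng.normal(size=2) + 1j * rng.normal(size=2)  # arbitrary amplitudes
E_out = BS @ E_in

# Power is conserved: |E_c|^2 + |E_d|^2 == |E_a|^2 + |E_b|^2.
assert np.isclose(np.sum(np.abs(E_in) ** 2), np.sum(np.abs(E_out) ** 2))

# BS is its own inverse, so a second pass undoes the split.
assert np.allclose(BS @ E_out, E_in)

# The scenario from the question: one beam in port a splits 50/50...
E_split = BS @ np.array([1.0, 0.0])
assert np.allclose(np.abs(E_split) ** 2, [0.5, 0.5])
# ...and recombining those halves in phase puts all the power in one
# output port -- the total is P_1, not 2*P_1.
E_rec = BS @ E_split
assert np.allclose(np.abs(E_rec) ** 2, [1.0, 0.0])
print("energy conserved at every step")
```

The factor of $1/\sqrt2$ picked up at the recombining splitter is exactly what cancels the apparent doubling in the question's calculation.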
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why does dark matter form walls and filaments Related: How are galaxy filaments formed? And do they have any analogues in stellar formation? But I want to come at this from a different angle. Like the user asking that other question, I was a bit surprised by the walls, filaments and nodes of the large-scale structure of dark matter: Intuitively, I might have expected more spherical shapes. I got a bit closer to a satisfactory explanation when I heard about an equivalent way of thinking about the situation. Instead of thinking of overdense regions collapsing (e.g. the Zel'dovich pancakes mentioned in the answer to the question linked above), one can think about underdense regions expanding to form voids/supervoids/etc. The voids are (roughly) spherical, and as they push out and collide, they compress the dark matter into walls and filaments. I'm picturing something like blowing bubbles in soapy water, giving a nice intuitive picture. Now where I get hung up is that there is an obvious symmetry between thinking about overdensities collapsing and underdensities expanding, but there is an obvious lack of symmetry between the structures resulting from collapse and expansion, for instance why wouldn't we get the opposite case, where the voids are filament shaped and the dark matter forms roughly spherical blobs? I have a feeling that the key is in the strictly attractive nature of gravity, but can't really put my thoughts together coherently. Would be interested to hear if anyone can elaborate a bit on this. Feel free to get technical in the answers, I have a solid math & physics background to help me interpret. Bonus points if you can draw a nice intuitive picture, though :)
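One quantitative handle on the asymmetry (a back-of-the-envelope sketch; I use a generic Gaussian ensemble for the local deformation tensor, not the exact cosmological statistics): in the Zel'dovich picture, collapse proceeds independently along the three eigen-directions of the local deformation tensor, and for a random symmetric tensor the eigenvalues are generically distinct and rarely all share a sign — so collapse along one axis (a wall) or two (a filament) is far more common than simultaneous collapse along all three (a roughly spherical blob):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Random symmetric 3x3 "deformation tensors" with Gaussian entries
# (a generic ensemble; the true cosmological one has correlated entries).
A = rng.normal(size=(n, 3, 3))
S = (A + np.transpose(A, (0, 2, 1))) / 2

# Count positive eigenvalues = number of axes that start to collapse.
k = np.sum(np.linalg.eigvalsh(S) > 0, axis=1)
counts = np.bincount(k, minlength=4)

names = ["no collapse (void)", "one axis (wall/pancake)",
         "two axes (filament)", "three axes (node)"]
for ki, name in enumerate(names):
    print(f"{name}: {counts[ki] / n:.3f}")

# Anisotropic outcomes (walls and filaments) dominate over both
# pure voids and fully spherical collapse.
assert counts[1] + counts[2] > counts[0] + counts[3]
```

Because the attractive collapse runs fastest along the single largest eigenvalue, overdensities flatten first into sheets and then drain into filaments, while the expanding underdensities — which face no such runaway along one axis — stay roughly round.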
Gravity is not attractive emanating from objects. It is repulsive emanating from voids, and objects result from congealed light pressed together in black hole whirlpools.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Does any object placed in an electric field change the electric field? Let's say I have a point charge of magnitude $+q$. All around it I would have a symmetric radial electric field. Now if I place a neutral object, let's say a sphere (it doesn't matter whether it is insulating or conducting), in this field some distance away from the point charge, a negative charge will be induced on the object near the point charge and a positive charge on the opposite side. No matter how small this induced charge is, due to the radial separation of the two (positive and negative) there must be an increase/decrease in the net electric field on either side of the object, and mostly everywhere else too! I hope that what I am thinking is wrong, because we have not been taught that anything placed in an electric field would affect the field itself, regardless of its nature. But I can't figure out where my thinking goes wrong — can someone help me resolve this dilemma?
I'm not quite sure I understand why you have a problem with this — every static charge is a source or a sink of the electric field, depending on its sign. So obviously the field of a single charge at the origin will be different from the field of three charges or any other configuration. The electric potential of such an ensemble of pointlike charges $q_i$ at a specific point, measured by an uncharged observer, will be $$\Phi(\mathbf r) = \frac{1}{4 \pi \epsilon_0} \sum_i \frac{q_i}{\left| \mathbf r - \mathbf r_i \right|}.$$ At this point, instead of evaluating the sum you can do a multipole expansion $$\Phi(\mathbf r) = \frac{1}{4 \pi \epsilon_0} \left( \frac{Q}{r} + \frac{\mathbf r \cdot \mathbf p}{r^3} + \frac{1}{2} \sum_{k,l} Q_{kl} \frac{r_k r_l}{r^5}+\dots \right),$$ which is essentially a Taylor series of ${\left| \mathbf r - \mathbf r_i \right|}^{-1}$. From there, you will get the electric field by taking the gradient of the approximated potential.
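To see the expansion at work numerically (a toy example of my own, in units with $1/(4\pi\epsilon_0)=1$): model the induced charges on the neutral object as a small dipole of two opposite point charges, and compare the exact summed potential with its monopole + dipole approximation at a point far away:

```python
import numpy as np

# Toy "induced charge" on a neutral object: two opposite charges a
# small distance d apart along z (my own illustrative numbers).
q, d = 1.0, 0.01
charges = [(q, np.array([0.0, 0.0, d / 2])),
           (-q, np.array([0.0, 0.0, -d / 2]))]

def phi_exact(r):
    """Exact potential: sum over the point charges."""
    return sum(qi / np.linalg.norm(r - ri) for qi, ri in charges)

# Multipole data: total charge and dipole moment.
Q = sum(qi for qi, _ in charges)        # = 0 for a neutral object
p = sum(qi * ri for qi, ri in charges)  # = q*d along z

def phi_multipole(r):
    """Monopole + dipole terms of the expansion."""
    rn = np.linalg.norm(r)
    return Q / rn + np.dot(p, r) / rn**3

r = np.array([0.3, 0.4, 0.5])  # a point well outside the charge pair
exact = phi_exact(r)
approx = phi_multipole(r)
rel_err = abs(approx - exact) / abs(exact)
print(exact, approx, rel_err)
assert rel_err < 1e-3
```

Even though the monopole term vanishes (the object stays neutral), the dipole term survives and changes the field everywhere — which is exactly the effect the question describes.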
{ "language": "en", "url": "https://physics.stackexchange.com/questions/93911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }