Columns: Q (string, 18–13.7k chars) · A (string, 1–16.1k chars) · meta (dict)
How can a photon collide with an electron? Whenever I study the photoelectric effect and the Compton effect, I have always had a question about how a photon can possibly collide with an electron given their unmeasurably small size. Every textbook I've read says that the photoelectrons are emitted because the photons collided with them. But since the photons and electrons virtually have no size, how can they even collide? I have searched for the answer on the internet but I couldn't find a satisfying one.
Both photons and electrons may be considered point-like particles, but the interaction/force that they feel has a range: the electromagnetic interaction has a pretty long range. Actually it is infinite in the absence of screening effects (ideal cases). You could ask yourself: what does "colliding" even mean? For example, when you clap your hands, the atoms that form your skin do not collide or touch at all. It is just the "electric" repulsion that grows so large that your muscles can no longer overcome it.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/543056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38", "answer_count": 9, "answer_id": 7 }
Wavefunction of a particle on a ring ($E > V$) using WKB method For a particle on a ring (with radius $R$ and changing angle $\theta$) with only kinetic energy ($V=0$) we get the expressions for the wavefunction (normalized) and eigenvalues $$\Psi_n (\theta) = \frac{1}{\sqrt{2 \pi}} e^{in \theta}$$ $$E_n= \frac{\hbar ^2}{2mR^2}n^2$$ with $$n=0, \pm1, \pm2, ...$$ This means we have degeneracy for $|n|>0$, and only one eigenstate for $n=0$ with energy $E_0 = 0$. Now using the WKB method, considering $L=2\pi R$ the length of the ring and $s$ the distance travelled by the particle, for $E > V(s)$, with $V(s)$ any potential, we get $$\Psi(s) = \frac{1}{\sqrt{p(s)}}(C_+ e^{i \phi (s)} + C_- e^{-i \phi (s)}) \tag{1. a}$$ or similarly $$\Psi(s) = \frac{1}{\sqrt{p(s)}}(A\, \cos(\phi (s)) + B\, \sin(\phi (s))) \tag{1. b}$$ with $$\phi(s) = \int_0^s p(s') ds'$$ and $$p(s)=\sqrt{2m(E_n - V(s))}$$ After the boundary conditions $\psi(s) = \psi (s + L)$, or equivalently $\psi(0) = \psi (L)$, the quantization for the eigenvalues is obtained: $$\int_0^L p(s') ds' = n h \tag{2}$$ Since $p(s)>0$, we obtain $n=1, 2, ...$ by evaluation of ($2$). Physically speaking we could think of a particle moving in the counterclockwise direction ($s=0 \rightarrow s=L$) regarding ($2$). Inverting the direction, the eigenvalues $E$ remain the same, but now we have $n=-1, -2, ...$ There's degeneracy like in the example above with $n=\pm 1, \pm 2, ...$; however $n=0$ doesn't exist here (since it corresponds to $p=0$, where the WKB approximation doesn't apply). The degeneracy is sustained by the fact that in ($1$) there exist two linearly independent solutions. My question is: How can I obtain the wavefunction ($1.b$) for each value $E_n$? Either obtaining the constants ($A$ and $B$) or a more elegant expression.
* For the case $V=0$, note that the semiclassical WKB approximation cannot be trusted for the ground state $n=0$ corresponding to zero momentum $p=0$, or equivalently, infinite de Broglie wavelength $\lambda=h/p=\infty$.
* More generally, the semiclassical WKB approximation typically estimates the ground state poorly, cf. e.g. this Phys.SE post.
* For a derivation of the WKB approximation and its limitations, see references in this Phys.SE post.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/543247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Electric field in a sliding bar along frictionless conducting rails When discussing a conducting bar sliding frictionless over two parallel conducting rails in the context of motional emf as in the picture below, Chabay and Sherwood write in their book Matter and Interactions (4th edition, page 821) The electron current through the resistor is continually depleting the charges on the ends of the moving bar with the result that the electric field inside the bar is always slightly less than is needed to balance the magnetic force. The downward electric force $eE$ is slightly less than the upward magnetic force $evB$, so the electrons move toward the negative end of the bar. (emphasis added) How much is slightly less? How can it be quantified depending on $R$ and $v_{\text{bar}}$ or maybe other parameters of the system?
Let's take the $+x$ direction to be to the left, the direction the bar is moving. The current through the bar is up. A current up in a field out of the page gives a force to the right: $$ F_x = ma = -ILB \Rightarrow\\ m \frac{dv}{dt} = -ILB. $$ The current is given by Ohm's Law, where the emf is the motional emf: $$ I = \frac{ε}{R} = \frac{vBL}{R} $$ Plugging this into our force expression gives: $$ m \frac{dv}{dt} = -\frac{vB^2L^2}{R} $$ The solution is a function whose derivative is proportional to its own negative, i.e. a decaying exponential: $$ v(t) = v_i e^{-t/τ} $$ The time constant here is $τ = mR/(B^2L^2)$.
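As a sanity check on the exponential solution, here is a minimal Python sketch (the values of m, B, L, R and v_i are arbitrary illustrative numbers, not from the original problem) that integrates $m\,dv/dt = -vB^2L^2/R$ numerically and compares it with $v_i e^{-t/\tau}$:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the problem statement)
m, B, L, R, v_i = 0.5, 0.2, 0.3, 1.5, 2.0   # kg, T, m, ohm, m/s
tau = m * R / (B**2 * L**2)                  # time constant m R / (B^2 L^2)

# Integrate m dv/dt = -v B^2 L^2 / R with a simple Euler step
dt, t_end = tau / 1e4, 3 * tau
t_vals = np.arange(0.0, t_end, dt)
v = v_i
v_num = []
for t in t_vals:
    v_num.append(v)
    v += (-v * B**2 * L**2 / R / m) * dt

v_exact = v_i * np.exp(-t_vals / tau)
print("max relative error vs v_i e^{-t/tau}:",
      np.max(np.abs(np.array(v_num) - v_exact) / v_i))
```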
{ "language": "en", "url": "https://physics.stackexchange.com/questions/543767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is a photon reflected, transmitted or in a superposition? When a photon hits a half-silvered mirror, quantum mechanics says that rather than being reflected OR transmitted it enters into a superposition of transmitted AND reflected (until a measurement takes place). Is there an experiment that demonstrates that this is actually the case and that the photon didn't end up with a single outcome all along? In other words, is the superposition view just a hypothesis that can't be proven either way?
When the photon is detected, it always ends up with a single outcome. The superposition is simply an expression of the fact that until it is detected there are non-zero probabilities for different outcomes. Confusion arises because there is a fundamental difference between probabilities given in quantum mechanics and classical probabilities. In classical probability theory outcomes are determined by unknowns, but quantum mechanics describes situations in which outcomes are actually indeterminate. This naturally alters the way in which probabilities are calculated. Textbooks, and physicists generally, are usually concerned with applications and with doing the calculations, not with the underlying mathematical reasons why we calculate probability in quantum mechanics the way we do. It is known in the mathematical foundations of quantum mechanics, however, that the general form of the Schrödinger equation is required by the probability interpretation, and that consequently we calculate using wave mechanics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/544061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Variance of an Overlap Between States: Bra-Ket Notation? Imagine two eigenstates of a system $|0\rangle$ and $|1\rangle$, and suppose you manage to prepare your system in the superposition $|\psi_{in}\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$. After some time, the system evolves naturally to the state $|\psi_{out}\rangle = (|0\rangle + e^{i\phi}|1\rangle)/\sqrt{2}$. The probability of the output being the same as the input is $p(\phi) = |\langle \psi_{in}|\psi_{out} \rangle|^2$. I'm reading a paper that claims we can estimate this quantity with a statistical error (meaning variance) of $\Delta^2p(\phi) = \langle \psi_{out}| \left( |\psi_{in}\rangle \langle \psi_{in}| \right)^2 |\psi_{out}\rangle - p^2(\phi)$. Can anyone tell me where this expression comes from? Maybe I'm missing something obvious, but it's not clear how this relates to any of the usual expressions I know for variance or standard deviation.
The variance of an operator $\hat A$ is $$\langle \psi| \hat A ^2 |\psi\rangle - \langle \psi | \hat A |\psi\rangle^2 , $$ as in statistics, $\overline{A^2}-\bar {A}^2$. You have used this in the uncertainty principle. Your operator $\hat A$ here, however, is a projector, $P=|\psi_{in}\rangle \langle \psi_{in}|=P^2$, so $\langle \psi_{out}|P^2|\psi_{out}\rangle=\langle \psi_{out}|P|\psi_{out}\rangle=p$, yielding a variance $p-p^2=p(1-p)$. With $p(\phi)=\cos^2(\phi/2)$ this is $(\tfrac{1}{2}\sin \phi)^2$. Which paper?
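To make the projector argument concrete, here is a small numerical check of my own (not from the paper the asker mentions) verifying that $\langle\psi_{out}|P^2|\psi_{out}\rangle - p^2$ equals $p-p^2=(\tfrac12\sin\phi)^2$ for the given states:

```python
import numpy as np

phi = 0.7                                    # arbitrary phase
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

psi_in  = (ket0 + ket1) / np.sqrt(2)
psi_out = (ket0 + np.exp(1j * phi) * ket1) / np.sqrt(2)

P = np.outer(psi_in, psi_in.conj())          # projector |psi_in><psi_in|
p = abs(np.vdot(psi_in, psi_out))**2         # p(phi) = |<psi_in|psi_out>|^2

variance = np.real(np.vdot(psi_out, P @ P @ psi_out)) - p**2
print(variance, p - p**2, (0.5 * np.sin(phi))**2)   # all three agree
```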
{ "language": "en", "url": "https://physics.stackexchange.com/questions/544207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does a rubber band become a lighter color when stretched? I was stretching a pink colored rubber band, and I noticed that the longer I stretch it, the lighter the pink becomes. I haven't found answers to this question anywhere else. Is there a reason for this phenomenon? Why does this happen?
Rubber bands are made of polymers (more specifically elastomers). A given polymer in the band can either be aligned with other polymers around it, or it can be misaligned. Therefore, you can end up with regions of order and regions of disorder in the band. In an unstretched band you have much more disorder, but when you stretch the rubber band you are forcing the polymers to become more ordered and aligned$^*$. It is this alignment that changes the optical properties of the band, causing it to scatter light differently and appear to be more white. Relating this to Anders Sandberg's answer, unstressed the rubber is more transparent, but stretched the rubber is more opaque, thus causing fewer pigments to be visible. $^*$This also explains why heating up a rubber band causes it to shrink, as the extra energy causes the polymers to become less aligned, which causes the band to decrease in length.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/544445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85", "answer_count": 6, "answer_id": 1 }
"A spinning top spins much longer because it experiences less frictional torque" is wrong? The above quote was found in my physics textbook, but it struck me as strange because my understanding of friction is that the surface area doesn't matter in calculating the amount of frictional force. Another question that asked a similar thing on stackexchange was answered basically by saying that a spinning top with a narrow point spins better and longer because of "precession"? Why does a top spin so well? So my question is: is the above statement just flat out wrong? Is the reason it spins much longer not because of torque, but because of other properties of a narrow point?
The surface area doesn't matter because $F_a = \mu N$. If the contact area is very small, as in a top, the normal force is the same, and $F_a$ doesn't change. But the average distance ($d$) between the center of spin on the ground and the other points of contact (because the "point" of contact indeed has some area) is very small. So the resisting (friction) torque, $T = F_a d$, becomes vanishingly small as the contact area shrinks toward a theoretical point.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/544682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Are antileptons and antibaryons linked? The recent news about the T2K experiment got me thinking: is there any linkage in the Standard Model between the matter and antimatter categories across the families of Standard Model particles? Are antileptons necessarily linked to antibaryons? As a specific example: In our universe "matter" is made up of electrons $e^-$ and protons $p$. Antimatter particles are positrons $e^+$ and antiprotons $\bar p$. $p$ and $\bar p$ are obviously a matter-antimatter pair, but is there any theoretical reason the $e^-$ is the same type of matter as the $p$? Could there be a universe in which $p$ and $e^+$ are the "matter" particles and $\bar p$ and $e^-$ are the "antimatter" particles?* * Besides the fact that obviously that would be a weird universe where you couldn't make atoms.
"What I am trying to ask: is there any reason that p and e− must be grouped together as matter (other than their current abundance in the universe)?" The basic and only reason is that the grouping is consistent and unique within the Standard Model of particle physics, which emerged from a great amount of data validating it. It depends on what has been observed.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Why does a hollow conductor not have an electric field inside it when it is enclosing a charge? When a positive charge is enclosed in a thick hollow sphere which is a conductor, the inner surface gains a negative charge distribution and, because of that, the outer surface gains a positive charge distribution. So there should be electric fields through the walls of the sphere. But that is not possible, because the potential difference within a conductor must be zero; the conductor should be an equipotential surface. Please clear up this doubt.
"Why does a hollow conductor not have an electric field inside it when it is enclosing a charge?" You can take this as a given in electrostatics. You can also take this fact as derivable from your statement that "the potential difference in a conductor must be zero." The potential difference divided by the distance difference is the negative of the field value ($E=-d\phi/dx$). But negative zero is zero, so the field is zero. If you want a further reason, you can consider what would happen if there were a free charge inside the conductor. If there were some free charge, that charge would be acted on by the field, would feel a force, and would move (e.g., to the surface of the conductor). But now we have a moving charge, which means we are in the realm of electrodynamics, not electrostatics. This latter comment is not really a reason, just a way of pointing out that, to be internally consistent, we can't have free charge moving around and still say we are doing electrostatics. "So there should be electric fields through the walls of the sphere." No, you can think of the field "inside the walls" as coming from the "+q" in your picture and all the little "-" signs in your picture. These two fields exactly cancel each other inside the walls of the conductor.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Inverse of a metric tensor on a Hermitian manifold Let $(M, g)$ be a Hermitian manifold. We have a metric tensor $g_{i \bar j}\, dz^i \otimes d\bar{z}^j$, where $(g_{i \bar j})$ is a Hermitian positive definite matrix. Now we naturally get the inverse of the metric $(g^{i \bar j})$. I have been told that being inverse to each other would imply $g^{p \bar k} g_{q \bar k} = \delta_{pq}$, which makes no sense to me. I think matrix multiplication should give us $g^{p \bar k} g_{k \bar q} = \delta_{pq}$.
The inverse property implies $$\sum_k(g^{-1})^{pk} g_{kq}+\sum_{\bar{k}}(g^{-1})^{p\bar{k}} g_{\bar{k}q} ~=~ \delta^p_q.$$ It is standard convention to not write the power "$-1$" explicitly for the inverse metric. Next use symmetry $g_{\bar{k}q}=g_{q\bar{k}}$ and that for a Hermitian metric $g_{kq}=0$ to obtain the sought-for relation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Under which conditions do two moving bodies start orbiting each other around their center of mass? If two bodies are close, both will be attracted to each other and collide. Under what conditions will the two bodies start revolving around their common center of mass? I understand that such bodies represent the gravitational two-body problem, but I want to know what initial conditions allow it to settle into such a configuration (without going into the maths).
Unless other objects are "near enough" to complicate the motion, the bodies can always be described by a two-body solution whenever both are in freefall (nothing is pushing one of them). The only difference that being close makes is that forces from other objects become less significant. You could then describe their motion as one of two groups: a hyperbolic pass if speed is great enough ($\mathit{KE} + \mathit{GPE} > 0$), or an elliptical orbit ($\mathit{KE} + \mathit{GPE} < 0$). A collision can happen in either case, it just means the orbital paths intersect the surface of the objects. Falling straight toward each other is just a degenerate case, where the ellipse minor axis (if a bound state) or the hyperbola transverse axis (if an unbound state) is zero length.
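A minimal sketch of the classification described above (my own illustration; the masses, positions and velocities are arbitrary examples): compute the total orbital energy KE + GPE of the pair in their centre-of-mass frame and use its sign to label the motion.

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def classify(m1, r1, v1, m2, r2, v2):
    """Label the relative motion of two point masses by the sign of KE + GPE."""
    r1, v1, r2, v2 = map(np.asarray, (r1, v1, r2, v2))
    # Work in the centre-of-mass frame so only the relative motion counts
    v_cm = (m1 * v1 + m2 * v2) / (m1 + m2)
    ke = 0.5 * m1 * np.sum((v1 - v_cm)**2) + 0.5 * m2 * np.sum((v2 - v_cm)**2)
    gpe = -G * m1 * m2 / np.linalg.norm(r1 - r2)
    # The marginal parabolic case (KE + GPE exactly zero) is lumped into "bound" here
    return "hyperbolic pass (unbound)" if ke + gpe > 0 else "elliptical orbit (bound)"

# Example: a small body 7000 km from an Earth-mass body, at two different speeds
m_earth = 5.97e24
print(classify(m_earth, [0, 0, 0], [0, 0, 0], 1000.0, [7.0e6, 0, 0], [0, 7.5e3, 0]))
print(classify(m_earth, [0, 0, 0], [0, 0, 0], 1000.0, [7.0e6, 0, 0], [0, 1.2e4, 0]))
```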
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
What is the meaning of vertical bars in paths of high symmetry points? I am new to the study of high-symmetry paths. After looking at the silicon path, which is $Γ—X—U|K—Γ—L—W—X$, I am not able to understand the meaning of $U|K$ in this path.
Silicon's crystal structure is the diamond crystal structure and the Bravais lattice is the fcc lattice. The first Brillouin-Zone (a truncated octahedron) looks like this: (Image source and further information) As you can see, the path includes every one of the red lines, always connecting neighboring symmetry points, except for a discontinuity at $K|U$. The Wikipedia page for "bitruncated cubic honeycomb", which is the space-filling tessellation made up of truncated octahedra, also contains an image that shows how these Brillouin-Zones are aligned in 3D $\boldsymbol{k}$-space: If you compare this image with the one of the first Brillouin-Zone above, you can see that by continuing the reciprocal lattice, every $K$-point overlaps with a $U$-point. This means that, for example, the value of every band has to be identical at $K$ and $U$. This is why you will see jumps from $K$ to $U$ in band structure diagrams. In conclusion, the vertical bar in $K|U$ marks that both points are equivalent in $\boldsymbol{k}$-space even though the path has a discontinuity there.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/545748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Baryons in flavor $SU(N)$ (in ChPT) For flavor $SU(2)$ (Isospin) we have two $\frac{1}{2}^+$ baryons, the nucleons. For flavor $SU(3)$ we have the eight baryons in the octet. In a world with $N$ light quarks we would see a baryon multiplet of dimension $\frac{N}{3}(N^2-1)$. Such a theory would see the chiral symmetry breaking $SU(N)\times SU(N) \to SU(N)$ creating $N^2-1$ Goldstone bosons $\phi^a, a=1,\dots,N^2-1$. These mesons are usually parametrized in the $N\times N$ matrix $U=\exp\left( i T^a \phi^a \right)$, where $T^a$ are the generators of $SU(N)$. These fields than transform under $(L,R)\in SU(N)\times SU(N)$ as $U \mapsto R U L^\dagger$. So far so good (please correct me if I already made any mistakes, I think this is correct). My question now is: How can we include the $\frac{N}{3}(N^2-1)$ baryons in a Lagrangian? We need to find a parametrization for these baryon fields and we have to find out how they transform under $SU(N) \times SU(N)$. It feels like we got a bit lucky in reality, since for $N=2$: $\frac{N}{3}(N^2-1)=N=2$, and for $N=3$: $\frac{N}{3}(N^2-1)=N^2-1=8$ is the number of generators of $SU(3)$. So in these cases we can use the isospin doublet $N = \begin{pmatrix} p\\n\end{pmatrix}$ and for octet baryons we can use $B=\sum_{a=1}^{8} \frac{B^{a} \lambda^{a}}{\sqrt{2}}=\left[\begin{array}{ccc}{\Sigma^{0} / \sqrt{2}+\Lambda / \sqrt{6}} & {\Sigma^{+}} & {p} \\ {\Sigma^{-}} & {-\Sigma^{0} / \sqrt{2}+\Lambda / \sqrt{6}} & {n} \\ {\Xi^{-}} & {\Xi^{0}} & {-\sqrt{\frac{2}{3}} \Lambda}\end{array}\right].$ Is there literature for chiral perturbation theory for $N$ light flavors? How would one go about including baryons?
Baryons in ChPT are an advanced subject, so I won't presume to do your Googling for you. But you recall baryons are fermions, so you don't need gimmicks: $SU(N)\times SU(N)$ is realized linearly on vectors of an m-dimensional representation. Recall the nucleon isodoublet $$\begin{pmatrix} p\\n\end{pmatrix}$$ is acted upon on the left by both the L and the R chiral operators, $$ L^i=\tfrac{1}{2} \tau^i P_L, \qquad R^i=\tfrac{1}{2} \tau^i P_R, $$ where the two factor groups commute by virtue of the chiral projectors, the flavor group (algebra) being just the vector, $V^i=\tfrac{1}{2} \tau^i $. So, how does the group act on the Δ isoquartet $(Δ^{++},Δ^+, Δ^0, Δ^-)^T$? Same way, except you utilize the quartet isospin generators, mutatis mutandis... Same for SU(3); you could, if you wished, act on the left of the octet 8-vector $$ B^a= \begin{pmatrix}\sqrt{2}(\Sigma^++\Sigma^-)\\ i\sqrt{2}(\Sigma^+ -\Sigma^-)\\ \Sigma^0 \\ \sqrt{2}( p+\Xi^-)\\ i\sqrt{2}( p-\Xi^-)\\ \sqrt{2}( n+\Xi^0)\\ i\sqrt{2}( n-\Xi^0)\\ \Lambda \end{pmatrix} $$ by the adjoint rep (generators are the 8 structure constant matrices) instead of your matrix realization, now with the chiral gamma matrix projectors tacked on. (One assumes you have done the exercise of linking the two!) But that would also suggest to you how to deal with the baryon decuplet, one row of which we just did above! (However, you'd have to chase down the 10-dim generator matrices.) Proceed to SU(4), where the parents of the octet and the decuplet above are both 20 s, by coincidence. For generic flavor N, the mixed symmetry octet blossoms to $N(N^2-1)/3$-tuplet reps; but the symmetric decuplet to $N(N+1)(N+2)/6$-tuplets, etc.
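As a quick numerical aside (my own check of the counting quoted above, not part of the original answer), the dimension formulas for the mixed-symmetry and fully symmetric baryon multiplets reproduce the familiar cases: 2 and 4 for N=2, 8 and 10 for N=3, and both equal to 20 for N=4.

```python
# Dimensions of the spin-1/2-type (mixed symmetry) and spin-3/2-type (symmetric)
# baryon multiplets for N light flavours, as quoted in the question and answer.
def dim_mixed(n):      # "octet-like" multiplet
    return n * (n**2 - 1) // 3

def dim_symmetric(n):  # "decuplet-like" multiplet
    return n * (n + 1) * (n + 2) // 6

for n in range(2, 7):
    print(f"N={n}: mixed {dim_mixed(n):3d}, symmetric {dim_symmetric(n):3d}")
# N=3 gives 8 and 10; N=4 gives 20 and 20 ("by coincidence", as noted above).
```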
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Where is the wiggle room in current gravity theories? As far as I know, General Relativity has long since been proved experimentally to every qualified person's entire satisfaction, and modern technology such as GPS relies on its accurate predictions. So although there may be debatable aspects, such as local conservation of energy, and of course a theory of quantum gravity is still being sought, there seems little scope for improvement except possibly more streamlined formalisms and explicit solutions in more scenarios. But skimming arXiv articles, one often sees titles of papers which appear to refer to lots of variant gravity theories. So I was curious to know how these differ from GR. Some look like abstract extensions to higher dimensions, perhaps with a view to elucidating why the 3+1 dimensions are favoured by nature. But others seem to require different laws of gravitational attraction, and one wonders if they are all consistent with GR. But, assuming they are, or are intended to be, where is the flexibility in GR which allows this variety?
To understand where the "wiggle room" in general relativity is, it is useful to look at one of the main theorems that constrains GR, Lovelock's theorem. This says that if we start from an action that

* is local,
* depends only on the spacetime metric,
* is at most second order in derivatives of the metric, and
* is in 4 spacetime dimensions,

then the only possible equation of motion for the metric is the Einstein equation (possibly with a cosmological constant). The conditions of this theorem immediately tell us which assumptions we need to let go of to find alternative theories. You can

* consider theories with more fields than just the metric, as is done for example in scalar-tensor theories such as Brans-Dicke,
* consider actions that contain higher derivatives of the metric, for example $f(R)$ gravity,
* consider non-local actions (I don't know of any good examples that people study), or
* consider theories in a different number of dimensions than 4.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Which should be the correct option? Question: A person standing on the floor of an elevator drops a coin. The coin reaches the floor of the elevator in time t1 if the elevator is stationary and in time t2 if it is moving uniformly. Then (a) t1 = t2, (b) t1 < t2, (c) t1 > t2, (d) t1 < t2 or t1 > t2 depending on whether the lift is going up or down. Explanation: As the elevator is moving at uniform speed, its acceleration is zero, so there is no pseudo force. Thus it cannot affect the motion of the coin. Thus in both cases the coin takes the same time, i.e. t1 = t2. Therefore the correct option is (a). But I am not satisfied with the above explanation and the answer. I think that option (d) is correct. Let the coin be dropped from a height h above the floor of the elevator. If the elevator is moving down, then the coin will have to travel a distance h+H (>h), where H is the distance travelled by the elevator in the time the coin takes to reach its floor. Since the coin has to travel a greater distance than before, with the same acceleration, it should take more time than t1. Similarly, if the lift moves up, the coin will have to travel a smaller distance to reach the bottom, and thus should take less time. Correct me if I am wrong. (This question is from HC Verma's "Concepts of Physics".)
The thing you're missing is that if the elevator is travelling uniformly at some velocity $v$, the coin starts with that same velocity. So, taking our SUVAT equation: in the case where the elevator isn't moving, you have $$\tfrac12 g t_1^2 = h$$ and when it is, $H$ is clearly equal to $v t_2$ so: $$v t_2 + \tfrac12 g t_2^2 = h + H = h + v t_2$$ $\implies \tfrac12 gt_2^2 = h$ after cancellation. The constant motion of the elevator cancels out! This is actually a very general principle of physics: the speed you're moving at doesn't affect the outcome of experiments, if it's a constant speed. Look up the principle of relativity (Galilean invariance) if you're interested!
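A tiny numerical sketch of the cancellation (my own illustration with made-up values of h, v and g): simulate the coin and the floor in the ground frame and note that the meeting time is independent of the elevator's uniform velocity.

```python
import numpy as np

def fall_time(h, v_elevator, g=9.81):
    """Time for a dropped coin to meet the elevator floor, in the ground frame.

    The coin starts at height h above the floor with the elevator's velocity;
    the floor keeps moving uniformly at v_elevator (positive = upward).
    Coin:  y_c(t) = h + v t - g t^2 / 2,  Floor: y_f(t) = v t
    Meeting: h - g t^2 / 2 = 0  ->  t = sqrt(2 h / g), independent of v.
    """
    return np.sqrt(2 * h / g)  # the v t terms cancel, as in the algebra above

# Brute-force check by time stepping, for elevator at rest, moving up, moving down
for v in (0.0, +3.0, -3.0):
    h, g, dt, t = 2.0, 9.81, 1e-5, 0.0
    y_coin, y_floor, vc = h, 0.0, v
    while y_coin > y_floor:
        y_coin += vc * dt - 0.5 * g * dt**2
        vc -= g * dt
        y_floor += v * dt
        t += dt
    print(f"v_elevator = {v:+.1f} m/s -> t = {t:.4f} s (analytic {fall_time(h, v):.4f} s)")
```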
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Confusion regarding Ampère's law and non-planar loops To show that $\int_{C} \vec{B}\cdot \vec{dl}=4\pi I/c$ for this loop Purcell uses this other path ($C'$) He argues that since $C'$ doesn't enclose the wire $$\begin{align*}\int_{C'}\vec{B}\cdot \vec{dl'}&=0\\ \int_{C_1}\vec{B}\cdot \vec{dl_1}+\int_{C}\vec{B}\cdot \vec{dl}&=0\end{align*}$$ and since $\int_{C_1}\vec{B}\cdot \vec{dl_1}=-4\pi I/c$ then $$\int_{C}\vec{B}\cdot \vec{dl}=4\pi I/c.$$ But I might as well choose to add another circular loop like $C_2$ (assuming $C$ is non planar). Now in this case I would get $$\int_{C}\vec{B}\cdot \vec{dl}=\color{red}{2}\times 4\pi I/c=8\pi I/c$$ Here, $C_2$ is in front of $C_1$. Where does this contradiction arise from?
Imagine that you pulled and reshaped the part of the loop you called $C_1$ as shown below. In doing this you have not cut through the current-carrying conductor. You can now see that the current-carrying conductor lies within your Amperian loop.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Contradiction in canonical transformation The problem I'm supposed to solve is finding $Q$, such that $(p,q)\rightarrow(P,Q)$ is a canonical transformation. In this case $\mathcal{H}=\frac{p^{2}+q^{2}}{2}$ and the new hamiltonian $\mathcal{K}$ is $\mathcal{K}=P$. This means $\dot{q}=p$ and $\dot{p}=-q$ Since $\mathcal{H}$ and $\mathcal{K}$ are time independent $\mathcal{H}=\mathcal{K}$ and $P=\frac{p^{2}+q^{2}}{2}$. Now I use a generating function of canonical transformations $F_{4}=F_{4}(p,P)$ so: $\frac{\partial F_{4}}{\partial p}=-q\quad\quad\quad\mbox{and}\quad\quad\quad\frac{\partial F_{4}}{\partial P}=Q$ $P=\frac{p^{2}+q^{2}}{2}\quad\Rightarrow\quad q=\sqrt{2P-p^{2}}$ Then \begin{equation} F_{4}=-\int\sqrt{2P-p^{2}}dp\quad\Rightarrow\quad Q=-\int \frac{\partial\sqrt{2P-p^{2}}}{\partial P}dp=-arcsin\left(\frac{p}{\sqrt{2P}}\right)=-arcsin\left(\frac{p}{\sqrt{p^{2}+q^{2}}}\right) \end{equation} $\{Q,P\}= \frac{\partial Q}{\partial q}\frac{\partial P}{\partial p}-\frac{\partial Q}{\partial p}\frac{\partial P}{\partial q}=\frac{p}{p^{2}+q^{2}}p-\left(-\frac{q}{p^{2}+q^{2}}\right)q=1$. Therefore this transformation is canonical. However I also tried to find $Q$ with the generating function $F_{1}=F_{1}(q,Q)$, where \begin{equation} \frac{\partial F_{1}}{\partial Q}=-P\quad\quad\mbox{and}\quad\quad\frac{\partial F_{1}}{\partial q}=p \end{equation} Then \begin{equation} F_{1}=\int\frac{-p^{2}-q^{2}}{2}dQ\quad\Rightarrow\quad p=\int \frac{\partial\left(\frac{-p^{2}-q^{2}}{2}\right)}{\partial q}dQ=\int -qdQ=-qQ\quad\Rightarrow\quad Q=-\frac{p}{q} \end{equation} This is very different with respect to the first $Q$ found, and $\{Q,P\}=\frac{p}{q^{2}}p+\frac{1}{q}q=\frac{p^{2}}{q^{2}}+1$ which can only be equal to 1 if $p=0$. But if we assume this is a canonical transformation then $\dot{Q}=1$ and $\dot{P}=0$, and \begin{equation} \dot{Q}=\frac{\partial Q}{\partial q}\dot{q}+\frac{\partial Q}{\partial p}\dot{p}=\frac{p^{2}}{q^{2}}+1=1\Rightarrow p=0 \end{equation} I think the second result can't be possible, if $p=0$ then $Q=0$; so my question is why I could not obtain $Q$ with $F_{1}$, did I miss something?
I am not that much familiar with Hamiltonian mechanics, but are you not supposed to write $F_1$ as a function of $q$ and $Q$ only? You need to replace $p$ in $F_1$ by a combination of $q$ and $Q$, which will obviously have a non-zero partial derivative with respect to $q$, thus changing your calculation. I will be "cheating" since I will use the first result in the second part, but I don't know if there is a way to do it differently. Since $Q = - \mathrm{arcsin}\left(\frac{p}{\sqrt{p^2+q^2}}\right)$, we can write that $\mathrm{sin}^2(-Q) = \frac{p^2}{p^2+q^2}$, or $p^2 = \frac{\mathrm{sin}^2(-Q)}{1 - \mathrm{sin}^2(-Q)} q^2$. Thus: $$F_{1}=\int\frac{-p^{2}-q^{2}}{2}dQ = \int\frac{-\frac{\mathrm{sin}^2(-Q)}{1 - \mathrm{sin}^2(-Q)}-1}{2} q^2dQ = \int -\frac{1}{2 \mathrm{cos}^2(-Q)} q^2 dQ$$ which leads to: $p=\int \frac{\partial\left(-\frac{1}{2(1 - \mathrm{sin}^2(-Q))} q^2\right)}{\partial q}dQ = \int -\frac{q}{\mathrm{cos}^2(-Q)}dQ = \int q d(\mathrm{tan}(-Q)) = q \,\mathrm{tan}(-Q)$ or finally: $Q = - \mathrm{arctan}\left(\frac{p}{q}\right) = - \mathrm{arcsin}\left( \frac{p}{\sqrt{p^2+q^2}}\right).$ The very last equality can be easily derived by remembering the fact that the tangent of an angle $\theta$ in a triangle can be expressed as the ratio of the opposite side length $p$ over the adjacent side length $q$, whereas the sine of the same angle is expressed as the ratio of $p$ over the hypotenuse length $\sqrt{p^2+q^2}$. But $\mathrm{arctan(tan}(\theta)) = \mathrm{arcsin(sin}(\theta)) = \theta$. Of course, this would not be useful to derive the expression for $Q$, as in this solution I've used the expression of $Q$ from the first part of your answer to find the same expression in the end. This is merely a safety check that the equations on $F_1$ are correct. I don't know how if you can derive the same result using $F_1$ from scratch. The problem here is that you don't have a nice way to express $F_1$ using explicitely $q$ and $Q$ only.
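For what it's worth, here is a short SymPy sketch (my own check, not part of the original question or answer) confirming that $Q = -\arcsin(p/\sqrt{p^2+q^2}) = -\arctan(p/q)$ together with $P=(p^2+q^2)/2$ has $\{Q,P\}=1$, while the inconsistent $Q=-p/q$ from the second attempt does not:

```python
import sympy as sp

q, p = sp.symbols('q p', positive=True)

def poisson(Q, P):
    # {Q, P} = dQ/dq dP/dp - dQ/dp dP/dq
    return sp.simplify(sp.diff(Q, q) * sp.diff(P, p) - sp.diff(Q, p) * sp.diff(P, q))

P_new   = (p**2 + q**2) / 2
Q_good  = -sp.asin(p / sp.sqrt(p**2 + q**2))   # result of the F4 route
Q_atan  = -sp.atan(p / q)                      # equivalent form found via F1
Q_wrong = -p / q                               # the inconsistent second attempt

print(poisson(Q_good, P_new))    # 1
print(poisson(Q_atan, P_new))    # 1
print(poisson(Q_wrong, P_new))   # p**2/q**2 + 1, canonical only if p = 0
```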
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Classic Man on a Boat problem To be clear, I have indeed reviewed the question asked by helios321 (Classic man on boat problem), but I have something else to ask related to the man on a boat problem. The problem goes like this: A man is standing on one side of a boat and the boat is stationary. We ignore friction between water and boat (and air friction). Thus there are no external forces on the man+boat system. So momentum is conserved, and the centre of mass does not move. (Copied from helios321's post) I know that if the man moves to the other side of the boat, the boat moves in the opposite direction. But what I don't understand is: if the boat moves $x$ m to the left and the man $(L-x)$ m to the right [$L$ is the length of the boat], then how can we say that $M_{man}(L-x) = M_{boat}(x)$?
There is no external force on the system, right? So shouldn't the centre of mass remain stationary? Let me give you an example too. A man is standing on a boat and jumps onto the pier. As a result, the boat moves backward and the centre of mass of the system is still at rest. So, back to this problem: the boat also acquires a velocity opposite in direction to the man's, though of a lesser magnitude (considering the boat is heavier than the man). Hence the net velocity of the centre of mass comes out to zero. Hope it helped. Edit: Answering the new question. Now I hope you believe that the centre of mass is at rest, right? That means its position does not change (as the system is initially at rest). You know that $$M_{system}X_{cm,\,initial} = M_{man}X_{man,\,initial} + M_{boat}X_{boat,\,initial}$$ $$M_{system}X_{cm,\,final} = M_{man}X_{man,\,final} + M_{boat}X_{boat,\,final}$$ As $X_{cm,\,initial}=X_{cm,\,final}$, on subtracting we get $$M_{man}\,\Delta X_{man} + M_{boat}\,\Delta X_{boat} = 0$$ So $M_{man}(L-x) - M_{boat}\,x = 0$ (the minus sign is because the boat moves left). I hope you got it now.
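As a small numeric illustration of the centre-of-mass argument (the masses and boat length below are made up for the example), you can solve $M_{man}(L-x) = M_{boat}\,x$ for the boat's displacement $x$:

```python
def boat_displacement(m_man, m_boat, L):
    """Displacement of the boat when the man walks from one end to the other.

    From conservation of the centre of mass:
        m_man * (L - x) = m_boat * x   ->   x = m_man * L / (m_man + m_boat)
    """
    return m_man * L / (m_man + m_boat)

m_man, m_boat, L = 70.0, 210.0, 4.0        # kg, kg, m (illustrative values)
x = boat_displacement(m_man, m_boat, L)
print(f"boat moves {x:.2f} m, man moves {L - x:.2f} m relative to the water")
# Check that the centre of mass stays put:
print(m_man * (L - x) - m_boat * x)        # ~0
```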
{ "language": "en", "url": "https://physics.stackexchange.com/questions/546856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Schumann resonance frequency I keep hearing that Schumann resonances have been increasing rapidly in recent years. And that it was relatively stable before. I failed to find any scientific data proving or disproving this. I would really appreciate a chart depicting what has been happening over the years in that concern.
Of course, the Schumann resonance frequency can't constantly rise. We can see continuous fluctuation (with seasonality). As for "human brain resonance", I'd not call it "pseudo-science" (I'm referring to one of the comments above; unfortunately, I don't have enough reputation yet to write a comment). There is nothing magical about the effect of electric and magnetic fields on the nervous system; you can find that Adey and Blackman published many articles with supporting evidence. This article might also be useful: Cherry N. J., Human intelligence: The brain, an electromagnetic system synchronized by the Schumann Resonance signal // Human Sciences Department, Lincoln University, New Zealand. – 2003. – V.12. – №6. – P.843–844.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is the force of gravity always directed towards the center of mass? This is a pretty basic question, but I haven't had to think about orbital mechanics since high school. So just to check - suppose a [classical] system of two massive objects in a vacuum. If the density of either object is the same at a given distance from the center, and both objects are spherical, then both objects can be treated as point-masses whose position is the [geometric] center of the original sphere. In the case that either object is not spherical or has an irregular distribution of mass (I'm looking at you, Phobos!), both objects can still be treated as point-masses but the center of mass rather than the geometric center must be used. Is this correct?
Perhaps I am wrong, given the other answers, but it was my understanding that gravity would indeed always be directed towards the centre of an object's mass. I would argue this by proposing a 2D plane rather than a 3D space. in this example, we would like to see the direction of gravity between a point and, say, a rectangle. The centre of mass, here, is incredibly useful. Because of the definition of gravity, point-mass, and centre of mass, the centre of mass will always be the point at which gravitational forces of the surrounding mass on any opposite sides are exactly equal. If the point for which we are testing gravity is directly above our rectangle's centre of mass, then the gravitational pull from both the right and left sides of the rectangle are perfectly balanced, and the point will be pulled straight down (and, similarly, the rectangle will be pulled straight up, assuming that the point in question has some mass). I hope this helps, and I look forward to hearing from other responders regarding this.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 6, "answer_id": 3 }
What is aperture of a lens? I read that the aperture of a lens is the surface from which refraction takes place and that it is represented by the diameter of the lens. So, by saying that the aperture refers to the surface from which refraction occurs, do we mean that the surface area of the curved face would be the numerical value of the aperture? And since that surface area depends on the diameter, would that explain why the aperture is represented by the diameter?
The aperture of a lens is the working area of the lens: in practice, the diameter of the beam that is refracted. If you place a pinhole with a variable diameter (sometimes called an aperture) before the lens, you can decrease the working area of the lens down to 0. Opening the pinhole fully, you obtain the full working area, which is indeed represented by the diameter of the lens. In this way a photographer can change the depth of field (a parameter characterizing how many planes of a volumetric object are in focus). P.S. Don't mix this up with the numerical aperture (NA), which has a different, though related, meaning.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is wrong with this calculation of work done by an agent bringing a unit mass from infinity into a gravitational field? Let us assume that a gravitational field is created by a mass $M$. An agent is bringing a unit mass from $\infty$ to distance $r < \infty$, both measured from mass $M$. The agent is always forcing the unit mass with a continuously changing force $\vec F(\vec x)$, $\vec{x}$ being the distance pointing radially out from $M$. According to classical mechanics, it holds that $\vec F(\vec x) = \frac{GM}{x^2}\hat{x}$, with $G$ being the gravitational constant. The work is calculated as follows: $$W = \int_\infty^r\vec F(\vec x)\cdot d\vec x$$ $$=\int_\infty^r{{F(x)}\,dx\cdot cos(\pi)}$$ $$=-\int_\infty^r{{\frac{GM}{x^2}}dx}$$ $$=-GM[-\frac{1}{x}]_\infty^r$$ $$=GM[\frac{1}{x}]_\infty^r$$ $$=GM[\frac{1}{r}-\frac{1}{\infty}]$$ $$=\frac{GM}{r}$$ The body moved against the force's direction (the angle between them was always $\pi$). So the work should have been negative. But since $r$ is the scalar distance from $M$, it is positive like $G$ or $M$, yielding the result always positive. What is wrong here?
You made a mathematical error in trying to prove the result; it is a common one. To give you insight into your mistake, here is the correct way to set up such integrals in physics. Always take the displacement element $dx$ at a distance $x$ from the origin to point in the direction of increasing $x$; the fact that the mass actually moves inward is already encoded in the integration limits (from $\infty$ down to $r$). Your physics was correct, but by inserting the factor $\cos(\pi)$ by hand while also integrating from $\infty$ to $r$, you counted the direction of motion twice. If you insist on taking $dx$ opposite to $x$, you must compensate with an extra negative sign.
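A short SymPy sketch of the bookkeeping (my own illustration, not part of the original answer): keep $dx$ along the direction of increasing $x$ and let the limits carry the direction of motion. The gravitational force on the unit mass points inward, $-GM/x^2\,\hat x$, and the agent's balancing force points outward, $+GM/x^2\,\hat x$; integrating each from $\infty$ down to $r$ gives $W_{grav}=+GM/r$ and $W_{agent}=-GM/r$, the expected negative work by the agent.

```python
import sympy as sp

x, r, G, M = sp.symbols('x r G M', positive=True)

F_gravity = -G * M / x**2    # component along +x (points toward M, i.e. inward)
F_agent   = +G * M / x**2    # agent balances gravity, pointing outward

# Keep dx along +x; the inward motion is encoded by the limits (from oo down to r)
W_gravity = sp.integrate(F_gravity, (x, sp.oo, r))
W_agent   = sp.integrate(F_agent,   (x, sp.oo, r))

print(W_gravity)   #  G*M/r   (gravity does positive work on the infalling mass)
print(W_agent)     # -G*M/r   (the agent does negative work, as expected)
```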
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Approximation of the total number of accessible microstates Consider a system composed of two subsystems $\alpha$ and $\beta$ that can exchange energy between them. The total number of accessible microstates of the whole system is given by $$\Omega(E)=\sum_{E_{\alpha}}\Omega_{\alpha}(E_{\alpha})\Omega_{\beta}(E-E_{\alpha})$$ Which approximation did we use to get $$\Omega(E) \approx \Omega_{\alpha}(\tilde E_{\alpha})\Omega_{\beta}(E-\tilde E_{\alpha})$$ where $\tilde E_{\alpha}$ is the most probable value of $E_{\alpha}$?
I'll put the conclusions first. Take an ideal gas as an example and define the function $f(E_{\alpha}) = \Omega_{\alpha}(E_{\alpha}) \Omega_{\beta}(E-E_{\alpha})$. This function $f(E)$ will look like: $$ f(E)=E^{N}e^{-E} $$ where N is a very large number (same order as the number of particles, around $10^{23}$); $E$ here is dimensionless and 1 unit of E corresponds to $k_B T$. The most probable state corresponds to the maximum of $f(E)$. You can set $f'(E_{max})=0$ and get $E_{max}=N$. The task comes down to comparing $f(E_{max})$ and $\int_0^{\infty} f(E) dE$. Your summation is actually an integral because $E$ is continuous. So the approximation claims that $$N^N e^{-N} \approx \int_0^{\infty} E^{N}e^{-E} dE = N!$$ Take the log on both sides and you have $$ \ln(N!) \approx N\ln(N)- N $$ which is the famous Stirling's approximation taught in every statistical mechanics course. If you don't believe it, plug in some numbers. I tried $N=100$ and found that they differ by less than 1%. Now to explain why $f(E)$ would take the form $E^{N}e^{-E}$. The Boltzmann distribution essentially tells us $\Omega_{\beta}(E-E_{\alpha}) \propto e^{-E_{\alpha}/k_B T} $, which explains the second factor. The first factor comes from $$ E = N (1/2 m v^2) \\ \Omega_{\alpha}(E) \propto (4\pi v^2)^N \propto E^N $$ where $v$ is the average velocity of the particles. I realize it should really be $\Omega_{\alpha}(E) \propto E^{3/2 N}$ because I really should be using the number of degrees of freedom, but it doesn't change the order of $N$.
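A quick numerical sketch of the last claim (my own check): compare $\ln(N!)$ with $N\ln N - N$ for $N=100$ and a few larger values.

```python
import math

for N in (10, 100, 1000, 10_000):
    exact = math.lgamma(N + 1)          # ln(N!)
    stirling = N * math.log(N) - N      # N ln N - N
    rel_err = abs(exact - stirling) / exact
    print(f"N={N:6d}: ln(N!)={exact:12.2f}  N ln N - N={stirling:12.2f}  rel. err={rel_err:.3%}")
# For N = 100 the two already agree to about 1%, and the agreement improves with N.
```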
{ "language": "en", "url": "https://physics.stackexchange.com/questions/547933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Why is the acceleration of the string connected to the cylinder different from the acceleration with which the cylinder moves forward? Object 'B' below is a cylinder. It is mounted horizontally on a massless block. When a tension T is applied by a string passing over the lower end of the cylinder, the acceleration of the string tied to the cylinder is different from the acceleration with which the CENTRE OF MASS of the cylinder moves forward (i.e., the cylinder experiences both rotational and translational motion). Please explain why this happens. Intuitively I can imagine that they should be different, but can you please provide a proof of that?
First of all, $$\vec{F_{ext}}=m\vec{a_{cm}}$$ This equation conveys the message that the net external force is the only thing responsible for the acceleration of the centre of mass of the system. It doesn't mean that $\vec{a_{cm}}=\vec{a_{i}}$, where $\vec{a_{i}}$ represents the acceleration of the $i^{th}$ particle of the system.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What exactly happens when $\rm NaCl$ water conducts electricity? Assume a DC power source with $2$ electrodes made of Fe. We dip those $2$ electrodes into table salt water. What happens exactly?

* Will $H^+$ and $Na^+$ migrate to the negative electrode by the electric field, by diffusion, or by a combination of both?
* Will $H^+$ accept electrons first and then $Na^+$, or both? And what if we really amp up the current: are we going to see metallic $Na$ at the negative electrode, which then reacts violently with water?
* At the positive electrode, should we expect oxygen and chlorine gas, or will the $Fe$ electrode just get eaten away?

Although there are many questions, I believe there is one general principle that can explain them all, something that can explain the priority of all possible reactions.
"...I believe there is one general principle that can explain them all, something that can explain the priority of all possible reactions." Under normal conditions sodium chloride has a crystalline structure. Each $Na^+$ ion, and likewise each $Cl^-$ ion, is surrounded by six ions of the opposite charge. From this we learn that the one electron from sodium sits more on the chlorine side and less on the sodium side. Since water is a good solvent due to its polar character, it is not surprising that the aqueous solution of NaCl looks like this: "The attraction between the Na+ and Cl− ions in the solid is so strong that only highly polar solvents like water dissolve NaCl well. When dissolved in water, the sodium chloride framework disintegrates as the Na+ and Cl− ions become surrounded by the polar water molecules... The sodium and the chloride ions are also strongly solvated, each being surrounded by an average of 6 molecules of water." And sodium is still missing its one electron, while chlorine, together with the surrounding water molecules, has captured it. There is a field in chemistry called electrochemistry. It is about the destruction and formation of chemical bonds by electrical energy. Chemical bonds appear to be strong, but only a few volts are sufficient to destroy or re-form even compounds with fluorine. Under an electric potential difference the ions move to the electrodes, while of course remaining in balance in the solution. All your other questions depend on the material of the electrodes and their electronegativity. The electrode could be destroyed (more likely on the chlorine side) or the electrode could get plated (more likely on the sodium side). All this is really a question for CSE.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Mathematically prove that a round wheel rolls faster than a square wheel Let's say I have these equal-size objects (for now thinking in 2D) on a flat surface. At the center of each object I apply an equal positive torque (just enough to make the square tire move forward). Of course the round tire will move forward faster and even accelerate (I guess). But how can I mathematically prove/measure how much better the round tire will perform? This is for an advanced simulator I'm working on, and I don't want to just hard-code that round rolls better, square worse, etc. I know the answer could be very complex, but I'm all yours.
At the center of those objects I add equal positive angular torque (just enough to make the square tire to move forward). You do not apply torque at the center, a single point. It requires at least two points. This is an important detail. Because the second force applied to the object is a friction force. This friction force will be different for the round and square wheels. With a different friction force the object will experience a different impulse and obtain a different translational momentum (impulse will be related to the difference between friction and pull forces multiplied by the time) For the round wheel, the friction force will be smaller because less torque is needed to make the circle move and rotate. This difference can be computed by considering the moment of inertia. Computation by moment of inertia The moment of inertia around the center is for square and round: $$\begin{array}{} I_{square} &=& \frac{1}{6}mD^2\\ I_{round} &=& \frac{1}{8} mD^2 \end{array}$$ with mass $m$ and D the height/diameter of the wheel. As seen in the image below for a given speed at the center and a given rotation, the square does not have a continuous speed in the horizontal direction (which will go along with variations of friction and bouncing, but let's assume a perfect situation where this does not lead to energy dissipation). We can compute the average speed by relating the circumference of the wheel ($4D$ for the square and $\pi D$ for the round). So the relative ratio of average horizontal speed $v_h$ and angular velocity will be $$\begin{array}{} \omega_{square} &=& \frac{1}{4} (v_h/D)\\ \omega_{round} &=& \frac{1}{\pi} (v_h/D) \end{array}$$ The square is moving faster than this speed because it is making a longer path. For a distance of $D$ in horizontal direction (one quarter flip) it follows a path of 1 quarter circles with radius $\sqrt{0.5}D$ and the length is $\sqrt{1/8} \pi D$. So for a given translational velocity $v_h$ the energy needed is $$\begin{array}{} E_{square} &=& I_{square}\omega_{square}^2 + (\sqrt{1/8} \pi)^2 mv_h^2 &=& \left(\frac{1}{96} + \frac{\pi^2}{8} \right) mv_h^2 & \approx & 1.244 mv_h^2\\ E_{round} &=& I_{round}\omega_{round}^2 + mv_h^2 &=& \left(\frac{1}{8 \pi^2} + 1 \right) mv_h^2 & \approx & 1.013 mv_h^2 \end{array}$$ So to move the square with a certain average horizontal velocity, you need more energy. This difference has been computed above by considering energy, but the mechanism is a difference in the friction force between the wheel and the surface (differences that need to match the difference in momentum along the horizontal direction). If the square wheel is pulled at the center, and a rotation plus translation comes from this, then this needs to coincide with a frictional force at the bottom. This friction will be bigger for the square wheel than the round wheel. Generalization Under construction. The analysis below ignores gravity. This term could be added easily in the last expression, but it must be seen how this influences the conclusion. Let's consider any round (convex) objects of homogeneous density and describe them by the radius (distance of the edge from the CM) as a function of the angle $r(\phi)$. We have the following relations. 
For the circumference $$L = \int_{0}^{2 \pi} \frac{\text{d}L}{\text{d}\phi} \, \text{d}\phi = \int_{0}^{2 \pi} \sqrt{r^\prime(\phi)^2 + r(\phi)^2} \, \text{d}\phi$$ position as function of angle $x(\varphi)$ $$\begin{array}{} x(\varphi) &=& \int_{0}^{\varphi} \sqrt{r^\prime(\phi)^2 + r(\phi)^2} \, \text{d}\phi \\ y(\varphi) &=& r(\varphi) \end{array}$$ distance traveled as function of angle $$\begin{array}{} s(\varphi) &=& \int_{0}^{\varphi} \frac{\text{d}s}{\text{d}\phi} \, \text{d}\phi \\ &=& \int_{0}^{\varphi} \sqrt{x^\prime(\varphi)^2 + y^\prime(\varphi)^2} \, \text{d}\phi \\ & =& \int_{0}^{\varphi} \sqrt{2 r^\prime(\phi)^2 + r(\phi)^2} \, \text{d}\phi \end{array}$$ moment of inertia (assuming homogeneous density distribution) $$I = m \frac{1}{2} \frac{\int_{0}^{2 \pi} r(\phi)^3 \, \text{d}\phi}{\int_{0}^{2 \pi} r(\phi) \, \text{d}\phi} = m \frac{1}{2} \frac{\bar{r^3}}{\bar{r}} $$ Using the above we can express the velocity $v = \frac{\text{d}s}{\text{d}t}$ in terms of the angular velocity $\omega = \frac{\text{d}\varphi}{\text{d}t}$ and also the horizontal velocity $v_h = \frac{\text{d}x}{\text{d}t}$ in terms of the angular velocity $$\begin{array}{} v &=& \sqrt{2 r^\prime(\phi)^2 + r(\phi)^2} \omega \\ v_h &=& \sqrt{r^\prime(\phi)^2 + r(\phi)^2} \omega \end{array}$$ Then we can express the kinetic energy (decomposed in rectilinear motion and rotation) in terms of $v_h$. $$\begin{array}{} E_{kinetic} &= &\frac{1}{2}m v^2 + \frac{1}{2} I \omega^2 \\ & =& \frac{1}{2}m \left( \frac{2 r^\prime(\phi)^2 + r(\phi)^2}{ r^\prime(\phi)^2 + r(\phi)^2} + \frac{\bar{r^3}}{\bar{r}} \frac{1}{ r^\prime(\phi)^2 + r(\phi)^2} \right) v_h^2 \\ & =& \frac{1}{2}m \left( 1+ \frac{r^\prime(\phi)^2 + \bar{r^3}/\bar{r}}{ r^\prime(\phi)^2 + r(\phi)^2} \right) v_h^2 \\ \end{array} $$ For the round wheel the term in the brackets equals $2$ such that $E_{kinetic} = m v_h^2$. For other shapes the $\bar{r^3}/\bar{r}$ term will be higher making that more energy is required to roll at a particular horizontal velocity $v_h$.
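A small numerical evaluation of the square-vs-round comparison above (my own check of the closed-form factors, under the same assumptions: homogeneous density and no bouncing losses):

```python
import math

m, v_h = 1.0, 1.0    # unit mass and unit average horizontal speed (illustrative)

# Energy needed for a given average horizontal speed, from the expressions above
E_square = (1.0 / 96.0 + math.pi**2 / 8.0) * m * v_h**2
E_round  = (1.0 / (8.0 * math.pi**2) + 1.0) * m * v_h**2

print(f"E_square ≈ {E_square:.3f} m v_h^2")   # ≈ 1.244
print(f"E_round  ≈ {E_round:.3f} m v_h^2")    # ≈ 1.013
print(f"ratio    ≈ {E_square / E_round:.3f}") # the square needs ~23% more energy
```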
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64", "answer_count": 12, "answer_id": 11 }
What would cause an infrared thermometer to malfunction in a specific room? The situation is as follows: when used to measure body temperature, my infrared thermometer always measures an abnormally high temperature in ONE certain room (40/41 degrees Celsius), but it always measures a normal temperature outside of that room (36/37 degrees Celsius). I have tried this on multiple different people, at different room temperatures, and at multiple measurement distances from the forehead. I own two infrared thermometers of the same brand and model (JPD-FR202), and both behave abnormally only in that single room. The room has an air conditioner and a computer; when tested, both the air-con and the PC were turned off. Is there any wave or radiation that would affect the accuracy of an infrared thermometer? Or are there any other scientific phenomena that I should look into? Or would it be more reasonable to assume that I own two thermometers from a faulty batch?
Is there any wave or radiation that would affect the accuracy of an infrared thermometer? To potentially state the obvious, infrared radiation would do that. Have you checked your lights in that room? LEDs and incandescent bulbs give off an enormous amount of IR radiation, which could cause your thermometer to read high. To test this, you could try measuring temperatures with the lights off. Your thermometer doesn't care - ostensibly it's just reading the IR which is being radiated by the target, and works just as well in the dark - but it would remove any contribution coming from ambient IR due to the lights in the room.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If heat is merely molecular motion, what is the difference between a hot, stationary baseball and a cool, rapidly moving one? This is from the Exercises for the Feynman Lectures on Physics, exercise Exercise 1.1. I believe that a hot stationary ball has more thermal energy due to the inter-molecular motion of the baseball, while a cool, fast moving baseball has more kinetic energy due to the macro-object itself moving faster in a particular direction?
Definition of temperature in statistical mechanics terms: $$T_{\text{kinetic}}=\frac{2}{3k}\left[\overline{\frac 1 2 m v^2}\right]=\frac{2}{3k}\text{KE}_{\text{average}}$$ Please read the link for the constants; what is important to note is that temperature is analogous to average kinetic energy. The same source also notes: "When kinetic temperature applies, two objects with the same average translational kinetic energy will have the same temperature." So when temperature is calculated statistically from kinetic energy, the same inertial frame must be used (in practice, the rest frame of the ball). In thermodynamic terms, a thermometer measuring the temperature of the stationary hot ball would give the same reading if the ball were in motion.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/548965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
About magnetization in ferromagnetic material I am studying ferromagnetism and have wondered whether the magnetization can be aligned independently of the external magnetic field direction. As far as I know, a ferromagnetic material has no linear relationship between the magnetization M and the external magnetic field H, so I guess M does not have to be parallel to H. But in many figures, when an external field is applied, all the spins are aligned parallel to the external field, as you can see below: So I'm confused about whether or not the magnetization has to follow the direction of the external field. Please help me out.
Above the critical temperature such a material exhibits paramagnetic properties, i.e. the spins align along the magnetic field. However, below the critical temperature it is in a ferromagnetic phase, which has a specific value of the magnetization. Changing the direction of this magnetization then requires applying a sufficiently strong field - hence the non-unique dependence of the magnetization on the applied field in the magnetization curves.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Eddington-Finkelstein coordinates not well-defined? Consider the Schwarzschild solution $$d s^{2}=-\left(1-\frac{2 m}{r}\right) d t^{2}+\frac{d r^{2}}{1-\frac{2 m}{r}}+r^{2}\left(d \theta^{2}+\sin ^{2} \theta d \varphi^{2}\right) $$ and the radial null geodesics (in Schwarzschild coordinates): $$t=\pm(2 m \ln |r-2 m|+r)+\text { constant }. $$ The advanced Eddington-Finkelstein (EF) coordinates are defined as $(\bar{t},r,\theta,\varphi)$ with $$\bar{t}=t+2 m \ln (r-2 m) $$ The metric in EF coordinates has the form: $$d s^{2}=-\left(1-\frac{2 m}{r}\right) d \bar{t}^{2} +\frac{4m}{r}d \bar{t}\, dr+\left(1+\frac{2m}{r}\right)dr^2+r^{2}\left(d \theta^{2}+\sin ^{2} \theta d \varphi^{2}\right) $$ which is not singular at $r=2m$ (but still singular at $r=0$). The incoming radial geodesics (corresponding to $-$) become: $$\bar{t}=-r+\text{constant} $$ and the outgoing (corresponding to $+$): $$\bar{t} = 4m\ln(r-2m) +r +\text{constant} .$$ My understanding is that the above definition for $\bar{t}$ is only valid for $r>2m$. However, we use the solution in EF coordinates for all $r>0$, and when we draw the geodesics we extend them to $r=0$, even though $\bar{t}$ is not defined for $r<2m$ since the quantity in the $\ln$ is negative. The mathematical treatment seems a bit imprecise, what's up? I'd appreciate a more rigorous approach to this.
When you choose $(\bar{t}, r, \ldots)$ instead of $(t, r, \ldots)$ and notice that inside $r<2m$ the EF metric is a valid and regular solution, what you find is an extension (much like an analytic continuation) of the solution in the region $r>2m$ into a region below the event horizon. It was not accessible before because of the coordinate singularity at $r=2m$, but the new coordinates allow you to see that there is nothing strange for incoming geodesics at $r=2m$. This is because there was no true singularity there, just a failure of the original Schwarzschild coordinates to describe the region close to the horizon. Some textbooks discuss this; d'Inverno, Ryder, etc. talk about it. In short, physicists routinely extend the solution from one region where the coordinates work into another.
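As a small sanity check of the regularity claim (a sympy sketch, using the $(-,+,+,+)$ form of the advanced EF metric quoted in the question), one can verify that the determinant of the $(\bar{t},r)$ block stays finite and nonzero at $r=2m$:

```python
import sympy as sp

r, m = sp.symbols('r m', positive=True)
# (tbar, r) block of ds^2 = -(1-2m/r) dtbar^2 + (4m/r) dtbar dr + (1+2m/r) dr^2 + ...
g = sp.Matrix([[-(1 - 2*m/r), 2*m/r],
               [2*m/r,        1 + 2*m/r]])
print(sp.simplify(g.det()))   # -> -1, finite and nonzero even at r = 2m
print(g.subs(r, 2*m))         # all components stay finite at the horizon
```

So nothing in the metric itself breaks down at the horizon, unlike in the original Schwarzschild chart.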
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is a pump's head usable for any fluid? As far as I have investigated, a pump has a specific head at a given flow rate (related to its power and rotating speed). Then, considering the formula ($\Delta P=\rho g H$), $\Delta P$ adjusts for any fluid (with a different density) to obtain the same head. But my question is: how is the extra pressure created for a fluid with higher density, when using a specific pump with a specified power and therefore a maximum head? This is confusing, because it seems more logical to say the head is reduced/increased in such a case, not that the pump produces more power to obtain the same head.
I am assuming you are referring to a centrifugal pump. If that is the case, the term head is used in place of pressure. To understand where the pump will operate you must plot the system head curve onto the pump characteristic performance curve. The pump curve starts on the left side of the chart and normally arcs down from left to right. The system head curve is an inverted arc: it starts on the left side of the chart and projects upward towards the right side. The flow rate in gallons per minute (GPM) is plotted on the horizontal axis, from zero flow on the left side of the chart to maximum flow on the right side. Where the pump curve and the system curve intersect is the operating point of the pump. Thus, if you close down a valve in the piping, it creates more resistance to flow and the intersection point moves to the left: less flow at a higher head. Likewise, if you reduce the resistance to flow by opening a valve wider, the system curve intersects the pump curve at a greater flow rate. The horsepower calculated at the operating point is the horsepower needed for those conditions. The motor driving the pump must be suitable to cover the operating range of flow vs. head.
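A minimal numerical sketch of the operating-point idea (the curve coefficients below are invented purely for illustration and do not describe any real pump):

```python
import numpy as np

def pump_head(Q):    # hypothetical pump curve: head (ft) vs flow (GPM)
    return 120.0 - 0.002 * Q**2

def system_head(Q):  # hypothetical system curve: static head + friction losses
    return 30.0 + 0.001 * Q**2

Q = np.linspace(0.0, 250.0, 2501)
i = np.argmin(np.abs(pump_head(Q) - system_head(Q)))
print(f"operating point: Q ≈ {Q[i]:.0f} GPM at H ≈ {pump_head(Q[i]):.1f} ft")
# Throttling a valve steepens system_head, moving the intersection to lower Q and higher H.
```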
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Horn equation (wave propagation in an object with a circular cross-section) I have a problem with finding the eigenfrequencies for waves which propagate in an object with a circular cross-section. I don't know how to start; I would be very grateful for a solution and comments, or for the solution of a very similar problem. $$G(x)\frac{\partial^2 u(x,t)}{\partial t^2}=c^2\,\partial_{x}\bigl(G(x)\,\partial_{x}u(x,t)\bigr)$$ for a horn of cross-section $G(x)=a(x+1)^2,\ 0\le x\le 1$, with the boundary conditions $u(0,t)=u(1,t)=0$.
Webster's Horn equation at frequency $\omega$ is just an engineer's name for a special case of the Sturm-Liouville equation. Your case is a particularly simple example. Firstly change your independent variable to $\xi = x+1$ so your equation becomes $$ \xi^2 \frac{d^2u}{d\xi^2}+2\xi \frac{d u}{d\xi}+ \frac{\omega^2}{c^2}\xi^2 u=0. $$ I made a mistake first time round and missed the $\xi^2$ in the last term. The solutions of the new equation involve spherical Bessel functions $j_0(x)= (1/x)\sin x $ and $n_0(x)=-(1/x) \cos x$ where now $x= \omega \xi/c$. Take linear combinations to satisfy your BCs.
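A quick numerical check of the resulting eigenfrequencies (a sketch assuming the stated boundary conditions $u=0$ at $\xi=1$ and $\xi=2$, i.e. $x=0$ and $x=1$): a nontrivial combination of $j_0(k\xi)$ and $n_0(k\xi)$ with $k=\omega/c$ exists only where the boundary determinant vanishes, which happens at $k_n=n\pi$, i.e. $\omega_n=n\pi c$:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn, spherical_yn   # j0 and y0 (= n0)

def boundary_det(k):
    return (spherical_jn(0, k) * spherical_yn(0, 2*k)
            - spherical_jn(0, 2*k) * spherical_yn(0, k))

roots = []
grid = np.linspace(0.5, 20.0, 4000)
for a, b in zip(grid[:-1], grid[1:]):
    if boundary_det(a) * boundary_det(b) < 0:
        roots.append(brentq(boundary_det, a, b))
print([round(k / np.pi, 4) for k in roots])   # -> approximately [1.0, 2.0, 3.0, ...]
```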
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Force, Newton's second law In the book "An Introduction to Mechanics" (second edition) by D. Kleppner and R. Kolenkow, I came across a paragraph (p. 54, sec. 2.5.2, Force): It is worth emphasizing that force is not merely a matter of definition. For instance, if we observe that an air track rider of mass $m$ starts to accelerate at rate $\vec a$, it might be tempting to conclude that we have just observed a force $\vec F=m\vec a$. Tempting, but wrong. The reason is that forces always arise from real physical interactions between systems. Interactions are scientifically significant: accelerations are merely their consequence. Consequently, if we eliminate all interactions by isolating a body sufficiently from its surroundings —an inertial system— we expect it to move uniformly. I cannot understand why it is wrong. Please explain the reasoning the author gave.
The author is trying to convey the message that it was not the acceleration which made us observe a force, the force was already there. In other words, force is not the consequence of acceleration, rather it's the other way around, acceleration is the consequence of force, which implies forces are more fundamental.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/549959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Is there an authoritative data source for EM spectrum assignments? I'm working on some software to coalesce various standard constants (eg SI, CODATA, AME) into an easy-access library. However, finding an authoritative source for EM spectrum assignments is a little less clear-cut, which I suppose makes sense given how the spectrum is used. The ITU's source is costly, the FCC's overly specific, and so on. Is there a common authoritative source for the EM spectrum, or is it as subjective as it seems?
ISO 21348 seems to be a good set of definitions, with major and minor categorisation (eg Radio and UHF).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
On-shell SUSY-transformations for interacting Wess-Zumino model I'm learning SUSY with Quevedo, Cambridge Lectures on Supersymmetry and Extra Dimensions. Setup: The SUSY transformations of the component fields of a chiral field $\Phi$ are given by (p.41) \begin{align*} \delta_{\epsilon,\overline{\epsilon}}\varphi &= \sqrt{2}\epsilon^{\alpha}\psi_{\alpha}, \\ \delta_{\epsilon,\overline{\epsilon}}\psi_{\alpha} &= i\sqrt{2}\sigma^{\mu}_{\alpha\dot{\alpha}}\overline{\epsilon}^{\dot{\alpha}}\partial_{\mu}\varphi + \sqrt{2}\epsilon_{\alpha}F,\\ \delta_{\epsilon,\overline{\epsilon}} F &=i\sqrt{2}\overline{\epsilon}_\dot{\alpha}(\overline{\sigma}^{\mu})^{\dot{\alpha}\alpha}\partial_{\mu}\psi_{\alpha}, \end{align*} where $\varphi$ is a complex scalar, $\psi_{\alpha}$ is a left-handed Weyl spinor and $F$ is an auxiliary field. My questions: * *Let us choose the superpotential $W(\Phi)\equiv \frac{m}{2}\Phi^2 + \frac{g}{3}\Phi^3$ together with kinetic part $\Phi^{\dagger}\Phi$ and remove the auxiliary field $F$ via its algebraic equations of motion. Then, the transformation rules must change as well, correct? *We can use the equations of motion of the auxiliary field $F$ to remove it from the Lagrangian. How do we account for this in the transformation rules of the component fields? The transformation rules do not know anything about the model (free/interacting/massless) we are considering, so it is us who should implement this choice into the transformation rules -- but how do we do this without messing up SUSY?
* *When we eliminate/integrate out the auxiliary field $F$, the SUSY transformation for $F$ is rendered moot, and the appearance of $F$ on the RHSs of the other SUSY transformations is replaced with its algebraic EOM. *It's not true that we do not know anything about the model -- we assume that the action $S$ is SUSY-invariant. In particular, the EOM for $F$ is derived from the action.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is the Force of Gravitational Attraction between two “Extended” bodies proportional to the product of their masses? Newton’s Law of gravitation states that the force of attraction between two point masses is proportional to the product of the masses and inversely proportional to the square of the distance between them. I know that the force of attraction between two spheres turns out to be of the same mathematical form as a consequence of Newton’s law. But I am not able to prove how the force between any two rigid masses is only proportional to the product of their masses (as my teacher says) and the rest depends upon the spatial distribution of the mass. So $F$ is ONLY proportional to $Mmf(r)$ where $f(r)$ may be some function based on the specifics of the situation.
The simple explanation is that any finite body (i.e. occupying a bounded region of space) looks like a point from sufficiently far away. This observation also tells you what is the range of validity of this "law". The distance between the bodies needs to be much larger than the linear size of each body. Using math and calculus it is possible to turn this intuition into precise and predictive equations. This approach goes under the name of multipole expansion
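A small numerical illustration of this (not a proof; the masses, sizes and the dumbbell model below are made up for the example): the exact force between two extended bodies approaches $GMm/r^2$ once the separation $r$ is much larger than their size $d$.

```python
G, M, m, d = 6.674e-11, 10.0, 5.0, 1.0   # SI units, illustrative values

def exact_force(r):
    """Net force between body A (two point masses m/2 around x = 0)
    and body B (two point masses M/2 around x = r), all on one line."""
    total = 0.0
    for xa in (-d/2, d/2):
        for xb in (r - d/2, r + d/2):
            total += G * (M/2) * (m/2) / (xb - xa)**2
    return total

for r in (2.0, 5.0, 20.0, 100.0):
    print(r, exact_force(r) / (G * M * m / r**2))   # ratio tends to 1 as r >> d
```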
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 9, "answer_id": 7 }
What is causality? I was watching a few videos on YouTube and I came across a video titled something like "the speed of light is not related to light". In the video it was said that the speed of light is actually the speed of causality, so what actually is causality?
In the broadest sense, a world which exhibits causality is one in which 1) things have causes (i.e., magic does not operate) and 2) those causes precede their effects. As pointed out by PM 2Ring, a strict definition of this can be furnished in a space-time diagram, in terms of the past and future light-cones and their relationship to the world-line corresponding to you, the observer. Those light-cones delineate the locations in time and in space from which things in your past can influence something or someone in your present, and also whether or not things in the present can influence something or someone in the future- depending on where they are and when in the future. The boundaries between the spacetime zones where influence is possible and where it is not are set by the speed of light, which connects this concept of causality with c. The reason that "causality" travels "at the speed of light" is that no cause can have any effect which travels through spacetime faster than that.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Can we use quantities other than temperature to describe thermal equilibrium? From the 0th law, thermal equilibrium is when there is no heat transfer between two objects. So I want to ask: is temperature the only "potential"-esque quantity which should be equalized for heat flow to stop? If temperature is the only one, then why is it the only one? Could we prove this?
First, the 0th law is not what you think it is... From the 0th law, Thermal equilibrium is when there is no heat transfer between two objects. This is not the 0th law, this is just the definition of thermal equilibrium. The 0th law is just something needed to make thermal equilibrium a well-defined "equality" between systems at thermal equilibrium. This is taken straight from the mathematics of equivalence relations, where we need a relation to be reflexive, symmetric, and transitive. By definition of thermal equilibrium, a system is in thermal equilibrium with itself (reflexivity), and if system 1 is in thermal equilibrium with system 2, then it must be that system 2 is in thermal equilibrium with system 1 (symmetry). However, transitivity is not guaranteed by the definition of thermal equilibrium. Therefore, we need the 0th law, which actually states If two thermodynamic systems are each in thermal equilibrium with a third one, then they are in thermal equilibrium with each other. So, contrary to your statement that the 0th law is the definition of thermal equilibrium, it is actually something that characterizes thermal equilibrium. The 0th law brings in transitivity so that, mathematically, thermal equilibrium is an equivalence relation. As for the meat of the question, I think knzhou's answer is great. Anything arising from maximization of entropy can typically be used to describe some sort of equilibrium.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/550672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Should a thermos flask better be half full or half empty? Every evening I prepare hot water for my two-year-old son, who wakes up in the night to get his milk. We use a rather poorly insulated flask for this: a typical metal, cylinder-shaped can holding half a liter. If I put boiling hot water into it, I know that about 5 hours later it will already be at room temperature, but it does the job, as my son typically wakes up two or three hours after I go to bed, and so he gets his milk warm. As I only need about 200 ml to mix up his milk, I was asking myself whether it is better to fill in only that amount of hot water or to fill up the whole can. I guess losing temperature has much to do with the amount of water but also with its surface touching the (colder) room air outside. With no memory of what my old physics teacher told me twenty-five years ago, I hope you can share some wisdom for my little story here. Thanks in advance ;)
If you pour in only half a flask of hot water, the other half of the flask is air, and the water keeps transferring heat (by convection) to that air. A full flask also holds more hot water for roughly the same heat-loss rate, so it cools more slowly. So it is better to fill the flask full if you want the water to stay hot for a long time.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
If the integral of $\frac{dQ}{T}$ over an engine cycle is less than 0, then why is the entropy of the universe always increasing? For a reversible process we define $dS= \frac{dQ}{T}$, so the integral being negative would suggest that the entropy of the universe decreases with each cycle of the engine, because the Clausius inequality states this quantity is less than 0.
The value of $\Delta S_{\text{sys}}$ You are confusing the entropy change of the system with the total entropy change. Now since the process is cyclic, the total entropy change for the system will be zero ($\Delta S_{\text{sys}}=0$). It does not matter whether the process is reversible or irreversible because entropy is a state function and depends only on the initial and the final states, which in this case are the same. What about $\Delta S_{\text{surr}}$? Now, I would like to state the Clausius inequality more clearly: $$\oint\frac{\delta Q_{\text{sys}}}{T_{\text{surr}}}≤0\tag{1}$$ As you can see, the temperature in the denominator is the temperature of the surroundings, not the system, and $\delta Q_{\text{sys}}$ is an inexact differential of the heat given to the system. Since the amount of heat given to the system will be equal to the amount of heat lost from the surroundings, thus $$\delta Q_{\text{sys}}=-\delta Q_{\text{surr}}\tag{2}$$ So, now let's compute $\Delta S_{\text{surr}}$ (entropy change of the surroundings) for a cyclic process: $$\Delta S_{\text{surr}}=\oint \frac{\delta Q_{\text{surr}}}{T_{\text{surr}}}\tag{3}$$ But using equation $(2)$, we can rewrite the above equation as: $$\Delta S_{\text{surr}}=-\oint \frac{\delta Q_{\text{sys}}}{T_{\text{surr}}}\tag{4}$$ Now, using equation $(1)$, we can conclude that $$\Delta S_{\text{surr}}=-\oint \frac{\delta Q_{\text{sys}}}{T_{\text{surr}}}≥0\Longrightarrow \Delta S_{\text{surr}}≥0\tag{5}$$ Total entropy change The total entropy change is given by \begin{align} \Delta S_{\text{total}}&=\underbrace{\Delta S_{\text{sys}}}_{=0}+\underbrace{\Delta S_{\text{surr}}}_{≥0}\\ \therefore \Delta S_{\text{total}}&≥0\tag{6} \end{align} Thus, as you can see, Clausius inequality also yields the fact that the total entropy of the universe must increase. Do note that the equality in the equation $(6)$ holds when the process is reversible.
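A tiny numerical illustration of equations $(1)$–$(6)$ (the numbers are assumed round figures, not taken from the question): an engine absorbing $Q_h$ from a hot reservoir at $T_h$ and rejecting $Q_c$ at $T_c$, once reversibly and once irreversibly.

```python
T_h, T_c = 500.0, 300.0   # K
Q_h = 1000.0              # J absorbed by the working substance per cycle

for label, Q_c in [("reversible (Carnot)", Q_h * T_c / T_h),   # rejects 600 J
                   ("irreversible",        700.0)]:            # rejects extra heat
    clausius = Q_h / T_h - Q_c / T_c   # cyclic integral of dQ_sys / T_surr
    dS_surr = -Q_h / T_h + Q_c / T_c   # entropy change of the two reservoirs
    dS_total = 0.0 + dS_surr           # dS_sys = 0 over a full cycle
    print(f"{label}: Clausius integral = {clausius:+.3f} J/K, "
          f"total entropy change = {dS_total:+.3f} J/K")
```

The Clausius integral comes out zero or negative, while the total entropy change comes out zero or positive, as in equations $(1)$ and $(6)$.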
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why does light behave like a wave? When discussing a single or double slit experiment, where light is shined through a very small slit, it is often compared to a water wave going through a similar, if larger, slit. It's my understanding that when a ripple hits a wall with a hole in it, the reason the ripple "bends" and spreads out is the internal attraction between the water molecules, which are polar. So the molecules on the far side of the slit with energy will pull on the ones without and create a diffraction pattern; and I believe that a similar argument could be made for sound waves, that the molecules the wave travels through are at least slightly polar, or at least they have mass and momentum, so they will push/pull each other and create a diffraction pattern. But as far as I know light exhibits none of these properties, so what property of light allows it to diffract? Shouldn't light which passes through the slit be completely unaffected by the light which hits the material? Clearly light sometimes behaves like a physical wave; but I was wondering if this physical behavior can be explained with some intrinsic property of light, similar to how a wave travelling through a physical medium can be explained with attractive forces and momentum.
Yes, water and sound waves share similarities but are also very different from light waves, which travel in a vacuum. Diffraction of light is caused by an interaction of the EM field of the photon with the EM field of the material at/in the slit edges. An aperture of any size will affect the light path to some degree; the light path bends, showing the diffraction. ("Interference", though, is the result of another phenomenon, the wave property of light that depends on path differences in whole multiples of its wavelength.) Your question highlights that light waves/particles propagate independently, as compared to matter molecules that can push/pull on each other.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Number of electrons in a material? Is there a way to calculate the number of electrons in a plate of a certain material and certain dimensions? What I want to know is how many electrons are available to be removed from a plate when light of an appropriate wavelength hits the plate (photoelectric effect).
As an addition to the answer by @SuperfastJellyfish, consider this. Your charge of $18.1\times10^{28}$ electrons is approximately equal to $3\times10^{10}$ Coulomb. If we have that charge removed to a distance of $1 m$ (and the opposite positive charge is left on the aluminium), the force on the removed electrons is given by $$F = K_e q^2 / r^2$$ where $K_e$ is the Coulomb constant ($8.99\times10^9$), $q$ is the charge, and $r=1$. This force is approximately $9\times10^{30}$N. Hence you will not be able to remove that many electrons. The attractive force will cause the electrons (and whatever equipment is used to remove them) to smash back into the aluminium. If you did remove them, the two plates would move towards each other with an acceleration of the order of $10^{36}g$. You would prefer not to be close by!
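Reproducing these order-of-magnitude numbers in a few lines (a sketch; the electron count is the figure quoted from the other answer):

```python
k_e = 8.99e9       # N m^2 C^-2, Coulomb constant
e = 1.602e-19      # C, elementary charge
n = 18.1e28        # number of electrons, figure quoted from the other answer

q = n * e
F = k_e * q**2 / 1.0**2   # separation r = 1 m
print(f"q ≈ {q:.2e} C, F ≈ {F:.1e} N")   # ≈ 2.9e10 C and ≈ 7.6e30 N, the same order as above
```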
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Optimize crossbow I'm currently building a crossbow and was wondering how I might improve its performance. It was suggested that I fine-tune the rubber band more and maybe change the projectile to a zinc-alloy one instead of the plastic ones I use. I do understand this is sort of an engineering feat, but I think it wouldn't hurt to hear feedback from some physicists, so I would appreciate any insight into this.
A better crossbow design than a fixed wooden rod and an elastic band is a tough piece of string connected to "sprung" arms that are pivoted at the join. They can be sprung by winding up elastic as the string is pulled back. This makes for a better crossbow, as it is much more effective at storing elastic potential energy, and the mass of the arms moves as well as the projectile. This idea was used in Greek catapults such as the palintonon.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What does CERN do with its electrons? So to get a proton beam for the LHC, CERN probably has to make a plasma and siphon off the moving protons with a magnet. Are the electrons stored somewhere? How? I don't mean to sound stupid, but when they turn off the LHC, all those protons are going to be looking for their electrons. And that's going to make a really big spark.
The beam has a positive charge, so there's an electric field that surrounds it. But the beam pipe is metal, conductive. At the surface where the field intersects the metal, electrons flow to cancel the field. There is thus a layer of electrons on the metal surface, with equal but opposite charge to the beam. The beam pipe in turn is "grounded", connected to all the other structural metal around to avoid hazardous voltage build-up. The proton injector's electron collector is also grounded. So, electrons flow into grounded structure, and the beam pulls an equal number out of grounded structure. Charge balances.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/551992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 5, "answer_id": 3 }
Do solar cells absorb sub-bandgap photons? My understanding is that although we are taught that solar cells only absorb photons of energy higher than the bandgap of the material, some of the sub-bandgap photons still get absorbed, which is evident when looking at the absorption coefficient spectra (the coefficient is not zero below the bandgap). First, am I correct about this? Second, if so, what happens to the sub-bandgap photons that are absorbed? For example, you can see the absorption spectrum of silicon here: pveducation.org/pvcdrom/materials/optical-properties-of-silicon The band gap of silicon is 1.14 eV at 300 K, which corresponds to a wavelength of 1087 nm. You can see that the absorption coefficient is non-zero for wavelengths greater than 1087 nm, which means that, given enough thickness, sub-bandgap photons will be absorbed.
At 300K there are intrinsic free holes and electrons due to thermal excitation across the bandgap. These give rise to absorption. Note the text under the logarithmic graph: " The drop in absorption at the band gap (around 1100 nm) is sharper than might first appear". Note that there also may be charge carriers due to shallow impurities (doping) and deep impurities, which may induce energy levels in the band gap. This will alter the absorption.
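As a quick sanity check of the bandgap-to-wavelength conversion quoted in the question (a one-off Python sketch):

```python
h = 6.626e-34     # J s
c = 2.998e8       # m/s
eV = 1.602e-19    # J per eV

E_gap = 1.14 * eV                 # silicon band gap at 300 K, value from the question
wavelength = h * c / E_gap
print(f"cutoff wavelength ≈ {wavelength * 1e9:.0f} nm")   # ≈ 1088 nm, matching the ~1087 nm quoted
```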
{ "language": "en", "url": "https://physics.stackexchange.com/questions/552221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Angular momentum of the earth We know that the tides are decreasing the spin rate of the earth, which causes the days to get longer. So, as the angular momentum of the earth decreases, its rotational kinetic energy also decreases. Since energy is always conserved, the translational kinetic energy of the earth must then increase, right? And that would cause the number of days in a year to decrease as well, right?
Note that energy can be radiated into space as heat, while angular momentum is harder to get rid of. The total angular momentum of the Earth-Moon-Sun system is approximately constant, even as Earth's daily spin rate slows slightly.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/552355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What happens to an inductor if the stored energy does not find a path to discharge? Suppose an inductor is connected to a source and then the source is disconnected. The inductor will have energy stored in the form of a magnetic field, but there is no way/path to ground to discharge this energy. What will happen to the stored energy, current and voltage of the inductor in this case?
A fine example of the stored energy of an inductor being used to generate a useful voltage is the ignition coil in petrol engines. When the points interrupt the current in the primary circuit of the ignition coil, the magnetic flux rapidly collapses as the magnetic energy is converted to electric-field energy in the intrinsic capacitance of the primary winding. To prevent the rapid rise in voltage across the points from creating a spark, a capacitance is added across the points. This capacitance is chosen to reduce the natural sinusoidal resonance of the coil to a frequency such that the voltage across the coil builds up slowly enough to allow the points to open wide enough to prevent arcing. The choice of capacitance also limits the peak of the sinusoid to approximately 400 volts. The secondary winding of the coil has about 60 times more turns than the primary; the sinusoidal flux in the primary is shared by the secondary, so the secondary has 60 times the primary voltage developed across it, which equates to 24 thousand volts. The result is a damped oscillatory waveform in the range of several tens of kilohertz. Once an arc is formed at the spark plug, all the energy is dumped into the spark.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/552611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 10, "answer_id": 9 }
Regarding total rotational kinetic energy The rotational kinetic energy for a rolling body is $\boldsymbol{\frac{1}{2}I\omega^2}$ (where $I$ is the moment of inertia about its centre of mass) and the translational kinetic energy is $\boldsymbol{\frac{1}{2}mv^2}$, where $v$ is the speed of its centre of mass for an inertial observer. If we add both of these for the body shown in the figure then we should get its total kinetic energy at a particular instant: $\frac{1}{2}I\omega^2 + \frac{1}{2}mv^2$. This should be a generally applicable formula, because I have seen the derivation and it seems to apply to any rolling body... But it is yielding the wrong answer in this case. Can anyone please tell me why? P.S.: Sorry for the bad circle in the top view.
Here, though it is a rigid body, you cannot use $KE_{TOT} = \frac{1}{2}M{v^2}_{cm}+\frac{1}{2}I\omega ^2$ because the particles closer to the larger axis (radius $R$) are moving more slowly than those far away. So we must find $KE_{TOT}$ as $$KE_{TOT} = \frac{1}{2}I_o{\omega_o}^2 + \frac{1}{2}I_p{\omega_p}^2 \qquad (1)$$ The moment of inertia of the sphere about $O$ is $\frac{2}{5}Mr^2 + MR^2$ and $\omega_o$ is $\frac{v}{R}$; the moment of inertia about $P$ is $\frac{2}{5}Mr^2$ and $\omega_p$ is $\frac{v}{r}$. Substituting into (1), $$KE_{TOT} = \frac{1}{2}\left(\frac{2}{5}Mr^2 + MR^2 \right) {\left(\frac{v}{R}\right)} ^2 + \frac{1}{2}\left(\frac{2}{5}Mr^2\right ) {\left(\frac{v}{r}\right )}^2 =\frac{7}{10}Mv^2 + \frac{1}{5}M\frac{r^2}{R^2}v^2,$$ which is the correct result.
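A symbolic check of the algebra in the last step (a sympy sketch):

```python
import sympy as sp

M, v, r, R = sp.symbols('M v r R', positive=True)
I_o, w_o = sp.Rational(2, 5) * M * r**2 + M * R**2, v / R  # about O, via the parallel-axis theorem
I_p, w_p = sp.Rational(2, 5) * M * r**2, v / r             # spin about the sphere's own axis P

KE = sp.Rational(1, 2) * I_o * w_o**2 + sp.Rational(1, 2) * I_p * w_p**2
target = sp.Rational(7, 10) * M * v**2 + sp.Rational(1, 5) * M * (r / R)**2 * v**2
print(sp.simplify(KE - target))   # -> 0
```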
{ "language": "en", "url": "https://physics.stackexchange.com/questions/552778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Fermi energy, occupation factor and number of particles Using the grand canonical ensemble, we can show that the occupation factor of an energy level (when the temperature $T$ and chemical potential $\mu$ are fixed) is given by $$ f_E(T,\mu) = \frac{1}{\exp \frac{E-\mu}{kT} \pm 1} \quad (1)$$. The total number of particles and energy in the system are thus given by $$ N(T,\mu) = \int dE \, DoS (E) f_E(T,\mu) \quad (2)$$ $$ U(T,\mu) = \int dE \, E\, DoS (E) f_E(T,\mu) \quad (3)$$ On the other hand, the chemical potential is related to the internal, free or Gibbs energy of the system as $$\mu = \left( \frac{\partial U}{\partial N} \right)_{S,V}= \left( \frac{\partial F}{\partial N} \right)_{T,V}= \left( \frac{\partial G}{\partial N} \right)_{T,p} \quad (4)$$ Is there a way to recover these relations from (1) - ie to check that the $\mu$ which appears in eq.(1) is indeed a chemical potential is the sense of (4) ?
Solved it. The grand potential is $$ A =kT\int dE\,D(E)\log\left(1-\frac{1}{\exp\left(\frac{E-\mu}{kT}\right)+1}\right) =-kT\int dE\,D(E)\log\left(1+\exp\left(\frac{\mu-E}{kT}\right)\right)$$ from which we calculate $$ S=-\left.\frac{\partial A}{\partial T}\right|_{\mu,V}=-\frac{A}{T}-\frac{1}{T}\int dE\,D(E)\frac{(\mu-E)}{\exp\left(\frac{E-\mu}{kT}\right)+1}$$ and $$ p=-\left.\frac{\partial A}{\partial V}\right|_{T,\mu}=-\frac{A}{V}$$ and it is then straightforward to calculate $$G=U-TS+pV =\mu\int dE\,\frac{D(E)}{\exp\left(\frac{E-\mu}{kT}\right)+1} = \mu N$$ QED.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/553124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it necessary that a capacitor stores energy but not charge? Is it necessary that a capacitor stores charge? The definition of a capacitor given in books is that it stores electric energy. So is it possible that a capacitor does not store charge but stores energy only?
It depends on what the capacitor is used for: * *In some cases it is indeed a way of storing energy, similar to a battery. It however allows for a higher rate of energy transfer, although with a rather short storage time. *Capacitors may be used as a way of creating high electric fields. In this case the potential difference between the plates is more crucial than the energy involved. *Finally, by far the most frequent use is in $LC$-circuits, which are part of any generators/receivers of electromagnetic waves at radio and TV frequencies. In this case the charge is probably the most important variable.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/553260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
When creating term symbols, how do you know if the angular momentum $L$ is antisymmetric or symmetric? For example, I'm trying to get the term symbol of $(1s)^{2}(2s)^{2}(2p)^2$. In the answers they state the following: The combination of angular momenta $L_1 = L_2 = 1$ gives $L = 2$ (symmetric), $L = 1$ (antisymmetric) and $L = 0$ (symmetric). This must be combined with the spin wave function of opposite symmetry, thus $^1D_2, ^3P_{0, 1, 2}$ and $^1S_0.$ I totally understand this, except for how they assign symmetric and antisymmetric to the angular momenta. In the previous exercise I only had $L = 0$ and they said it was symmetric and antisymmetric. So how do I know if the angular momentum is symmetric or antisymmetric?
For two particles with the same angular momentum $\ell_1=\ell_2=\ell$, the permutation symmetry follows immediately from the symmetries of the Clebsch-Gordan coefficients: $$ C^{LM}_{\ell m_1;\ell m_2}=(-1)^{2\ell+L} C^{LM}_{\ell m_2;\ell m_1} $$ so that (in accordance with the answer of @Superciocia), for integer $\ell$ the symmetric states have $L$ even and the antisymmetric ones have $L$ odd; the general rule, valid for any $\ell$, is set by the parity of $2\ell+L$. The situation is much more complicated if you have $3$ or more particles. For instance, in the coupling of $3$ states with $\ell=1$, there are three sets of states with total $L=1$. One of them is symmetric but the other two have mixed symmetry. One of the sets with mixed symmetry comes from coupling the first and second particles to $L_{12}=0$, then coupling this to the third to get $L=1$. It’s easy to see the resulting states are neither symmetric nor antisymmetric (although they are symmetric under the exchange $1\leftrightarrow 2$). The symmetric states with $L=1$ are in fact linear combinations of $L_{12}=1$ and $L_{12}=2$ states.
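A quick check of this symmetry with sympy's Clebsch-Gordan coefficients, for the two $\ell=1$ particles of the question:

```python
from sympy.physics.quantum.cg import CG

# <1 m1; 1 m2 | L M> compared with the same coefficient with m1 and m2 swapped
print(CG(1, 1, 1, 0, 2, 1).doit(), CG(1, 0, 1, 1, 2, 1).doit())    # equal     -> L = 2 symmetric
print(CG(1, 1, 1, 0, 1, 1).doit(), CG(1, 0, 1, 1, 1, 1).doit())    # opposite  -> L = 1 antisymmetric
print(CG(1, 1, 1, -1, 0, 0).doit(), CG(1, -1, 1, 1, 0, 0).doit())  # equal     -> L = 0 symmetric
```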
{ "language": "en", "url": "https://physics.stackexchange.com/questions/553532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Infinite Square well: Abrupt change in well's length So, one of my homework problems reads A particle is trapped in an infinitely deep square well of width $a$, suddenly the walls are separated by infinite distance so that the particle becomes free. What is the probability that the particle has momentum between $p$ and $p + dp$? I know that if the wavefunction of a particle is $\Psi(x)$, then the probability of finding the momentum between $p$ and $p+\text{d}p$ is given by- $|a(p)|^2 \text{d}p$ where $a(p)$ is given by- $$a(p)=\dfrac{1}{\sqrt{2\pi \hbar}}\displaystyle \int_{-\infty}^{+\infty}\Psi(x)\ e^{-ipx/\hbar}\ \text{d}x$$ So, the question is- How does the wavefunction change with this abrupt change in the well's dimensions. And what quantity does not change for the particle even after the change of length between walls. To the second question, I think the answer is energy, but which energy state should I assume the particle to be in if it is not mentioned beforehand, since the energy in a well is given by- $\dfrac{n^2h^2}{8mL^2}$?
How does the wavefunction change with this abrupt change in the well's dimensions? It doesn't. The term 'abrupt' implies that the change in the potential is so fast that the state of the system does not have time to react to the change before it is complete, so the state when the new situation becomes operative is identical to the state just before the change started. This makes the answer to your second question easy: what quantity does not change for the particle even after the change of length between walls? The wavefunction. Now, the set-piece you quote does have a problem in that it does not specify the state of the system before the potential changes, so it's basically unanswerable unless you can make a well-justified argument that allows you to specify that state as a specific eigenstate of the energy, or some linear combination of them. Unfortunately, you're basically on your own on that front. If this is formal homework assigned by an instructor, then you should ask them what to do; if it's a textbook problem and you're self-studying, then find a better textbook.
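If one does assume a definite initial state, the rest of the calculation is mechanical. A minimal numerical sketch (assuming, purely for illustration, the $n=1$ state of the original well, with $\hbar=1$ and $a=1$) of the momentum distribution $|a(p)|^2$ of the frozen-in wavefunction:

```python
import numpy as np

hbar, a = 1.0, 1.0
x = np.linspace(0.0, a, 2001)
psi = np.sqrt(2.0 / a) * np.sin(np.pi * x / a)   # assumed n = 1 eigenstate of the original well

p = np.linspace(-200.0, 200.0, 8001)
amp = np.array([np.trapz(psi * np.exp(-1j * pk * x / hbar), x) for pk in p])
amp /= np.sqrt(2.0 * np.pi * hbar)
prob = np.abs(amp)**2                            # |a(p)|^2, probability density in p

print("normalisation:", round(np.trapz(prob, p), 3))               # ≈ 1
print("<p^2> from |a(p)|^2:", round(np.trapz(p**2 * prob, p), 2))  # close to (pi*hbar/a)^2 ≈ 9.87
```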
{ "language": "en", "url": "https://physics.stackexchange.com/questions/553673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Interpretation of rolling without slipping Here is an interpretation I came up with: the friction, for a rolling body, converts the kinetic energy into rotational energy instead of dissipating it. Questions: 1. Is my interpretation correct? 2. What happens to the motion once all the kinetic energy is converted to rotational energy?
Rolling without slipping occurs when the static friction force between the rolling body and surface (e.g, tire and road) does not exceed the maximum possible static friction force of $f_{max}=u_{s}N$ where $N$ is the force normal (perpendicular) to the surface. For a level surface, $f_{max}=u_{s}mg$. There is no dissipation in the case of static friction because there is no relative motion between the contacting surfaces. So static friction enables rotation without slipping. There is no conversion of kinetic energy to rotational energy. The rolling body has both translational kinetic energy (due to the translational motion of its center of mass) and rotational kinetic energy, due to its angular velocity and moment of inertia. If slippage occurs then some of the rotational kinetic energy is dissipated as heat. There are two possible forms of heat dissipation. One is due to slipping and kinetic friction (due to relative motion between the surfaces). The other is rolling resistance. That's due to the inelastic compression and expansion of the material, such as the rubber of the tire, that occurs when the surfaces contact one another every revolution. Hope this helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/553794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How does a water jet hitting a wall move parallel to the wall if momentum is conserved? Classical mechanics says that if I throw a ball with velocity perpendicular to the wall and it collides elastically with the wall with a velocity $v_0$, then it bounces back with the same velocity $v_0$. However, if I shoot a beam of water perpendicular to the wall, in most cases it will not deflect back perpendicular to the wall instead it gains velocity perpendicular to the initial velocity and continues to move on the surface. Isn't this a violation of conservation of momentum since for any small molecule inside the beam of water we had no momentum in the perpendicular direction to get started with?
Conservation of momentum is valid only for systems with no external forces. In the case of water, the molecules are polar, so they are attracted by the surface you are throwing the water at; hence the external forces on the system are not zero. Also, since the water molecules arrive as a stream of particles, even if some molecules do bounce back perpendicular to the surface, you won't be able to see it. It's just like opening a water pipe inside a bucket of water: you wouldn't notice whether the pipe is working or not. If you imagine an experiment with a completely non-polar fluid, then you would observe the following if you ejected the fluid perpendicular to the surface: 1) The beam of fluid attains almost zero velocity near the surface, and the fluid then falls to the ground (without sticking to the surface) due to gravity. This happens because the fluid particles, after striking the wall, come back with the same velocity and collide with the incoming beam, thus cancelling the part of the beam near the surface.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
In the equation: $a = dv/dt$ , is $dt$ the time taken to achieve that instantaneous acceleration? If you solve for $dt$ from $a = \frac{dv}{dt}$, is it the time taken to achieve that instantaneous acceleration? $a$ : acceleration, $v$ : velocity, $t$ : time
In the equation $a= \frac{dv}{dt}$, $dt$ is actually the time in which that small change in the velocity of the body is brought about. So you could say that it is the time needed to achieve that acceleration. But acceleration is actually defined as the change in velocity over a certain amount of time: mathematically you can solve for the time if the acceleration is given, but you cannot define acceleration without knowing about time. So, meaningfully, $dt$ is not the time taken to achieve that acceleration, but the time interval over which that acceleration is defined. It can sound confusing, but think it over. Thanks for asking.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Does the action remain dimensionless after the renormalization? After the renormalization procedure, fields will gain an anomalous dimension, $\gamma$, which means that their scaling dimension will be different from what we would guess from the dimensional analysis. My question is whether this means that the action will no longer be dimensionless and if so what are the consequences?
Yes, the action remains dimensionless. The "failure" of dimensional analysis is due to the fact that dimensional quantities that could be neglected under certain conditions (e.g., at a Gaussian fixed point in the renormalization group flow) cannot be neglected at other non-trivial fixed points in the renormalization group flow. The anomalous dimension comes from the contribution of these neglected quantities to the scaling. For instance, if we have a quantity that has dimensions of length to some power, say $[Q] = L^d$, then by dimensional analysis this quantity must be some function of the dimensional parameters of the model. For example, in the Ising model we might take the independent length scales to be the correlation length $\xi$ and the lattice spacing $a$. Thus, it must be that $Q = \xi^d f(a/\xi)$ for some function $f(\cdot)$ that cannot be determined by dimensional analysis. The usual argument is that near a critical point (or in the continuum limit), the lattice spacing is negligible compared to the correlation length of the system, $a/\xi \ll 1$, so we can set $Q = \xi^d f(0)$. The key assumption here, however, is that we can take the limit $\lim_{x \rightarrow 0} f(x) = f(0)$. It turns out that this limit exists only under certain conditions (such as in dimensions $d > 4$ for the Ising universality class). In general, this limit does not exist, and we actually observe the asymptotic behavior $$\lim_{x \rightarrow 0} f(x) \sim x^{-\gamma} g(x)$$ for some power $\gamma > 0$ and regular function $g(x)$ that is finite at $x = 0$. Thus, $Q = \xi^d f(a/\xi) \sim \xi^{d+\gamma}$ and $\gamma$ appears as an anomalous dimension. However, note that the $\sim$ hides the fact that there is still a factor of $a^{-\gamma}$ in the exact expression; it has just been dropped because it is a constant independent of the temperature (or other variable that tunes the correlation length). Thus, $Q$ still has dimensions of $L^d$, even at a critical point in which the scaling as a function of the correlation length is anomalous. For more details, see Goldenfeld's book, Lectures on Phase Transitions and the Renormalization Group, in particular Chapter 7.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
In physics, are all functions fields? I am really confused about whether there is a function (in physics, functions mostly represent physical quantities) which is not a field. I feel all functions in physics are fields. Are there any functions which are not fields? I see a lot of questions on Stack Exchange about functions and fields, but no one nails down the difference between functions and fields in physics, other than answers resembling textbook explanations.
Functions are a mathematical construct; they have nothing to do with physics other than the fact that we use them for their mathematical relevance. They become meaningful whenever physicists give them a physical meaning. Fields are, mathematically, functions, but they have a deeper meaning in the physical sense. Many functions appear in physics as mathematical entities, and only some of them have a physical meaning. Some examples are the generating functional (which is actually a function of fields, so a functional), the spherical harmonics, which for example pop up in the angular distribution of atomic orbitals, the Bessel functions, which pop up everywhere and are linked, for example, to the pattern of light coming from a slit, and distribution functions, which appear everywhere in quantum mechanics and are actually meaningful, measurable quantities, and so on. But saying that "all functions in physics come up as fields" is not so good, since you're mixing up a mathematical object with a meaningful physical quantity that happens to take the form of that specific mathematical object, a function.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
How and why does an electron enter the valence shell of an atom? How does an electron enter the valence shell of an atom? Why is energy released when an electron is added to the valence shell of an isolated atom?
Energy isn't always released when an electron is added to an atom; it depends on the kind of atom you are adding the electron to. Energy is released if an electronegative atom attains a more stable state by accepting the electron (by, say, attaining an octet configuration in the valence shell). Stable states have less energy compared to other states, and this difference in energy is released when the atom accepts an electron. On the other hand, it actually requires energy to add an electron to an electropositive atom, which has an extra shell of electrons that makes it unstable. To add another electron we would have to overcome the repulsion due to the electrons already present, and hence energy would have to be supplied rather than released. If by "how an electron enters" you mean how an atom gains or loses electrons, it is to complete its octet configuration. For example, when Mg reacts with O2: O is two electrons short of its octet, whilst Mg has two extra electrons, so Mg gives its electrons to O and both of them complete their octets. This is known as an ionic bond. Hope it helps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How is it physically possible that the electric field of some charge distributions does not attenuate with distance? Let's consider, for instance, an infinite plane sheet of charge: you know that its E-field is perpendicular to the sheet and its absolute value is $\sigma / 2 \epsilon _0$, which does not depend on the observer's position. How is this physically possible? An observer may place himself at an arbitrarily large distance from all charges and he will measure the same E-field. It seems strange.
I believe that it is your intuition that is failing here, and since mathematical arguments have already been provided, I'll offer a simpler and more intuitive approach. Take a more or less similar example which is more familiar to you: the sky. The sky, from where we are, looks like an infinite plane; this is not entirely correct, but we can assume it from our point of view for the sake of the argument. You may notice that you don't see a bigger or smaller sky as you move higher or lower above the ground (let's say as you move up and down inside a building). This is because the plane we are looking at effectively has infinite extension. If it didn't, you would see how its "apparent size" changes, as you experience with any object in everyday life. Of course, this is just a simple comparison to help intuition, which often gets confused when the infinite is involved; it is not really an argument for the constant field, for which we have no better approach than the mathematical proof.
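For anyone who does want a number to hang the intuition on, here is a small sketch using the standard on-axis field of a uniformly charged disc of radius $R$, $E(z)=\frac{\sigma}{2\epsilon_0}\left(1-\frac{z}{\sqrt{z^2+R^2}}\right)$: as $R$ grows at fixed $z$, the distance dependence disappears and the infinite-sheet value is recovered (the numbers are illustrative).

```python
import math

z = 1.0   # fixed observation distance from the disc, along its axis
for R in (1.0, 10.0, 100.0, 1000.0):
    E = 1.0 - z / math.sqrt(z**2 + R**2)   # field in units of sigma / (2 eps0)
    print(f"R = {R:7.1f}: E = {E:.6f}   (infinite-sheet value = 1)")
```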
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 5 }
How should I interpret the 'Location area (deg^2)' table on Wiki's 'List of gravitational wave observations' page? Is it in 'square degrees'? Wikipedia's page for 'List of gravitational wave observations' has a location table called 'Location area (deg^2)'. Just to be sure, if you click on 'deg^2' it takes you to the 'square degree' page.... The values range from 16 to 1651, with two different events at 1033..... But if gravitational waves can reach us from any direction, and there are 41,253 square degrees in a sphere, why aren't any numbers higher than 1651? Out of a possible 41253? It doesn't say anything about dividing Earth's sphere into quadrants, or anything... Also, isn't it a weird coincidence that two events occurred in the exact same direction (1033)?
It is quite hard to determine precisely the direction from which the GW signal is coming from, so instead of a single point in the sky we give a confidence region: check out figure 8 in https://arxiv.org/abs/1811.12907, there they show what kind of shape and size these regions have. Note that we are not 100% certain that the signal has come from that region. Instead, we draw a 90% confidence region: we are 90% sure that the signal has come from that region. The figure reported in Wikipedia is the size of that region: it is heavily dependent on the percentage we choose --- if we were to draw a 95% confidence region it would be larger! These regions are quite large! For comparison, the Moon's angular size from the Earth is of the order of 0.2 square degrees. So, we are quite uncertain about the direction the GW signals come from. You can see in the figure that many regions are tall and skinny --- this reflects the fact that they were computed using only data from the two LIGO detectors; when Virgo joined it enabled the regions to be shrunk by a lot along the long axis.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is my friend right about omitting $c^2$ in world famous tiny equation? I know $E = mc^2$ says that inertial mass of a system is equal to the total energy content of a system in its rest frame. My friend told me the $c^2$ can be omitted from this equation because that's just an `artifact' when measuring inertia and energy in different units. Is he right?
Your friend is right. If you adopt the length unit l = 299 792 458 metres then c=1 l/s. This can be convenient because in these units $E^2=m^2+p^2$ instead of $E^2=m^2c^4+p^2c^2$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/554935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 5 }
Misunderstanding related to the energy density of radiation in the context of cosmology The usual definition of the radiation energy density in the context of statistical physics is given by $$U=a_{B}T^{4}$$ with $a_{B}=7.5657\times 10^{-16}\ \mathrm{J\,m^{-3}\,K^{-4}}$, so $U$ has units of $\mathrm{J\,m^{-3}}$. On the other hand, I read in some general relativity textbooks that the parameter $\rho$ (the parameter that appears in the Friedmann equations) is the energy density, but if I look at the units in the Friedmann equation for the Hubble parameter $$H^{2}=\frac{8 \pi G }{3}\rho$$ I find that $\rho$ has units of $\mathrm{kg/m^3}$. So in the particular case of radiation, $\rho_{r}$ doesn't have the same units as $U$; then is $\rho_{r}$ not the energy density of radiation?
In astrophysics and cosmology it is common to omit factors of $c$, where $c$ is the speed of light. This means that in these units an energy density will look like a mass density. To fix this, you have to put in a factor $c^2$.
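For concreteness, a short sketch of the conversion (using the present-day CMB temperature as an illustrative choice):

```python
a_B = 7.5657e-16   # J m^-3 K^-4
c = 2.998e8        # m/s
T = 2.725          # K, present CMB temperature (illustrative)

U = a_B * T**4     # radiation energy density, J/m^3
rho_r = U / c**2   # the mass-like density that enters the Friedmann equation, kg/m^3
print(f"U ≈ {U:.2e} J/m^3, rho_r ≈ {rho_r:.2e} kg/m^3")   # ≈ 4.2e-14 J/m^3 and ≈ 4.6e-31 kg/m^3
```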
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is really meant by the area of a black hole? The area of a black hole is an important parameter in the thermodynamic description of a black hole. In particular, from popular literature everyone knows that the entropy of a black hole is proportional to its area, as discovered by Stephen Hawking. Can someone explain, with a diagram, what the area of a black hole really is? I know what the event horizon and the Schwarzschild radius are, but I have real difficulty visualizing the area of a black hole.
You can calculate the area of the event horizon by taking the limit of the area of a sphere surrounding the event horizon as the radial coordinate tends to the Schwarzschild radius. This gives a coordinate independent result. E.g. you could calculate the area after defining a radial coordinate $$R=r-r_s,$$ and take the limit $R\rightarrow 0$. This would give you the same result, but if you were to think of the black hole using the $R$ coordinate, then you would think of the event horizon as a point which, being singular, happens to have an area. This is pretty much how Schwarzschild himself regarded it. The point is that coordinates are not important. Neither the Schwarzschild radial coordinate $r$, nor $R$, has direct physical meaning. But you can calculate the area of a surface as an integral over small areas by using the metric. In practice, both approaches are completely equivalent outside of the event horizon, and nothing can be observed at or inside the event horizon. Which you prefer is strictly a matter of opinion, outside the realm of science (for what it is worth, I prefer Schwarzschild's opinion).
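For a concrete number, the limiting procedure gives the familiar result $A=4\pi r_s^2$ with $r_s=2GM/c^2$; a quick sketch for one solar mass (an illustrative choice of mass):

```python
import math

G = 6.674e-11    # m^3 kg^-1 s^-2
c = 2.998e8      # m/s
M = 1.989e30     # kg, one solar mass

r_s = 2 * G * M / c**2
A = 4 * math.pi * r_s**2
print(f"r_s ≈ {r_s / 1e3:.2f} km, A ≈ {A / 1e6:.1f} km^2")   # ≈ 2.95 km and ≈ 110 km^2
```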
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Properties of timelike congruences in flat spacetime I'd like to learn about (or confirm) certain properties of congruences, concerning some presumably rather simple cases, namely of timelike congruences in the setting of flat spacetimes $\mathcal S$. Therefore I have here three closely related questions: 1. Are there at least two (or more) distinct timelike congruences, each covering a given 1+1 dimensional flat spacetime region, which are not disjoint (but which have at least one curve in common) ? 2. Are there at least two (or more) distinct timelike geodesic congruences, each covering a given 1+1 dimensional flat spacetime region, which are not disjoint (but which have at least one curve in common) ? 3. Are there at least two (or more) distinct timelike geodesic congruences, each covering a given 2+1 dimensional flat spacetime region, which are not disjoint (but which have at least one curve in common) ? Proofs or examples should be given preferrably in terms of the spacetime intervals $s^2 : \mathcal S \times \mathcal S \rightarrow \mathbb R$ whose values may be presumed given for flat spacetime region $\mathcal S$ under consideration.
* *Yes. Consider a 1+1 spacetime with metric $ds^2=dt^2-dx^2$. In congruence A, the curves are $(t,x)=(t,x_0)$, with one curve for each value of $x_0$. In congruence B, the curves are $(t,x)=(t,x_0+v|t|)$ with $v=x_0/(1+|x_0|)$. If a smooth congruence is desired, we can smooth out the kinks at $t=0$ without changing the basic idea. The two congruences share the same $x_0=0$ curve. *No. In flat spacetime in the usual coordinate system, geodesics are straight lines. In a plane, the straight lines in a congruence must all be parallel so they don't intersect. *Yes. Consider a $2+1$-dimensional spacetime with metric $ds^2=dt^2-dx^2-dy^2$. In congruence A, the geodesics are $(t,x,y)=(t,x_0,y_0)$, with one geodesic for each value of $x_0$ and $y_0$. In congruence B, the geodesics are $(t,x,y)=(t,x_0+vt,y_0)$ with $v=y_0/(1+|y_0|)$. The two congruences share the same $y_0=0$ geodesics.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How do I tensor differentiate a factor without tensors? How do I differentiate, with respect to $x^\mu$, a factor that carries no tensor indices, such as: $$\partial_\mu e^{i\Lambda(x)}\tag{1}$$ Should it be zero, or should I differentiate it twice, changing the order of the tensors, as follows: $$\partial_\mu e^{i_\mu \Lambda^\mu}+ \partial_\mu e^{i^\mu \Lambda_\mu}= ({i_\mu + \Lambda_\mu} )e^{i\Lambda }$$ If the latter case is the correct one, must I keep the indices on the $i$ and on the $\Lambda$ that were brought down?
This is a standard application of the chain rule. If $\Lambda$ is a scalar function of $\mathbf x = (x^0,x^1,x^2,x^3)$, then $$\partial_\mu e^{i\Lambda (\mathbf x)} = \frac{\partial}{\partial x^\mu} e^{i\Lambda(x^0,x^1,x^2,x^3)} = i\frac{\partial \Lambda}{\partial x^\mu} e^{i\Lambda(\mathbf x)} = i(\partial_\mu\Lambda)e^{i\Lambda(\mathbf x)}$$
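For readers who like to check such identities mechanically, here is a minimal sketch using SymPy; the particular scalar function $\Lambda = x^0 x^1$ is an arbitrary choice made only for illustration:

```python
import sympy as sp

x0, x1, x2, x3 = sp.symbols('x0 x1 x2 x3', real=True)
Lam = x0 * x1          # arbitrary illustrative scalar function Lambda(x)

f = sp.exp(sp.I * Lam)

# d_mu exp(i Lambda) should equal i (d_mu Lambda) exp(i Lambda)
for x in (x0, x1, x2, x3):
    lhs = sp.diff(f, x)
    rhs = sp.I * sp.diff(Lam, x) * f
    assert sp.simplify(lhs - rhs) == 0

print("chain rule verified for all four coordinates")
```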
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are orbital and wave function the same thing? As we know, wave functions are the solutions of the Schrödinger wave equation, and they contain all the information about an electron. We are also taught that these wave functions are the atomic orbitals of that electron. But my question is: orbitals are described as the regions where the probability of finding the electron is higher than elsewhere, while wave functions themselves don't have any direct physical significance, so how can a wave function be an orbital?
I'm not a historian, so my interpretation/usage of these terms is subjective. When I use the term orbit I refer to Bohr's atom model ("invented" around the year 1913). In Bohr's atom model the electrons circle around the nucleus on fixed orbits. The term wave function came later (around 1925). Schrödinger used a probability amplitude, which we also call the wave function, to describe the hydrogen spectrum. Depending on the context it's helpful to think in terms of orbits or of wave functions. Therefore, both terms are used.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/555946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Velocity of undamped pendulum On this page, under the heading "Orbit Calculations": http://underactuated.mit.edu/pend.html or here. The author says, "This equation has a real solution when $\cos{\theta} > \cos{\theta_{\rm max}}$" and then they give a piecewise function for $\theta_{\rm max}$. I have no idea how this statement and this function were derived from $\dot{\theta}(t) = \pm \sqrt{\frac{2}{I}\dots}$ Can someone show the exact steps of this derivation?
As mentioned above by @bRost03, the condition is obtained when the angular displacement is maximum, $\theta=\theta_{\text{max}}$, and thus $\dot{\theta}_{\text{max}}=0$. Then, the condition becomes $$\pm \sqrt{ \frac{2}{I} \big[E+ mgl \,\cos[\theta_{\text{max}}(t)]\big]}=0$$ or $$\cos[\theta_{\text{max}}(t)] =\frac{-E}{mgl} \implies \theta_{\text{max}}(t)= \cos^{-1} \left[\frac{-E}{mgl} \right]$$ Mathematically speaking, the range of $\cos^{-1}(\cdots)$ is $[0,\pi]$. The value of $\pi$ is achieved only in the case where the pendulum is standing vertically upwards. The ratio $\frac{-E}{mgl}$ must lie within $[-1,1]$, as otherwise it would be outside the domain of the inverse cosine function.
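A small numerical illustration of the turning-point condition (a minimal sketch; the pendulum parameters and the energy value are illustrative assumptions, not taken from the linked notes):

```python
import numpy as np

m, g, l = 1.0, 9.81, 1.0      # illustrative pendulum parameters
E = -0.5 * m * g * l          # an energy below the separatrix (E < m*g*l)

ratio = -E / (m * g * l)
if abs(ratio) <= 1.0:
    theta_max = np.arccos(ratio)
    print(f"theta_max = {theta_max:.3f} rad = {np.degrees(theta_max):.1f} deg")
else:
    print("No turning point: the pendulum circulates instead of oscillating")
```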
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Does putting a thin metal plate beneath a heavy object reduce the pressure it would have applied without it? My dad bought an earthen pot and he kept it on our glass table. Worried that the glass could break on filling the pot with water, I kept a metal plate beneath it. At first it seemed like a good idea, but on further thinking I was unsure if it would actually help in bringing down the pressure on the glass. What if I put three coins beneath the pressure points instead of the plate; would it be any different from placing the plate (assuming the coins to be nearly as thick as the plate)? And if it won't be any different, am I right to think that I could keep reducing the area of the coins further until they start looking like extensions of the stand on which the earthen pot rests?
Yes, definitely: by putting a plate underneath you are reducing the pressure the pot exerts on the table, because $$p= f/a$$ where $p$ is pressure, $f$ is force and $a$ is area. That is basically not a description but the definition of the word pressure itself. Though I'm not sure about this, I think the plate is better than the three coins. Think about it. Gravity is not like arrows from the pot shooting straight down (technically it does act in straight lines towards the centre of the Earth, but it also has an impact on other objects in its way: gravity pulls the pot down, which presses onto the plate, and the electrostatic forces between them mean that the plate experiences a force). Simply put: the pot exerts a downward force on the plate, or presses down onto the plate, and the plate in turn presses onto the table, but now over a larger area and hence with lower pressure, so your table is much safer. The coins idea is not good, as it does not increase the contact area and simply acts as a mediator of the force of the pot above.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Diffraction of sound - long versus short wavelengths I am having some problems finding an explanation why long sound wavelengths travel around objects easier than short ones, hence making lower frequencies audible across longer distances. Most online sources use a slit/opening for the explanation of diffraction but somehow I still fail to grasp what happens to air molecules as they hit an object. One answer to a similar question in this forum mentions phase discrepancies and wave cancelling which seems to occur more often with shorter lengths, but I just cannot get my head around it. Any input will be much appreciated by a frustrated non-physicist.
Sound is really not about air molecules hitting obstacles. It's about the collective motion of the air molecules, which translates to pressure waves in the air. When the wave hits an obstacle, it reflects or is absorbed, but does not continue on its original path. Portions of the wave that bypass the obstacle do continue, but on the far side of the obstacle they diffract into the "shadow" of the obstacle. Please look at this video clip to see what happens; and read this Wikipedia article. Another Wikipedia article describes "knife-edge diffraction" and plays a video clip illustrating what happens when the diffraction is just around a single edge rather than through a two-edged slit.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Gauge fixing conditions in general relativity Is there a limit to the gauge fixing conditions we can impose in gravity? I have seen two gauge fixing conditions: the de Donder gauge $\partial_\mu g^{\mu\nu}=0$, and then in the 3+1 formalism the gauge fixing condition $\nabla^2 t = 0$ is imposed, where $t$ is the time coordinate. What if I imposed $\nabla^2 x_i = 0$ where $x_i$ is some spatial coordinate? Maybe a combination $\nabla^2(t^2+x^2) = 0$. How is the gauge fixing condition decided upon?
In the case of linearised GR, one of the main reasons for choosing our gauge fixing condition is just sheer convenience. If we're doing a metric perturbation of $g_{\mu \nu} = \eta_{\mu \nu} + h _{\mu \nu} $, then our corresponding action that we get from perturbing our Ricci scalar is the Fierz-Pauli action which is second order in $h $ (since the first and zeroth orders vanish), the precise form of which you can look up in David Tong's notes on linearised gravity. $$ S_{FP} = \int d ^ 4 x \, L (h) .$$ Now, this yields a very difficult and verbose equation of motion to solve for $h_{\mu \nu } $ (which you can look up yourself in the same chapter), so to make things a bit easier we employ the fact that metric diffeomorphisms should leave our action unchanged, since they're just changes in coordinates. So, our action should be invariant under the change $$ h _{\mu \nu } \rightarrow h_{\mu \nu } + \partial _ \mu \xi _ \nu + \partial _\nu \xi _ \mu. $$ This freedom allows us 4 free parameters to fix, since $\xi$ has four components, and we choose constraining the four degrees of freedom via the de-Donder gauge because it simplifies our equation of motion in a vacuum to the simple $$ \Box h _{\mu \nu} - \frac{1}{2} \Box h \eta_{\mu \nu } = 0 $$ Now, this is great because we can now set $\bar {h}_{ \mu \nu } = h _{\mu \nu } - \frac{1}{2} h \eta _{\mu \nu } $ which leaves us with $\Box \bar{ h}_{\mu \nu } = 0 $, which is just a wave equation which we can solve easily.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Applying the principle of Occam's Razor to Quantum Mechanics Wolfgang Demtröder writes this in his book on Experimental Physics: The future destiny of a microparticle is no longer completely determined by its past. First of all, we only know its initial state (location and momentum) within limits set by the uncertainty relations. Furthermore, the final state of the system shows (even for accurate initial conditions) a probability distribution around a value predicted by classical physics. If the quantum probability distribution always lies near the classical prediction, why do we need quantum mechanics in the first place? According to the Feynman interpretation, if an electron has to go from A to B, it can take all the paths, but the weight is higher on the path predicted by classical mechanics. We know that it is unlikely that the electron travels through Mars to go from A to B on Earth. Isn't that path through Mars then unnecessary? Shouldn't we, in the spirit of Occam's razor, exclude such things from a theory?
No. If the classical path was assumed to be the only path, there would be no quantum theory. It would just be classical. And clearly from the need for and success of a quantum theory that explains things outside the domain of the classical one, we know the world to be following quantum rules. In Feynman’s highly readable QED he shows that assuming only the classical path fails to explain reflection from a glass slab. Experimentally the reflection depends on the thickness of the slab and he shows how it can be explained by the “all path” approach. One needs to be aware of when it makes sense to use Occam’s razor. We can’t rule out a successful theory with a less successful one only because the less successful one is simpler. It must be used when choosing between things that have the same domain of validity. For instance, “the particle takes all paths” vs “the particle takes all paths and god exists.” Here both theories make the same testable predictions but one has an extra untestable factor. Occam’s razor says pick the simpler one.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 10, "answer_id": 4 }
What's the debate about Newton's bucket argument? I visited some other Q&A threads about this topic, and I don't understand why people think it's mysterious that the bucket knows about its rotation. If a non-rotating bucket is all there is in the universe, then, initially, all the parts of the bucket are at rest with respect to each other. But if we want to rotate that bucket with an angular velocity $\omega$, then we basically require the different parts of it to have relative acceleration with respect to each other. Because if we divide the bottom of the bucket into many concentric rings, then each ring would have an acceleration $\omega^2 r$ towards the center, depending on the radius $r$ of the ring. This means that the rings have relative acceleration with respect to each other. The laws of physics would take different forms for people standing on different rings. Hence, a rotating bucket is a collection of non-inertial frames having relative acceleration. But non-inertial frames are supposed to detect acceleration in Newtonian physics. So what am I missing?
Newton thought that there could only be a meniscus on the bucket if the bucket was rotating relative to something. He took it to be a demonstration of the existence of Absolute Space, because his equations were formulated in terms of Absolute Space. Mach may or may not have discussed whether absolute space can be replaced with distant stars (Mach's principle was formulated by Einstein, but this was an exercise in thought, and never made precise). The problem only exists in Newtonian mechanics, because the formulation depends on an unobservable concept, Absolute Space. Inertial frames are assumed infinite, and in uniform motion relative to each other. It is resolved in general relativity, without reference to either distant stars or Absolute Space, since we can replace Newton's first law with * *An inertial body will locally remain at rest or in uniform motion with respect to other local inertial matter This can be used to define inertial frames locally. (As for what you are missing, you appear to be using a relativistic concept of inertial frames, not a Newtonian one, so you don't see the problem).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How are the constants in the inflationary hypothesis derived? The inflationary hypothesis as I understand it is a correction to GR to account for the observed flatness of the universe in a model in which the universe is expanding. How are the constants behind this inflationary hypothesis derived? I am looking to establish whether this model predicts or is derived from an estimate of the age of the universe, and how to prove that.
The age of the observable universe is determined from the end of the inflationary expansion, which is effectively the hot Big Bang. The end of inflation is associated with the hot Big Bang because, during inflation, matter and energy are exponentially diluted and the universe “reheats” after inflation ends, as the inflaton decays into matter and energy consistent with the temperature of the universe at the time, which can vary according to the model. Inflation could have lasted an arbitrarily long time, and because inflation is great at erasing initial conditions, it’s impossible to know just how long the universe was inflating before it stopped in what would become our observable universe.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/556911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
One third of Lyapunov exponents are zero? What does it mean? This may be quite a straightforward question, but I have a dynamical system with a high dimensional phase-space. I calculated the Lyapunov spectrum for it and saw that one third of my Lyapunov exponents are approximately zero (which is a lot and was quite unexpected). What can I conclude from this? These do not signify stable manifolds in the phase-space, as you would need negative Lyapunov exponents for that, I guess. Does this signify some giant oscillatory motion in my phase-space? What can I deduce from this abundance of zero modes in the spectrum? Sidenote: This is a Hamiltonian system, so all Lyapunov exponents sum to zero.
What first comes to my mind is that you're probably sampling invariant quasiperiodic tori, which are typically neutrally stable. The associated motion is regular, but not periodic, in that the phase space trajectory comes arbitrarily close to previous states, but never exactly repeats itself (hence quasiperiodic). Especially if the system's nonintegrable term is relatively weak, its phase space is bound to be full of these so-called KAM surfaces.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Cooling behaviour of beverage I offer my colleague some milk in her coffee. The milk has just come out of the fridge. "Not now," she says. "Not till after I've finished my sandwich, and I don't want it to go cold." So: identical mugs and quantities of hot coffee and milk at the same temperature; the only difference is that the cold milk is added — straight from the fridge — immediately in one case and five minutes later in the second one. My guess is that my colleague is mistaken, and that after the five minutes are up, and the milk is added to the second one, the milky coffee in it will be colder than in the first.
The question is an old conundrum and can be found in various guises on the Internet and in handbooks. One can summarise it as follows: If I add milk to my coffee and wait 5 minutes before drinking it and another person waits 5 minutes and then adds the milk to his/her coffee, who is drinking the hottest coffee? A theoretical derivation of the end temperatures can be carried out based on the following assumptions/simplifications. * *amounts of milk and coffee are identical in both cases *mixing of milk and coffee is adiabatic *the cooling of the liquid (coffee or coffee plus milk) follows Newton's Law of Cooling *radiative losses during cooling are negligible compared to convective losses *evaporative losses are negligible *material constants like density, specific heat capacity and convective heat transfer coefficients are independent of temperature *the above list of assumptions may not be fully exhaustive A derivation based on the above is tedious and shows both end temperatures to be very close together. But the derivation is also useless for providing a definitive answer (but it can be useful to provide insights into the cooling process) because critics can always point to one or more of the assumptions not being met in reality. Further refining the model will likely not quell such critiques. For those reasons a well designed experiment with sufficient replication (to allow statistical analysis) would be more interesting and insightful.
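To see how close the two end temperatures actually are under the assumptions above, here is a minimal numerical sketch; every parameter value (cooling constant, temperatures, milk fraction) is an illustrative assumption rather than measured data:

```python
import numpy as np

k = 0.03            # cooling constant, 1/min (assumed)
T_env = 20.0        # room temperature, C
T_coffee = 80.0     # initial coffee temperature, C
T_milk = 5.0        # fridge milk temperature, C
f = 0.2             # milk volume fraction of the final mix (assumed)
t_wait = 5.0        # minutes

def cool(T0, t):
    """Newton's law of cooling from T0 over time t."""
    return T_env + (T0 - T_env) * np.exp(-k * t)

def mix(T_a, T_b, frac_b):
    """Adiabatic mixing with equal volumetric heat capacities."""
    return (1 - frac_b) * T_a + frac_b * T_b

# Case 1: add milk immediately, then let the mix cool for 5 minutes.
T1 = cool(mix(T_coffee, T_milk, f), t_wait)

# Case 2: let the black coffee cool for 5 minutes, then add milk.
T2 = mix(cool(T_coffee, t_wait), T_milk, f)

print(f"milk first : {T1:.2f} C")
print(f"milk later : {T2:.2f} C")
```

With these particular numbers the two cases differ by only a fraction of a degree, with the milk-first mug ending slightly warmer, consistent with the statement above that the end temperatures come out very close together.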
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Does the frame in which the CMB is isotropic violate the Copernican Principle? The Copernican Principle states that Earth is not at a special place in the Universe, and by extension, that there are no "special places" in the Universe (per homogeneity of the universe, aka the cosmological principle). However, the frame in which the CMB is isotropic appears pretty special: * *There is exactly one reference frame in which the CMB is isotropic *It's independent of the observer's motion *Every observer agrees which reference frame that is *It's not trivial, since it makes the CMB simpler (and the CMB underpins much of modern cosmology) Does the frame in which the CMB is isotropic violate the Copernican Principle? If so, why do we still believe in the Copernican Principle?
The CMB frame provides a privileged foliation of spacetime by spacelike hypersurfaces, essentially by defining a universal cosmic time via some function $t_c=f(T_\text{CMB})$. A given “slice” $t_c=\mathrm{const}$ could then be interpreted as the spatial part of the spacetime. The Copernican Principle as stated by the OP means that “there are no special places”. Applied to cosmological models, we could identify “places” with points of space within a slice $t_c=\mathrm{const}$. And indeed, if averaged over a sufficiently large scale, these slices appear to be homogeneous spaces: every point is just like any other point. We see that cosmological models that assume spatial homogeneity do satisfy the Copernican Principle in the sense outlined above. Note that the “time” part of spacetime is not homogeneous: every moment (on a cosmological timescale) is unique and not like the moments before or after; at the very least they differ by having a specific value of the CMB temperature.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
RG fixed points and $T_{\mu\nu}$ It is common to refer to fixed points of the renormalization group as scale invariant theories. This statement can be formulated as $$ \beta(\mu) \Big |_{\mu^*} = 0 \; \; \Longrightarrow \; \; T^{\mu}_{\mu} = 0 .$$ However, I never saw a proof of this fact and I do not think it is trivial. How can I approach it?
This is a hard problem, it is about the conditions which ensure that scale invariance implies conformal invariance. In two dimensions any unitary local scale invariant theory is conformally invariant. In four dimensions it is not yet known a set of necessary and sufficient conditions. Here are some references. [1] J. Polchinski, Scale and Conformal Invariance in Quantum Field Theory, Nucl. Phys. B 303 (1988) 226. [2] M. A. Luty, J. Polchinski and R. Rattazzi, The a-theorem and the Asymptotics of 4D Quantum Field Theory, JHEP 01 (2013) 152 1204.5221. [3] Y. Nakayama, Scale invariance vs conformal invariance, Phys. Rept. 569 (2015) 1 1302.0884. [4] A. Dymarsky, Z. Komargodski, A. Schwimmer and S. Theisen, On Scale and Conformal Invariance in Four Dimensions, JHEP 10 (2015) 171 1309.2921.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Explanation for an unexpected rainbow Yesterday, I observed an unexpected rainbow in the sky. There was no forecast for rain, nor was it raining anywhere nearby. I have been trying to find an explanation but can't seem to find any. Can someone please explain what this rainbow is? Note: the colours were far more vivid than in the picture I have taken.
These are tropospheric iridescent clouds. According to AtmosphericOptics: When parts of clouds are thin and have similar size droplets, diffraction can make them shine with colours like a corona. In fact, the colours are essentially corona fragments. The effect is called cloud iridescence or irisation... The usually delicate colours can be in almost random patches or bands at cloud edges. They are only organised into coronal rings when the droplet size is uniform right across the cloud. The bands and colours change or come and go as the cloud evolves... Iridescence is seen mostly when part of a cloud is forming, because then all the droplets have a similar history and consequently a similar size. I've saturated the image so the interesting part can be appreciated. And here is a very similar observation I quickly found by Google image search.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Hydrogen atom and scale transformation for the radial variable While solving the Schrödinger equation for the hydrogen atom we make a scale transformation for the radial variable ($r=\frac{ax}{Z}$, where $a=$ Bohr radius, $x=$ dimensionless variable and $Z=$ atomic number), and this turns out to be a very good scale transformation. But my question is: how do we know the value of the Bohr radius in advance, before solving the Schrödinger equation? Do we just use the Bohr radius that we got from Bohr theory? If we do use the Bohr radius from Bohr theory, then why is that so, given that it is a classical theory?
Usually when transforming into dimensionless variables one looks at the relevant constants in the problem. For the hydrogen atom we have the electron charge $e$, the electron mass $m_e$, the Planck constant $\hbar$, and the permittivity of free space $\epsilon_0$. Then one does dimensional analysis to build scales from the above constants, and the expression for the length scale turns out to be $$a_0=\frac{4\pi\epsilon_0\hbar^2}{e^2 {m_e}}$$ The extra $4\pi$ is part of the permittivity. And this is exactly the Bohr radius.
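As a quick numerical check of this dimensional-analysis result (a minimal sketch; it assumes SciPy is available for the CODATA constants, but the numbers could equally be typed in by hand):

```python
from scipy.constants import epsilon_0, hbar, m_e, e, pi

# a0 = 4*pi*eps0*hbar^2 / (e^2 * m_e)
a0 = 4 * pi * epsilon_0 * hbar**2 / (e**2 * m_e)
print(f"a0 = {a0:.4e} m")   # ~5.29e-11 m, the Bohr radius
```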
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Finite potential well with quantised energy In a finite potential well like that in the figure, is the potential constant between $-L/2$ and $L/2$? Since the energy is quantised, if I'm in the second excited state, would the potential still be constant and equal to $0$, so that the energy is only kinetic?
The potential is constant by definition. It's independent of your energy state, and it is in fact one of the elements that dictate the behaviour of your system.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/557942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
"Boiling is to evaporation as melting is to... ?" Or, why aren't 31 degree ice cubes wet? Well before a liquid reaches boiling point, it gradually looses molecules with exceptionally high kinetic energies to its surroundings, which is called evaporation. Does this phenomenon occur to some solids as well, where before their melting points, the lose some of their mass into liquid forms? Why don't ice cubes at 31 degrees have a layer of water sticking to them, but are instead extremely dry?
If you look at the phase diagram of water you will see that below the temperature of the triple point, ice turns directly into vapour rather than into liquid. In other words, it sublimates.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/558088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Doubt on Tetrads, Energy-momentum tensors and Einstein's equations Given, for instance, the perfect fluid energy-momentum tensor: $$T_{\mu\nu} = (\rho+p)u_{\mu}u_{\nu} - pg_{\mu\nu}\tag{1}$$ we can put it (via a diagonalization procedure) into the diagonal form: $$T_{\hat{\mu}\hat{\nu}} = Diag[\rho, \tau,p_{2},p_{3}] \tag{2}$$ On the other hand, if we specify a tetrad frame we can write the very same energy tensor in the same diagonal form. Now, the Einstein tensor can be written in terms of a tetrad frame as well: $$G_{\hat{\mu}\hat{\nu}} = e_{\hat{\mu}}^{\mu}e_{\hat{\nu}}^{\nu}G_{\mu\nu} \tag{3}$$ My doubt is: if we write the energy tensor in a tetrad frame, do we need to express the Einstein tensor in a tetrad frame too?
The Einstein field equations read $G_{\mu\nu}=8\pi T_{\mu\nu}$, so if we contract one side with $e^\mu_{\hat\mu}$ we have to do so to the other side as well. Hence, yes, both need to be in the orthonormal basis. This is just a special case of the more general principle that indices should match on both sides of an equation (there are a few rare exceptions, to do with uncommon notational tricks). In this case while you might have the same amount of indices on both sides, the point is they need to also be the same kind of indices.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/558313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can we transform energy conservation laws on an inclined plane? Suppose an inclined plane. Now, in a normal projectile problem we can apply energy conservation laws directly, but in this case, since this is an inclined plane, we have to transform the conservation laws. This sounds confusing, so let me explain: suppose a ball is thrown from one end of the inclined plane and it reaches the other end. We can't say that the potential energy in both cases is zero along the inclined plane, but suppose we want to make it zero along the inclined plane; what transformation must we apply? I tried resolving $g$ into components along the inclined plane, but it didn't work. Please help me. This is not a homework question; I am just trying to learn the conservation laws in depth. Otherwise there are other methods which don't require such a transformation.
The change in potential energy between two points on the incline is equal to the work done against the component of gravity along the incline (that is, $mg\sin\theta$ multiplied by the distance moved along the slope).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/558413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does it make sense to say that something is almost infinite? If yes, then why? I remember hearing someone say "almost infinite" in this YouTube video. At 1:23, he says that "almost infinite" pieces of vertical lines are placed along $X$ length. As someone who hasn't studied very much math, "almost infinite" sounds like nonsense. Either something ends or it doesn't, there really isn't a spectrum of unending-ness. Why not infinite?
Even (and especially) someone who has studied math a great deal would concur with your second paragraph As someone who hasn't studied very much math, "almost infinite" sounds like nonsense. Either something ends or it doesn't, there really isn't a spectrum of unending-ness. The intended meaning of the offending phrase “almost infinite” is that the quantity $x$ in question is so big that the system concerned is well modelled by the theoretical limiting case $x\to\infty$ (which is often mathematically simpler). As others have remarked here, a better shorthand for this description is “effectively infinite”.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/558491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 11, "answer_id": 4 }
Canonical transformation to diagonalize a bosonic Hamiltonian The Hamiltonian of the system of bosons ($a$, $a^{\dagger}$, $b^{\dagger}$ & $b$ are Bose operators) is: \begin{equation} H=\epsilon_{1} a^{\dagger}a+\epsilon_{2}b^{\dagger}b+\frac{\Delta}{2}\left(a^{\dagger}b^{\dagger}+ba \right) \end{equation} where $\epsilon_{1}$, $\epsilon_{2}$, and ${\Delta}$ are real and positive, with ${\Delta} < (\epsilon_{1} + \epsilon_{2})$. I am trying to find a canonical transformation to diagonalize this Hamiltonian, and afterwards to find expressions for the eigenenergies and the parameters of the transformation. I am not sure whether I first need to switch to some other space (momentum space, etc.) and use a Bogoliubov transformation. Any help and hints will be highly appreciated.
You need to make sure the bosonic commutation relations hold for any basis you choose. For that you need the equivalent of a $z$-Pauli matrix $$\sigma_3 = \begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & -1 \end{pmatrix},$$ where this is just the example for two bosonic operators; in general the size of the matrix is 2 × (number of bosons). You start by expanding your Hamiltonian into a Nambu-space Hamiltonian, i.e. in momentum space this basis would be $ \begin{pmatrix} a_k & b_k & a^\dagger_{-k} & b^\dagger_{-k} \end{pmatrix}$. Now that you have your Hamiltonian written in this basis, you diagonalize the matrix $\sigma_3 H$. The procedure is specified here.
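Here is a minimal numerical sketch of that procedure for the Hamiltonian in the question; the way the Nambu matrix is filled in and the parameter values are my own illustrative choices, and the constant term from normal ordering is dropped:

```python
import numpy as np

eps1, eps2, Delta = 2.0, 1.0, 1.0   # illustrative values with Delta < eps1 + eps2

# Bogoliubov-de Gennes matrix in the Nambu basis (a, b, a^dag, b^dag),
# chosen so that H = (1/2) psi^dag M psi + const.
lam = Delta / 2
M = np.array([[eps1, 0.0,  0.0,  lam ],
              [0.0,  eps2, lam,  0.0 ],
              [0.0,  lam,  eps1, 0.0 ],
              [lam,  0.0,  0.0,  eps2]])

eta = np.diag([1.0, 1.0, -1.0, -1.0])   # bosonic "metric" (the sigma_3 analogue)

freqs = np.linalg.eigvals(eta @ M)
print("eigenvalues of sigma_3 * H:", np.sort(freqs.real))

# Analytic Bogoliubov frequencies for comparison
root = np.sqrt((eps1 + eps2)**2 - Delta**2)
print("analytic frequencies:", (root + (eps1 - eps2)) / 2, (root - (eps1 - eps2)) / 2)
```

The eigenvalues of $\sigma_3 H$ come out in $\pm\omega$ pairs, and the positive ones reproduce $\omega_\pm = \tfrac12\big[\sqrt{(\epsilon_1+\epsilon_2)^2-\Delta^2} \pm (\epsilon_1-\epsilon_2)\big]$, which is real precisely because of the condition $\Delta < \epsilon_1+\epsilon_2$ given in the question.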
{ "language": "en", "url": "https://physics.stackexchange.com/questions/558622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does this motor move? Say we have a motor coil like this: We hang a mass (red ball) on the motor to prevent its rotation. We make the mass heavy enough such that its weight force directly opposes the motor force produced by that wire. $$mg = BIL$$ Does this motor turn? I feel like the answer is no, because that wire has no net force acting on it (the forces cancel out). However, there is still a force being produced by the right-hand wire (next to the N pole). It feels like this force should still be able to make the coil turn.
Answer: Yes, the motor's coil will turn. Notice that the magnetic field $B$ exerts a force $=BIL$ on the right-hand wire in the vertically downward direction (given by Fleming's left-hand rule). Similarly, it exerts an equal force $=BIL$ on the left-hand wire in the vertically upward direction. These two equal and opposite forces form a couple which has a tendency to turn the coil of the motor, depending on the magnitude of the net turning moment. $$\text{Turning moment acting on the coil}=BIL\times d$$ $$\text{Opposing moment produced by weight}, (mg=BIL)=mg\times \frac{d}{2}=BIL\times \frac d2$$ $$\implies BIL\times d>mg\times \frac{d}{2}$$ Since the turning moment is greater than the opposing moment produced by the weight $mg$, the coil will certainly turn.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/558812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Significance of Diagonalization in Degenerate Perturbation Theory I am studying degenerate perturbation theory from Quantum Mechanics by Zettili and I'm trying to understand the significance of diagonalizing the perturbed Hamiltonian. He uses the Stark effect on the hydrogen atom as an example. I'm going to skip the calculations of the matrix elements because I understand how they are done. The perturbed Hamiltonian has this form: $H_p =-3Ea_0 \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ \end{pmatrix}$ After this he says that by diagonalizing this matrix we get these eigenvalues: $E_{21}=-3Ea_0,E_{22}=E_{23}=0,E_{24}=3Ea_0$, where these energies are the first-order corrections. After finding these eigenvalues we can find the corresponding eigenvectors. This problem, though, could be solved without diagonalizing the matrix: we would have to find the eigenvalues from the determinant and then find the corresponding eigenvectors, and the result would be the same. So is diagonalizing the matrix just another way of finding the eigenvalues, or is there something deeper behind it?
Yes and yes. For this particular problem, it is true that any method of finding eigenvalues will yield the answer you're looking for–though finding the eigenvalues is virtually synonymous with diagonalizing the matrix (the diagonalized matrix is simply the eigenvalues down the diagonal). There is tremendous physical significance to the diagonalization of Hamiltonians. In condensed matter physics, for example, to diagonalize the Hamiltonian of a model is to understand its energy spectrum (gapped/gapless, degeneracies etc.), as well as its fundamental excitations, which is a lot of information. If you know the energy spectrum of a many-body system, you can conclude almost anything you would want to about its time evolution and thermodynamic properties. Furthermore, subspaces of Hilbert space are defined by their shared quantum numbers (or equivalently, their symmetries). These subspaces show up in your Hamiltonian matrix as blocks, and in a finite-dimensional Hamiltonian they can be neatly visualized, and even manually constructed as a computational shortcut in the process of diagonalizing the full Hamiltonian. In the degenerate perturbation problem you've cited, you are interested in diagonalizing the perturbed Hamiltonian in any degenerate subspaces. These are blocks of the perturbed Hamiltonian in the same matrix location as any blocks of the unperturbed Hamiltonian with a repeated eigenvalue. As we said earlier, the states in these block share any relevant symmetries. The perturbation can break one or more of these symmetries, resulting in energy corrections which lift the degeneracy. This fractures the degenerate subspace into several smaller, less degenerate subspaces. If any symmetry remains unbroken, these smaller subspaces may still themselves be degenerate.
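To make the "diagonalizing is finding the eigenvalues (and eigenvectors)" point concrete for the Stark matrix quoted in the question, here is a minimal sketch; the overall factor $-3Ea_0$ is pulled out, so the eigenvalues below are in units of $-3Ea_0$:

```python
import numpy as np

# Perturbation matrix in units of -3*E*a_0 (see the question)
Hp = np.array([[0, 0, 1, 0],
               [0, 0, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 0]], dtype=float)

vals, vecs = np.linalg.eigh(Hp)   # Hermitian (here real symmetric) diagonalization
print("eigenvalues (units of -3*E*a0):", vals)        # -1, 0, 0, +1
print("diagonalized matrix:\n", np.round(vecs.T @ Hp @ vecs, 12))
```

The columns of `vecs` are the "good" zeroth-order states, and conjugating the matrix with them puts the eigenvalues on the diagonal, which is why "finding the eigenvalues" and "diagonalizing" are two descriptions of the same operation here.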
{ "language": "en", "url": "https://physics.stackexchange.com/questions/558912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is the Lagrangian density of electromagnetism half-blind? The Lagrangian density of electromagnetism is $$ \mathcal{L}_{EM}=\frac{1}{4\mu_0}F^{ab}F_{ab} $$ This represents one of the two fundamental Lorentz invariants of electromagnetism. The second one is: $$ \frac{1}{2}\epsilon_{abcd}F^{ab}F^{cd} $$ Since $\mathcal{L}_{EM}$ contains only 1 out of the 2 fundamental Lorentz invariants, how is it the case that $\mathcal{L}_{EM}$ is not "half-blind"? Does the absence of the second fundamental Lorentz invariant from $\mathcal{L}_{EM}$ erase any features of electromagnetism from the solutions that would otherwise be present in nature, which obviously accounts for both invariants?
The quantity you propose is a total derivative; specifically, $$ \frac{1}{2} \epsilon_{abcd} F^{ab} F^{cd} = \partial^a \left( \epsilon_{abcd} A^b F^{cd} \right). $$ Since adding a total derivative to any Lagrangian doesn't change the classical equations of motion, it doesn't matter if this invariant is in the Lagrangian or not, and it's customary to just leave it out. (At the quantum level there are interesting physically observable phenomena that can arise from total-derivative terms, but that's a separate question and one I'm not as qualified to answer.)
{ "language": "en", "url": "https://physics.stackexchange.com/questions/559325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Describing forces in rolling Consider a wheel on a frictionless horizontal surface. If we apply a horizontal force (parallel to the surface and above the level of the center of mass), what happens to the wheel? Does it roll or slide forward or rotate only or does any other phenomenon happen? Please guide me. Also draw a free body diagram. Note: This is a thought experiment. If the question is not satisfying, I am sorry for that and please guide me.
Look, neglecting friction makes it simple. We have to think only about the applied force, for the rotational motion as well as the translational motion. So, since there is an external force, the body will have some translational motion in the forward direction. This will happen for sure, and we can find the acceleration using F=ma. Now we have to look at the rotational motion. If the force were applied exactly at the centre of mass of the body, there would be no rotation at all (the cause is in your previous question). But since the force is applied at some distance above the centre of mass, there will be some torque (same cause, as given in your question) and hence the body will rotate. So in your case there would be both rotation and translation, or you can say that the body is rolling. For pure rolling, I1mbo has given the correct explanation. Thanks for asking. Hope it helps. Sorry for no FBDs.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/559470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Why can't many models be solved exactly? I have been told that few models in statistical mechanics can be solved exactly. In general, is this because the solutions are too difficult to obtain, or is our mathematics not sufficiently advanced and we don't know how to solve many of those models yet, or because an exact solution genuinely does not exist, i.e. it can be proven that a model does not admit an exact solution?
Try finding an analytical solution for the particle position $(x,y,z)$ at time $t$ when the motion is described by the Lorenz attractor equation system: $$ {\begin{aligned}{\frac {\mathrm {d} x}{\mathrm {d} t}}&=\sigma (y-x),\\[6pt]{\frac {\mathrm {d} y}{\mathrm {d} t}}&=x(\rho -z)-y,\\[6pt]{\frac {\mathrm {d} z}{\mathrm {d} t}}&=xy-\beta z.\end{aligned}} $$ We can't do that. An analytical solution doesn't exist, because the system is chaotic. We can only try to solve the equations numerically and draw the particle position at each moment in time. What you will get is this: Even the numerical method does not shed light on the particle's exact behaviour or give an exact prediction of where it will be after a time period $\Delta t$. You can do some estimations of course, but only within a small time window and with large inaccuracies/errors. That's why weather prediction fails for large time scales, and sometimes fails for small ones too. The three-body problem in Newtonian mechanics is also chaotic and doesn't have a general solution either. So, there are unpredictable systems everywhere in nature. Just remember the uncertainty principle. EDIT Thanks to @EricDuminil - he gave another, simpler idea of how to see the chaotic behaviour of systems. One just needs to iterate the logistic map equation recursively for a couple of hundred iterations: $$ x_{n+1}=r\,x_{n}\left(1-x_{n}\right) $$ and draw the $x$ values visited over all iterations as a function of the bifurcation parameter $r$. One will then get a bifurcation diagram like this: We can see that $r$ values in the range $[2.4; 3.0]$ make a stable system, because it visits just 1 point. And when the bifurcation parameter is increased beyond $r > 3.0$ the system eventually becomes chaotic, and the output becomes unpredictable.
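As a minimal sketch of the "solve it numerically and watch nearby trajectories diverge" point (the classic parameter values $\sigma=10$, $\rho=28$, $\beta=8/3$ and the initial conditions are standard illustrative choices; SciPy provides the integrator):

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0   # classic chaotic parameter values

def lorenz(t, state):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Two trajectories whose initial conditions differ by 1e-9 in x
sol1 = solve_ivp(lorenz, (0, 40), [1.0, 1.0, 1.0], dense_output=True, rtol=1e-9, atol=1e-9)
sol2 = solve_ivp(lorenz, (0, 40), [1.0 + 1e-9, 1.0, 1.0], dense_output=True, rtol=1e-9, atol=1e-9)

t = np.linspace(0, 40, 2001)
sep = np.linalg.norm(sol1.sol(t) - sol2.sol(t), axis=0)
print("separation at t=5 :", sep[t.searchsorted(5)])
print("separation at t=40:", sep[-1])   # the tiny initial error has blown up
```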
{ "language": "en", "url": "https://physics.stackexchange.com/questions/559744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 3, "answer_id": 1 }
What is the probability of measuring $p$ in momentum space? I have a wave function $\Psi (x,t)$. According to the Max Born postulate, $\lvert\Psi (x,t)\rvert ^2$ is the probability density. This quantity specifies the probability, per length of the $x$ axis, of finding the particle near the coordinate $x$ at time $t$. If at $t=0$ I make a Fourier transform to momentum space, $$\phi(p)=\frac{1}{\sqrt{2 \pi \hbar}} \int _{-\infty} ^{+\infty} \psi(x)e^{-ipx/\hbar} dx$$ does $\vert\phi(p)\rvert ^2$ specify the probability of finding the particle near the momentum $p$ at time $t=0 \hspace{1mm}$? In this sense, given $\Psi(x,t)$, how could I write $\phi(p)$ at any time $t$, i.e. $\Phi(p,t)\hspace{1mm}$?
You’re exactly right: $|\phi(p)|^2$ gives the probability of measuring momentum $p$ at time $t=0$. An analogous relation holds for the time-dependent case: $$\Phi(p,t)=\frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}dx e^{-i px/\hbar}\Psi(x,t)$$ This is simply due to the fact that one independently transforms between position and momentum space and between time and frequency space. For a “proof” that this is the case, consider: $$\Phi(p,t)=\langle p|\Psi(t)\rangle=\int _{-\infty}^{\infty}dx \langle p|x\rangle\langle x|\Psi(t)\rangle= \int _{-\infty}^{\infty}dx \langle p|x\rangle \Psi(x,t)$$ Now, note that a free particle momentum eigenstate in the position basis is a plane wave $\langle x|p\rangle= \frac{1}{\sqrt{2\pi\hbar}} e^{i px/\hbar} $, so $\langle p|x\rangle=\langle x|p\rangle^*= \frac{1}{\sqrt{2\pi\hbar}} e^{-i px/\hbar}$. Finally then, we arrive at: $$ \boxed{ \Phi(p,t)= \frac{1}{\sqrt{2\pi\hbar}} \int _{-\infty}^{\infty}dx e^{-i px/\hbar} \Psi(x,t)}$$
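A small numerical sanity check of these statements (a minimal sketch with $\hbar=1$; the Gaussian packet and the grid are arbitrary illustrative choices):

```python
import numpy as np

hbar = 1.0
x = np.linspace(-50, 50, 4001)
p = np.linspace(-10, 10, 801)
dx, dp = x[1] - x[0], p[1] - p[0]

# Normalized Gaussian packet with mean momentum p0
sigma, p0 = 1.0, 2.0
psi = (1 / (np.pi * sigma**2))**0.25 * np.exp(-x**2 / (2 * sigma**2) + 1j * p0 * x / hbar)

# phi(p) = 1/sqrt(2*pi*hbar) * integral dx exp(-i p x / hbar) psi(x)
phi = (np.exp(-1j * np.outer(p, x) / hbar) @ psi) * dx / np.sqrt(2 * np.pi * hbar)

print("norm in x-space:", np.sum(np.abs(psi)**2) * dx)      # ~1
print("norm in p-space:", np.sum(np.abs(phi)**2) * dp)      # ~1
print("mean momentum  :", np.sum(p * np.abs(phi)**2) * dp)  # ~p0
```

The fact that $|\phi(p)|^2$ is normalized and peaked at the packet's mean momentum is the numerical counterpart of reading it as a momentum probability density.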
{ "language": "en", "url": "https://physics.stackexchange.com/questions/559980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the question asking for the primitive translation vector of the simple cubic or the reciprocal lattice? Can anyone please give me a clue about what the question wants? Based on the question, I cannot tell whether it asks for the primitive translation vector of the simple cubic lattice or of the reciprocal lattice, because the form of the given $\mathbf k_1$, $\mathbf k_2$, $\mathbf k_3$ is very different from the simple cubic one.
We assume that the cubic lattice has side length $a$. Then the first Brillouin zone will look like BZ $=(-\frac{\pi}{a},\frac{\pi}{a}]^3$ (why?) with periodic boundary conditions along all three directions. Because of the periodic boundary conditions, we know that k-vectors are only physical modulo any reciprocal lattice vector, since adding such a vector will take you back to the same point. So we want to find the reciprocal lattice vectors that add to $k_1$, $k_2$, and $k_3$ such that all of them are in the Brillouin zone. Now it's just a matter of trial and error to find the correct vector. \begin{align} k_i \equiv k_i + \frac{2\pi}{a}(l,m,n) \in BZ \end{align} where $l,m,n \in \mathbb{Z}$. These are the equivalent $k$-vectors inside the Brillouin zone. Brillouin Zone https://eng.libretexts.org/Bookshelves/Materials_Science/Supplemental_Modules_(Materials_Science)/Electronic_Properties/Brillouin_Zones#:~:text=A%20Brillouin%20zone%20is%20defined,vectors%20drawn%20from%20the%20origin.
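The "modulo a reciprocal lattice vector" step can also be automated instead of done by trial and error. Here is a minimal sketch; since the original $\mathbf k_1$, $\mathbf k_2$, $\mathbf k_3$ are not reproduced in the question text, the example vectors below are made up purely for illustration:

```python
import numpy as np

a = 1.0                      # cubic lattice constant (illustrative)
G = 2 * np.pi / a            # reciprocal lattice spacing

def fold_to_bz(k):
    """Fold a k-vector into the first Brillouin zone (-pi/a, pi/a]^3."""
    k = np.asarray(k, dtype=float)
    folded = (k + np.pi / a) % G - np.pi / a            # maps into [-pi/a, pi/a)
    folded[np.isclose(folded, -np.pi / a)] = np.pi / a  # use the +pi/a boundary
    return folded

# made-up example vectors, in units where a = 1
for k in ([3.5 * np.pi, 0.2 * np.pi, -1.1 * np.pi],
          [2.0 * np.pi, 2.0 * np.pi, 2.0 * np.pi]):
    print(k, "->", fold_to_bz(k))
```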
{ "language": "en", "url": "https://physics.stackexchange.com/questions/560545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Equivalence of Hermitian operator and Hermitian matrix in Quantum Mechanics I learned that a Hermitian matrix $A$ is defined as a matrix that satisfies $$A^\dagger=(A^*)^\intercal=A,$$ i.e. its Hermitian conjugate $A^\dagger$ is the same as the original matrix $A$. I also learned that in QM, a Hermitian operator $H$ is defined as an operator that satisfies $$ \langle f|Hg\rangle=\langle Hf|g\rangle,$$ where $f$ and $g$ are vectors. Since operators can be represented by matrices in a particular basis, how can it be shown that a Hermitian matrix with the property $(A^*)^\intercal=A$ also satisfies $ \langle f|Ag\rangle=\langle Af|g\rangle$?
$\langle f|Ag\rangle=\langle f|A|g\rangle$. $\langle Af|g\rangle$: * *$(\langle Af|) = (|Af\rangle)^\dagger =(A|f\rangle)^\dagger = \langle f |A^\dagger$, *so $\langle Af|g\rangle = \langle f |A^\dagger|g\rangle$ If $A = A^\dagger$, then $\langle f|Ag\rangle =\langle Af|g\rangle$.
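A minimal finite-dimensional sketch of the same statement, checking it numerically for a randomly generated Hermitian matrix (the dimension and the random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random Hermitian matrix A = A^dagger
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (B + B.conj().T) / 2

f = rng.normal(size=n) + 1j * rng.normal(size=n)
g = rng.normal(size=n) + 1j * rng.normal(size=n)

lhs = np.vdot(f, A @ g)       # <f|Ag>  (vdot conjugates its first argument)
rhs = np.vdot(A @ f, g)       # <Af|g>
print(np.isclose(lhs, rhs))   # True
```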
{ "language": "en", "url": "https://physics.stackexchange.com/questions/560673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What would happen to the Sun if you reflect all its emitted e.m. radiation back? What would happen to the Sun if you reflected, in whatever way, all the outgoing electromagnetic radiation back at it (the solar wind can be neglected)?
This is the same as asking what would happen if the sun couldn't get rid of the heat it generates. In that case, the heat builds up, the sun's temperature goes up, and in response the sun expands a bit. This expansion causes the fusion reactions in the sun's core to slow down, which slows down the rate of heat generation. But if none of that heat can escape, the sun will get hotter and it will continue to expand, and the fusion rate in its core will continue to decrease. At some point in this process the sun will have blown up so much in size, and the temperature in its core will have fallen so far, that fusion can no longer occur and the sun's power source is shut down.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/561030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Boltzmann distribution in Ising model I've written Matlab code for an Ising model in 1 dimension with 40 spins at $k_{B}T=1$. I record the energy at every step of a Metropolis Monte Carlo algorithm, and then I made a histogram like this. I want to show the theoretical Boltzmann distribution. What is the exact formula to get this shape? $Ax \exp(-\beta x)$?
I have to make a number of assumptions, as you did not state all the necessary information. So, I am going to assume that you are using periodic boundary conditions, that is, your Hamiltonian is $$ \mathcal{H}(\sigma) = -\sum_{i=1}^N \sigma_i\sigma_{i+1}, $$ where I have denoted by $N$ the number of spins (that is, $N=40$ in your case) and used the convention that $\sigma_{N+1}=\sigma_1$. Write $D(\sigma)$ the number of $i\in \{1,\dots,N\}$ such that $\sigma_i\neq \sigma_{i+1}$, still using the convention that $\sigma_{N+1} = \sigma_1$. Note that $D(\sigma)$ is necessarily an even number (because of the periodic boundary conditions). Then, the total energy can be rewritten as $$ \mathcal{H}(\sigma) = D(\sigma) - (N-D(\sigma)) = 2D(\sigma) - N \,. $$ The probability that $D(\sigma)=\delta N$ (with $\delta \in \{0,\frac{2}{N},\frac{4}{N},\dots,\frac{2\lfloor N/2 \rfloor}{N}\}$) is $$ \mathrm{Prob}(D = \delta N) = \binom{N}{\delta N}\frac{\exp(-2\beta\delta N)}{\frac{1}{2}\bigl(1-\exp(-2\beta)\bigr)^N + \frac{1}{2}\bigl(1+\exp(-2\beta)\bigr)^N}, $$ since there are $\binom{N}{\delta N}$ ways of choosing the $\delta N$ pairs of disagreeing neighbors. This can be easily reformulated in terms of the energy per spin $$ \frac{1}{N}\mathcal{H}(\sigma) = \frac{2}{N}D(\sigma) - 1. $$ Note that the possible values of the latter are $-1$, $-1+\frac{4}{N}$, $-1+\frac{8}{N}$, ... , $-1+\frac{4\lfloor N/2\rfloor}{N}$. The probability of observing an energy per spin equal to $\epsilon$ is then given by $$ \mathrm{Prob}(\mathcal{H}=\epsilon N) = \binom{N}{\frac{1+\epsilon}{2}N} \frac{\exp\bigl(-\beta (1+\epsilon) N\bigr)}{\frac{1}{2}\bigl(1-\exp(-2\beta)\bigr)^N + \frac{1}{2}\bigl(1+\exp(-2\beta)\bigr)^N} . $$ Here is a plot of the distribution for your parameters $N=40$ and $\beta=1$ (only values of $\epsilon$ smaller than $-0.2$ are indicated as higher values have too small probability at this temperature): (The computation using other boundary conditions are similar.)
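To reproduce the plotted distribution numerically, here is a minimal sketch that simply evaluates the formula above for $N=40$ spins at $\beta=1$ (only probabilities above $10^{-4}$ are printed to keep the output short):

```python
import numpy as np
from math import comb

N, beta = 40, 1.0

# Partition-function denominator from the formula above
Z = 0.5 * (1 - np.exp(-2 * beta))**N + 0.5 * (1 + np.exp(-2 * beta))**N

for D in range(0, N + 1, 2):              # number of disagreeing neighbour pairs (even)
    eps = 2 * D / N - 1                   # energy per spin
    prob = comb(N, D) * np.exp(-2 * beta * D) / Z
    if prob > 1e-4:
        print(f"eps = {eps:+.2f}   P = {prob:.4f}")
```

Normalizing the recorded Metropolis energy histogram and overlaying these probabilities should reproduce the theoretical curve shown in the plot.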
{ "language": "en", "url": "https://physics.stackexchange.com/questions/561104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Recommend books for learning lattice QCD I want to learn lattice QCD by myself, but I don't know how to start. Can you recommend some books for lattice QCD?
In my opinion, one of the best modern references is the book by Gattringer and Lang https://www.springer.com/gp/book/9783642018497. This book contains a rather broad introduction to the subject, starting from the elementary details, such as the path integral on the lattice and different discretizations. Then there is a discussion of more modern aspects, such as the various ways of treating dynamical fermions on the lattice, the sign problem, hadron spectroscopy, etc. Another good reference is Thomas DeGrand, Carleton DeTar - Lattice Methods for Quantum Chromodynamics http://en.bookfi.net/book/747102. Their content overlaps significantly; however, some of the topics not covered in the first book can be looked up in DeGrand, DeTar. The book by Istvan Montvay, Gernot Münster https://books.google.ru/books?id=NHZshmEBXhcC&redir_esc=y contains more theoretical details and is more advanced, but I recommend reading it if you wish to get a more rigorous proof of, or motivation for, some statements, for example the justification of staggered fermions.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/561232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to show that $[L_i, v_j]=i\hbar\sum_k \epsilon_{ijk}v_k$ for any vector $\textbf{v}$ constructed from $\textbf{x}$ and/or $\nabla$? In Weinberg's Lectures on Quantum Mechanics (pg 31), he said that the commutator relation $$[L_i, v_j]=i\hbar\sum_k \epsilon_{ijk}v_k$$ is true for any vector $\textbf{v}$ constructed from $\textbf{x}$ and/or $\nabla$, where $\textbf{L}$ is the angular momentum operator given by $\textbf{L}=-i\hbar\textbf{x} \times \nabla$. An example for vector $\textbf{v}$ is the angular momentum $\textbf{L}$ itself: $$[L_i,L_j] = i\hbar \sum_k \epsilon_{ijk} L_k.$$ Other examples include $\textbf{v}=\textbf{x}$ and $\textbf{v}=\nabla: $ $$[L_i,x_j] = i\hbar \sum_k \epsilon_{ijk} x_k,$$ $$[L_i,\frac{\partial}{\partial x_j}] = i\hbar \sum_k \epsilon_{ijk} \frac{\partial}{\partial x_k}.$$ How can it be shown that the commutator relation $[L_i, v_j]=i\hbar\sum_k \epsilon_{ijk}v_k$ is indeed true for any vector $\textbf{v}$ constructed from $\textbf{x}$ and/or $\nabla$? Edit: I am looking for an answer that does not simply say that this is the definition of a vector operator. In fact, I think that Weinberg refers to $\textbf{v}$ as a vector, not a vector operator.
Theorem: Let $\mathbf{A},\mathbf{B}$ be vector operators. Then $\mathbf{A}\times\mathbf{B}$ is a vector operator. But e.g. $\mathbf{A}\cdot\mathbf{B}$ is a scalar, that is, $[L_j,\mathbf{A}\cdot\mathbf{B}]=0$. The proof is done by straightforward algebra, using the definition of a vector operator, $[L_j,A_i]=i\hbar\epsilon_{jik}A_k$. But I think your confusion isn't related to the algebra; it stems from the term "constructed". This is very imprecise language. As the theorem shows, not all combinations or even products of two vector operators are vector operators.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/561372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Is $b$ in the drag force formula $F=-bv$ constant for a certain medium and object? I heard that $F=-bv$, where $F$ is the drag force, $b$ is the damping coefficient, and $v$ is the velocity of an object, can be used to calculate the drag force exerted on an object moving at a moderate velocity. * *What range is this moderate velocity referring to? *Does $b$ in the drag force formula $F=-bv$ have to be constant for a certain medium and object? *If the answer to question 2 is a "yes", my question is: should $b$ not vary in the damped oscillation formula $x(t)=Ae^{-bt/2m}\cos(\omega t)$? It sounds counterintuitive that $b$ has to be constant in that formula. I've done an experiment where I changed the value of the spring constant in spring-mass systems damped in water, and my results (which seem to be precise) show different values of $b$ for different spring constants.
* *It is not referring to some absolute range of velocities; rather it means the flow of fluid around the object is laminar flow. We can establish whether or not the flow is laminar by computing the so-called dimensionless number $\mathbf{Re}$, i.e. the Reynolds number: $$\mathbf{Re}=\frac{vD}{\nu}$$ where: * *$v$ is the velocity *$D$ is a characteristic dimension of the object (like its diameter) *$\nu$ is the kinematic viscosity of the fluid Laminar flow occurs for $\mathbf{Re}<2300$ and turbulent flow for $\mathbf{Re}>2900$ (in between these numbers is the so-called 'transitional regime'). In the laminar regime, viscous drag forces are said to dominate $F$, and in the turbulent regime inertial forces dominate it. In the case of turbulent flow the drag force is of the form: $$F=-cv^2$$ so the velocity dependence is on the square of the velocity. *In either the laminar or the turbulent regime, $b$ and $c$ respectively are considered constant and invariant to $v$. *I've not checked your formula ($x(t)=Ae^{-bt/2m}\cos(\omega t)$), but why does it "sound counterintuitive that b has to be constant in that formula"? As stated above: in the relatively narrow velocity interval (laminar flow!) $b$ should be constant.
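A small helper to check which regime applies before deciding between $F=-bv$ and $F=-cv^2$ (a minimal sketch using the thresholds quoted above; the example numbers for a small sphere moving slowly in water are purely illustrative):

```python
def reynolds(v, D, nu):
    """Reynolds number for speed v [m/s], size D [m], kinematic viscosity nu [m^2/s]."""
    return v * D / nu

def regime(Re):
    if Re < 2300:
        return "laminar (drag ~ -b*v)"
    if Re > 2900:
        return "turbulent (drag ~ -c*v**2)"
    return "transitional"

# Illustrative case: a 2 cm object moving at 5 cm/s in water (nu ~ 1e-6 m^2/s)
Re = reynolds(v=0.05, D=0.02, nu=1e-6)
print(f"Re = {Re:.0f}: {regime(Re)}")
```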
{ "language": "en", "url": "https://physics.stackexchange.com/questions/561615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does an up quark decay into products more massive than itself? According to https://en.wikipedia.org/wiki/Up_quark the up quark can decay into a down quark plus a positron plus an electron neutrino. The problem is that the mass of the by-products is greater than that of the original particle. This would violate conservation of mass/energy unless some source of energy or mass was put into the system to trigger the decay.
Quarks can never be observed in isolation, since they only exist in confinement. What you are asking about is basically the conversion of a proton into a neutron. Even then, the proton cannot decay in isolation (except if there is an incident antineutrino with sufficient energy), and there are basically two main types of cases where the proton can undergo this transformation into a neutron. One of them is covered in John Rennie's answer, where the proton exists inside a nucleus, together with other nucleons, and the extra energy you are asking about is supplied by the changes in the involved EM and residual strong forces. The other case is electron capture, where the proton-rich nucleus of an EM-neutral atom absorbs an inner atomic electron. In most cases (except the Auger effect) the atom stays EM neutral, the proton converts to a neutron, and all the decay energy is released in the form of a neutrino. Contrary to popular belief, the electron is not from an external atom, but from inside the current atomic system. This electron supplies the extra energy you are asking about. https://en.wikipedia.org/wiki/Electron_capture
{ "language": "en", "url": "https://physics.stackexchange.com/questions/562006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
In metals, the conductivity decreases with increasing temperature? I am currently studying Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light, 7th edition, by Max Born and Emil Wolf. Chapter 1.1.2 Material equations says the following: Metals are very good conductors, but there are other classes of good conducting materials such as ionic solutions in liquids and also in solids. In metals the conductivity decreases with increasing temperature. However, in other classes of materials, known as semiconductors (e.g. germanium), conductivity increases with temperature over a wide range. An increasing temperature means that, on average, there is greater mobility of the atoms that constitute the metal. And since conductivity is due to the movement of electrons in the material, shouldn't this mean that conductivity increases as temperature increases?
The key point is that thermal motion disrupts the periodicity of the potential. As stated in ch. 26 of Ashcroft and Mermin: "Bloch electrons in a perfect periodic potential can sustain an electric current even in the absence of any driving electric field; i.e., their conductivity is infinite. The finite conductivity of metals is entirely due to deviations in the lattice of ions from perfect periodicity. The most important such deviation is that associated with the thermal vibrations of the ions about their equilibrium positions, for it is an intrinsic source of resistivity, present even in a perfect sample free from such crystal imperfections as impurities, defects, and boundaries."
{ "language": "en", "url": "https://physics.stackexchange.com/questions/562392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }