Proving that force satisfies the laws of vector addition A vector quantity has both magnitude and direction. So, for any physical quantity to be a vector, it should have a direction and a magnitude. Though this is a necessary condition for any quantity to be a vector it is not sufficient. To qualify as a vector a quantity must also follow the laws of vector addition. Now getting to the point, the most commonly quoted reason to explain why force is a vector is that it has direction and magnitude. Most textbooks say nothing more than this. How can one mathematically prove that force is a vector?
I don't think that "satisfying the laws of vector addition" is necessary for something to be a vector, depending on what you mean by that. Take velocities in special relativity. They are vectors; the vector sum of velocities is well defined. But it's rarely useful. More commonly, when you have two velocities and need to combine them somehow, the combination you actually want is given by something like the "velocity addition formula", which isn't the same as vector addition. The same is true of forces. They don't simply add in general, because no actual force of nature is exactly linear. To prove that they do add, you would have to assume some form of linearity, which is more or less what you are trying to prove.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/268400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Multiplicity Identity in Kittel's Thermal Physics On page 25 of Kittel's Thermal Physics, the author derives the multiplicity of $N$ harmonic oscillators with total quanta of energy $n$, $g(N,n)$. He writes \begin{align} g(N,n) &= \lim_{t\rightarrow 0} \frac{1}{n!}\left( \frac{d}{dt}\right)^n \sum_{s=0}^{\infty}g(N,s)t^s\\ &= \lim_{t\rightarrow 0}\frac{1}{n!}\left(\frac{d}{dt}\right)^n(1-t)^{-N}\\ &=\frac{N(N+1)(N+2)\cdots(N+n-1)}{n!}. \end{align} I understand everything after the first equation but I fail to see where the first equation comes from. I've tried expanding out the derivatives and summation but I still can't get it. How can I derive the first equation?
I really like Kittel's philosophical approach to the subject in this book (counting multiplicity exactly in model systems, leveraging those systems to define entropy and temperature, ...). But here, and in many other places, his derivations/calculations seem to obfuscate rather than illuminate. The multiplicity of $N$ quantum harmonic oscillators allowed to share $n$ energy units (sometimes called an Einstein solid), $$g(n,N) = \frac{(n + N-1)!}{n! (N-1)!}\,,$$ is derived much more cleanly by Schroeder. He uses the "Stars and Bars" method from combinatorics to express one particular microstate as a linear arrangement of $n$ energy units (stars) separated by $N-1$ partitions (bars), which delineate the boundaries between the $N$ individual harmonic oscillators. The multiplicity can then be seen as the total number of permutations of $(n + N-1)$ distinguishable objects, divided by the number of permutations of stars, and by the number of permutations of bars. Alternatively (see the section from Schroeder below), one can think of it as the number of ways of choosing $n$ of the $(n + N-1)$ objects to be energy units (dots). See the relevant segment from Schroeder's text here. [Note that he uses $q$ instead of $n$ for the number of energy units, and $\Omega$ instead of $g$ for the multiplicity.]
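As a quick numerical sanity check (my own sketch, not part of either textbook's treatment), one can compare the closed-form count against a brute-force enumeration of microstates; the function names below are purely illustrative.

```python
from math import comb
from itertools import product

def multiplicity_formula(n, N):
    # g(n, N) = (n + N - 1)! / (n! (N - 1)!) = C(n + N - 1, n)
    return comb(n + N - 1, n)

def multiplicity_bruteforce(n, N):
    # Count every way of distributing n indistinguishable energy units
    # over N distinguishable oscillators.
    return sum(1 for occ in product(range(n + 1), repeat=N) if sum(occ) == n)

for n, N in [(3, 4), (5, 3), (6, 2)]:
    assert multiplicity_formula(n, N) == multiplicity_bruteforce(n, N)
    print(f"g({n}, {N}) = {multiplicity_formula(n, N)}")
```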
{ "language": "en", "url": "https://physics.stackexchange.com/questions/268613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Notation about basis of gamma matrices in $4d$ In quantum field theories, we encounter gamma matrices a lot. Reading various textbooks, I noticed that different textbooks use different bases for their gamma matrices. Gamma matrices are defined such that $\gamma^{a}\gamma^{b}+\gamma^{b}\gamma^{a}=2\eta^{ab}$. Multiplying them in all possible ways furnishes the following list \begin{align} \{ \Gamma^A \} = \{1, \gamma^{a_1}, \gamma^{a_1 a_2}, \cdots \gamma^{a_1 \cdots a_d} \} \end{align} with $a_{1}<a_{2}<a_{3}\cdots<a_{d}$, where $d$ is the dimension of spacetime for the given gamma matrices. Applying the above to $4d$ I have \begin{align} \{ \Gamma^A \} = \{ 1, \gamma^{a_1}, \gamma^{a_1 a_2}, \gamma^{a_1 a_2 a_3}, \gamma^{a_1 a_2 a_3 a_4} \} \end{align} The usual QFT textbooks instead write \begin{align} \{ \Gamma^A \} = \{1, \gamma_5, \gamma^{a_1}, \gamma_5 \gamma_{a_1}, \gamma_{a_1 a_2} \} \end{align} I know they are equivalent, i.e., \begin{align} &\gamma_5 \propto \gamma^1 \gamma^2 \gamma^3\gamma^4 \propto \textrm{product of four gammas}\\ & \gamma_5 \gamma_{a_1} \propto \textrm{product of three gammas} \end{align} What I am interested in is why, instead of writing the first form, modern QFT textbooks prefer to write the second form. Is there any reason for that? I think it might be just a matter of convention, like the eastern or western choice of metric $(-1, 1, 1, \cdots, 1)$, $(1, -1, -1, \cdots -1)$, etc.
In 4D Minkowski space the $\Gamma^{a}$'s have a standard form: $\Gamma^{a}\in\{\mathbf{1}_{4\times4},\ \gamma^{\mu},\ \sigma^{\mu\nu}=\frac{i}{2}[\gamma^{\mu},\gamma^{\nu}],\ \gamma^{5}\gamma^{\mu},\ \gamma^{5}\}$, altogether 16 matrices, provided that $\gamma^{\mu}\gamma^{\nu}+\gamma^{\nu}\gamma^{\mu}=2\mathbf{1}_{4\times4}\eta^{\mu\nu}$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/268687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Schrödinger's Equation with multi-part potential I have this potential $$V(x) = \left\{ \begin{array}{ll} \infty & \mbox{if } x < -a \\ \frac{V_o}{a}x & \mbox{if } -a \leq x \leq a \\ V_o & \mbox{if } x \geq a \ \end{array} \right.$$ And I want to know, qualitatively, what the wave function would look like. So, the particle cannot live to the left of the "wall" at $x=-a$, so the wave function there is $0$. To the right of the ramp (i.e., for $x>a$), the potential is constant, so the particle will behave like a free one. Namely, the wave function will be constant in that zone. But what happens in the middle? I'm not interested in the mathematical approach for this, I've already looked it up and it seems to be related to Airy functions or something like that. However, I want to understand what would happen, not just do the math. I think that the wave function in this zone will depend on the value of $E$ the particle has. This is what I thought: for low values of energy, the particle will have a small probability of getting through the ramp (tunneling?); on the other hand, for high values of energy ($E>V_o$ I suppose), the probability of the particle living in the zone with the constant potential would be higher, as the "box" in the middle wouldn't be able to contain it. My guess is that if $E<V_o$ the wave function would look like a sine wave attenuated along the $x$-axis until it reaches $x=a$, where it would become constant. If $E>V_o$, it would be the same but with the sine wave increasing its amplitude this time. Is this reasoning correct? Or any other form of thinking about it?
Qualitatively, the wave functions of the bound states in a triangular potential well like the one you described look as follows. For $x<-a$, $\psi=0$ because of the infinite potential in that region. Beyond the classical turning point (where the energy level crosses the potential line), the wave function decays exponentially by quantum tunnelling and $\psi \to 0$. For particle energies above $V_0$, no bound states can exist (these so-called scattering states are not shown). The triangular potential well can be seen as a crude approximation of the Morse potential.
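If you want to actually see these shapes without working through the Airy-function math, a crude finite-difference diagonalization is enough. This is a minimal sketch of my own; the units ($\hbar = m = 1$), the grid, and the value of $V_0$ are arbitrary illustrative choices.

```python
import numpy as np

# Model parameters (arbitrary illustrative units with hbar = m = 1)
a, V0 = 1.0, 50.0
x = np.linspace(-a, 4 * a, 1500)   # hard wall at x = -a, grid extended well past x = a
dx = x[1] - x[0]

# Potential: linear ramp for -a <= x <= a, constant V0 for x > a
V = np.where(x <= a, V0 * x / a, V0)

# Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + V(x)
diag = 1.0 / dx**2 + V
off = -0.5 / dx**2 * np.ones(len(x) - 1)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)
print("lowest few energies:", E[:4])   # states with E < V0 are the bound states
# psi[:, i] holds the i-th eigenfunction on the grid; plotting it against x shows the
# oscillatory region where E > V(x) and the exponential tail where E < V(x).
```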
{ "language": "en", "url": "https://physics.stackexchange.com/questions/268756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Transforming Qubits Into Bits From what I understand, a qubit exists in a superposition of states and once it has been measured, it must fall into one of the two possible states. Now, I have been told that once a qubit is measured, it is no longer proper to call it a qubit but a bit since it no longer exists in a superposition of states. Is this correct? Along the same lines, if a photon with unknown polarization (the polarization state can be our qubit) hits a polarizing beam splitter, then its no longer exists in a superposition of states but must be either horizontally or vertically polarized. So would this mean that the polarization no longer is a qubit, but a bit, since it can only exist in one of two states? This would not make sense because many regimes for experimentally realizing quantum logic gates involve polarizing beam splitters. So if my reasoning is correct, that would mean that in the gate itself the qubit actually is no longer a qubit, but a bit. One final thing, since measuring a qubit is inherent to a functional quantum computer, does this mean that quantum computers actually use bits as well as qubits?
The qubit can be in a superposition of states thanks to the superposition principle. When you measure the polarization of your photon, it collapses into the vertical or horizontal state (an eigenstate of the polarization operator), and during the measurement you extract the "bit", meaning the information about the outcome of this measurement. But the photon (and hence the qubit) is still there, in a collapsed state. If you perturb the photon it will evolve and will no longer be in the collapsed state: it will again be in a general state described by the superposition principle.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/268946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Distance between adjacent planes in a crystal This question has been asked before, but there doesn't seem to be a decent answer. Many sources state that " For cubic crystals with lattice constant a, the spacing d between adjacent (ℓmn) lattice planes is: $$ {\displaystyle d_{\ell mn}={\frac {a}{\sqrt {\ell ^{2}+m^{2}+n^{2}}}}}$$ " https://en.wikipedia.org/wiki/Crystal_structure Could someone please explain what "adjacent" means in this case (Is it planes that share the same side, is it parallel planes, are these planes in the same unit cell or neighbouring cells, etc.)? Better yet, does anyone know of a sketch explaining this? I am really at a loss here and this has been driving me nuts the whole day.
In answer to the question: adjacent planes are planes that are closest to one another when distance is measured along the normal to the plane. It is important to understand that every lattice point has exactly one of the infinite set of planes described by the Miller indices $(h k \ell)$ passing through it. (I will use $(h k \ell)$ instead of $(\ell m n)$ like the OP.) I agree that many explanations out there seem to lack some important information, so here is a somewhat rigorous treatment. Disregarding some special cases, Miller indices are defined as follows. First, find the intersections of the plane in question along the three crystal axes $\pmb{a}, \pmb{b}, \pmb{c}$ in terms of multiples of the lattice constants, i.e., $m a, n b, o c$ for integers $m, n, o$. Then take the reciprocals of $m, n, o$ and find three integers $h, k, \ell$ having the same ratio, and whose greatest common divisor is 1. As an example, consider the plane that intersects the $\pmb{a}$ axis at the second lattice site, the $\pmb{b}$ axis at the third lattice site, and the $\pmb{c}$ axis at the first lattice site. The reciprocals of $2, 3, 1$ are $\frac{1}{2}, \frac{1}{3}, 1$, which have the same ratio as $3, 2, 6$. The plane is thus called $(h k \ell) = (326)$. To find the distance between adjacent planes, it helps to use the ``reciprocal lattice vectors'', which may be defined as $$ \pmb{a^*} = V^{-1}\pmb{b} \times \pmb{c} \, , \quad \pmb{b^*} = V^{-1}\pmb{c} \times \pmb{a} \, , \quad \pmb{c^*} = V^{-1}\pmb{a} \times \pmb{b} \, $$ where $V = \pmb{a} \cdot ( \pmb{b} \times \pmb{c})$ is the volume of the unit cell. By construction, these have the convenient property that, for example, $\pmb{a} \cdot \pmb{a^*} = 1$, while $\pmb{a} \cdot \pmb{b^*} = 0$, and so on. It turns out that the vector $ \pmb{H} = h \pmb{a^*} + k \pmb{b^*} + \ell \pmb{c^*} $ is normal to the $(h k \ell)$ plane. This can be demonstrated by showing that the dot products of $\pmb{H}$ with two non-colinear vectors in the $(h k \ell)$-plane, for example, $n \pmb{b} - m \pmb{a}$ and $o \pmb{c} - n \pmb{b}$, are zero. Consider now the plane $P_0$ that passes through the lattice point at the origin and is defined by $ \pmb{H} \cdot \pmb{r} = 0 \, , $ where $ \pmb{r} = x \pmb{a} + y \pmb{b} + z \pmb{c} $ for coordinates $x, y, z$. Because of the convenient properties of the reciprocal lattice vectors described above, we can rewrite $\pmb{H} \cdot \pmb{r} = 0$ as $h x + k y + \ell z = 0$. The lattice points are those $\pmb{r}$ for which $x, y, z$ are integers, call them $p, q, s$, i.e., we have $h p + k q + \ell s = 0$. The origin is the trivial case, where $p = q = s = 0$. We now wish to find the closest plane, call it $P_1$, by moving from the origin along the positive $\pmb{H}$ direction. The equation of $P_1$ is $\pmb{H} \cdot \pmb{r} = \delta$, or $h p + k q + \ell s = \delta$ for some delta. The geometrical interpretation of the dot product means that $P_1$ should possess the smallest value of $\delta$ possible. Furthermore, because $h, k, \ell$ and $p, q, s$ are all integers, so too must be $\delta$. The smallest possible integer value of $\delta$ is 1. We are guaranteed to be able to find $p, q, s$ satisfying $h p + k q + \ell s = 1$ because of Bezout's identity, which says that for two integers $a$ and $b$ (not the same $a$ and $b$ as above, but we are running out of variable names) with greatest common factor $f$ (written $\mathrm{gcd}(a,b)=f$), there exist integer $x$ and $y$ (again, not the $x$ and $y$ above) such that $ax + by = f$. 
This generalizes to more than one pair of integers. Thus, we can always find $p, q, s$ such that $h p + k q + \ell s = 1$ because $\mathrm{gcd}(h, k, \ell) = 1$. Now that we know $\delta$, we wish to find the distance between $P_0$ and $P_1$ measured along $\pmb{H}$. This can be accomplished first by traveling along $\pmb{a}$ from the origin until we encounter $P_1$, i.e., finding $x$ so that $H \cdot (x \pmb{a}) = 1$. This has solution $x = \frac{1}{h}$, so that the vector $\pmb{v} = \frac{1}{h} \pmb{a}$ reaches from $P_0$ at the origin to $P_1$ along the $\pmb{a}$ direction. Finally then, the planar spacing $d$ is the projection of $\pmb{v}$ along the $\pmb{H}$ direction. That is $$ d = \pmb{v} \cdot \frac{\pmb{H}}{|\pmb{H}|} = \frac{1}{|\pmb{H}|} \, . $$ For the special case of the primitive cubic lattice, the lattice vectors are all orthogonal with lattice constant $a$, i.e. $\pmb{a} = a \hat{\pmb{x}}$ and so on, and the reciprocal lattice vectors are $\pmb{a^*} = \frac{1}{a} \hat{\pmb{x}}$ and so on. Therefore $|\pmb{H}| = \frac{1}{a} \sqrt{h^2 + k^2 + \ell^2}$, giving $$ d = \frac{a}{\sqrt{h^2 + k^2 + \ell^2}} \, . $$
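As a quick numerical check of the final result (my own sketch, not part of the derivation above), one can build the reciprocal lattice vectors directly and confirm that $d = 1/|\pmb{H}|$ reproduces the cubic formula; the lattice constant below is an arbitrary illustrative value.

```python
import numpy as np

def d_spacing(a_vec, b_vec, c_vec, h, k, l):
    # Reciprocal lattice vectors a* = (b x c)/V, etc., with V = a . (b x c)
    V = np.dot(a_vec, np.cross(b_vec, c_vec))
    a_star = np.cross(b_vec, c_vec) / V
    b_star = np.cross(c_vec, a_vec) / V
    c_star = np.cross(a_vec, b_vec) / V
    H = h * a_star + k * b_star + l * c_star   # normal to the (hkl) planes
    return 1.0 / np.linalg.norm(H)             # spacing d = 1 / |H|

a = 4.0                      # lattice constant of a primitive cubic lattice (arbitrary units)
a1, a2, a3 = a * np.eye(3)   # orthogonal lattice vectors
for (h, k, l) in [(1, 0, 0), (1, 1, 0), (3, 2, 6)]:
    print((h, k, l), d_spacing(a1, a2, a3, h, k, l), a / np.sqrt(h**2 + k**2 + l**2))
    # the last two numbers agree, as the special-case cubic formula predicts
```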
{ "language": "en", "url": "https://physics.stackexchange.com/questions/269087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Does the universe expand in every direction evenly? I've heard that the universe is expanding constantly and that galaxies are moving further and further away from each other because of this. However, does the universe expand in every direction evenly or does it expand in one direction more than another direction?
We believe that the universe expands in every direction evenly. Even if there is any unevenness, it is hard to see, and would only become clear at very, very large scales. Some people have combed the CMB (cosmic microwave background) and argue that there may be some evidence that things aren't perfectly even, but it's not really clear. Right now it really looks like the universe is expanding evenly in all directions, and any deviation from that is too small to be very clear. More about it here and here
{ "language": "en", "url": "https://physics.stackexchange.com/questions/269236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Velocity of satellite to crash into the earth I was reading through this post today and was very impressed by the response that was given. However, what would have to happen to the velocity in order to collide with the Earth? Velocity of satellites greater than required velocity I was think of setting up an equation as follows. If the orbit changes from a circular orbit at some height $h$ with velocity $v$, then an elliptical orbit will occur if the velocity decreases to $\lambda v$, for some $\lambda \in (0,1)$. From the post made earlier, we know that the original velocity is given by $$v_0^2 = \frac{GM}{R_E+h}$$ and the new velocity is given by $$\lambda^2 v_n^2 = \lambda^2 \Bigg ( GM \Bigg ( \frac{2}{R_E+h} - \frac{1}{a} \Bigg ) \Bigg ).$$ Therefore, solving $$\lambda^2 v_n^2 \leq \frac{GM}{R_E}.$$ Should yield a viable restriction on $\lambda^2$. But this doesn't give me what I want. A satellite should crash into the earth if it breaks through the atmosphere, i.e when $h < R_E + R_A$, where $R_A$ is the atmospheric height. How do I determine this $R_A$ from the general theory? I'm aware that the escape velocity is given by $V_E = \sqrt{\frac{GM}{R_E+h}}$.
All you need to do is calculate the perigee distance $r_p$, that is, the distance of closest approach. Then if $r_p < R_A$ your satellite will crash and burn. Once again we start from the vis-viva equation: $$ v^2 = GM\left(\frac{2}{r} - \frac{1}{a} \right) \tag{1} $$ The parameter $a$ is the semi-major axis of the ellipse, and it is related to the perigee and apogee radii by: $$ 2a = r_p + r_a $$ which turns the vis-viva equation (1) into: $$ v^2 = 2GM\left(\frac{1}{r} - \frac{1}{r_p + r_a} \right) $$ At apogee $r = r_a$ and $v = v_a$ and putting these into our new equation gives: $$ v_a^2 = 2GM\left(\frac{1}{r_a} - \frac{1}{r_p + r_a} \right) $$ And we just need to rearrange this to get the equation for the perigee distance: $$ r_p = \frac{r_a}{\frac{2GM}{v_a^{\,2}r_a} - 1} \tag{2} $$ Now let's look at your specific question. We'll call the impact radius $R$, where $R$ would be at least the radius of the Earth but a bit bigger to take into account the atmosphere. So we are looking for the orbit with perigee distance $r_p=R$. The satellite starts in a circular orbit at a radius $r_0$ so the orbital speed is: $$ v_0 = \sqrt{\frac{GM}{r_0}} $$ And we ask what happens if we reduce the velocity to $\lambda v_0$. All we have to do is take equation (2) and substitute for the new velocity $v_a=\lambda v_0$, the apogee radius $r_a=r_0$ and set the perigee radius to the collision radius $r_p=R$ and we get: $$ R = \frac{r_0}{\frac{2GM}{\lambda^2 v_0^{\,2}r_0} - 1} $$ And on substituting $v_0=\sqrt{GM/r_0}$ this simplifies to: $$ R = \frac{r_0}{\frac{2}{\lambda^2} - 1} $$ And rearranging for $\lambda$ gives: $$ \lambda = \sqrt{\frac{2R}{R+r_0}} \tag{3} $$ So given your initial circular orbital radius $r_0$ equation (3) tells you the value of $\lambda$ you need to make your satellite crash and burn.
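As a rough numerical illustration of equation (3) (my own sketch; the orbit altitude and the 100 km "top of atmosphere" figure are illustrative choices, not values from the question):

```python
import numpy as np

GM = 3.986e14            # Earth's gravitational parameter, m^3 s^-2
R_E = 6.371e6            # Earth's mean radius, m
R = R_E + 100e3          # assumed "impact" radius: top of the dense atmosphere
r0 = R_E + 400e3         # initial circular orbit radius (roughly ISS altitude)

v0 = np.sqrt(GM / r0)                # circular orbital speed
lam = np.sqrt(2 * R / (R + r0))      # velocity fraction from equation (3)

# Sanity check: plug v_a = lam * v0 back into the perigee formula (2)
v_a = lam * v0
r_p = r0 / (2 * GM / (v_a**2 * r0) - 1)
print(f"v0 = {v0:.0f} m/s, lambda = {lam:.4f}, required delta-v = {v0 - v_a:.0f} m/s")
print(f"resulting perigee = {r_p/1e3:.1f} km, target = {R/1e3:.1f} km")
```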
{ "language": "en", "url": "https://physics.stackexchange.com/questions/269494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Reflectivity and transmissivity greater than 1 In plasmonics, it is often seen that reflection coefficient and transmission coefficient are greater than 1. How is energy conservation valid in such cases?
The book "Plasmonics and Plasmonic Metamaterials: Analysis and Applications", edited by G. Shvets and Igor Tsukerman, addresses this in section 2.1: the authors clearly state that the enhanced reflectivity is a result of the presence of an inverted dye - that is, a dye with a population inversion, meaning that it can be subject to stimulated emission. In other words - Jon Custer's hunch was correct. There is no violation of the conservation of energy. The energy is coming from the pumped medium. Perhaps an easy analogy is a mouse trap. Imagine dropping a large ball on a mouse trap. As the trap closes, the ball gets "kicked" by the closing spring, and shoots away with more speed than it came in. No violation of the conservation of energy: the energy of the spring was released by the impact of the ball, and sent the ball on its way. Somebody had to set the trap before dropping the ball - the equivalent of pumping the dye.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/269824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Spontaneous symmetry breaking of gauge symmetry in 1+1 dimensions? The Mermin-Wagner theorem states that continuous global symmetries cannot be broken in two or fewer spacetime dimensions; however, I have not seen this statement applied to gauge theories. Does it apply; ie, is there a Higgs mechanism for 1+1 QFTs?
Gauge symmetry is actually not spontaneously broken in the Higgs mechanism; this is a common misconception. See What role does "spontaneously symmetry breaking" played in the "Higgs Mechanism"?. Therefore the Mermin-Wagner theorem does not apply to the Higgs mechanism, and the Higgs mechanism is possible in 1+1D.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/269924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How does gauge invariance protect the SM gauge boson masses in SUSY from divergent radiative corrections? The W and Z gauge bosons receive radiative corrections in loop from the heavy SUSY scalars. There is an argument using gauge invariance which explains how the masses remains protected. I am not able to understand how gauge invariance is protecting the masses of W and Z.
What I say below are very general facts and probably this is not the final answer you were looking for, but maybe it helps. A gauge theory (forget about SUSY for the moment) gives rise to a massless spectrum of gauge bosons and massless matter content. If you want to give mass to your gauge bosons you need spontaneous symmetry breaking terms in your Lagrangian (this means the absolute minimum of your potential is not unique). Furthermore, if you want matter particles to be massive you need to add Yukawa terms to your Lagrangian. Assuming there is no spontaneous symmetry breaking, one says "the masslessness of the gauge bosons is protected by gauge invariance" because an explicit mass term would violate gauge invariance. If SUSY is present but is not broken then your spectrum will be richer, but again, as long as your gauge invariance is not broken there is no reason to expect massive gauge bosons nor massive gauginos. Now, what happens when SUSY is there but you break gauge invariance? What happens if you break both SUSY and gauge invariance? I am sorry but I do not know the answer to either of these... it seems to me that if only gauge symmetry is broken your scalar fields (and superpartners) will pick up vacuum expectation values in such a way that particles and super-particles have the same mass. So you have masses but they have to match. In the second case I guess you will have a massive spectrum but the masses of particles and superpartners will not match. Sorry I was not more helpful :(
{ "language": "en", "url": "https://physics.stackexchange.com/questions/270008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Scalar fields and general coordinate transformations In classical mechanics, a scalar field is characterised by the fact that its value at a particular point must be invariant under rotations and reflections of coordinates. That is, one requires that $\phi'(x')=\phi(x)$, where a point, $x'$ in the new coordinate system could be related to a point, $x$ in the old one by either a rotation, $x'=Rx$ (where $R$ is a rotation matrix), or a reflection, $x'=-x$. Then, in special relativity, one requires that a scalar field must be invariant under Poincaré transformations, i.e. under Lorentz transformations and space-time translations, $x'=\Lambda x+a$, such that $\phi'(\Lambda x+a)=\phi(x)$. However, when one considers general relativity one is confronted with more general coordinate transformations. In this case, how does a scalar field transform under general coordinate transformations? Does one still require that it transforms trivially, i.e. such that $\phi'(x')=\phi(x)$?
Yes, that is the definition of a scalar field in a theory with general covariance. $\phi^{\prime}(x^{\prime})=\phi(x)$ where $x^{\prime}$ is the coordinate in the new coordinate system corresponding to a given $x$ in the original coordinate system. i.e. $x$ and $x^{\prime}$ are, in general, different values representing the same point on the manifold.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/270202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Are Newton's laws invalid in real life? One of my friends and I had an argument over this topic. He stressed the fact that in real life many forces exist, whereas in physics we deal only with ideal situations. He put forward the following arguments: (1) Newton's first law is invalid because friction exists in real life. (2) Newton's second law is invalid for the same reason. (3) Newton's third law is invalid because in a trampoline there is excessive reaction. In defence, I put forward the following argument: Newton's laws are true, but the equations have to be modified to take into account the other forces in real life. For example, if a force $F$ is applied on a body of mass $m$, and $f_s$ is the force of friction, then the equation becomes $F - f_s = ma$. Thus, we have just modified the equation $F = ma$. So basically I mean to say that we have to adjust the laws to suit our purpose. In the end, there was a stalemate between us. Even now I am confused after this argument. Please clarify my doubt.
Newton's laws are valid for all situations where velocities are small (compared to the speed of light, ie relativity is not important) and where quantum effects are negligible (mostly where objects are much bigger than elementary particles). The problem with your argument is that you and your friend are using idealized expressions for Newton's laws, not their most general form. That is completely understandable because the more general forms require mathematical concepts that are mostly restricted to physicists and mathematicians (in fact, Newton invented the calculus in order to formulate these laws). Rest assured that the laws are not just valid for those idealized situations that are expressed in terms of elementary math.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/270289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 10, "answer_id": 6 }
Does a "capacitor" for light exist, which could filter out flickering? If I have a light that is flickering at a frequency low enough to be perceived by the human eye, is there any type of material that exists that will smooth out the appearance of flickering? Similar to how a capacitor smooths the output of a rectifier?
You could argue that this is exactly what glow-in-the-dark materials do. Phosphorescent materials gather energy in the form of electrons moved to higher potentials. The result is a very lossy low-pass filter on the light received. The real issue is the lossiness. Phosphorescent materials are substantially less efficient than a capacitor is. For further exploration, consider what it means to you to "smooth out the appearance" of flickering, or what it means to "smooth" the output of a rectifier. The analogy is good for a first pass, but if you really want to get specific regarding what qualifies as a "capacitor for light" and what does not, you may have to dig deeper into properties you wish to see in said capacitor.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/270392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 8, "answer_id": 0 }
Is a light wave's amplitude stretched, along with the "red shift" stretch - making it brighter? When light waves are stretched and "red-shifted", is the amplitude of the light wave stretched as well, affecting the intensity/brightness of the light wave?
The speed of light is constant in all reference frames. This is a principle of relativity derived from Maxwell's Equations. The energy of a photon is given by $E = hf$ where $h$ is Planck's constant and $f$ is the frequency at which the photon propagates. Now picture this: Kinetic energy is the energy of motion. When you throw a ball with a certain force, you give it a certain amount of energy. If you're running forward and you throw the ball with the same force, it has more kinetic energy. If you run backwards and throw the ball with the same force, it has less kinetic energy. Now, for a photon, kinetic energy doesn't really apply, since it's massless (of course, it has energy associated with its 4-momentum, but that's more complicated than necessary). Instead, when a light ray is emitted, the motion of the emitter imparts it with more or less total energy, depending on its motion. Motion toward the observer imparts more energy, motion away imparts less. By the above equation, more energy corresponds with higher frequency (blue) and less energy corresponds with lower frequency (red).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/270464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Largest Mass Diffraction I have read "Matter-wave interference with particles selected from a molecular library with masses exceeding 10000 amu", which claims to observe diffraction patterns in objects of around 10,000 amu. What are the largest-mass objects shown to have diffraction patterns and show wave-particle duality? I have heard claims that this has been shown for small amino acids, and possibly protein strands or even small viruses, but have struggled to find any references.
According to http://hyperphysics.phy-astr.gsu.edu/hbase/nuclear/mossfe.html it takes 200,000 iron nuclei to preserve the natural line width of the 57Fe transition used in Moessbauer spectroscopy, so that gets you to approx. 11 million amu of coherently moving mass. I don't know if one can do better than this with phonon spectroscopy, but it might be possible.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/270588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How is a neutron produced in hydrogen fusion? So during a fusion reaction, a hydrogen atom (which consists of only an electron and a proton) fuses with another hydrogen atom to produce deuterium, which contains a proton and a neutron. My question is: how does the neutron come out of nowhere when we have only 2 protons as the raw materials, and where does the other neutron go? I know it could be a silly question but I searched and didn't understand the concept. So I came here to clarify my doubt. Please explain in simple language.
Most of the time, proton-proton fusion results in the brief creation of a very unstable di-proton, which immediately decays into a pair of protons. But very occasionaly, during the brief moment they are together, one of the protons will undergo a weak force interaction, changing one of its quarks from up to down, hence making a neutron, and emitting a positron and an electron-neutrino in the process. $$ p \rightarrow n + e^+ + \nu_e$$ The resulting proton-neutron pair can form a stable deuterium nucleus that is then the starting point for the production of helium. The creation of deuterium in this way is rare, because the conversion of a proton to a neutron is (a) endothermic, requiring energy; and (b) slow, moderated by the weak interaction, and thus the di-proton normally decays first. That is why the Sun will survive for about 10 billion years in total as a main sequence star, and why the centre of the Sun generates less heat per unit volume than a compost heap.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/270700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is there a case (besides light speed in any given medium) where speed is experimentally measured rather than theoretically calculated? I studied physics throughout college, but I cannot recall a single time where I directly measured the velocity of an object or force. Every time I measured the components of velocity (distance and time) rather than the actual velocity instead. This got me thinking as to whether or not there is some instance where I would have a known velocity, but not components by which to calculate it. Light, being constant within any given medium, is the only velocity I can measure. Knowing the velocity of an object could be useful for measuring distance, but I want to know if there is a way of directly measuring velocity.
The term "direct measurement" is tricky. It's easy to take a philosophic position and say no measurement is a "direct" measurement. We're always interacting something with something else to do our measurements. Accordingly, I have to play a little loose with the "direct measurement" concept. * *A speedometer measures velocity by converting it into a rotational motion, then using that to spin a magnet to cause a torque, which is measured by comparing its forces against those of a spring. *A radar measures velocity by measuring the Doppler shift caused by the velocity of the object. *One might argue you measure the velocity of cars on the highway as you accelerate up an onramp, trying to predict where they will be.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/270791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Is this constraint holonomic or non-holonomic? $$f(q,q^\prime, t) = 0, ~\mathrm df = \frac{\partial f}{\partial q}~\mathrm dq + \frac{\partial f}{\partial q^\prime}~\mathrm dq^\prime+ \frac{\partial f}{\partial t}~\mathrm dt = 0$$ I really want to know whether this constraint is holonomic or non-holonomic. (As far as I know, a non-holonomic constraint has a velocity term and is non-integrable. But this formula does not depend on a path, because it is a total differential form.) Note: the prime denotes a time derivative.
If $f$ is defined such that $f=0$ at all points, then it is a holonomic constraint. One example is this: $f(x,y,z)=x^2+y^2+z^2-r^2=0$ which constrains motion to the surface of a sphere of radius $r$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/270880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the relation between Work and Power? If I lift 40 kg 6 times by 60 centimeters, is that 40 × 6 × 0.6 × 9.82 joules? And then if I take the time into account, I will know the watts? Or did I misunderstand? The background is that I read that you can use the work from exercising to generate power, and I find that one set that I do is approximately 50 watts. So if I could theoretically find a way to use the work I do, that would be 50 watts?
No you did not misunderstand, your calculation is correct. You have done 1414 J of work in lifting the weights. Assuming this took about 30 s your average rate of working was 50 W. If you were to keep this up and you convert the energy into electricity, you could probably keep a desk lamp glowing!
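A minimal numeric sketch of the same calculation (the 30 s duration is an assumption, as in the answer above; the other numbers come from the question):

```python
g = 9.82          # m/s^2, value used in the question
mass = 40.0       # kg
lifts = 6
height = 0.6      # m per lift
duration = 30.0   # s, assumed time for the set

work = mass * g * lifts * height   # total work against gravity, in joules
power = work / duration            # average power, in watts
print(f"work = {work:.0f} J, average power = {power:.0f} W")
# -> about 1414 J and ~47 W, consistent with the ~50 W estimate above
```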
{ "language": "en", "url": "https://physics.stackexchange.com/questions/271015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why can any general motion of a rigid body be represented as translation + rotation about the center of mass? (1) Why can any general motion of a rigid body be represented as translation + rotation about the center of mass? (2) I am beginning to read rotational dynamics and my textbook states this fact without proof. I am wondering: is this fact only true for the center of mass? (3) Then, the phrase "rotation about the center of mass" strikes me as vague. Rotation about which axis?
The fact that the motion of a rigid body can be represented as a translation and a rotation about the center of mass is a consequence of a mathematical theorem that states that every function from $\mathbb{R}^3$ to $\mathbb{R}^3$ such that $d(x,y) = d(f(x),f(y))$ for all $x, y$ (where $d(x,y)$ means the distance between $x$ and $y$) can be expressed uniquely as the composition of a translation and a rotation about a certain axis. You can find the proof of this theorem in Peter Lax: Linear Algebra, in the chapter on kinematics and dynamics (some prior knowledge of linear algebra is required). After studying the proof, you will realize that in fact, for every point in space, there exists an axis passing through that point such that the motion of the rigid body can be expressed as a translation (that depends on the point you have chosen) and a rotation around that axis.
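As a small numerical illustration of the theorem (my own sketch; the rotation angle, axis, and translation below are arbitrary choices), one can check that a map of the form $f(x) = Rx + t$ preserves distances and that $R$ and $t$ can be recovered uniquely from the map:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # rotation about the z-axis
t = np.array([1.0, -2.0, 0.5])                         # translation

def f(x):
    """A rigid motion: rotate, then translate."""
    return R @ x + t

# Distance preservation: d(f(x), f(y)) == d(x, y) for random points
for _ in range(5):
    x, y = rng.normal(size=3), rng.normal(size=3)
    assert np.isclose(np.linalg.norm(f(x) - f(y)), np.linalg.norm(x - y))

# Uniqueness: recover t from f(0) and R from how f acts on the basis vectors
t_recovered = f(np.zeros(3))
R_recovered = np.column_stack([f(e) - t_recovered for e in np.eye(3)])
print(np.allclose(R_recovered, R), np.allclose(t_recovered, t))   # True True
```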
{ "language": "en", "url": "https://physics.stackexchange.com/questions/271109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Ground state of local parent Hamiltonians and invariance under local unitaries Assume that a finite-dimensional pure state $|\psi\rangle\in \mathcal{H}\simeq \mathbb{C}^m$, $m<\infty$, is the (unique) frustration-free ground state of a local parent Hamiltonian and suppose that the locality notion is given in terms of a connected set of neighbourhoods $\{\mathcal{N}_k\}$. My question is the following one: Is it true that any unitary $U$ satisfying $$U|\psi\rangle\langle \psi|U^\dagger=|\psi\rangle\langle \psi|$$ can be decomposed into a finite product of invariance-satisfying unitaries acting only on the neighbourhoods $\{\mathcal{N}_k\}$, that is $U$ can be written as $U=\prod_{i=1}^N U_{\mathcal{N}_{k_i}}$, where every $U_{\mathcal{N}_{k_i}}$ acts only on the neighbourhood $\mathcal{N}_{k_i}$ and it is such that $U_{\mathcal{N}_{k_i}}|\psi\rangle\langle \psi|U_{\mathcal{N}_{k_i}}^\dagger=|\psi\rangle\langle \psi|$ ? Any (partial) answer/comment/reference is very welcome. Thanks in advance.
Here's one idea: Say $U|\psi\rangle = |\psi\rangle$ for $UU^\dagger = U^\dagger U = I$. Then if we consider the exponential form $U = \exp(iG)$ with $G = G^\dagger$ as usual, $|\psi\rangle$ must necessarily be in the kernel of $G$, $G|\psi\rangle = 0$. On the other hand, having $U$ of the product form $U = \prod_k{U_{{\mathcal N}_k} }$ for mutually disjoint neighborhoods is equivalent to $G = \sum_k{G_{{\mathcal N}_k}}$ with $[G_{{\mathcal N}_j}, G_{{\mathcal N}_k}] = 0$ for any ${\mathcal N}_j$, ${\mathcal N}_k$ involved. But obviously not all $G$ that have $|\psi\rangle$ in their kernel are of this decomposable form. So not all $U$ such that $U|\psi\rangle = |\psi\rangle$ can be of the product type $U = \prod_k{U_{{\mathcal N}_k} }$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/271230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Negative number on weighing scale if I move my hand above it? So, I wondered if my electrical weighing scale could detect the air pressure exerted by something flat like a plastic plate or even my hand. I noticed that if I move my hand with my arm vertical to the scale, 2-3 grams are detected at most, as expected since I move air toward the scale. If I try this with my arm parallel to the scale I can get it to detect -1 gram, if I do it quickly enough. I thought that maybe this happens because I "move away" some air from above it/decrease pressure or something? Arm vertical: Arm parallel:
There are 3 effects. First is the orientation of the balance. Moving around a floor that flexes or deforms, even slightly as you move your weight around, is a no-no. (similarly if the balance isn't isolated from you putting or removing weight from the table it is on). Second is that YOU carry an electric charge. So could a plastic plate. Does the reading persist? Or rapidly diminish? Charge will depend a lot on humidity. Different readings with same motion but different humidity suggests it is not air motion. Should be easy enough to dramatically change humidity in a room by boiling some water (if the room allows hotplates or is a kitchen...) placing pans of hot water near the balance should also reduce the charge your carry. Hmmm. I guess if you took an aluminum foil "hand" the size of yours, grounded it, and moved it in the same way as your hand, that you'd see the effect of wind alone. Air currents will generally diminish more rapidly than electrical charge, but the only way to tell for sure is to use a grounded conductor to move the air with.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/271510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why aren't trigonometric functions dimensionless regardless of the argument? Consider this equation: $$y = a\sin kt$$ where $a$ is amplitude, $y$ is displacement, $t$ is time and $k$ is some dimensionless constant. My instructor said this equation is dimensionally incorrect because the dimension of $[kt] = [\text{T}^1]$ and since $\text{angles}$ are dimensionless, we can conclude that it is dimensionally incorrect. I don't understand why it is so. Why do we need to check the dimensional homogeneity of the term inside the $\sin$ to conclude whether the equation is dimensionally correct or not? Why isn't the whole sine function dimensionless $(\sin kt = [\text{T}^0])$ regardless of the dimension of the argument inside, as the range of the sine function is $[-1, 1]$?
All maths functions can only be used with dimensionless arguments. The reason is, quite boringly, that these functions are only defined for real numbers, or perhaps integers, complex numbers, or real vectors. But time is none of these. The only exception you can make is for homogeneous functions, especially linear functions. A linear function allows you to pull physical units (which are basically just multipliers) in and out of the argument, so you can use an $\mathbb{R}\to\mathbb{R}$ function as well as a function mapping times to times (or, at least, time-differences to time-differences). With a homogeneous function, you can do the same, but may pick up some power on the unit in the process.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/272599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 2 }
Does an object float more or less with more or less gravity? This might be a stupid question, but I'm a newbie to physics. An object less dense than water (or any other fluid, but I'm going to use water for this example) floats normally on Earth when placed in water. But if the object was placed in a hypothetical place where there is no gravity and there is air, it would not float on water. So if the object was placed in water on a planet with more gravity than Earth, would it float more or would it float less, or float the same as on Earth? Would it float more because it doesn't float without gravity, but it does float with Earth gravity, therefore it'd float even more with more gravity. Or would it float less because more gravity would pull the object down, so it won't float as much. Or would it'd float the exact same as on Earth because the above two scenarios cancel each other out. EDIT: By "float more," I mean it rises to the surface of the water faster, and it takes more force to push it down. By "float less," I mean it rises to the surface of the water slower, and it takes less force to push it down.
The object would actually float exactly the same for both values of $g$. Let $V$ be the volume of the body, $d$ its relative density, and $V'$ be the volume inside water. Then for equilibrium of the body, $V \cdot d \cdot g=V' \cdot 1 \cdot g$ So, $V'/V$ is independent of acceleration due to gravity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/272918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 2 }
Why is the surface of a liquid slanted when it is accelerated? Consider a uniformly horizontally accelerated tube of water. I know that the fluid experiences a pseudo force in addition to its own weight, so that it reaches equilibrium in the below diagram. But why can't the water also exert a force like this, so it can be in equilibrium horizontally?
For the sake of simplicity, let's assume it's a cubical container of water. The concept remains the same. Look at the "free body diagram" of the water itself. As you noted, one side of the water is higher than the other. The surface is slanted. The water is accelerating, so we know that there must be a net horizontal force acting on it. We also know that the only horizontal forces that are possible in our case are due to the pressure forces acting on the water via $P=\gamma H$. The only way the water can accelerate is if the force on one side is greater than the force on the other side. The only way this can happen is if the pressure on one side is greater than the pressure on the other side. The only way this can happen is if one side of the water is higher than the other. Again, $P=\gamma H$. The forces acting on the water in the $x$-direction can be derived using calculus.
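To make the slope explicit (a standard result, not spelled out in the answer above): consider a thin horizontal slab of water of length $L$ and cross-sectional area $A$ whose two ends sit at depths differing by $\Delta h$ below the free surface. With $P = \gamma H = \rho g H$, the net horizontal pressure force on the slab is $\rho g A\,\Delta h$, and Newton's second law for the slab of mass $\rho A L$ gives $$\rho g A\,\Delta h = \rho A L\, a \quad\Rightarrow\quad \tan\theta = \frac{\Delta h}{L} = \frac{a}{g},$$ so the free surface tilts at the angle whose tangent is the ratio of the horizontal acceleration to $g$.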
{ "language": "en", "url": "https://physics.stackexchange.com/questions/273039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
How radioactive is uranium? Look at this video: People face uranium directly. Does this mean the radioactivity of uranium is very weak? Because its half-life is very long? Personally, I would never dare to touch any radioactive element. I also remember seeing people holding a big chunk of uranium in hand. See here
There are two sides to this question. Naively, the answer would be "bah, not much" because it is not terribly active and neither alpha nor beta radiation is really dangerous. The former (which occurs early in the decay chain) is absorbed even by a few centimeters of air, and the latter (which appears later in the decay chain) is unable to penetrate the callus layer of your skin. The callus is dead tissue either way, so radiation doesn't really do anything to it. However, uranium is directly toxic (nephro- and hepatotoxic, and causes neurological effects) and finally decays to an accumulating neurotoxic element (lead). The toxicity is generally much more severe than the radioactivity. Uranium dust can very well be inhaled if no precautions are taken (not uncommon in fertilizer production). But what's worse, your body happily absorbs uranium as "calcium" and puts it in your bone matrix. Now, you will remember I just said alpha and beta emitters are pretty harmless. Alpha and beta emitters inside your body, and especially near highly active tissue (such as certain organs, but also... bone marrow), are extremely harmful. Further, if you look at the decay chain, you will notice quite a few elements appearing, some of which (radon) are gases which you can neither smell nor see but nevertheless inhale and absorb. Polonium... remember what substance it was the KGB used to murder Alexander Litvinenko? Therefore, from a biological point of view, the answer must be: "very". You can certainly handle uranium safely with simple rubber gloves and behind a suction (or wearing a breath mask), but otherwise playing with it is not such a terribly good idea.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/273208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 0 }
Redshifting light in an expanding universe It's evident and well known that light traveling across an expanding FLRW universe is redshifted via an equation: $$\frac{\lambda_{arriving}}{\lambda_{emitted}}=\frac{a_{now}}{a_{then}}$$ Where $a$ is the cosmological scale factor when the light is emitted and observed (denoted then and now respectively). Let's say the light was traveling through a waveguide over that same distance. Calculations shouldn't be effected, and the redshift would follow the same equation. If we now take that same waveguide and make it a large circle of the same total length, would that effect the redshift equation? I don't see how, but maybe someone here knows better. If light is still redshifted the same it seems we can shrink the size of the waveguide arbitrarily down to a small local system. Does cosmological redshift happen locally? I've found arguments that energy isn't lost to bound systems lacking.
The cosmological expansion can be seen only with very large structures. Its effective "force" is so weak that even galaxies are not affected; gravity keeps them bound and invariant. Thus, the Andromeda galaxy, which is bound to the Milky Way galaxy, is actually falling towards us and is not expanding away. Within the Local Group, the gravitational interactions have changed the inertial patterns of objects such that there is no cosmological expansion taking place. Once one goes beyond the Local Group, the inertial expansion is measurable, though systematic gravitational effects imply that larger and larger parts of space will eventually fall out of the "Hubble Flow" and end up as bound, non-expanding objects up to the scales of superclusters of galaxies. Structures bound by the stronger interactions, like the electromagnetic and strong forces, are of course not affected. The raisin bread analogy helps to understand this (picture an animation of an expanding raisin bread model): as the bread doubles in width (depth and length), the distances between raisins also double. The dough is expanding, but the raisins are stable in size because the electromagnetic bindings are not affected by the yeast in the dough. The waveguide you are envisaging is bound together by the electromagnetic force, and any interactions with electromagnetic waves will be within the "raisin".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/273390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Are carbon nanotubes superconductive? I've heard all kinds of various properties ascribed to carbon nanotubes, from amazing (conventional) conductors that work in a different way to metals, to semiconductors with tunable properties and properties that very with mechanical manipulation. How good of conductors are they, under normal and under extreme (space-based) conditions? Can they become superconductive?
As per Wikipedia While there have been reports of intrinsic superconductivity in carbon nanotubes, many other experiments found no evidence of superconductivity, and the validity of these results remains a subject of debate. From Sciencemag Investigation of the magnetic and transport properties of single-walled small-diameter carbon nanotubes embedded in a zeolite matrix revealed that at temperatures below 20 kelvin, 4 angstrom tubes exhibit superconducting behavior manifest as an anisotropic Meissner effect, with a superconducting gap and fluctuation supercurrent. The measured superconducting characteristics display smooth temperature variations owing to one-dimensional fluctuations, with a mean-field superconducting transition temperature of 15 kelvin. From New Scientist Tiny tubes of carbon may conduct electricity without any resistance, at temperatures stretching up past the boiling point of water. The tubes would be the first superconductors to work at room temperature. Each nanotube is typically a millionth of a metre long, several billionths of a metre in diameter and with walls a few atoms thick. The nanotubes cling together in oblong bundles about a millimetre in length. The researchers did not see zero resistance in their bundles. They think this is because the connections between the tiny tubes never become superconducting. But they did see more subtle signs of superconductivity within the tubes themselves. For example, when the researchers put a magnetic field across a bundle at temperatures up to 400 kelvin ($127°\mathrm{C}$), the bundle generated its own weak, opposing magnetic field. Such a reaction can be a sign of superconductivity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/273631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is optical density? I'm a zoology minor and we are doing protein estimation by colorimetric method. I have stumbled upon a term 'Optical density'. I don't understand the term well. Is it a measure of the extent of light that can pass through a particular object? I've checked a related question of this community and it doesn't solve my question completely.
You're a little confused probably because there are two usages of the words "optical density". The first usage is as a synonym for refractive index, as described in the answers to the related question you cite. This is the commoner usage in physics. The second usage is the total attenuation afforded by a protective screen, neutral density filter, laser goggles or the like. $ODx\; \lambda=y$ or even $ODx\; y$ means that the filter, goggles etc afford a power attenuation factor of $10^x$ at a light wavelength of $y$ or light wavelength range $y$. That is, the power transmitted through the filter is $10^{-x}$ of the incident power when the wavelength is as stated. For example, laser goggles marked $OD7\;488{\rm nm}$ means that the goggles will reduce incident power at 488nm by a factor of $10^7$. Goggles marked with a lone wavelength rather than a wavelength range are always meant for use with a particular kind of laser. For example, the $OD7\;488{\rm nm}$ goggles are meant for use with an argon ion laser. You cannot rely on them using another source of wavelength 485nm, for example. For generic use, a wavelength range must be specified. So, for example, one often sees $OD7\;450{\rm nm} - 510{\rm nm}$, meaning, pretty obviously, goggles that will give you seven orders of magnitude of attenuation over the whole range $450{\rm nm} \leq \lambda \leq 510{\rm nm}$.
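A tiny numeric sketch of the second convention (my own illustration):

```python
def od_to_transmittance(od):
    """Fraction of incident optical power transmitted by a filter of the given optical density."""
    return 10.0 ** (-od)

for od in [0.3, 1, 3, 7]:
    print(f"OD{od}: transmitted fraction = {od_to_transmittance(od):.1e}")
# OD7 goggles at their rated wavelength pass only 1e-7 of the incident power.
```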
{ "language": "en", "url": "https://physics.stackexchange.com/questions/273740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
Spring force on both sides of spring I am a little confused about springs. I just wanted to know: if I pull an ideal spring of spring constant $k$ such that the spring has been symmetrically pulled and its total elongation comes out to be $x$, then would the force on one side be $$F=kx$$ or $$F=kx/2$$ I am a little bit confused and hence I resorted to asking it here.
The other answers simply quote Hooke's law, the static relationship between displacement and force. But if you consider the question more deeply, the spring has distributed mass and distributed compliance, and so a spring all by itself is dynamic and therefore does not propagate force instantaneously, as Hooke's law by itself would imply. So if you push (or pull) on a spring you will displace the coils (or leaves or whatever), and the same force will be seen on the opposite side of the spring - however, delayed by the dynamic response of the spring.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/273829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Metal temperature change I have a pipe that’s $70^{\circ}\text{F}$, in a constant room temperature of $80^{\circ}\text{F}$. I would like an equation to solve for pipe temperature ($\text{F}$) after X amount of time. Material: Iron Height: $100~\mathrm{cm}$ Diameter: $3.81 ~\mathrm{cm}$ Based on this equation, using Newton’s Law of Cooling, I will determine the $k$ (rate of change in temperature/time). NOTE: I understand that I’m not accounting for all influences. Nevertheless, using the provided facts I would like the best solution.
I think you may use the following equation: $\frac{dT}{dt}\propto-(T-T_0)$, i.e. $\frac{1}{T-T_0}dT=-k\,dt$, which integrates to $T-T_0=Ce^{-kt}$. From a graphical method (plotting $\ln|T-T_0|$ against $t$ for measured data) you may estimate the value of $k$.
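As a rough illustration of how the integrated law could be used for the pipe in the question, here is a small sketch; the rate constant k below is a made-up placeholder and would have to be fitted from measured temperatures:

```python
import math

def pipe_temperature(t_minutes, T_room=80.0, T_initial=70.0, k=0.05):
    """Newton's law of cooling/warming: T(t) = T_room + (T_initial - T_room) * exp(-k*t).
    k (per minute) is a placeholder; estimate it from data, e.g. from the slope of
    ln|T - T_room| versus t."""
    return T_room + (T_initial - T_room) * math.exp(-k * t_minutes)

for t in (0, 10, 30, 60):
    print(t, round(pipe_temperature(t), 2))
```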
{ "language": "en", "url": "https://physics.stackexchange.com/questions/274030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does empty space have energy? My physics friend suggested that "the answer to why matter exists in the universe" is because all massive particles are just the fabric of space excited into little packets. To illustrate, imagine a blanket on the ground. Then, pinch a small bit of the blanket and twist it. This is a particle that has mass. It was intriguing to hear this (he's only studied up through Freshman year of college physics), but there are clear flaws (i.e. angular momentum of a "particle" tied to a "blanket"??). Regardless, it made me wonder about vacuums. Is there any theory that suggests that a vacuum actually has energy in some form or another?
After looking at the other answers and comments, I understand the question a little better now, so I will take a shot at it. The question is about the own energy of vacuum/space, not about the energy spread through space from the big bang etc. There is a difference between the two. Just like many other questions, this also needs a look at gravity. Gravity is the curving of space in the presence of mass/energy, right? If space curves in the presence of mass, then mass and space must be interacting in some way. Without interaction, the curving would not be possible. If space is capable of interacting with mass/energy, it has to have its own properties. Can something have properties without having any energy? I would think empty space has its own energy; otherwise gravity wouldn't be there.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/274104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
What is the difference between these two ways to calculate average velocity? Average velocity: $$v_{\rm avg,1}=\frac{v_{\rm final}+v_{\rm initial}}{2}$$ and average velocity: $$v_{\rm avg,2} =\frac{\rm total\;displacement}{\rm time \;taken}=\frac{\Delta x}{\Delta t} $$ What is the difference between them and when do we use them?
The average velocity of a particle during some elapsed time $\Delta t$ is, in words, the constant velocity that gives the same displacement in the same elapsed time. Mathematically, the average velocity is given by $$\mathbf{v}_{avg} = \frac{\Delta \mathbf{r}}{\Delta t}$$ where $\Delta \mathbf{r} = \mathbf{r}_f - \mathbf{r}_i$ is the displacement vector and $\Delta t = t_f - t_i$ is the elapsed time during which the displacement took place. For example, consider the case that a particle moves with constant velocity $1 \mathrm{\frac{m}{s}} \hat{\mathbf{x}}$ for 4 seconds and then with constant velocity $1 \mathrm{\frac{m}{s}} \hat{\mathbf{y}}$ for 3 seconds. The displacement vector for the 7 seconds of motion is, by inspection, $$\Delta \mathbf{r} = (4\hat{\mathbf{x}} + 3 \hat{\mathbf{y}})\;\mathrm{m}$$ and so, the average velocity during the 7 seconds is $$\mathbf{v}_{avg} = (\frac{4}{7}\hat{\mathbf{x}} + \frac{3}{7} \hat{\mathbf{y}})\;\mathrm{\frac{m}{s}}$$ Clearly, if another particle had this constant velocity and started at the same initial point at the same time as the first particle, the two would reach the same final point at the same time. On the other hand, the quantity $$\frac{\mathbf{v}_f + \mathbf{v}_i}{2}$$ is an average of two velocities, which is not particularly useful or meaningful, not an average velocity which has a clear and useful meaning. There are two special cases: (1) In the case that the particle spends half of the elapsed time at a constant velocity $\mathbf{v}_1$ and spends the other half of the elapsed time at a constant velocity $\mathbf{v}_2$, then the average velocity is just the average of the two velocities. (2) In the case that the particle has constant acceleration, the velocity increases linearly with time and so the displacement per unit time (working in 1-D) is $$\frac{\Delta r}{\Delta t} = v_i + \frac{(v_f - v_i)}{2} = \frac{v_i + v_f}{2}$$ and thus, the average velocity is just the average of the initial and final velocities.
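For what it's worth, here is a small sketch of the worked example above, showing that the displacement-over-time definition and the naive average of the two velocities give different answers (numbers taken from the example; the tuples are just a lightweight way to represent 2-D vectors):

```python
# Average velocity = total displacement / elapsed time, for the 4 s + 3 s example above.
dt1, v1 = 4.0, (1.0, 0.0)   # 4 s at 1 m/s along x
dt2, v2 = 3.0, (0.0, 1.0)   # 3 s at 1 m/s along y

displacement = (v1[0] * dt1 + v2[0] * dt2, v1[1] * dt1 + v2[1] * dt2)   # (4, 3) m
elapsed = dt1 + dt2                                                     # 7 s

v_avg = (displacement[0] / elapsed, displacement[1] / elapsed)          # (4/7, 3/7) m/s
naive = ((v1[0] + v2[0]) / 2, (v1[1] + v2[1]) / 2)                      # (0.5, 0.5) m/s
print(v_avg, naive)   # different vectors: only the first is the average velocity
```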
{ "language": "en", "url": "https://physics.stackexchange.com/questions/274300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
What exactly is a "volt"? What exactly is a volt? So I Studied the chapter "electricity" in the month of April and got introduced to the concept of "volt". The concept was too unclear for me so I tried to ask some questions to my teachers and to do some searches on google and watch some videos. I observerd that noone is giving me a suitable answer. Everyone just gives the analogy of a water bottle with holes in it. I don't think that a circuit is a water bottle. I didn't want to ask this question on stack exchange but its getting too confusing and I just couldn't grasp it. What exactly is volt? Is it energy? Because everyone talks about it in a way which makes it look like it is something that affects the flow of electricity. I need to ask what exactly is something?
Let $\mathbf{E}(\mathbf{r})$ be the electric field: the work done by the field on a unit charge $q$ along the path $\gamma$ is, by definition, $$ W_{\gamma} = \int_{\gamma}\textrm{d}\mathbf{r}\cdot\mathbf{E}(\mathbf{r}). $$ If the work done by the field happens not to depend on the path $\gamma$ but only on its endpoints, we say the field is conservative and express the associated work done as the difference of a function evaluated at the endpoints, namely $$ W_{\gamma} = V(A) - V(B) = \int_{\gamma}\textrm{d}\mathbf{r}\cdot\mathbf{E}_{\textrm{cons}}(\mathbf{r}) $$ for conservative fields $\mathbf{E}_{\textrm{cons}}(\mathbf{r})$. Calculating the above along paths reaching any point in space, one defines the function $V(x)$, referred to as the potential of the field. Let us take the particular case of a conservative constant electric field. The associated work done along a path $\gamma$ is then expressed by the difference of potential $$ V(A) - V(B) = |\textrm{E}|\,\Delta r. $$ For example, a uniform field of modulus 1 N/C moving a unit charge of 1 C through a distance of 1 m does 1 J of work; the corresponding difference of potential is what we call 1 volt.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/274647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 1 }
Can gravitational wave create anti-gravity, i.e. repulsive gravity? A very layman question as in title. Like every wave having a negative side, can a gravitational wave have anti-gravity. To put it in different words, a gravitational wave passing through a complete vacuum, if in positive cycle, can create a denser space-time, in it's negative cycle, create a rarer space-time?
Gravitational waves are not cycles of compression and rarefaction like sound waves. They're transverse, and there is no such thing as compression or expansion of spacetime. There is curvature of spacetime. In a gravitational wave, the curvature is what oscillates. In general relativity, the precise definition of what we mean by attractive or repulsive gravity is complicated, and difficult to express without some mathematics. We express this definition in a set of various criteria called energy conditions. The energy conditions are all automatically obeyed in a vacuum, so gravitational waves do not contain repulsive gravity.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/274735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
Is the speed of light dictated by Vacuum Permittivity, Vice Versa or Neither? Instinct, and my limited knowledge of Maxwell's Equations and the Wave Equation tell me that the first statement is true. By my interpretation, the relationship between the frequencies and wavelengths of e.m. waves (and hence the speed of light) is dictated by the relationship between electric and magnetic fields, which is in turn dictated by Vacuum Permittivity, which I believe (possibly in error) to be an inherent property of our universe. Is this right, or is the speed of light somehow dictating Vacuum Permittivity? Or have I got something totally wrong?
At first sight it might seem plausible that permeability and permittivity are fundamental constants of spacetime which together form the constant speed of light. However, the speed of light is the more fundamental parameter. The speed of light c is not limited to electromagnetic waves; it is equally the speed of gravitational waves, which are not at all electromagnetic. Thus it is easy to see that $c$ is more fundamental than $ε_0$ and $µ_0$. For an intuitive model you can think of EM waves as only one form among others of propagation at the universal speed limit, however with the particularity that they are based on two kinds of forces, so that the speed limit has to be distributed among two kinds of forces (electric and magnetic), giving $ε_0$ and $µ_0$.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/274849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Why doesn't Helium freeze at 0K? I have read that Helium does not freeze at absolute zero under normal pressures. How could this be possible given that the absolute zero is the lowest attainable temperature and at that temperature, all random movements of the atom stop? Shouldn't the atoms just stop vibrating and solidify instantly? Why do they possess kinetic energy at absolute zero?
The key point here is the following: the contribution from the zero-point energy is seven times larger than the depth of the attractive potential between two He(4) atoms. Therefore, the zero-point energy is enough to destroy any crystalline structure of He(4) that the material would otherwise form. A more rigorous answer can be found here in this Answer.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/274910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 4, "answer_id": 2 }
Microscopic interpretation of pressure in liquids Pressure can be explained at microscopic level for a gas with kinetic theory of gases. From that the pressure $p$ is linked to the velocity of molecules (and it is caused by the high amount of collisions in the gas). $$p=\frac{m N_a}{V} \frac{\bar{v}^2}{3}$$ Where $m$ is the mass of a molecule, $N_a$ Avogadro's number, $V$ volume, $\bar{v}^2$ the quadratic average velocity of molecules. Nevertheless I did not find a similar microscopic interpretation in the case of liquids. In that case molecules are not as free as in a gas, so it looks like pressure is not linked to the higher or lower velocity of molecules. So what is responsible for liquid pressure, at a microscopic level? Is there a quite simple microscopic description for pressure in liquids, as there is in the kinetic theory of gases?
Unlike a gas a liquid has a finite volume at zero pressure i.e. a liquid floating in vacuum would not expand beyond a certain volume. This volume is determined by the interatomic/intermolecular forces in the liquid. If you look at the potential energy between two liquid molecules as a function of intermolecular distance $r$ it will be something like: (picture from here) And the zero pressure volume will be the one where the intermolecular distances are at the minimum of the potential energy. This will be your zero pressure volume. If you compress the liquid you push the molecules up the higher potential energy curve towards smaller $r$, and that takes work, i.e. a force, which is why the compressed liquid has a pressure. There is some effect of molecular motion, and indeed that's why liquids (usually) expand when you heat them. The potential well is not symmetric, so as you add thermal energy the mean intermolecular distance moves to larger $r$. However the main mechanism for sustaining a pressure is the intermolecular potential.
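As an illustration of the "minimum of the pair potential sets the zero-pressure spacing" point, here is a small sketch using a Lennard-Jones-type potential; the parameters eps and sigma are arbitrary units, not values for any particular liquid:

```python
# Illustrative Lennard-Jones pair potential U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
def lj(r, eps=1.0, sigma=1.0):
    return 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Find the minimum numerically; analytically it sits at r = 2**(1/6) * sigma.
rs = [0.9 + 0.001 * i for i in range(1000)]
r_min = min(rs, key=lj)
print(round(r_min, 3), round(2 ** (1 / 6), 3))  # both ~1.122: the zero-pressure spacing
```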
{ "language": "en", "url": "https://physics.stackexchange.com/questions/275018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Working of potentiometer So can you simply explain why the current in the lower loop increases as we move towards A, and why it increases in the upper loop as we move towards X? Please give an explanation based upon what changes occur in the voltage between A and X. Also, why is there 1 V and -4 V potential at the terminals?
Presumably there is a resistive wire between A and X (implied by the question). The two batteries have their positive terminals connected at point P - I am going to assume that part of the wire has no resistance. This gives us the following picture: As the contact point (let's call it C, not shown explicitly in the diagram) moves closer to A, the resistance of the wire that is supporting the 1V potential difference gets smaller, and the current in the ammeter will increase. As we move C closer to B, there will be more resistance in the wire. Note that this is independent of the voltage across the upper circuit - we can ignore this because we have no information about resistance in the lower circuit and assume it to be zero. Here is a more formal analysis of the situation: I am assuming the only resistance in the circuit is provided by the potentiometer wire, where $R_1+R_2=R$, some constant value (which is not given, but which we don't need to know). This means that we can write $R_2 = R - R_1$ which leaves us with just one variable related to the position of the potentiometer. Now according to Kirchhoff's law, we can write the currents in the circuit as the sum of current $I_1$ in the upper loop, and $I_2$ in the lower loop. Because the voltage drop around each of the loops must be zero, it then follows that $$V_1 - I_1 (R-R_1) - (I_1+I_2) R_1 = 0\\ V_2 - (I_1+I_2)R_1 = 0$$ From the second of these equations it follows that $(I_1+I_2)R_1 = V_2$ , the voltage at point $C$. Substituting that into the first equation we find $$V_1 - I_1(R-R_1) - V_2 = 0$$ which we can solve for $I_1$: $$I_1 = \frac{V_1-V_2}{R-R_1}$$ In other words, as the slider moves further to the right, the current $I_1$ is increasing. You can easily see that this is so because point $C$ is fixed at -1 V, so with a constant voltage difference between B and C, and a decreasing resistance $R_2 = R-R_1$, the current $I_1$ must get bigger. We can equally solve these equations for $I_2$, by substituting for $I_1$ in the second equation: $$V_2 - \left(\frac{V_1-V_2}{R - R_1}+I_2\right)R_1 = 0\\ I_2 = \frac{V_1-V_2}{R-R_1} - \frac{V_2}{R_1}$$ Because $V_1 = 2 V_2$ this equation becomes nicely symmetrical: $$\begin{align} I_2 &= \frac{V_2}{R-R_1} - \frac{V_2}{R_1}\\ &= \frac{V_2 (2R_1 - R)}{R_1(R-R_1)}\end{align}$$ This shows that the current $I_2$ will be zero when the slider is exactly in the middle - that is, when $R_1 = R - R_1$. Again, this is easily explained: if you consider the slider to be disconnected, then the voltage at the mid point would be exactly -1 V; connecting a slider that also has a potential of -1 V will not cause a current to flow. Moving the slider away from the center will cause the (absolute value of the) current to increase - and as you can see, the sign changes as you go through zero (the midpoint of the wire).
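A quick numerical sketch of the two loop-current formulas derived above (using the answer's relation V1 = 2*V2; the actual values of V2 and R below are arbitrary and only set the scale of the currents):

```python
# Loop currents as the slider resistance R1 varies, using the formulas from the answer.
V1, V2, R = 2.0, 1.0, 10.0   # volts, volts, ohms (illustrative values, with V1 = 2*V2)

def currents(R1):
    I1 = (V1 - V2) / (R - R1)               # upper-loop current: grows as R1 -> R
    I2 = (V1 - V2) / (R - R1) - V2 / R1     # ammeter current: zero at R1 = R/2
    return I1, I2

for frac in (0.25, 0.5, 0.75):
    I1, I2 = currents(frac * R)
    print(frac, round(I1, 3), round(I2, 3))  # I2 changes sign through zero at the midpoint
```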
{ "language": "en", "url": "https://physics.stackexchange.com/questions/275135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Variation of double slits experiment Setup goes something like this: the laser gun fires only 1 photon each time and the only way for the photon to appear on the hidden screen is for them to be reflected from the 2 narrow mirrors.(see image below) I was watching a ping pong match and suddenly this pops into my mind. Will there be any interference pattern based on my setup? I argue that 1 photon now does not have the chance to interfere with itself like the double slits so there will not be any zebra pattern showing up but I might be wrong. Also if I coat both mirrors with Polaroid so that one mirror is left circularly polarized while the other is right circularly polarized, what will appears on the hidden screen if any?
As an experimental physicist I would advise you to do the experiment. What the theory predicts for single photons depends on the boundary conditions that the wavefunction of the photon obeys for the particular experiment. This wavefunction is complex and carries the phase information for building up the classical electromagnetic wave. It should not be surprising, because both the classical wave and the photons it is composed of are solutions of the same Maxwell equations, in the case of the photon treated as operators on the wavefunction. Thus, if interference is seen in a classical light experiment, the single-photon distributions will build up to the interference pattern. The classical em distribution is the probability density of finding a photon at a screen, and thus it is the (modulus) square of the wavefunction of the individual photon. For links look at this answer of mine.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/275313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Is $\pi^2 \approx g$ a coincidence? In spite of their different dimensions, the numerical values of $\pi^2$ and $g$ in SI units are surprisingly similar, $$\frac{\pi^2}{g}\approx 1.00642$$ After some searching, I thought that this fact isn't a coincidence, but an inevitable result of the definition of a metre, which was possibly once based on a pendulum with a one-second period. However, the definition of a metre has changed and is no longer related to a pendulum (which is reasonable as $g$ varies from place to place), but $\pi^2 \approx g$ still holds true after this vital change. This confused me: is $\pi^2 \approx g$ a coincidence? My question isn't about numerology, and I don't think the similarity between the constant $\pi^2$ and $g$ of the planet we live on reflects divine power or anything alike - I consider it the outcome of the definitions of SI units. This question is, as @Jay and @NorbertSchuch pointed out in their comments below, mainly about units and somewhat related to the history of physics.
$g$ is a value with units, and $\pi$ is a dimensionless number. If you consider a unit system that uses miles, days, and grams as the units of length, time and mass, you can see that $g$ will be quite different.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/275669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "154", "answer_count": 8, "answer_id": 1 }
Rewriting bosonic action in Altland and Simon Chapter 4 In page 179 of Altland and Simon, Condensed Matter Field Theory, the author obtained the action \begin{equation} S[\theta]=\frac{1}{2\pi}\int dx\,d\tau\,\left[(\partial_x\theta)^2+(\partial_\tau\theta)^2\right] \tag{4.48b} \end{equation} The author then obtained the canonical momentum corresponding to $\theta$ as $$\pi_\theta=\partial_{\partial_\tau \theta}\mathcal{L}=\partial_\tau\theta/\pi.\tag{4.48c}$$ According to Hamiltonian mechanics, \begin{equation} \mathcal{H}=\dot{q}\frac{\partial\mathcal{L}}{\partial\dot{q}}-\mathcal{L}=\dot{q}p-\mathcal{L} \end{equation} taking $\theta\leftrightarrow q$ and making use of $\partial_\tau\theta=\pi\pi_\theta$, we should have \begin{align} \mathcal{H}&=(\partial_\tau\theta)\pi_\theta-\frac{1}{2\pi}\left[(\partial_x\theta)^2+(\partial_\tau\theta)^2\right]\\ &=\frac{1}{2\pi}\left[\pi^2\pi_\theta^2-(\partial_x\theta)^2\right]. \end{align} However, this expression is different from the Hamiltonian density given in the textbook $$\mathcal{H}=\frac{1}{2\pi}\left[(\partial_x\theta)^2+\pi^2\pi_\theta^2\right].\tag{4.48d}$$ What did I do wrong here? How to obtain the Hamiltonian given in the textbook? And also how to obtain the new action \begin{equation} S[\theta,\pi_\theta]=\frac{1}{2}\int dx\,d\tau\,\left(\frac{1}{\pi}(\partial_x\theta)^2 +\pi\pi_\theta^2 +2i\partial_\tau\theta\pi_\theta\right)~?\tag{4.48e} \end{equation} In particular, where is the last term in the parenthesis $2i\partial_\tau\theta\pi_\theta$ coming from?
TL;DR: The trick is not to Wick-rotate the momentum field $$ \Pi_M~=~i\Pi_E, \tag{1}$$ because it would otherwise lead to a divergent Gaussian momentum integral in the Euclidean (E) path integral. So we will keep the Minkowski (M) momentum $\Pi_M\in\mathbb{R}$ even in the Euclidean formulation. Further details: Standard conventions for the Wick rotation are $$ -S_E~=~iS_M, \qquad t_E~=~it_M, \qquad {\cal L}_E~=~-{\cal L}_M, \tag{2}$$ cf. p. 106 in Ref. 1. The potential density is $${\cal V}~=~\frac{1}{2\pi}(\partial_x\Theta)^2.\tag{3}$$ The Minkowski & Euclidean Hamiltonian densities read $${\cal H}_M~=~\frac{\pi}{2}\Pi_M^2+{\cal V},\tag{4M}$$ $${\cal H}_E~=~\frac{\pi}{2}\Pi_E^2-{\cal V}~=~-\frac{\pi}{2}\Pi_M^2-{\cal V}.\tag{4E}$$ The Minkowski & Euclidean Hamiltonian Lagrangian densities read $$\begin{align} {\cal L}_H^M&~=~\Pi_M\frac{d\Theta}{dt_M} - {\cal H}_M ~\stackrel{(4M)}{=}~ \Pi_M\frac{d\Theta}{dt_M}-\frac{\pi}{2}\Pi_M^2-{\cal V} \cr &\stackrel{\text{int. out } \Pi_M}{\longrightarrow}\quad {\cal L}_M~=~\frac{1}{2\pi}\left(\frac{d\Theta}{dt_M}\right)^2 - {\cal V}. \tag{5M} \end{align}$$ $$\begin{align} {\cal L}_H^E&~=~\Pi_E\frac{d\Theta}{dt_E} - {\cal H}_E ~\stackrel{(4E)}{=}~ \color{Red}{-}i\Pi_M\frac{d\Theta}{dt_E} + \frac{\pi}{2}\Pi_M^2+{\cal V}\cr &\stackrel{\text{int. out } \Pi_M}{\longrightarrow} \quad {\cal L}_E~=~\frac{1}{2\pi}\left(\frac{d\Theta}{dt_E}\right)^2 + {\cal V}. \tag{5E}\end{align}$$ The last expression in eq. (5E) corresponds to OP's first eq. (4.48b). The second expression in eq. (5E) corresponds to OP's sought-for eq. (4.48e), although with an opposite sign marked in red. We have not investigated further the origin of this discrepancy. References: * *A. Altland & B. Simons, Condensed matter field theory, 2nd ed., 2010.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/275918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
What's the difference between Quark Colors and Quark Flavours? Each of the six "flavors" of quarks can have three different "colors". The quark forces are attractive only in "colorless" combinations of three quarks (baryons), quark-antiquark pairs (mesons) and possibly larger combinations such as the pentaquark that could also meet the colorless condition. Quarks undergo transformations by the exchange of W bosons, and those transformations determine the rate and nature of the decay of hadrons by the weak interaction. What's the difference between Quark Colors and Flavors, I've heard them used in the same way before. So what exactly is the difference between the three colors and 6 flavours?
The "flavor" is the type of quark, like up or down. "Color" is a characteristic property, somehow similar to electric charge just that it can have three values and not just two. Going back to a less deep level, an analogy may be particles that can be protons, neutrons, electrons, mesons, etc. These will be like "flavors" of particles. Each one of these have some electric charge associated. (like color).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/276008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
What exactly are "primordial fluctuations"? Are "primordial fluctuations" essentially the same as "virtual particles" and "quantum fluctuations" that created the universe from nothing like what is featured in the Lawrence Krauss book, A Universe from Nothing?
It's well known that the large-scale structure is incredibly rich. Although on the largest cosmological scales the Universe looks boring (isotropic and homogeneous), at slightly smaller scales that are still very large (we're talking about considering clusters of galaxies as a single entity), we see incredibly rich structure, of filaments, walls and voids, like a sponge, e.g. https://kicp.uchicago.edu/research/highlights/images/highlight-060611-3.jpg. In the early universe, there was no structure, only a hot plasma. The radiation pressure from photons etc. prevents gravity from condensing other non-relativistic particles. This `washing out' of structure means that if we assume only normal matter contributes to gravity, we cannot dynamically generate the observed rich sponge-like structure we see at large scales today. In order to see the amount of structure in the Universe, we need dark matter. But even that is not enough. In addition to the dark matter, we need `seeds' of gravitational potential wells to speed up the gravitational condensation of dark matter. The seeds are called `primordial fluctuations' and occur before the existence of a hot thermal plasma, in the very early universe. In inflationary cosmology, it's proposed that quantum fluctuations provide the seeds for structure. Inflation enlarges the quantum fluctuations, causing them to become classical and form the seeds necessary for large-scale structure.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/276110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Does a larger camber produce more lift? I'm doing an experiment using two airfoils of the same dimensions except for the camber. I am getting results in which more lift is produced using the smaller wing. Is this correct or are my results incorrect? Thanks
When you say smaller wing, I assume you mean less camber? Because you say they are the same dimensions, which I take to mean the same wing plan. Here is one explanation; it all depends on the regime the wing is designed for. I am sure you know most of it already though, sorry. If an airplane is being designed to fly at low speed (0 - 100 mph), it will have a different camber than an airplane designed to fly at supersonic speed (760 - 3,500 mph). In general, low to medium speed airplanes have airfoils with more thickness and camber. Greater camber gives greater lift at slower speeds. At faster speeds (supersonic) and at higher altitudes airfoil shapes need to be thinner, so you reduce the camber to delay the formation of a shock wave. I don't think this applies to you. There are NASA sites which have calculators for total lift on them. NASA lift calculator. If they don't cover camber, keep searching; there should be one site that does. So unless you specify the speed regime, it's not a yes or no answer.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/276230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How can we show the increase of number of microstates intuitively? After the thermal exchange of two bodies with different temperatures $T_1$ and $T_2$ reaching an equilibrium temperature $T_2 < T_3 < T_1$, how can we prove intuitively that the number of microstates has increased? Don't use the entropy explanation, since entropy is defined via the number of microstates.
To get an intuitive idea we start by assuming that the number of microstates $N$ is a monotonically increasing (or, for simplicity, linear) function of temperature. This is fairly intuitive, since a higher temperature usually allows the system to access more of its energy levels. Thus $N \propto T$. Now one of the bodies is at temperature $T_1$; for it the number of microstates is $N_1(T_1)$, and for the other body it is $N_2(T_2)$. Since the microstates of each body are independent of the other, the total number of microstates for the whole system is given by $N = N_1 N_2$. Now we can maximise this number $N$. Energy conservation gives $T_1 + T_2 = \text{const}$, and hence $N_1 + N_2 = \text{const}$. Use this to maximise $N$ and you will find that it is maximum when both temperatures are equal. Thus at thermal equilibrium the number of microstates is maximised.
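Here is a toy numerical check of this argument; the proportionality constant and the total temperature are arbitrary, and N ∝ T is the simplifying assumption made above:

```python
# Toy check: with N proportional to T and T1 + T2 fixed, N = N1*N2 peaks at T1 = T2.
T_total = 600.0   # arbitrary units

def total_microstates(T1, c=1.0):
    T2 = T_total - T1
    return (c * T1) * (c * T2)   # microstate counts multiply for independent bodies

best_T1 = max((0.1 * i for i in range(1, int(10 * T_total))), key=total_microstates)
print(round(best_T1, 1))   # ~300.0: the product is largest when T1 = T2 = T_total / 2
```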
{ "language": "en", "url": "https://physics.stackexchange.com/questions/276307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can someone please explain what happens on microscopic scale when an image becomes unfocused on a screen from a projector lens? My questions is basically asking when you move a projector back farther from a screen the image tends to blur unless you focus it. Logically I would think that every point(ray of light) of the image would expand proportionally allowing just a larger clear image. Instead you get a blurred image that is larger and requires focusing. My question is what exactly is happening to the light when you create a larger distance between the projector and the screen. Why can't the image just get larger and stay clear without the need to focus. Is every light ray independent? So each ray of light expands and over laps the other when the image is not focused? If so what exactly is a single ray of light and how thick is it? Am I thinking too deep about this ? Why does such a simple concept seem impossible to explain?
Only the lens equation is relevant: $\frac{1}{f}=\frac{1}{g}+\frac{1}{b}$. Here $g$ is the distance from the object inside the projector to the projector lens and $b$ is the distance from the lens to the image plane. If you change $b$ but keep $f$ of your lens system constant, you have to change $g$ by changing the position of your lens.
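A small sketch of that refocusing, using the thin-lens relation above; the focal length and screen distances are made-up numbers in metres:

```python
# Thin-lens relation 1/f = 1/g + 1/b: g must change when the screen distance b changes.
def object_distance_for_focus(f, b):
    """Lens-to-slide distance g that keeps the image sharp on a screen at distance b."""
    return 1.0 / (1.0 / f - 1.0 / b)

f = 0.10                      # 10 cm projection lens (illustrative)
for b in (2.0, 4.0, 8.0):     # projector moved farther from the screen
    print(b, round(object_distance_for_focus(f, b), 5))  # g creeps toward f as b grows
```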
{ "language": "en", "url": "https://physics.stackexchange.com/questions/276384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Do we actully feel a change in acceleration? Let's say you were in a sports car with your foot to the floor racing at maximum acceleration then all of a sudden you completely stop accelerating and maintain the speed you are going. Would you actully feel this "jerk" as it's called or do you only feel the actual acceleration? Thanks, Dylan
I have to record this comment by user Velut Luna for its pithy logic: When we can feel something, we can feel the change of it. We can feel acceleration, therefore we can feel jerk. which is certainly true, but there is another sense wherein jerk can directly affect our bodies in some cases. Those cases are when one's body is accelerated through the reaction force between the body and a "thrusting" object, such as the seat of a car undergoing acceleration. Our bodies are deformable, and not all parts of them accelerate in the same way: the seat thrusts the parts of the body in contact with it, and these deform. It takes some time for that force to be transmitted through the tissue in direct contact with the seat to the tissue furthest away from the seat. Therefore, accelerations with different jerk as a function of time will give rise to different strains / stresses in the body as a function of position and time. The same is not true if the body is accelerated by a body force, such as, for example, if it were uniformly charged and accelerated by an electric field. All volumes would undergo exactly the same acceleration so no internal strain would result, whatever the acceleration or jerk or any higher derivative may be. See my answer and the companion answer it references here.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/276527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the physical meaning of the Schwarzschild radius for objects that aren't black holes? Earth has a Schwarzschild radius of a little less than a centimeter. What does this mean for the matter of Earth's core that is within this radius? A related question comes up for what happens when an almost black hole accretes matter and slowly becomes a black hole. Prior to the moment of the Schwarzschild radius crossing the boundary of the object, what does the matter within the radius experience?
The answers by tparker and Симон Тыран work well enough. There is though I think a bit more. Suppose you put a black cloak around a gravitating body so you could not probe beneath it. The body's gravitation would be the same if it were a star of some mass, or the same mass collapsed into a black hole. So from that perspective if you were standing on Earth or this cloak at one Earth radius around an Earth-mass black hole there would be no difference with regards to gravity. Now if we strip away the cloak we now see a difference. Both have gravitational fields that are Schwarzschild up to a certain radius. For the Earth that ends at its surface, but for the black hole it continues all the way into the black hole and up to the singularity. For a material body, the vacuum solution ends, and there is a continuity condition that has to be established between the vacuum solution and the non-vacuum solution in the material bulk of the body. In the interior one has to work with Ricci tensors for source terms and stress-energy tensors. In general this is the Birkhoff problem, which of course is applied to stars, white dwarfs and neutron stars. In general this is a computationally difficult problem to work with. It is not so hard if you have an idealized body with a constant density.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/276610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
How is heat represented on a quantum level? Heat is just a form of kinetic energy for molecules, because as temperature rises, the heated molecules "shake" and "vibrate" more and more. But how does that show up on a quantum scale? What element actually carries the kinetic energy: the heated molecule as a whole, its atoms, the nuclei, or the electrons' orbits? (Maybe even the quarks found in the nuclei?). Or is it that the shaking described is only an analogy for a notion of energy that is more difficult to grasp, as there is no real physical movement in the heated object?
It is to some extent an open subject of research. Just by subdividing matter you do not end up linking the quantum world to the classical one. At the quantum level, the different classical processes leading to a particular concept always rely on already-determined positions and energies of a system. When one goes to systems with very low entropy we tend to lose track of measurable energies, whereas high-energy systems are also sources of various other particles. We then find different divisions inside physics itself where it is not possible to reconcile small with big, or low with high energy; different sub-domains have to be made for different scenarios. The main question to ask sometimes is whether there could be parameters, other than just scientific limitations in understanding, which block different levels of understanding from being combined in a correct way - a way that would show clearly how the path to a particular understanding is held up by more than just personal perceptions of things.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/276724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Newtonian derivation of perturbation in density In Barbara Ryden's Introduction to Cosmology, chapter 12.3, she derives an equation describing the evolution of mass perturbations with time, for small perturbations $|\delta|\ll 1$. Before she starts the derivation, a disclaimer is added stating that By performing Newtonian analysis of this problem, we are implicitly assuming that the radius $R$ is small compared to the Hubble distance and large compared to the Jeans length. I have added the derivation below, and was wondering where these implicit assumptions are taken into account. Suppose a universe with pressure-less matter, with mean mass density $\overline{\rho}(t)$. As the universe expands, the density decreases as $\overline{\rho}(t)\propto a^{-3}(t)$, where $a$ is the scale factor of the universe. Consider a spherical region of radius $R$, to which a small amount of matter is added (or removed) so the density within the sphere is now $\rho(t) = \overline{\rho}(t)(1+\delta(t))$, where $|\delta| \ll1$. The total gravitational acceleration at the surface will be $$ \ddot{R} = -\frac{GM}{R^2} = -\frac{4\pi}{3}G\overline{\rho}R - \frac{4\pi}{3}G\overline{\rho}\delta R $$ By mass conservation, we know that $$ \frac{4\pi \overline{\rho}(t)}{3}R^3(t)(1+\delta(t)) \equiv \textrm{const} $$ From this, we get that $$R(t)\propto a(t)(1+\delta)^{-\frac{1}{3}}\approx a(t) - \frac{1}{3}\delta a(t)$$ Taking double time derivative, we get $$ \ddot{R} \approx \ddot{a}(t) - \frac{1}{3}\ddot{\delta}a-\frac{1}{3}\delta\ddot{a} - \frac{2}{3}\dot{\delta}\dot{a} $$ Dividing by $R$, we get $$ \frac{\ddot{R}}{R} \approx \frac{\ddot{a}(t) - \frac{1}{3}\ddot{\delta}a-\frac{1}{3}\delta\ddot{a} - \frac{2}{3}\dot{\delta}\dot{a}}{a(1-\frac{1}{3}\delta)} \approx \frac{\ddot{a}}{a}\left(1+\frac{1}{3}\delta\right) - \frac{1}{3}\ddot{\delta}\left(1+\frac{\delta}{3}\right) - \frac{1}{3}\delta\frac{\ddot{a}}{a}\left(1+\frac{1}{3}\delta\right) - \frac{2}{3}\dot{\delta}\dot{a}\frac{1}{a}\left(1+\frac{1}{3}\delta\right) $$ Taking only linear terms in $\delta$ (we neglect $\delta^2,\ddot{\delta}\delta$ as they are of second order in $\delta$), we get $$ \frac{\ddot{R}}{R} \approx \frac{\ddot{a}}{a} - \frac{1}{3}\ddot{\delta}-\frac{2}{3}\frac{\dot{a}}{a}\dot{\delta} $$ Comparing with the gravitation equation, we get that the linear term, in charge of the small perturbation, yields $$ \ddot{\delta} + 2H\dot\delta = 4\pi G\overline{\rho}\delta $$ where $H$ is the Hubble parameter. At which point are the above assumptions are implicitly applied?
When you write out the gravitational acceleration as $$ \ddot{R} = -\frac{GM}{R^2}, $$ you are approximating gravitation as Newtonian. Your source is telling you that this approximation implicitly demands that $R$ is much smaller than the Hubble length, and much greater than the Jeans length. I would wager the latter is so that we can treat the matter content as approximately homogeneous, while the former is so that the expansion speed is small.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/276839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is a $5-60 mph$ time slower than a $0-60 mph$ time for some automobiles? This doesn't make a lot of sense to me, from a physics 101 point of view. I've read a few blog entries on why this is, but none of them explain it well or are convincing. "something-something launch control. something-something computers." Nothing in physics terms or equations. For instance, Car and Driver magazine tested the Porsche Macan GTS. The $x-60$ times are: * *Rolling start, $5-60\; \mathrm{mph}: 5.4\;\mathrm{ s}$ *$0-60\;\mathrm{mph}: 4.4\;\mathrm{s}$ That's a whole second - about $20$% faster from a dead stop than with some momentum - which seems rather huge. edit: here is the article for this particular example. But I've noticed this with many cars that are tested for $0-60$ and $5-60$ times. Here is another example - an SUV. Another example. And finally, interesting, even for the Tesla Model S (EV) where power doesn't depend on engine RPM, $0-60$ is still slightly faster than $5-60.$
This is not so much a question of physics as it is a question for mechanics. The 0–60 mph benchmark is commonly quoted in publications for car enthusiasts. As with any benchmark, manufacturers will try to game the system. Fancy sports cars have launch control systems: if the car starts from standstill and the accelerator is floored, then special programming kicks in, with extremely aggressive shifting and engine tuning, without regard for usual considerations such as longevity and emissions. Basically, it's a bit like Volkswagening a test, but less evil since the test case rarely happens in real life. Arguably, if the tuning technique achieves the desired result of maximizing acceleration at any cost, then it's not cheating.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/276932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "89", "answer_count": 4, "answer_id": 2 }
Helmholtz Free Energy minimization during an irreversible process Consider the classical $(N,V,T)$ system, and its Helmholtz free energy (HFE) $A=U-TS_{system}$. The system is placed in contact with a hotter heat bath. It is said that, at equilibrium, the HFE of the system reaches a minimum, i.e. $dU - TdS_{system} = 0$. But, for an irreversible heat transfer $dQ$ from the heat bath to the system, we end up with $dQ=dU<TdS_{system}$. So when does the equilibrium get realized? Do we need an "extra" transfer of heat to the system?
Your differential form is incomplete. Actually: $$dF=dU -SdT-TdS =-SdT$$ Equilibrium is reached when $dF=0$, so when $-SdT=0$, meaning the temperature is constant over time ($T_{system}=T_{bath}$). That's why at constant temperature the Helmholtz free energy is the potential that is minimised. In the same way, for a constant pressure process the relevant potential is the enthalpy, given by: $$H=U+PV$$ because its differential form is then $dH=dU+VdP+PdV=VdP$. At equilibrium with a pressure bath, the enthalpy does not vary and is at a minimum. Each of these is obtained by a Legendre transform, meaning that for a constraint on an intensive parameter $X$ coupled to an extensive parameter $Y$ such that $\frac{dU}{dY}=X$ you can construct $Z$, the associated thermodynamic potential, like: $$Z=U-XY$$ such that its differential form vanishes at equilibrium. I hope this helps a bit; the underlying physics is better understood with a solid math background in my opinion. edit: I made a little mistake, the first formula is only valid at constant volume as my differential is $dF=-PdV-SdT$ in the general case. In any case at equilibrium the volume of the system is supposed constant and no irreversible work is involved. The same reasoning applies as for the internal energy at equilibrium of a closed system.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/277187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does curved spacetime affect gravitational waves? How differently will a LIGO detector detect a gravitational wave which came directly to it with a detector which happened to have a black hole between it and the source?
I hope you get a proper answer from an expert, but just in case you don't, I don't think a black hole would have much effect on gravitational waves. I say this because I asked a similar question previously and I think it was made clear to me regarding the small amount of scattering involved. From Black Holes And Gravitational Waves Due to the weakness of the coupling of gravity to matter, gravitational waves experience very little absorption or scattering, even as they travel over astronomical distances. In particular, gravitational waves are expected to be unaffected by the opacity of the very early universe before space became "transparent"; observations based upon light, radio waves, and other electromagnetic radiation further back into time is limited or unavailable. Therefore, gravitational waves are expected to have the potential to open a new means of observation to the very early universe. From Scattering of Gravitional Waves In principle they can undergo scattering off of massive objects just like electro-magnetic waves scatter off of charged particles.  Unfortunately, the effect is very, very, very weak.     It is difficult enough just to see two black holes collide and merge.   Any fine details such as scattering are undetectable with current, and probably future detectors. 
{ "language": "en", "url": "https://physics.stackexchange.com/questions/277282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Gravitational Force - Newton Mechanics Why do we use gravitational force on Earth by relating just the mass of an object with the acceleration produced by the gravitational field: $$ F_{g} = m\cdot \vec{g} $$ And when we're dealing with planets, we use a relation defined by the masses of two planets, distance squared and the gravitational constant: $$ F_{g} = G \cdot \frac{M_{1} \cdot M_{2}}{d^{2}} $$ I really don't get why we use just the first relation here on Earth, because we're dealing with an interaction between two objects... Is it because our mass is irrelevant?? Thanks!
The second equation is always correct, and you can derive the first equation from it. Here on the surface of the Earth, $d$ is the radius of the Earth $r_e$ plus our height $h$. $$ F = G \frac{M_e M_2}{(r_e + h)^2} $$ The radius of the Earth (6,371 km) is huge compared to our height above the surface (at least, when we're near the surface), so we can simplify the equation by assuming $r_e \gg h$ and therefore $r_e \approx r_e + h$. $$ F = G \frac{M_e M_2}{r_e^2} $$ $G$, $M_e$ and $r_e$ are all constant, so we bundle them all into another constant $g = \frac{GM_e}{r_e^2}$ and voila $$ F = gM_2 $$ It's because our mass is irrelevant? No, it's because the equation assumes that our height above Earth's surface is negligible compared to the radius of the Earth (which most of the time, for me at-least, it is). Example: Suppose I'm a 70 kg man whose just spent the last week hiking up Mount Everest, which is 9 km above sea level. Using the correct equation we get $$ F = G \frac{70 M_e}{(6,371,000+9,000)^2} = 685 N $$ Using the approximate equation we get $$ F = 70g = 687 N $$ which is about $0.3$% different. Whether this is an acceptable error or not will depend on how precise you need your calculations to be, but for every-day purposes it's probably fine :)
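If you want to re-check those numbers yourself, here is a quick sketch (constants rounded to common textbook values):

```python
# Exact inverse-square force vs. the F = m*g shortcut, for a 70 kg person on Everest.
G   = 6.674e-11        # m^3 kg^-1 s^-2
M_e = 5.972e24         # kg
r_e = 6.371e6          # m
m   = 70.0             # kg
h   = 9000.0           # m, roughly the height of Everest

F_exact  = G * M_e * m / (r_e + h) ** 2
F_approx = G * M_e * m / r_e ** 2      # i.e. m*g with g = G*M_e/r_e**2
print(round(F_exact), round(F_approx))  # ~685 N vs ~687 N, about 0.3 % apart
```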
{ "language": "en", "url": "https://physics.stackexchange.com/questions/277442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Orange sky 3.7 billion years ago because there was little oxygen? PhysOrg quotes Martin VanKranendonk of the University of New South Wales and director of the Australian Center for Astrobiology: Because the atmosphere had very little oxygen and oxygen is what makes the sky blue, its predominant color would have been orange Is this correct?
Nope! The blue sky comes from Rayleigh scattering through air, but pretty much any gas whose molecules are decently polarizable will work equally well - it doesn't have to be oxygen. I suspect the guy probably said that there was very little atmosphere and atmosphere is what makes the sky blue (although does it really count as a "sky" if there's no atmosphere?...) and the reporter misunderstood.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/277544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Good resources for understanding inflationary cosmology I'm currently trying to self study inflationary cosmology and am finding it difficult to find good resources which explain the motivation behind such theories while providing all the mathematical details. Does anyone know any good text or resource on inflationary cosmology?
I was trying to understand scalar field models of dark energy which are motivated by inflationary cosmology. The motivation is that we may also explain the late-time acceleration using scalar fields(e.g quintessence, k-essence etc.). Anyway the following textbook was useful for me. Cosmological Inflation and Large-Scale Structure by Andrew R. Liddle & David H. Lyth (2000, Cambridge University Press). And, I also recommend the other lecture notes & documents prepared by Liddle. For example An introduction to cosmological inflation (arXiv:astro-ph/9901124) His materials are highly pedagogical and have rigorous math.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/277898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Has the Landauer Limit really been overturned? What was wrong with the original analysis? This news, summarizing results from M. López-Suárez et al. Sub-$k_B T$ micro-electromechanical irreversible logic gate, Nature Commun. 7, 12068 (2016). Makes the claim that It clearly shows that there is no such minimum energy limit and that a logically irreversible gate can be operated with an arbitrarily small energy expenditure. Simply put, it is not true that logical reversibility implies physical irreversibility, as Landauer wrote. The results of this experiment by the scientists of NiPS Laboratory at the University of Perugia are published today in Nature Communications. They measured the amount of energy dissipated during the operation of an "OR" gate (that is clearly a logically irreversible gate) and showed that the logic operation can be performed with an energy toll as small as 5 percent of the expected limit of kBT ln2. The conclusion of the Nature Communications article is that there is no fundamental limit and reversible logic is not required to operate computers with zero energy expenditure. First of all, is this for real? If so, what was wrong with Landauer’s analysis?
Edit: my first answer was wrong. What this paper appears to be doing is creating a reversible element which is being treated logically as an irreversible OR. Because the element itself is reversible, it can easily avoid Landauer's principle. As best as I can tell, this has been known for a long time: you can have a combinational circuit of reversible logic, and you only pay the energy cost for the measurements taken at the end of the process (which must latch, so are subject to Landauer's principle).
{ "language": "en", "url": "https://physics.stackexchange.com/questions/277985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 2 }
What does thrust and thrust axis mean in particle physics? Would someone be kind enough to explain to me: 1) How thrust and thrust axis are calculated/determined 2) What is the significance/interest in these quantities for an event in particle physics? Though I have seen the general formula, I haven't found a good explanation of what it tells you about the event or why it is useful.
Sphericity and thrust came into being when scattering experiments demonstrated that the parton model of particle physics could not explain the data, that there was a type of "hard core" giving tracks with high transverse momentum $p_T$. The need arose to be able to orient the individual events in a way that would demonstrate the emergent jet structure. SPEAR (SLAC), mid-'70s: $e^+e^-\to q\bar{q}$ should have a $1 + \cos^2\theta$ angular distribution if quarks have spin 1/2. Solution: Sphericity. Fixed-target pp experiments study alignment of the collision. Solution: Thrust. The thrust variable characterizes the event shape: an event with spherically distributed tracks would have thrust $=1/2$, a two-jet event would have a value close to 1. It is a variable which can be calculated phenomenologically in QCD and compared with the data, and it was important in establishing the correspondence of the jet structures to the quark and gluon content of the interactions. See also Why is the value of thrust for a perfectly spherical event equal to ${\frac{1}{2}}$? for the calculation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/278256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Weight factor in Path Integral Formalism In Quantum Mechanics, transition amplitude between two states in given by (path integral approach): $$ \left\langle q';t'|q;t\right\rangle= \int[\mathrm dq] \exp \left(i \int L(q,\dot{q})~\mathrm d\tau\right) $$ This tells that contribution of the paths to the amplitude is given by the weight factor : "i times the action". Can anybody explain "intuitively" why this should be the weight factor?
Although you are looking for a more intuitive explanation, I think the best way to see it is to simply derive it mathematically, which is done in every QFT book. Since $H=i \frac{\partial}{\partial t}$ (in units where $\hbar=1$), the time-evolution transition amplitude between two infinitesimally close states is $\langle q_i|1-iH\Delta t|q_{i+1} \rangle = \langle q_i|e^{-iH\Delta t}|q_{i+1} \rangle$. To obtain the full amplitude between states $|q \rangle$ and $ |q' \rangle$ one needs to compose all the infinitesimal time transitions and insert complete sets of intermediate states, resulting in some integrals. The definition of the Hamiltonian $H=\frac{p^2}{2m} + V(q)$ allows for some Gaussian integrations over $p$. Taking the continuum limit in the end, the discretized sum over all intermediate states can be expressed as a continuous integral of the Lagrangian over time.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/278311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is there a geometric object analagous to a spinor that encodes projections onto bivectors? The most sensible geometric interpretation of spinors that I've come across is that they encode projections in the Clifford algebra. So if $\mathbf A$ is a vector with components $A_i$ and $\psi$ is a spinor, then $\psi^\dagger A_i \sigma^i \psi = \mathrm{Tr} (A_i\sigma^i \psi \psi^\dagger)$ gives the component of $\mathbf A$ along the direction encoded by $\psi$. Is there a geometric object analagous to spinors, which projects onto a bivector orientation rather than a vector direction? So if $S_{ij}$ were the components of a bivector and $\chi$ were such an object, $\chi^\dagger S_{ij} \frac12[\sigma^i,\sigma^j] \chi$ might give the component of the bivector $\mathbf S$ with orientation encoded by $\chi$. As in the case of spinors, $\frac{1}{2}(\chi_1+\chi_2)(\chi_1+\chi_2)^\dagger$ would be another such projection. Or can this already be accomplished using spinors?
The geometric object which corresponds to spinors is the exterior bundle, the bundle of differential forms. The natural equation on such a bundle is the Dirac-Kähler equation. This bundle is essentially what is used in lattice computations under the name "staggered fermions". A problem with this geometric interpretation is that it has too many components. So, the complexified bundle $\Lambda(\mathbb{R}^4,\mathbb{C})$ would describe four Dirac fermions. In arXiv:0908.0591 (see also http://ilja-schmelzer.de/matter/ ) a splitting of spacetime into space and time allows one to reduce this to two Dirac fermions, which can then be interpreted as an electroweak pair.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/278437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Centrifugal Pump Head What is pump head? And how is it different from the difference in elevation between the suction and delivery reservoirs? Also, why must the kinetic energy of the fluid leaving the pump be least? I mean, if it leaves with more velocity then it can go farther up to the delivery reservoir. The energy equation we write is $$\frac{P_1}{\rho g} + \frac{v_1^2}{2g} + z_1 = \frac{P_2}{\rho g} + \frac{v_2^2}{2g} + z_2 + \text{Head} $$ What is this head?
The head of a pump is a measure of how big of a pressure difference that pump can generate. I am not sure what the historical or practical reason for it is, but head is expressed as the height of a water column. The pressure $p$ required for such column with head $h$ can be calculated with, $$ p=\rho\,g\,h, $$ where $\rho$ is the density of water and $g$ the acceleration due to gravity.
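For a concrete feel, a tiny sketch converting a head figure into a pressure difference (the 20 m head is just an example value):

```python
# Pressure difference corresponding to a pump head h: p = rho * g * h.
rho = 1000.0    # kg/m^3, water
g   = 9.81      # m/s^2

def head_to_pressure(h_metres):
    return rho * g * h_metres   # pascals

print(head_to_pressure(20.0))   # a 20 m head is about 1.96e5 Pa, i.e. roughly 2 bar
```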
{ "language": "en", "url": "https://physics.stackexchange.com/questions/278533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Quantification of entropy mathematically when $T$ and $V$ both change $$\ln\frac{W_f}{W_i}=N \ln\frac{V_f}{V_i}=n N_a \ln\frac{V_f}{V_i}$$ $$\Delta S=nR\ln\frac{V_f}{V_i}$$ $$\ln\frac{V_f}{V_i}=\frac{1}{n N_a}\ln\frac{W_f}{W_i}$$ $$\Delta S=\frac{R}{N_a}\ln\frac{W_f}{W_i}=k\ln\frac{W_f}{W_i}=k\ln W_f-k\ln W_i$$ hence $$S=k\ln W$$ For the $T$ change, taken from Atkins' Physical Chemistry: Above, the microscopic entropy $S=k\ln W$ and the macroscopic entropy $\Delta S=\frac{\Delta Q}{T}$ are united, but each only for an isolated change of $T$ or of $V$. Entropy being a state function, my problem is how one can understand that this is the quantification that works for paths in which both $V$ and $T$ change. How do $T$ and $V$ quantify against each other? Can this be justified or explained?
For constant pressure it is $\Delta S= nC_p\ln\frac{T_f}{T_i}= nC_p\ln\frac{V_f}{V_i}$ ($C_p$ = molar heat capacity at constant pressure). To quantify, we set initial values $T_i=1=V_i$ and derive: $S= nC_p\ln T = nC_p\ln V$.
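As an illustration of why the two logarithms above coincide, here is a small numerical sketch; the choice of a monatomic ideal gas with $C_p=\frac{5}{2}R$ is purely an assumption made to have a concrete number:

```python
import math

# Illustration: at constant pressure an ideal gas has V proportional to T,
# so n*Cp*ln(Tf/Ti) and n*Cp*ln(Vf/Vi) are the same number.
R = 8.314                 # J/(mol K)
n = 1.0                   # mol
Cp = 2.5 * R              # J/(mol K), monatomic ideal gas (assumption)

Ti, Tf = 300.0, 450.0     # K
Vi = 1.0                  # arbitrary units
Vf = Vi * Tf / Ti         # constant pressure: V/T is fixed

dS_from_T = n * Cp * math.log(Tf / Ti)
dS_from_V = n * Cp * math.log(Vf / Vi)
print(dS_from_T, dS_from_V)   # identical, about 8.4 J/K each
```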
{ "language": "en", "url": "https://physics.stackexchange.com/questions/278595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Definition of a calorie? My copy of "Resnick and Halliday" states the following: "Before scientists realized that heat is transferred energy, heat was measured in terms of its ability to raise the temperature of water. Thus, the calorie(cal) was defined as the amount of heat that would raise the temperature of 1g water from 14.5°C to 15.5°C." This definition seems to account for the fact that heat really is energy in transit so why was this definition changed? Exactly what is so inherently wrong with defining heat in this manner? I'm afraid that I may have misunderstood the subtle distinction between heat and energy, if there is one. Please share your knowledge and help me. Much thanks in advance :) Regards.
You are right, heat really is energy, and the calorie is a unit of energy. However, the definition you gave is not up to current standards of defining units. As far as I know, there are two reasons: the first is that the definition is ambiguous, and the other is that nowadays SI units are the way to go when it comes to science and official measurements, and in SI, the unit of energy, the joule, is defined in a more straightforward way based on other "base units". First, the specific heat capacity of water changes with temperature and pressure, so the amount of heat (= energy) needed for a temperature change of 1 °C varies. The Wikipedia article lists quite a few different definitions for the calorie, where the temperature endpoints are 3.5 °C...4.5 °C, 14.5 °C...15.5 °C, 19.5 °C...20.5 °C, and one where the definition is one hundredth of the energy needed to warm the water from 0 °C to 100 °C. All of these give slightly different amounts of energy. Also, the definition you quoted doesn't specify the pressure at all, so even that doesn't give a single fixed value of the calorie. The second reason is that the international system of units is nowadays used for units in science. The idea is that there are a few (as few as possible) base units that are defined in terms of things occurring in nature, and the rest are defined in terms of the base units. The unit of energy (= heat) is the joule, whose definition is simply $\text{kg} \text{ m}^2/ \text{s}^2$, where kilogram, metre, and second are defined in a certain way.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/278700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Electric flux model for two different media Some books define (although some do not) the electric field flux as the number of electric field lines passing through a given area. Suppose that there is an electric field created by a charged plate and the field lines emerging from the plate are passing through two adjacent media having two different permittivities. In this case, the electric fields observed in the two media should be different due to the differences in electric permittivities. So when the electric field line model is applied here, which tells something about the field strength, we see that the number of field lines in the two media is different. Does this mean that electric field lines are created or destroyed at the interface between media? I know obviously this is not the case, but how can we explain this situation? Should we apply the electric flux model separately to the two media, considering them individually?
The electric field can be considered to start and end on total charges represented by the sum of free and bound charges in the material. At the interface of two different dielectrics with different polarizations P a net bound charge appears which is the source and sink of the electric field lines in addition to free charges. Therefore the electric field lines can start and end on the dielectric interfaces in contrast to the displacement field lines which start and end only on free charges.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/278797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Do fluorescent white materials exist? From my understanding of fluorescence, a "fluorescent yellow material" (like in highlighters) is a material that contains yellow dyes and fluorescent dyes absorbing green to give yellow. Then, the material appears "more yellow" than usual objects because it has two sources of yellow. If the above is right, it appears possible to create a "fluorescent white material" simply by having: * *three base colors (blue, green, red) *three fluorescent dyes in the corresponding quantities to counter absorption from the other dyes (2 blue, 2 green, 1 red) Does it exist? Or am I making a mistake there? EDIT: * *The source is supposed to be sunlight *"white" is seen from the observer's point of view; I don't mean all the visible spectrum, just 3 colors are enough.
The phosphor on ordinary fluorescent tubes and also in white leds would seem to qualify; they take ultraviolet light and convert it to white(ish) light, using a blend of substances to get the color balance right.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/278961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Charging Up a Capacitor Well, what I read is that in the process of charging a capacitor, charges are transferred from one plate to another. The work done to move a charge from one plate to another is stored as electrical potential energy in it, and the capacitor is charged up. Before this I read that when a capacitor is placed in a circuit with the switch closed, positive charges pile up at one end/plate of the capacitor, inducing the same amount of negative charge on the other end of the capacitor. This continues till the voltage across the capacitor becomes equal to the voltage of the battery. This way a capacitor is charged up. In both of these descriptions, what I found is that there is only induction of charges on the other plate due to the charge present on the first plate. There is no transfer of charge between the plates of the capacitor while it is charging up. Why is this so? I know I am wrong somewhere, but where I don't know. Please tell me an appropriate answer for this doubt. Thanks
There are no charges traversing between the plates because between the plates there is a strong insulating dielectric material. Charges on both plates are supplied by the battery. Through the electric field that crosses the dielectric they feel the presence of the charges on the other plate.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/279341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Differentiating D'Alembert operator It has been a while since I did field theory. Euler-Lagrange equation $$\partial_\mu \frac{\partial L}{\partial (\partial _\mu \phi)} - \frac{\partial L}{\partial \phi} = 0$$ If I have $$L = \phi \Box \phi - m^2 \phi^2,$$ do we just get $$\Box \phi - 2 m^2 \phi = 0$$ Because we don't differentiate the D'Alembert operator?
Since the Lagrangian contains second derivatives, you will need to use $$ \frac{\partial L}{\partial \phi}-\partial_\mu \frac{\partial L}{\partial (\partial _\mu \phi)} +\partial_{\mu}\partial_\nu \frac{\partial L}{\partial (\partial _\mu\partial_\nu \phi)} = 0$$ which yields the equation of motion $$\square\phi-m^2\phi=0$$
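If it helps, the same bookkeeping can be checked symbolically. The sketch below is restricted to 1+1 dimensions with metric signature $(+,-)$ and $c=1$ purely to keep it short; it applies the second-order Euler-Lagrange formula term by term and recovers the equation of motion above:

```python
import sympy as sp

# 1+1-dimensional check of the second-order Euler-Lagrange equation for
# L = phi*Box(phi) - m^2*phi^2, with metric signature (+, -) and c = 1.
t, x, m = sp.symbols('t x m', real=True)
phi = sp.Function('phi')(t, x)

phi_tt = sp.Derivative(phi, (t, 2))
phi_xx = sp.Derivative(phi, (x, 2))
box_phi = phi_tt - phi_xx

L = phi * box_phi - m**2 * phi**2

# Replace the field and its second derivatives by plain symbols so that the
# partial derivatives of L in the Euler-Lagrange formula are unambiguous.
F, Ftt, Fxx = sp.symbols('F F_tt F_xx')
L_sym = L.subs(phi_tt, Ftt).subs(phi_xx, Fxx).subs(phi, F)
back = {F: phi, Ftt: phi_tt, Fxx: phi_xx}

dL_dphi = sp.diff(L_sym, F).subs(back)                    # dL/dphi
# L contains no first derivatives, so the d_mu dL/d(d_mu phi) term vanishes.
term_tt = sp.diff(sp.diff(L_sym, Ftt).subs(back), t, 2)   # d_t d_t dL/d(phi_tt)
term_xx = sp.diff(sp.diff(L_sym, Fxx).subs(back), x, 2)   # d_x d_x dL/d(phi_xx)

eom = sp.simplify(dL_dphi + term_tt + term_xx)
print(eom)                                                # 2*Box(phi) - 2*m^2*phi
print(sp.simplify(eom - 2 * (box_phi - m**2 * phi)))      # 0
```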
{ "language": "en", "url": "https://physics.stackexchange.com/questions/279441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Is the speed of hot air rising gravity dependent? Would, say, heated air rise twice as fast in 2g as in an environment with standard Earth gravity?
Yes. The hot air rises due to the force of buoyancy, as the hot air expands and becomes less dense. So, yes, it rises due to gravity. The force of buoyancy is (weight of air at regular density) - (weight of air at heated density). The weight involves $g$. So, yes, your statement is true if we exclude complications like resistance/friction etc. As it rises, the force keeps decreasing because the air at a higher level is already lighter. So that slowness is not because gravity has decreased at a height, but mostly because the force of buoyancy has decreased: 1) the air is lighter at height, 2) the hot air may have cooled as it rises. Gravity decreases with height but that impact is negligible.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/279660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Determine the maximum ratio $h/b$ for which the homogeneous block will slide without toppling under the action of force F Determine the maximum ratio $h/b$ for which the homogeneous block will slide without toppling under the action of force F. The coefficient of static friction between the block and the incline is $\mu_s$. I have a doubt. About which point should the rotational equilibrium be applied? Should it be applied about the centre of mass? Or should it be applied about the vertex opposite to the vertex where F is applied? Why? MY ATTEMPT: Translational Equations $F+mg\sin(\theta) \geq \mu N$ and $N=mg\cos(\theta)$ Rotational Equations This is where I'm facing a problem. Depending upon which point the equilibrium is applied about, the ratio obtained will differ. MY VIEWS: Rotational equilibrium should hold at all points if no toppling/rotation happens. However, the answer varies depending on the point of application of equilibrium. Strange. I hope this is a conceptual doubt and will not be closed as off-topic or homework. If it needs to be closed, please inform me if the post can be improved somehow.
You can apply it to either location, but there are some considerations: * *If you consider rotation about the COM, then you need to understand the torque from the normal force. As you push the box, the normal force will move toward the front to counteract. At the tipping point, all the normal force will be there. See also: When does the shifting of normal force occur? *If you consider rotation about the front vertex, you can ignore the forces that act through it. But it is likely that the box as a whole is accelerating down the ramp. If so, that axis is also accelerating. When that happens, fictitious forces appear that act on the center of mass. You can't ignore those. But both considerations should yield the same answer.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/279798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Degenerate modes in cylindrical waveguide The $H_z$ field (TE mode) in the case of the cylindrical waveguide is given by: $H_z(\rho, \varphi, z) = H_0 J_m (k_t\rho)e^{i k_z z} e^{\pm i m \varphi} $, where the part that gives the azimuthal modal dependence is given by: $e^{\pm i m \varphi} $ and corresponds to the two degenerate modes that exist in this waveguide due to symmetry. These degenerate modes can be separately represented as: $$ H_z(\rho, \varphi, z) = H_0 J_m (k_t\rho)e^{i k_z z} \cos(m \varphi)$$ and $$ H_z(\rho, \varphi, z) = H_0 J_m (k_t\rho)e^{i k_z z} \sin(m \varphi) \;.$$ However, I don't understand how the compact notation $e^{\pm i m \varphi} $ is equivalent to the two separate cases: $\cos(m \varphi)$ and $ \sin(m\varphi)\,.$
Do you mean $\cos(m\phi)$ and $\sin(m\phi)$ in the last sentence? Due to Euler's formula $$e^{\pm i m \phi}=\cos(m\phi)\pm i \sin(m\phi)$$ you can represent the two degenerate modes written in exponential form in terms of the ones with $\sin$ and $\cos$. You can write $$H_z^{\pm}(\rho, \phi, z) = H_0 J_m (k_t\rho)e^{i k_z z} e^{\pm i m \phi} =$$ $$H_0 J_m (k_t\rho)e^{i k_z z} (\cos(m\phi)\pm i \sin(m\phi)) = H_z^{c}(\rho, \phi, z)\pm i H_z^{s}(\rho, \phi, z)$$ and vice-versa, so that the last two modes are just simple complex linear combinations of the first two that you wrote.
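A quick numerical sanity check of these linear combinations, keeping only the azimuthal factors (the common factor $H_0 J_m(k_t\rho)e^{ik_z z}$ multiplies every mode and cancels in the comparison):

```python
import numpy as np

# Check the azimuthal factors only; the common factor H0*Jm(kt*rho)*exp(i*kz*z)
# multiplies every mode and cancels in the comparison.
m = 3
phi = np.linspace(0.0, 2 * np.pi, 7)

plus = np.exp(+1j * m * phi)
minus = np.exp(-1j * m * phi)
cos_m = np.cos(m * phi)
sin_m = np.sin(m * phi)

print(np.allclose(plus, cos_m + 1j * sin_m))     # True: Euler's formula
print(np.allclose(minus, cos_m - 1j * sin_m))    # True
print(np.allclose(cos_m, (plus + minus) / 2))    # True: the inverse combinations
print(np.allclose(sin_m, (plus - minus) / 2j))   # True
```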
{ "language": "en", "url": "https://physics.stackexchange.com/questions/280087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Heisenberg uncertainty and Lorentz contraction Consider a particle in a frame moving with speed $v$ relative to the lab frame. By Lorentz contraction, the width of the wavefunction will be smaller in the lab frame, resulting in smaller $\Delta x$. If $v$ is high enough, then the uncertainty principle $\Delta x \Delta p \ge \hbar/2$ will be violated in the lab frame. What's wrong here? Does $\Delta p$ increase somehow? This seems unlikely, since simply translating momentum distribution by a constant should not alter the standard deviation.
With apologies for the many typos (and worse) in the first version of this answer: Write the wave function as $f(x)$ in the comoving frame. Then in the lab frame, the wave function is $g(x)=\sqrt{a}f(ax)$ where $a$ is some positive constant. Write $\hat{f}(x)$ for the Fourier transform of $f$. Then $\hat{g}(x)=\hat{f}(x/a)/\sqrt{a}$. The change in frame changes the variance of position from $\int x^2 |f(x)|^2$ to $\int x^2 |g(x)|^2$, which means the variance is multiplied by $1/a^2$. (Check this by substituting $u=a x$ in the second integral.) The change in frame changes the variance of momentum from $\int x^2|\hat{f}(x)|^2$ to $\int x^2 |\hat{g}(x)|^2$, which means the variance is multiplied by $a^2$. (Check this by substituting $u=x/a$ in the second integral.) The product of the variances is therefore unchanged. While I hope the above is enlightening, it's really unnecessary. The key is that $f$ is some arbitrary wave function and $g$ is some other wave function. Some argument must have convinced you that $f$ satisfies the uncertainty principle in the first place. Whatever that argument is, it applies equally well to $g$.
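The scaling argument can also be checked numerically. The sketch below (with $\hbar=1$ and a real, even test wave function, so that $\langle x\rangle=\langle p\rangle=0$ — assumptions made only to keep the integrals simple) rescales a wave function by a factor $a$ and confirms that the product of the variances is unchanged:

```python
import numpy as np

# Numerical check (hbar = 1): rescaling psi(x) -> sqrt(a)*psi(a*x) divides the
# position variance by a^2, multiplies the momentum variance by a^2, and so
# leaves their product -- and hence the uncertainty relation -- unchanged.
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def variances(psi):
    """Position and momentum variances of a real, even, normalized psi(x)."""
    prob = psi**2
    var_x = np.sum(x**2 * prob) * dx          # <x> = 0 by symmetry
    dpsi = np.gradient(psi, dx)
    var_p = np.sum(dpsi**2) * dx              # <p^2> = int |psi'|^2 dx, <p> = 0
    return var_x, var_p

def wavefunction(scale):
    """sqrt(a)*f(a*x) for a real, even, non-Gaussian test function f."""
    y = scale * x
    psi = np.sqrt(scale) * (1 + 0.5 * y**2) * np.exp(-y**2 / 2)
    return psi / np.sqrt(np.sum(psi**2) * dx)  # normalize on the grid

f = wavefunction(1.0)          # "comoving frame" wave function
g = wavefunction(3.0)          # "lab frame": contracted by a factor a = 3

vx_f, vp_f = variances(f)
vx_g, vp_g = variances(g)
print(vx_f * vp_f, vx_g * vp_g)   # the two products agree (and are >= 1/4)
```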
{ "language": "en", "url": "https://physics.stackexchange.com/questions/280303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Why was an 8 TeV collider needed to find a 125 GeV Higgs? This might be very naive, but why wouldn't a (say) 209 GeV LEP do the job?
LHC is a hadron (proton) collider. But it's being used mainly as a gluon collider. Protons are composite particles, and at high energies they become a complete mess of quarks and gluons. While protons have a huge energy, the gluons that produce Higgs bosons only carry a small proportion of that energy. The rest of the energy goes to other gluons and quarks, which produce "undesired" jets of particles. The probability of each gluon/quark having a given proportion of the energy is modelled by parton distribution functions. LHC is a discovery machine. When it was designed, the Higgs mass was unknown. The fact that gluons get a variable proportion of the energy allowed them to probe a large range of masses at the same time. On the other hand, the next "big" accelerator most probably will collide electrons and positrons. Those are [believed to be] elementary particles, and can produce a Higgs boson directly, without any shrapnel particles. Therefore, the energy of the collision can be tuned to the Higgs mass. Such a machine would work as a Higgs factory, and would allow us to study the Higgs' properties in a more systematic way.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/280393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is depletion region formed when electrons are so mobile? A depletion region is formed by electron hole combination at the junction and it creates positive ions on the $N$ side and negative ions on the $P$ side. Can someone let me know why wouldn't the electron just beside the $(+)$ move towards it and neutralize everything. Aren't electrons mobile? Electrons are mobile so it sounds odd that such a positive ion region can be created? Is it because of the effect of $(-)$ which is farther to the right? The immediate $(+)$ should take precedence right? And that also leads me to this question, if there is a $+-$ region like this, would there be any electric field felt outside that region?! Strangely, on the other side it all makes, holes besides $(-)$ ions which lack electrons, have no electrons to pull and there is a negative charged region that won't allow any electron migration. I would have expected a symmetrical behavior but that doesn't seem to be so. N side P Side | | | | | | | | | | | | ─●──●──●──●──+──+──-──-──○──○──○──○─ | | | | | | | | | | | | ─●──●──●──●──+──+──-──-──○──○──○──○─ | | | | | | | | | | | | ─●──●──●──●──+──+──-──-──○──○──○──○─ | | | | | | | | | | | | ─●──●──●──●──+──+──-──-──○──○──○──○─ | | | | | | | | | | | | [● is a valence electron and ○ is a hole] I am just trying to get a better feel of depletion region because it gives me a way to visualize or model things in that region. It all sounds very easy to understand but when you go deeper it doesn't seem so obvious. Probably I have to go into band theory and such to really see what is happening?
Aren't you missing the Coulomb forces on a charged particle here? As far as I understand, the negative ions collect at the junction until there's enough force to repel the free electrons on the N side by the Coulomb force (both have like charges); this is the equilibrium (balanced) state. Here's a good link: PN Junction
{ "language": "en", "url": "https://physics.stackexchange.com/questions/280530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Lagrangian and finding equations of motion I am given the following lagrangian: $L=-\frac{1}{2}\phi\Box\phi\color{red}{ +} \frac{1}{2}m^2\phi^2-\frac{\lambda}{4!}\phi^4$ and the questions asks: * *How many constants c can you find for which $\phi(x)=c$ is a solution to the equations of motion? Which solution has the lowest energy (ground state)? *My attempt: since lagrangian is second order we have the following for the equations of motion: $$\frac{\partial L}{\partial \phi}-\frac{\partial}{\partial x_\mu}\frac{\partial L}{\partial(\partial^\mu \phi)}+\frac{\partial^2}{\partial x_\mu \partial x_\nu}\frac{\partial^2 L}{\partial(\partial^\mu \phi)\partial(\partial^\nu \phi)}=0 $$ then the second term is zero since lagrangian is independent of the fist order derivative. so we will end up with: $$\frac{\partial L}{\partial \phi}=-\frac{1}{2} \Box \phi+m^2\phi-\frac{\lambda}{3!}\phi^3$$ and:$$\frac{\partial^2}{\partial x_\mu \partial x_\nu}\frac{\partial^2 L}{\partial(\partial^\mu \phi)\partial(\partial^\nu \phi)}=-\frac{1}{2}\Box\phi$$ so altogether we have for the equations of motion: $$-\frac{1}{2}\Box\phi+m^2\phi-\frac{\lambda}{6}\phi^3-\frac{1}{2}\Box\phi=0$$ and if $\phi=c$ where "c" is a constant then $\Box\phi=0$ and then the equation reduces to $$m^2\phi-\frac{\lambda}{6}\phi^3=0$$ which for $\phi=c$ gives us 3 solutions:$$c=-m\sqrt{\frac{6}{\lambda}}\\c=0\\c=m\sqrt{\frac{6}{\lambda}}$$ My question is is my method and calculations right and how do I see which one has the lowest energy (ground state)? so I find the Hamiltonin for that?
I just want to add to this discussion that the book has no typo: according to page 30, kinetic terms are bilinear, meaning that they have exactly two fields, so the kinetic terms in this case are: $T=-\frac{1}{2}\phi\Box\phi+\frac{1}{2}m^2\phi^2$
{ "language": "en", "url": "https://physics.stackexchange.com/questions/280904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Does the Dirac equation ever get used in Physical Chemistry? I'm just curious as to know if there are any examples in physical chemistry or condensed matter physics where the Dirac equation is preferable to the Shrodinger equation for making predictions on the material at hand?
Graphene is an example of a material that needs the Dirac equation. The electron band structure of this material has a closed gap, and some electrons behave as if they have "mass = 0", which can only be treated with the Dirac equation. I don't know if this affects the chemical properties, but it surely affects the electric ones.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/280978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
What IS the precise angle of repose of Cadbury Creme Eggs, anyways? So, in What If, made by xkcd guy Randall Munroe, an off-handed joke about his famous love of Cadbury Creme Eggs was made. In the image's mouseover text, Randall jokes his life's dream is to own enough Cadbury Eggs to determine their precise angle of repose. For those who don't know, an object's angle of repose is the angle of the slope of a pile of that thing on the ground. Here's a subpar MS Paint reference. So, at room temperature, with the wrappers on, what is the precise angle of repose for a standard Cadbury Creme Egg?
Packaging engineer here who legitimately needed the answer to this question. The answer is roughly 25 degrees. Note the angle of repose is defined from the ground plane, opposite to your diagram. I suppose this makes me the sad engineer stuck in a Cadbury factory.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/281236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Understanding tension based on assumptions of pulley system If we consider a simple pulley system with two masses hanging on each end of a MASSLESS and INEXTENSIBLE string around a MASSLESS and FRICTIONLESS pulley, how then can one reason that the tension at each end of the string must be the same? My own reasoning: MASSLESS ROPE means that for any segment of the rope with tensions $T_1$ and $T_2$ we have that $\sum F = T_2 - T_1 = 0$ (since $m = 0$) and thus the tensions must be the same, on a non-curved rope at least! INEXTENSIBLE means that no energy can be stored in the string; however, I fail to see how this is a necessary condition (for equal tension). MASSLESS PULLEY means that no rotational inertia exists, and thus no force can alter the tension of the string (?) FRICTIONLESS PULLEY is hard for me to figure. Needless to say, I feel quite at a loss conceptually!
Since the rope is massless, and since two identical masses are attached to rope-ends, then as far as forces on rope are concerned, the problem has left-right symmetry. This symmetry itself assures you that tension in both sides of the rope must be equal. This is true whether or not the pulley is frictionless.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/281406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Why is $p_y$ conserved in the Landau gauge when we know the electron moves in circles? Consider cyclotron motion in the $xy$-plane, where the magnetic field is $\vec{B}=(0,0,B)^{T}$. In the Landau gauge, we have $\vec{A}=(0,Bx,0)^T$ and we obtain the Hamiltonian $$H=\frac{\hat{p}_x^2}{2m}+\frac{1}{2m}\left(\hat{p}_y-\frac{eB\hat{x}}{c}\right)^2,$$ where $m$ is the mass, $-e$ is the charge, and $c$ is the speed of light. This is also called the translationally invariant gauge because $\hat{p}_y$ is a conserved quantity in this Hamiltonian. Now I am confused a bit here by physical insight rather than mathematical derivation. How can we have a Hamiltonian for which $p_y$ is conserved when we know that the electron moves in circles? How is this achieved only by a gauge transformation, even without a coordinate transformation?
The short answer is that one must distinguish between the canonical/conjugate momentum $\hat{p}_{\mu}$ and the kinetic/mechanical momentum $m\hat{v}_{\mu} ~=~ \hat{p}_{\mu} - qA_{\mu}(\hat{x})$, cf. e.g. this post.
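To make the distinction concrete for the Hamiltonian quoted in the question (a short sketch; the signs are read off directly from that Hamiltonian via Hamilton's equations): $$\dot p_y=-\frac{\partial H}{\partial y}=0,\qquad m v_y=m\,\frac{\partial H}{\partial p_y}=\hat p_y-\frac{eB}{c}\,\hat x .$$ So $p_y$ is conserved simply because nothing in $H$ depends on $y$, but it only fixes the centre of the orbit, $x_0=c\,p_y/(eB)$; the kinetic momentum $m v_y=\frac{eB}{c}\,(x_0-\hat x)$ keeps oscillating as the electron circles, exactly as intuition demands.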
{ "language": "en", "url": "https://physics.stackexchange.com/questions/281687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Self propelling vacuum container in water If I understand correctly: a pressurized container can propel itself if you would take off the "lid" because there is now an open end that can no longer apply a normal force for the pushing gas, resulting in a net force at the other end of the container. I would say the concept above applies if the container would be in space as well as if it would be in a medium such as water. Now my question is: if we would now take a vacuum container underwater (I assume a vacuum container in space would just be called an empty container) and we would remove the lid, would it also be propelled (in the direction of the lid now of course)? Intuitively, one part of me says yes, as long as the pressure difference with respect to the water is the same, the resulting force should be of equal magnitude in the opposite direction. However, another part of me says no, a low pressure inside the container would just decrease the time it takes for the container to fill up with water, and besides, the water rushing in would push the closed end of the container, resulting in no net displacement. This little thought-experiment has been bugging me for the past couple of days so any input would be highly appreciated!
It would move in the opposite direction. This is because the pressure on the outside of the can, and hence the force it exerts on the can, is greater than the force exerted on the inside of the can, where the pressure is lower. The net force is in the direction opposite to that for the pressurized can, so it moves in the opposite direction.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/281788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why do we feel cool when we turn our fans on? It is a question that came to me, but evaporation doesn't seem a nice answer. Please help.
Two main mechanisms help cool your body when a fan blows on it: 1. Forced convection: Newton's cooling law tells us that an object at temperature $T$ surrounded by a cooling medium at $T_{amb}$ will lose heat at a rate of: $$\dot{q}=hA(T-T_{amb})$$ Where $h$ is the heat transfer coefficient and $A$ the surface area between object and cooling medium. As long as $T>T_{amb}$ heat will be carried off the object by the medium (air, in this case). Conversely, the medium will heat up (conservation of energy principle). But by providing a constant flow of air at $T_{amb}$, as a fan does, $\dot{q}$ is maximised. The air speed also has the effect of increasing $h$ somewhat, causing greater values of $\dot{q}$ and thus better cooling. 2. Perspiration: We sweat because evaporative cooling helps keep us cool: evaporating water costs heat, the so-called Latent Heat of Evaporation. Fanning also enhances evaporative perspiration because the fresh air is low in moisture, which speeds up the mass transfer of water from skin to air, increasing the amount of heat needed to achieve this.
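For a rough sense of scale of the forced-convection effect alone, here is a small sketch; all the numbers (convection coefficients, skin area and temperatures) are order-of-magnitude assumptions for illustration, not measured values:

```python
# Illustrative numbers only: rough order-of-magnitude convection coefficients
# for still air (free convection) versus a fan (forced convection); the true
# values depend strongly on geometry and air speed.
H_STILL_AIR = 5.0    # W/(m^2 K)  (assumed)
H_FAN = 25.0         # W/(m^2 K)  (assumed)

A_SKIN = 1.7         # m^2, exposed body surface area (assumed)
T_SKIN = 33.0        # deg C, typical skin temperature (assumed)
T_AMB = 28.0         # deg C, warm room (assumed)

def heat_loss(h, area=A_SKIN, t_skin=T_SKIN, t_amb=T_AMB):
    """Convective heat loss rate in watts, from Newton's law of cooling."""
    return h * area * (t_skin - t_amb)

print(heat_loss(H_STILL_AIR))   # ~43 W without the fan
print(heat_loss(H_FAN))         # ~213 W with the fan blowing
```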
{ "language": "en", "url": "https://physics.stackexchange.com/questions/281875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Mix of oil and water under pressure I'm not a physicist so please ignore my ignorance. I'm wondering what would happen if: Imagine a mug with hollow handle. Now, one half of that mug is filled with water, another half with oil (of some kind). If i seal that mug and apply a lot of pressure on top (where the opening is), taking into account oil and water have different characteristics would oil start flowing through handle and back to the surface or simply nothing would happen? EDIT: Basically, my idea was without high temperatures - would high pressure cause oil to perhaps heat a bit, expand a bit and start flowing through handle to the bottom of the mug (literally, a mug shaped container) and then back to the surface? Would a high static pressure be enough to create a motion of oil through the water taking into account different properties (and possibly different reactions to pressure) of those different liquids?
I worked with a wide range of drilling fluids in the past and at the time they were water mud with oil in them and lessor times oil mud with water in it. Daily, multiple times a battery of tests were run and analyzed by sometimes multiple people. Pressures and heat were all part of the testing usually to do the opposite of the question. We had to picture what was happening down hole and believe me at times there is plenty of pressure on an emulsion that is breaking down. In the testing we found ALWAYS we needed to keep the pumps running and the pressure on so as not to go back to two phase which is just the opposite of what you hope can happen. There was motion before or flow even. It was just that the emulsion was always too strong to break and the added pressure from the flow just made that emulsion so much stronger. I just do not see any way your premise could work. It is imaginative and had me pondering it for some time this night.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/281978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Schwarzschild metric in expanding Universe In Schwarzschild coordinates the line element of the Schwarzschild metric is given by: $$ds^2=\Big(1-\frac{r_s}{r}\Big)\ c^2dt^2-\Big(1-\frac{r_s}{r}\Big)^{-1}dr^2-r^2(d\theta^2+\sin^2\theta\ d\phi^2).$$ In the asymptotic limit where $r\gg r_s$ the Schwarzschild metric becomes: $$ds^2=c^2dt^2-dr^2-r^2(d\theta^2+\sin^2\theta\ d\phi^2),$$ which is the Minkowski metric of flat spacetime. But observations show that real astronomical objects are embedded in an expanding spatially flat FRW metric given in polar co-ordinates by: $$ds^2=c^2dt^2-a^2(t)\ dr^2-a^2(t)\ r^2(d\theta^2+\sin^2\theta\ d\phi^2).$$ Therefore maybe the Schwarzschild metric should be given by: $$ds^2=\Big(1-\frac{r_s}{r}\Big)\ c^2dt^2-a^2(t)\Big(1-\frac{r_s}{r}\Big)^{-1}dr^2-a^2(t)\ r^2(d\theta^2+\sin^2\theta\ d\phi^2).$$ Perhaps this metric would only be useful to describe a gravitational system whose size is comparable to the Universe itself?
real astronomical objects are embedded in an expanding spatially flat FRW metric Not really. If you think of the cosmos as clumps of matter on top of a FRW background, you're counting the same matter twice: once in a perfectly uniform distribution and then again in its actual clumped location. You can start with FRW if you plan to construct a so-called swiss cheese solution by completely removing spherical regions of matter and replacing them with inhomogeneous spherically symmetric geometries with the same mass (such as Schwarzschild black holes). In that case you aren't counting anything twice, you're just treating some of the matter as homogeneous and some of it as clumped. If you want to build the whole cosmos out of clumped matter, then you don't start with FRW. You start with a Minkowski or (anti) de Sitter vacuum, and you end up with an FRW geometry when all of the matter is added. FRW is essentially a bunch of Schwarzschild patches sewn together and then smoothed to remove the local bumps.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/282165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Why ket and bra notation? So, I've been trying to teach myself about quantum computing, and I found a great YouTube series called Quantum Computing for the Determined. However. Why do we use ket/bra notation? Normal vector notation is much clearer (okay, clearer because I've spent a couple of weeks versus two days with it, but still). Is there any significance to this notation? I'm assuming it's used for a reason, but what is that reason? I guess I just don't understand why you'd use ket notation when you have perfectly good notation already.
I think there is a practical reason for ket notation in quantum computing, which is just that it minimises the use of subscripts, which can make things more readable sometimes. If I have a single qubit, I can write its canonical basis vectors as $\mid 0 \rangle$ and $\mid 1 \rangle$ or as $\mathbf{e}_0$ and $\mathbf{e}_1$, it doesn't really make much difference. However, now suppose I have a system with four qubits. Now in "normal" vector notation the basis vectors would have to be something like $\mathbf{e}_{0000}$, $\mathbf{e}_{1011}$, etc. Having those long strings of digits typeset as tiny subscripts makes them kind of hard to read and doesn't look so great. With ket notation they're $\mid 0000\rangle$ and $\mid 1011\rangle$ etc., which improves this situation a bit. You could compare also $\mid\uparrow\rangle$, $\mid\to\rangle$, $\mid\uparrow\uparrow\downarrow\downarrow\rangle$, etc. with $\mathbf{e}_{\uparrow}$, $\mathbf{e}_{\to}$, $\mathbf{e}_{\uparrow\uparrow\downarrow\downarrow}\,\,$ for a similar issue.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/282286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43", "answer_count": 9, "answer_id": 0 }
How would you include gravity in a momentum problem? Say you have a big ball of mass $m_1$ and a little ball on top of that of mass $m_2$ (assume they are a small distance apart, like $1~\mathrm{mm}$). Now lets drop these from a height of $h$ so that the big ball will bounce off the ground and collide into the little ball in an elastic collision. Now I know gravity would play a key role in this example but how would one perform calculations with it? I know $F=p/t$ and momentum will not be conserved since there is an external force (gravity). So, knowing this how can one determine the height each ball will rise after the collision?
Since the collision is elastic, you have 2 equations right off the bat. First, we have the conservation of kinetic energy at the moment when the bigger ball collides with the lighter ball as the heavier ball moves up (with the common factor of 1/2 dropped): $$m u_{1}^{2} + M u_{2}^{2} = m v_{1}^{2} + M v_{2}^{2}$$ where $u$ is velocity before the collision and $v$ is velocity after the collision, and $m$ and $M$ are the small ball and big ball respectively. Next, we have the conservation of momentum at this moment as well: $$m u_{1}+ M u_{2} = m v_{1} + M v_{2} $$ The rearrangement of the 2 equations into $v_{1}$ and $v_{2}$ will be left as an exercise for the reader (always wanted to type that!) and as a hint, $$v_{1}=\frac{m-M}{m+M}u_{1}+\frac{2M}{m+M}u_{2}$$ Note that I have not included gravity yet. To include the contribution from gravity, include it as a gravitational potential energy term: $mgh=\frac{1}{2}mv^{2}$
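Putting the pieces together for the original two-ball drop, here is a minimal sketch; the masses and drop height are example values chosen only for illustration. Gravity enters twice — through $u=\sqrt{2gh}$ just before the bounce and through $h'=v^2/2g$ for the rise afterwards — while the (assumed instantaneous) collision itself is handled by the conservation equations above:

```python
import math

# Sketch of the two-ball drop: big ball (mass M) below, small ball (mass m)
# just above it; the ground bounce and the ball-ball collision are both treated
# as elastic, and air resistance and the tiny initial gap are ignored.
# All numbers are example values only.
g = 9.81      # m/s^2
h = 1.0       # m, drop height
M = 0.60      # kg, big ball
m = 0.06      # kg, small ball

u = math.sqrt(2 * g * h)     # common speed just before the ground, from mgh = mv^2/2

# Just after the big ball's elastic bounce off the ground it moves up at +u,
# while the small ball is still moving down at -u.  1D elastic collision:
u_big, u_small = +u, -u
v_small = (m - M) / (m + M) * u_small + 2 * M / (m + M) * u_big
v_big = (M - m) / (m + M) * u_big + 2 * m / (m + M) * u_small

# Gravity re-enters only through the rise after the collision: h' = v^2 / (2g).
print(v_small, v_small**2 / (2 * g))   # small ball: approaches 3u and 9h as M >> m
print(v_big, v_big**2 / (2 * g))
```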
{ "language": "en", "url": "https://physics.stackexchange.com/questions/282376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Equations of motion for a free particle on a sphere I derived the equations of motion for a particle constrained on the surface of a sphere Parametrizing the trajectory as a function of time through the usual $\theta$ and $\phi$ angles, these equations read: $$ \ddot{\theta} = \dot{\phi}^2 \sin \theta \cos \theta $$ $$ \ddot{\phi} = - 2 \dot{\phi} \dot{\theta} \frac{1}{\tan \theta} $$ I've obtained them starting from the Lagrangian of the system and using the Euler-Lagrange equations. My question is simple: is there a way (a clever substitution, maybe), to go on and solve the differential equations? I would be interested even in a simpler, partially integrated solution. Or is a numerical solution the only way?
Considering you are aware of conservation of total angular momentum in a sphere (if not, I will prove it below), from the lagrangian I think you are using you get: $$\mathcal{L}=\dfrac{1}{2}R^2\left(\dot\theta^2+\sin^2\theta\,\dot\phi^2\right)$$ $$l_\theta=\dfrac{\partial\mathcal{L}}{\partial\dot\theta}=\dot\theta$$ $$l_\phi=\dfrac{\partial\mathcal{L}}{\partial\dot\phi}=\sin^2\theta\,\dot\phi=const\quad \left(\text{since }\dfrac{\partial\mathcal{L}}{\partial\phi}=0\right)$$ for $r=R\,$ fixed for a sphere of radius $R$. You can see $l_\theta$ and $l_\phi$ are the conjugated momenta associated to $\theta$ and $\phi$, respectively. The total angular momentum of the system $L$ obeys the following: $$L^2=mR^2\left(\dot\theta^2+\sin^2\theta\,\dot\phi^2\right)$$ Defining $l^2=\dfrac{L^2}{mR^2}$ and using what we found above: $\,\dot\theta^2=l_\theta^2\quad\text{and}\quad\,\dot\phi^2=\dfrac{l_\phi^2}{\sin^4\theta}$. Thus: $l^2=l_\theta^2+\dfrac{l_\phi^2}{\sin^2\theta}$ We would like to show that this total angular momentum is conserved as well. Noting that differentiating respect to a parameter $\lambda$ we get this is conserved for the curve parametrized by $\lambda$: $$\dfrac{d\,l^2}{d\lambda}=\dfrac{d}{d\lambda}\,\dot\theta^2+\dfrac{d}{d\lambda}\left(\dfrac{l_\phi^2}{\sin^2\theta}\right)=2\left(\ddot\theta-\dot\phi^2\sin\theta\cos\theta\right)\dot\theta=0$$ because the result involves the equation of motion for $\theta$ you already computed, when is equal to zero. Furthermore, $$\dot\theta=\sqrt{l^2-\dfrac{l_\phi^2}{\sin^2\theta}}$$ $$\dot\phi=\dfrac{l_\phi}{\sin^2\theta}$$ and from here you can as well try to integrate both equations separately. My recommendation would be trying to find $\phi=\phi(\theta)$, so for instance you can make: $$\dot\phi=\dfrac{d\phi}{d\theta}\dot\theta$$ What's more: $$\dfrac{d\phi}{d\theta}\sqrt{l^2-\dfrac{l_\phi^2}{\sin^2\theta}}=\dfrac{l_\phi}{\sin^2\theta}$$ $$\Rightarrow\quad\dfrac{d\phi}{d\theta}=\dfrac{l_\phi}{l}\dfrac{1}{\sin\theta\sqrt{\sin^2\theta-\left(\frac{l_\phi}{l}\right)^2}}$$ Finally, integrating respect to $\theta$ leads to : $$\phi(\theta)=\phi_0+\arctan\left(\dfrac{\frac{l_\phi}{l}\cos\theta}{\sqrt{\sin^2\theta-\left(\frac{l_\phi}{l}\right)^2}}\right)$$ You can use any plotter you know for seeing how this can give you portions of arc of a sphere (parallels and meridians, e.g.) for a Parametric 3D Plot, setting $(r=R,\theta\in[0,\pi],\phi=\phi(\theta))$. You can get, for example, the equator for $l_\phi=0$.
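A partially integrated solution like this can also be cross-checked numerically: integrating the two equations of motion from the question and monitoring $l_\phi=\sin^2\theta\,\dot\phi$ and $l^2=\dot\theta^2+l_\phi^2/\sin^2\theta$ should give constants. A minimal sketch (the initial conditions are arbitrary, chosen away from the poles):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the two equations of motion from the question on the unit sphere
# and check that l_phi = sin^2(theta)*phi_dot and
# l^2 = theta_dot^2 + l_phi^2/sin^2(theta) stay constant along the trajectory.
def rhs(t, y):
    theta, phi, theta_dot, phi_dot = y
    return [theta_dot,
            phi_dot,
            phi_dot**2 * np.sin(theta) * np.cos(theta),
            -2.0 * phi_dot * theta_dot / np.tan(theta)]

y0 = [1.2, 0.0, 0.3, 0.8]          # theta, phi, theta_dot, phi_dot (away from the poles)
sol = solve_ivp(rhs, (0.0, 20.0), y0, rtol=1e-10, atol=1e-12)

theta, phi, theta_dot, phi_dot = sol.y
l_phi = np.sin(theta)**2 * phi_dot
l_sq = theta_dot**2 + l_phi**2 / np.sin(theta)**2

print(l_phi.min(), l_phi.max())    # constant to integration tolerance
print(l_sq.min(), l_sq.max())      # constant to integration tolerance
```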
{ "language": "en", "url": "https://physics.stackexchange.com/questions/282513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Why do superconductors conduct electricity without resistance? Many authors have suggested that persistent currents in superconducting rings arise from the energy gap in the single-particle spectrum. Indeed, the argument has been put forward many times on this site! It is usually suggested that because there is an energy gap, Cooper pairs are prevented from scattering out of the condensate. However, this cannot be correct. For one, high temperature superconductors have d-wave symmetry, which implies a node (i.e. it takes zero energy to excite an electron along this direction). This seems to suggest that a complete gap is not necessary for persistent currents. Furthermore, it has been shown by Abrikosov and Gorkov that when one introduces magnetic impurities into an s-wave superconductor, the gap closes before persistent currents are destroyed. Therefore, the single-particle gap is not a necessary condition for superconductivity and any attempt to explain persistent currents by appealing to an energy gap in the single-particle spectrum cannot be correct. Is there therefore a simple way to understand why persistent currents exist in a superconductor intuitively? What are the necessary requirements?
A superconductor is characterized by two main properties: * *zero resistivity, and *the Meissner effect. Equivalently, these can be stated more succinctly as * *$E = 0$ (remember that resistivity is defined as $\frac{E}{j}$), and *$B = 0$. So even more succinctly: superconductors are characterized by no internal electromagnetic fields! What is the intuitive reason for this? It can be understood from the fundamental/microscopic property of superconductors: superconductors can be described in terms of superpositions of electrons and holes. Note that these two components have different electric charges, hence such a superposition can only be coherent if nothing couples to the charges inside a SC! Indeed, if there were an electromagnetic field inside the SC, it would couple differently to the electron and hole, decohering the superposition and destroying the SC. [Of course this doesn't do full justice to the theory of superconductivity, since this reasoning doesn't explain why we have superpositions of holes and electrons. Rather, my point is that once we start from that, then the aforementioned is hopefully intuitive.]
{ "language": "en", "url": "https://physics.stackexchange.com/questions/282618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 5, "answer_id": 0 }
Proof of equality of chemical potentials and temperature for diphasic system My question is the following: I have a system at (T,V,N) which is composed of two phases: (T1,V1,N1) and (T2,V2,N2). Initially I wanted to prove that $ \mu_1 = \mu_2 $, but I had trouble. To prove it I use the fact that $F=F_1+F_2$ must be minimised at equilibrium because we are at fixed (T,V,N). So we have: $dF=dF_1+dF_2$. I can write: $V=V_1+V_2$, $N=N_1+N_2$, and as N and V are fixed, $dV_2=-dV_1$, $dN_2=-dN_1$. So: $$dF=(-P_1+P_2)dV_1+(\mu_1-\mu_2)dN_1-S_1dT_1-S_2dT_2=0$$ Then I wanted to say "well, $N_1$, $V_1$, $T$ are independent variables, so I have to cancel the terms in front of $dN_1$, $dV_1$, and I would have $P_1=P_2$ and $\mu_1=\mu_2$". But there are these terms in $dT_1$ and $dT_2$ that I don't know how to replace by $dT$...
Since in your case temperature does not remain constant, there is no point in trying to minimize $F$. If the reaction occurs in an isolated container, then the total internal energy of the combined system, $U=U_1+U_2$, remains constant. If this is the case then it is proper to maximize the entropy, $S$, which is a function of $U,V,N$ ($F$ is obtained by a Legendre transform of this fundamental relation). $dS=(\frac{1}{T_1}-\frac{1}{T_2})dU_1+(\frac{p_1}{T_1}-\frac{p_2}{T_2})dV_1+(-\frac{\mu_1}{T_1}+\frac{\mu_2}{T_2})dN_1=0$, at equilibrium. Refer: Thermodynamics by Callen.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/282732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Problem in understanding the derivation of Bernoulli's principle I am trying to understand the derivation of Bernoulli's principle by using the conservation of energy. This is the sketch I will be referring to. I am stuck in understanding a seemingly basic step in finding the total work done by the fluid without gravitational work. The fluid is flowing to the right, the two forces that are doing work are $p_1A_1$ and $p_2A_2$, and the two works are equal to $W_1=p_1A_1s_1$ and $W_2=p_2A_2s_2$. Now this is the part I do not understand. The total work done is $$W_1-W_2=(p_1-p_2)\Delta V.$$ Why is that when both forces act in the same direction? If we are finding the difference in work done (increase in energy) wouldn't it be $$W_2-W_1=(p_2-p_1)\Delta V~?$$ However if I go this way my signs at the end don't match up with according dynamical and hydrostatical pressures.
Ideal fluids are, by definition, continuous bodies which support only compressive stresses. It means that a portion of fluid, say, a volume with regular boundary, is such that every small area of its boundary receives a surface force (proportional to the area) from the external part of the fluid, and this force is always directed towards the interior of the portion of fluid and is orthogonal to its boundary. A portion of fluid may move only if the sum of these compressive stresses is not vanishing. We cannot pull (ideal) fluids, we can only push them! In your example, you are considering an approximately cylindrical portion of fluid bounded by two lateral surfaces A1 and A2. The remaining part of the boundary is irrelevant for the computation of the work due to the stresses on this portion of fluid, since these forces are normal to the velocity of the particles of fluid. As the forces are always compressive, the forces on the lateral surfaces must be directed along opposite directions. Since the fluid moves from the left to the right, the force on A1 must have intensity greater than the one on A2. This difference moves (pushes) the cylinder.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/282971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Intensity of light after it passes through a convex lens When a parallel beam of light falls on a convex lens and converges to its focus, does the intensity of light change?
Yes, the intensity changes, because intensity is just energy per area per second. You might be thinking of the related concept of "specific intensity", which is also called "brightness", which is the energy per second per area per incident solid angle (and can also be per frequency bin, but that's not of importance here). The key part is the "per incident solid angle", because it means that the specific intensity of a light source, say our Sun, does not change when you get farther away. Instead, the Sun just looks smaller, so occupies a smaller incident solid angle, and that's why it looks less bright though has the same formal "brightness" in each tiny incident solid angle bin. The specific intensity does not change in a lens, what happens is the apparent size of the source is distorted. So using a lens to burn a spot on paper is like making the Sun look larger at that spot, while keeping the same specific intensity. But that means a larger intensity, because intensity is integrated over incident solid angle.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/283064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does the unit hypercube in Minkowski space always have the 4-volume of 1? Suppose we have a unit hypercube in Minkowski space defined by the column vectors in the identity matrix $$ \mathbf I = \begin{bmatrix} 1 & 0 & 0 & 0 \\[0.3em] 0 & 1 & 0 & 0 \\[0.3em] 0 & 0 & 1 & 0 \\[0.3em] 0 & 0 & 0 & 1 \end{bmatrix}$$ Now; the length of one edge would have units of time, but this is solved by multiplying the time interval with the speed of light $c = 1.$ Obviously, this hypercube would have the 4-volume of 1, as seen by its determinant: $$\det \left(\begin{bmatrix} 1 & 0 & 0 & 0 \\[0.3em] 0 & 1 & 0 & 0 \\[0.3em] 0 & 0 & 1 & 0 \\[0.3em] 0 & 0 & 0 & 1 \end{bmatrix}\right) = \det \mathbf I = 1$$ Now, I have performed some numerical testing on the using the Lorentz transformation written as a matrix, $$ \left[ \begin{array}{c} t' \\x'\\ y' \\ z' \\\end{array}\right] = \\ \left[ \begin{array}{c} t \\x\\ y \\ z \\\end{array}\right]\left[\begin{array}{cccc} \frac{1}{\sqrt{-v_x^2-v_y^2-v_z^2+1}} & -\frac{v_x}{\sqrt{-v_x^2-v_y^2-v_z^2+1}} & -\frac{v_y}{\sqrt{-v_x^2-v_y^2-v_z^2+1}} & -\frac{v_z}{\sqrt{-v_x^2-v_y^2-v_z^2+1}} \\ -\frac{v_x}{\sqrt{-v_x^2-v_y^2-v_z^2+1}} & \frac{\left(\frac{1}{\sqrt{-v_x^2-v_y^2-v_z^2+1}}-1\right) v_x^2}{v_x^2+v_y^2+v_z^2}+1 & \frac{v_x v_y \left(\frac{1}{\sqrt{-v_x^2-v_y^2-v_z^2+1}}-1\right)}{v_x^2+v_y^2+v_z^2} & \frac{v_x v_z \left(\frac{1}{\sqrt{-v_x^2-v_y^2-v_z^2+1}}-1\right)}{v_x^2+v_y^2+v_z^2} \\ -\frac{v_y}{\sqrt{-v_x^2-v_y^2-v_z^2+1}} & \frac{v_x v_y \left(\frac{1}{\sqrt{-v_x^2-v_y^2-v_z^2+1}}-1\right)}{v_x^2+v_y^2+v_z^2} & \frac{\left(\frac{1}{\sqrt{-v_x^2-v_y^2-v_z^2+1}}-1\right) v_y^2}{v_x^2+v_y^2+v_z^2}+1 & \frac{v_y v_z \left(\frac{1}{\sqrt{-v_x^2-v_y^2-v_z^2+1}}-1\right)}{v_x^2+v_y^2+v_z^2} \\ -\frac{v_z}{\sqrt{-v_x^2-v_y^2-v_z^2+1}} & \frac{v_x v_z \left(\frac{1}{\sqrt{-v_x^2-v_y^2-v_z^2+1}}-1\right)}{v_x^2+v_y^2+v_z^2} & \frac{v_y v_z \left(\frac{1}{\sqrt{-v_x^2-v_y^2-v_z^2+1}}-1\right)}{v_x^2+v_y^2+v_z^2} & \frac{\left(\frac{1}{\sqrt{-v_x^2-v_y^2-v_z^2+1}}-1\right) v_z^2}{v_x^2+v_y^2+v_z^2}+1 \\ \end{array}\right]$$ and the determinant of the resulting matrix always seem to be $1 \forall \{v_x, v_y, v_z\}$, even when $\sqrt{v_x^2 + v_y^2 + v_z^2} >1$, indicating that this "cube" will always have the same 4-volume, regardless of the inertial frame of reference (including tachyonic ones). It seems that if the 4-volume of an arbitrary "hypercube" is 1 in one intertial reference frame, it must also have the 4-volume equal to 1 in every other inertial reference frame. Is this really true? How would one prove a such proposition?
In terms of matrix components, Lorentz transformations have matrices that satisfy $$\eta = \Lambda^T \eta \Lambda$$ where $\eta$ is the Minkowski metric. Taking determinants of each side, we have $$|\eta| = |\Lambda^T| |\eta| |\Lambda| = |\Lambda|^2 |\eta|$$ which implies that $|\Lambda| = \pm 1$. Since a transformation by $A$ changes volume by $|A|$, this implies that Lorentz transformations preserve the (absolute) volume.
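For what it's worth, the argument is easy to confirm numerically for the boost matrix written out in the question. The sketch below is restricted to subluminal velocities, $|\vec v|<1$ in units of $c$ (the tachyonic case would make $\gamma$ imaginary, so it is left out):

```python
import numpy as np

# Numerical illustration of the argument above: a general boost Lambda(v)
# satisfies Lambda^T . eta . Lambda = eta, and therefore |det Lambda| = 1.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def boost(v):
    """Boost matrix for velocity v = (vx, vy, vz) in units of c, with |v| < 1."""
    v = np.asarray(v, dtype=float)
    v2 = v @ v
    gamma = 1.0 / np.sqrt(1.0 - v2)
    L = np.empty((4, 4))
    L[0, 0] = gamma
    L[0, 1:] = -gamma * v
    L[1:, 0] = -gamma * v
    L[1:, 1:] = np.eye(3) + (gamma - 1.0) * np.outer(v, v) / v2
    return L

L = boost([0.3, -0.5, 0.6])
print(np.linalg.det(L))                 # 1.0 up to rounding
print(np.allclose(L.T @ eta @ L, eta))  # True: the Minkowski metric is preserved
```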
{ "language": "en", "url": "https://physics.stackexchange.com/questions/283186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are there any additional fundamentals of physics in addition to space-time, energy, mass, and charge? What do you consider the fundamental quantities in physics to be? By fundamentals, I mean quantities that cannot be described by a combination of other quantities. Fundamentals are things that just are.
With fundamental quantities, I could imagine that you mean properties that differentiate various particles. In particle physics, there are multiple charges: * *electric charge *color charge *weak isospin *mass (“Higgs charge” so to speak) Then also discrete symmetries like parity and charge conjugation that give you more quantum numbers: * *parity *charge conjugation parity *spin *$g$-parity (although that is a combination of the other ones) Then one could look at like the core concepts of QFT: * *spinor fields *gauge fields *spin-0 fields All this needs the spacetime with its curvature and the various symmetry group manifolds. One could also take things like the action to be fundamental. From the action or the Lagrange density one can derive the equations of motion. Using the action one can compute (using Feynman's path integral') all the possible interactions. Using lattice field theory one can simulate it on the computer. It is not completely clear how the microscopic theory of quantum chromodynamics (QCD) generates the mesoscopic degrees of freedom that we see: the proton, neutron and other hadrons. It is believed that the microscopic theory can explain it. But is the theory fundamental if one cannot (yet) compute how the emerging structures are going to be? I think it depends on your perspective. You can take the stance that the standard model is an effective theory which one gets by integrating out all the string theory physics. Then string theory would be fundamental.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/283690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
If you hold a compass needle vertical, does it point down or up depending on which hemisphere you are in? Usually our compass is held horizontally, and in the northern hemisphere it will point in the direction of the north of the Earth (actually towards the south pole of the Earth's 'magnet'). But looking closer at some points, the magnetic field is not only horizontally aligned but also vertically. So from some point of view, the direction is not the north or the south of the Earth, but into or out of the Earth. But does this influence a compass in the same way when it is held vertically?
The north end of the compass needle is pulled down towards the Earth when you hold it in the normal horizontal position in the northern hemisphere. So much so, in fact, that the south end needs to be slightly heavier to balance it. If you bring that same compass to Australia, the south end, already weighted, will be pulled down even further, perhaps even dragging on the base of the compass. So you need to reverse the weights in any compass used in the southern hemisphere. A chart of magnetic dip shows contour lines along which the dip measured at the Earth's surface is equal; these are called isoclinic lines. Magnetic inclination, or dip angle, is the angle that the Earth's magnetic field makes with the horizontal. If you hold the compass vertically in the northern hemisphere, the North Pole end should be pulled downwards. (Image source: How Magnets Work.) A dip needle is just like a conventional compass, but instead of holding it horizontally, it is held vertically. It is a magnetic needle used for navigational purposes just like a compass, but is used predominantly when traveling around the north and south poles. Instead of measuring horizontal magnetic deflection, the dip needle measures vertical magnetic inclination. When over the equator, the magnetic field of the Earth is parallel to the surface of the Earth. At the poles, or near them, a conventional compass is very unreliable.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/283761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 2, "answer_id": 0 }
Help with this geometrical approach to deriving the lens equation for weak lensing All images and quotations are from Schneider, Kochanek and Wambsganss. Here is an image of a typical weak lensing setup. Since $D_{ds}$ and $D_s$ are much larger than the extent of the lens and source plane, we can model the curvature of the light ray as a kink at the point of the lens. $\hat{\alpha}$ is the deflection angle. $\eta$ is the 2d position of the source on a source plane. $\xi$ is the ray impact parameter. Small angle approximations apply to the deflection angle. From the figure we can read off the geometric condition that $$\vec{\eta}=\frac{D_s}{D_d}\vec{\xi}-D_{ds}\vec{\hat{\alpha}}(\vec{\xi}).$$ I am struggling to understand where this has come from geometrically. Could someone please explain? For completeness I will include the rest of the derivation in case it aids any explanations. We introduce angular coordinates by $$\vec{\eta}=D_s\vec{\beta}$$ and $$\vec{\xi}=D_d\vec{\theta}.$$ Now we transform the first equation to $$\vec{\beta}=\vec{\theta}-\frac{D_{ds}}{D_s}\vec{\hat{\alpha}}(D_d\vec{\theta})=\vec{\theta}-\vec{\alpha}(\vec{\theta}).$$
Took me a while, but I think I figured it out. Let's use a distance $\Gamma$, the point on the source plane that the undeflected line of sight at angle $\theta$ would hit. In the first-order approximation $\Gamma=\theta D_s$ and $\eta=\Gamma-\alpha D_{ds}$. So that: $\Gamma=\theta D_s=\eta+\alpha D_{ds}$. And as $\theta=\xi/D_d$, $$\eta=\theta D_s-\alpha D_{ds}=\frac{\xi}{D_d}D_s-\alpha D_{ds},$$ giving the previous relation. I am sure there may be a way to prove this without considering $\Gamma$, but it explains the relation.
{ "language": "en", "url": "https://physics.stackexchange.com/questions/283873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How does an electron get deflected in a magnetic field while moving? I don't understand why the electron moves this way... e.g. a light object (crumpled paper) falling down until it gets hit by the wind will go parallel (at least for a few seconds) to the wind direction ... why not the electron?
Some basics at the beginning: * *an electron moving parallel to a magnetic field won't be deflected *a positron as well as a proton will be deflected in the direction opposite to that of an electron or an antiproton *during deflection these particles emit photons *losing energy, these particles slow down, the deflection follows a spiral path, and at the end the particles get stopped. Other important facts: * *all these particles have magnetic dipole moments and an intrinsic spin *due to the Einstein-de Haas experiment this spin has an angular momentum *the axis of the spin and the direction of the magnetic dipole moment are parallel or antiparallel *whether it is parallel or antiparallel is a convention, but if for an electron it is defined as parallel, then for an antiproton it is also parallel (and the antiproton will be deflected in the same direction as the electron), and for positrons and protons it will be antiparallel. How the electron gets deflected in a magnetic field while moving: to stick it together, one has to recognise that the external magnetic field will align the electron's magnetic dipole moment; this leads to the emission of photons, this to a misalignment, and so on. The game starts again as long as the electron has kinetic energy. For more detail see my elaboration "About the internal cause of the Lorentz force".
{ "language": "en", "url": "https://physics.stackexchange.com/questions/284081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What is the location of an object in expanding and finite space? I was wondering: does it mean the same thing to say that we are trying to locate a stationary object in expanding space as to say that we are trying to locate a moving object in a finite region of space?
You need to be clear about what you mean by "stationary" and "moving". If an object in expanding space(time) is stationary with respect to the expansion, it will appear to other such "stationary" observers to be moving away at the expansion rate. If that object is the only thing you have experimental access to you wouldn't be able to tell whether the spacetime is expanding or whether the object is just moving away. However, in expanding spacetime objects which are not stationary in this sense will tend eventually to come to rest with respect to the expansion. That is, over time, objects which are moving away at some rate different from the expansion rate will asymptote towards it. That won't be true in flat spacetime, where any rate of inertial motion is indefinitely sustainable. So if you see very many objects all moving away at a uniform expansion rate, expanding spacetime is a more natural description.
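The statement that non-stationary objects asymptote towards the expansion can be illustrated with the standard result that the peculiar velocity of a free particle decays as $1/a$. The toy model below is my own illustration, assuming a matter-dominated scale factor $a\propto t^{2/3}$ and an arbitrary initial peculiar velocity:

```python
import numpy as np

# Peculiar velocity of a free particle in an expanding FRW universe decays as 1/a.
# Toy matter-dominated model: a(t) ~ t^(2/3), normalized so that a = 1 at t = 1.
t = np.linspace(1.0, 100.0, 5)          # arbitrary time units
a = t ** (2.0 / 3.0)

v_pec0 = 1000.0                         # initial peculiar velocity in km/s (made up)
v_pec = v_pec0 / a                      # decays toward zero: the object comes to rest
                                        # with respect to the expansion (the Hubble flow)
for ti, vi in zip(t, v_pec):
    print(f"t = {ti:6.1f}   v_pec = {vi:8.2f} km/s")
```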
{ "language": "en", "url": "https://physics.stackexchange.com/questions/284313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If energy is quantized, does that mean that there is a largest-possible wavelength? Given Planck's energy-frequency relation $E=hf$, since energy is quantized, presumably there exists some quantum of energy that is the smallest possible. Is there truly such a universally-minimum quantum of $E$, and does that also mean that there is a minimum-possible frequency (and thus, a maximum-possible wavelength)?
since energy is quantized You have a misunderstanding here about what quantization means. At present, in our theoretical models of particle interactions, all the variables are continuous, both space-time and energy-momentum. This means they can take any value from the field of real numbers. It is the specific solution of the quantum mechanical equations, with given boundary conditions, that generates the quantization of energy. The same is true of classical differential equations as far as frequencies go: a sound frequency can take any value, and its quantization into specific modes depends on the specific problem and its boundary conditions. There do exist limits given by the values of the constants used in the quantum mechanical equations of elementary particles: the Planck length and the Planck time. "The reciprocal of the Planck time can be interpreted as an upper bound on the frequency of a wave. This follows from the interpretation of the Planck length as a minimal length, and hence a lower bound on the wavelength." These are at the limits of what we can probe in experiments and study in astrophysical observations, but that is another story.
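To attach numbers to those limits, here is a short sketch (my own illustration, using standard values of the constants) that evaluates the Planck time and Planck length and the corresponding bounds on frequency and wavelength:

```python
import math

# Fundamental constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant, J s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

t_planck = math.sqrt(hbar * G / c**5)   # Planck time, ~5.4e-44 s
l_planck = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m

f_max = 1.0 / t_planck                  # upper bound on frequency, ~1.9e43 Hz
lambda_min = l_planck                   # lower bound on wavelength

print(f"Planck time:    {t_planck:.3e} s")
print(f"Planck length:  {l_planck:.3e} m")
print(f"Max frequency:  {f_max:.3e} Hz")
print(f"Min wavelength: {lambda_min:.3e} m")
```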
{ "language": "en", "url": "https://physics.stackexchange.com/questions/284444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 3, "answer_id": 1 }
Tangential speed and Tangential velocity A slight confusion on terminology. Tangential speed refers to the linear speed when travelling along a circular path; it refers to the distance covered along the circular path in a given time. I have seen the term tangential velocity used interchangeably with tangential speed on various websites, but shouldn't tangential velocity refer to the displacement per unit time in that motion? Is there something wrong with my point? Thanks for helping.
In a 2D radial coordinate system, there are two orthogonal directions: radial and tangential. You could call these $\hat{r}$ and $\hat{\theta}$. Tangential velocity is the component of velocity in $\hat{\theta}$. It is still directional because it can be positive or negative. Tangential speed is the magnitude of this velocity. Although the magnitude of the velocity vector has a special name (speed), it's still okay to talk about velocity components or velocity magnitude and call it velocity. Most vectors don't have a special name for their magnitude, anyway. For example, the magnetic field vector and the magnetic field strength are $\vec{B}$ and $B$ and they could both be referred to as just magnetic field (the surrounding context should make it as clear as it needs to be whether magnitude or vector is meant).
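As a concrete illustration (my own example, not part of the original answer), the decomposition into $\hat{r}$ and $\hat{\theta}$ components can be written out for a 2D motion; the signed $\hat{\theta}$ component is the tangential velocity and its magnitude is the tangential speed:

```python
import numpy as np

def radial_tangential(position, velocity):
    """Decompose a 2D velocity into signed radial and tangential components."""
    r_hat = position / np.linalg.norm(position)    # unit vector in the radial direction
    theta_hat = np.array([-r_hat[1], r_hat[0]])    # unit vector rotated 90 deg counter-clockwise
    v_r = np.dot(velocity, r_hat)                  # signed radial component
    v_t = np.dot(velocity, theta_hat)              # signed tangential component (tangential velocity)
    return v_r, v_t

# Example: a particle on a circle of radius 2, moving purely counter-clockwise
pos = np.array([2.0, 0.0])
vel = np.array([0.0, 3.0])
v_r, v_t = radial_tangential(pos, vel)
print(v_r, v_t, abs(v_t))   # 0.0, 3.0 (tangential velocity), 3.0 (tangential speed)
```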
{ "language": "en", "url": "https://physics.stackexchange.com/questions/284566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Connecting Ammeter and Voltmeter in the circuit I am unable to comprehend why an ammeter is connected in series and a voltmeter in parallel in a circuit. My book doesn't give any explanation of this, nor am I able to understand it from the internet. Can someone please explain this to me (a beginner)?
An easy way to see it might be this: Voltmeter: needs to measure a potential difference, so you need to hook its ends to the two points whose voltage difference you want to measure. This means you need to put it in parallel. Ammeter: needs to measure a current, so you need to put it somewhere where all the current you want to measure will pass through it. This means you need to put it inside the circuit and thus in series.
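The same reasoning also explains the usual design requirements on the meters: an ammeter must have a very small internal resistance so that inserting it in series barely changes the current, and a voltmeter must have a very large internal resistance so that connecting it in parallel draws almost no current. The numbers below are made up purely for illustration:

```python
# Illustrative numbers only: a 10 V source driving two 1 kOhm resistors in series.
V = 10.0
R1 = R2 = 1000.0

# Ammeter in series: its small internal resistance adds to the loop resistance.
R_ammeter = 0.1                      # assumed internal resistance of a decent ammeter
I_true = V / (R1 + R2)
I_measured = V / (R1 + R2 + R_ammeter)
print(f"current reading error: {100 * (I_true - I_measured) / I_true:.4f} %")

# Voltmeter in parallel with R2: its large internal resistance barely loads the circuit.
R_voltmeter = 1.0e7                  # assumed internal resistance of a decent voltmeter
R2_loaded = (R2 * R_voltmeter) / (R2 + R_voltmeter)
V2_true = V * R2 / (R1 + R2)
V2_measured = V * R2_loaded / (R1 + R2_loaded)
print(f"voltage reading error: {100 * (V2_true - V2_measured) / V2_true:.4f} %")
```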
{ "language": "en", "url": "https://physics.stackexchange.com/questions/284642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }