Q (string, 18–13.7k chars) | A (string, 1–16.1k chars) | meta (dict)
---|---|---
Is the term "telescope" the same as a "detector"? For example, in this reference, MITO: muon telescope, they use the term telescope, but clearly the "telescope" is a muon detection system. They also talk about angular resolution, angular aperture, etc. So my question is: is the term telescope the same as a particle detector, and can a particle detector be described by the properties of an optical telescope (talking about optical geometry, diffraction, angular resolution, etc.)?
| A particle detector is not necessarily a telescope, although in a specific context the terms can be interchangeable. It is like the word "engine" used by firemen or the word "pot" used by the parents of a small child - the word is rather general, but no one is confused about its specific meaning. The technical term in speech theory is implicature.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/673103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How necessary are the laws of physics given the impossibility of violating the law of conservation? The Damascene theologian Ibn Taymiyya believed that God originates things ex materia, not ex nihilo or without prior material conditions, arguing that this latter type of creation entails a logical contradiction. Although he had an appreciation of the logical necessity of what is now understood to be the laws of motion, I am interested to see the implications of his theological view for the laws of physics in general. Does contradicting any one of the laws of physics (as we understand them) necessitate a conflict with the law of conservation of energy, such that these laws of physics must in fact be necessary if the law of conservation is necessary? I am not asking whether we may derive any of the laws of physics entirely from the law of conservation of energy, but rather to what extent the laws of physics can (possibly?) be manipulated without breaking the law of conservation.
| You can certainly break some of the current laws of physics without violating energy conservation. Energy conservation comes from the idea that the laws of physics do not change with time (i.e. what holds today held yesterday as well); this link between time symmetry and energy conservation is the content of Noether's theorem.
Noether's theorem also covers other symmetries, like the ones behind conservation of momentum and angular momentum, which are unrelated to time symmetry. So yes, energy conservation is not the be-all and end-all; you can change other physical laws without breaking it.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/673175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Age of a black hole Is there a way to measure the age of a black hole by finding Hawking radiation or by calculating the stable orbits around the black hole?
| No. The Hawking radiation could tell us the black hole's remaining lifetime, or equivalently how much mass it has. But that doesn't tell us anything of the following (which are all equivalent): how much mass it's shed; how much it once had; how long it's existed for. The argument for orbital details is analogous.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/673563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why should a dipole have zero net charge? Why can a dipole not have two unequal charges separated by a distance? Is there any significance for the dipole being defined as electrically neutral?
| The concept of a dipole moment, and other moments such as a monopole, quadrupole, etc, comes from the process of writing a field as a sum of components called multipoles. This is known as a multipole expansion of the field. The reason we do this is that it can make calculations quicker and easier because it allows us to approximate a complicated field by a simpler sum of multipoles.
A single isolated point charge produces a field that is a pure monopole field, and two equal and opposite charges produce a field that is approximately a dipole field (it is exactly a dipole only in the limit of the distance between the charges becoming zero). So if you add a single charge to a pair of equal and opposite charges you get a total field that is a sum of monopole and dipole fields.
And this is what happens in the example you give. Suppose we have two charges $+2Q$ and $-Q$, then this is equivalent to a single charge $+Q$ and a pair of charges $+\tfrac32 Q$ and $-\tfrac32Q$. The field from the charges would be the vector sum of a monopole field from the $+Q$ charge and a dipole field from the $\pm\tfrac32Q$ pair.
So the reason a dipole cannot have two unequal charges is simply because such an arrangement would be a sum of a monopole and dipole, and not just a dipole.
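A quick check of the bookkeeping in the example above (placing the charges at $x=\pm d/2$ is an assumption made only for this illustration):
$$Q_{\rm tot} = +2Q + (-Q) = +Q, \qquad p = (+2Q)\left(+\tfrac{d}{2}\right) + (-Q)\left(-\tfrac{d}{2}\right) = \tfrac{3}{2}Qd,$$
i.e. the monopole moment of a single $+Q$ charge plus the dipole moment of a $\pm\tfrac32 Q$ pair separated by $d$.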
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/673695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Why bother buying efficient lights if you are already heating your house? Assume I live in a location where at any time of day and any time of year, I need to heat my house. Assume further that I have a room with no windows. In this case, does it make sense for me to buy efficient light bulbs, considering that any inefficiency in converting electricity to visible light simply leads to more heat being added to the room, which in turn results in less heat being output by the heater to maintain a constant room temperature?
Although these are somewhat idealized conditions, I don't think they are too far off from being realistic. For example, say you live near the arctic circle, it might be smart not to have many windows due to heat loss, and it seems reasonable that in such a climate, heating will be required at all times of the day and year. Assuming I haven't missed something, it seems to me, somewhat unintuitively, that buying efficient light bulbs is not a logical thing to do. Is this the case?
| Heating a house with electricity is one of the most expensive ways to do it (if not THE most expensive).
Normally houses are heated with coal/oil/firewood/natural gas/heat pumps/RTGs, and only rarely with electric heaters, simply because electric heating is more expensive than the other sources.
The heat is not "lost", but there are cheaper options to generate it.
Unless your main heating is electrical, replacing bulbs with more energy efficient variants or candles will reduce the electricity bill.
It is also possible that you will not want the given room to be that warm.
That is not likely to happen because of light bulbs, unless you have really powerful ones, but I find my room is adequately heated by just my PC running at full load, and it sometimes needs to be vented to bring the temperature down. It's autumn, 10 celsius outside, and the room heaters are off.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/673836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "92",
"answer_count": 10,
"answer_id": 1
} |
Euclidean space to Minkowski spacetime Can you continuously deform (i.e., shrink, twist, stretch, etc. in any way without tearing) four-dimensional Euclidean space to make it four-dimensional Minkowski spacetime?
| Both 4D-Euclidean space and (3+1)D-Minkowski spacetime are 4D-vector spaces.
Indeed, $\vec R=\vec A+\vec B$ is the same operation in both spaces.
What differs is the assignments of square-magnitudes to the vectors and the assignments of "angles" between the vectors, which are both provided by a metric structure added to the vector space structure.
To continuously transform from one to the other, leave the vector space structure alone,
and change the signature of the metric tensor field.
Write $$g_{ab}=\left( \begin{array}{cccc} 1 & 0 & 0 &0 \\ 0 & -E & 0 &0 \\ 0 & 0 & -E &0 \\ 0 & 0 & 0 &-E \end{array}\right)$$ and let $E$ vary from $-1$ (Euclidean space) to $+1$ (Minkowski spacetime), with $0$ as the degenerate time-metric of the Galilean spacetime.
Try my Desmos visualization:
robphy v8e spacetime diagrammer for relativity (v8e-2021) - t-UP
https://www.desmos.com/calculator/emqe6uyzha
and play with the E-slider.
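As a minimal numerical sketch of the same idea (not part of the original answer; the sample vector and the values of $E$ are arbitrary choices), you can tabulate the squared magnitude $g_{ab}v^a v^b$ of a fixed 4-vector as $E$ is varied:
```python
import numpy as np

def squared_magnitude(v, E):
    """Return g_ab v^a v^b with g = diag(1, -E, -E, -E)."""
    g = np.diag([1.0, -E, -E, -E])
    return v @ g @ v

v = np.array([1.0, 0.5, 0.0, 0.0])   # sample 4-vector components (t, x, y, z)

for E in [-1.0, -0.5, 0.0, 0.5, 1.0]:
    print(f"E = {E:+.1f}  ->  g(v, v) = {squared_magnitude(v, E):+.3f}")

# E = -1 gives the Euclidean value 1.25, E = 0 the degenerate Galilean case (1.00),
# and E = +1 the Minkowski interval 0.75; the vector-space operations never change.
```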
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/673969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 0
} |
How do stars produce energy if fusion reactions are not viable for us? From what I've learned, fusion reactions are not currently economically viable because the energy required to start the reaction is more than the energy actually released. However, stars have immense pressures and temperatures which allow these reactions to take place. If these reactions are effectively endothermic for us, how are they exothermic in stars? I.e., how are stars able to release energy?
Moreover, why are such fusion reactions endothermic for us in the first place? Given that we are fusing elements lighter than iron, wouldn't the binding energy per nucleon of the products be higher, and hence shouldn't energy be released?
| As I see it, the core of your question is the "exothermic/endothermic" issue.
Fusion is exothermic both on Earth and in stars: when you bind two light nuclei, you always end up releasing energy. The negative energy balance of fusion apparatus on Earth is not due to the fusion reaction mechanism itself. It comes from the enormous energy necessary to keep the nuclei close enough for the fusion reaction to be sustained after it is initiated. As you noticed, in stars this confining energy comes for free from the huge gravitational pressure that stars have. On Earth, the confinement is usually provided by incredibly strong magnetic fields, which require a correspondingly high consumption of electricity.
Because of this high energy consumption to keep the plasma confined, we cannot yet sustain the fusion reaction long enough to get back the energy spent to start it plus the energy needed to keep it running for a reasonable amount of time.
An interesting fact is that the ratio of the energy consumed to confine the plasma to the energy gained from the reaction decreases as the size of the reactor increases. The problem is that we still need incredibly big installations to make fusion cost-effective with current technology.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/674089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 10,
"answer_id": 7
} |
Concept of Gravitational potential energy Change in Potential energy corresponding to a conservative force is defined as $$\Delta U = U_f - U_i=-W_f$$ and gravitational potential energy is $$\Delta U = U_f-U_i = -W_g $$ Suppose a mass $m_1$ is kept at a fixed point $A$ and a second mass $m_2$ is displaced from point $B$ to point $C$ such that $AB = r_1$ and $AC = r_2$.
$\therefore$ $$\Delta U = -W_g = \int{\frac{Gm_1m_2}{r^2}}dr$$ $$U(r_2)-U(r_1) = Gm_1m_2\left(\frac{1}{r_1}-\frac{1}{r_2}\right)$$ Now I am free to choose any reference point, so if I take the potential energy $U(r_1) = 0$ and let $r_2 = \infty$, then I get the potential energy at infinity as $$U(\infty) = \frac{Gm_1m_2}{r_1},$$ which I think is wrong, as with a reference point at $r_1$ the potential energy at infinity should be infinite.
So where am I wrong? Is my concept of gravitational potential energy itself wrong?
| Other answers are making this way too complicated.
The potential energy equation you quoted is only valid outside of a uniform sphere of mass. Inside a uniform sphere, the potential energy is actually constant. Therefore that constant should be set to the minimum of the potential energy.
$U(r < R) = C$
$U(r=0) = U(r = R) = C$.
We make this C negative out of convenience so that it's 0 at $r = \infty$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/674392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
What does an umlaut (double dot) above an angle mean? I'm reading a paper on double pendulums and there is an equation of motion that contains a double dot (umlaut) above an angle. What does this mean, and is this standard notation in equations of motion?
| It means the second time derivative.
In other words, $$\ddot\theta=\frac{d^2\theta}{dt^2}$$ which represents the angular acceleration of an object (which is a pendulum bob in your example).
These, and indeed first time derivatives (and higher ones than second as well), are very common in physics (and in engineering and many other subjects), since we are often interested in instantaneous rates of change of quantities with time. For example, the instantaneous rate of change of an object's position $x$ is called its instantaneous velocity $v$, where $$v=\dot x=\frac{dx}{dt}$$ and its acceleration is the rate of change of this quantity, or $$a=\ddot x=\frac{dv}{dt}=\frac{d^2x}{dt^2}$$
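To see the notation in action, here is a small numerical sketch (not from the original answer; the pendulum parameters are assumed values) that estimates $\ddot\theta$ from sampled values of $\theta(t)$ with a central finite difference:
```python
import numpy as np

g, L = 9.81, 1.0                             # assumed gravity and pendulum length
t = np.linspace(0.0, 2.0, 2001)
dt = t[1] - t[0]
theta = 0.1 * np.cos(np.sqrt(g / L) * t)     # small-angle pendulum solution

# central difference: theta_ddot[i] ~ (theta[i+1] - 2*theta[i] + theta[i-1]) / dt^2
theta_ddot = (theta[2:] - 2 * theta[1:-1] + theta[:-2]) / dt**2

# for small angles, theta_ddot should equal -(g/L) * theta
print(np.max(np.abs(theta_ddot + (g / L) * theta[1:-1])))   # ~1e-6: they agree
```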
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/674512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why can a very large body of water not store summer heat? On this page, it states "The key disadvantage of using a very large body of water to achieve heat exchange with a relatively constant temperature is that you are not able to store summer heat in that body of water – to have the benefit of retrieving those higher temperatures in winter."
Why is it so? Is it because a very large body of water would have more heat exchange with the air and hence would lose the heat gathered in summer?
But "heat exchange with a relatively constant temperature" also points in the direction of having a large body of water, so I am a bit confused.
| The article is wrong. Consider the following: a popular form of heat pump HVAC uses coils of pipe buried in the ground and in communication with the subsurface water table. The ground water reservoir stays at an almost constant temperature year round (52 degrees F where I live) and furnishes heat in the winter and cooling in the summer via the heat pump.
Now we imagine a similar system where a simple solar array transfers heat into the cold ground water in the summer, bringing its temperature higher than 52 F, at the same time the 52 F water some distance away drives the AC function of the HVAC system of the dwelling. Then, the system runs in reverse in the winter (taking heat from the warmed-up ground water reservoir, and chilling the ground water in the nearby cold sink for the summer AC).
Eminently doable.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/674852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Theoretical minimum temperature required to melt any material Reading about this (New material has a higher melting point than any known substance) got me curious.
Given a pressure level (like 1 atm) and a sufficiently hot temperature, I have the intuition that no material stays solid, and turns to plasma if hot enough.
So here's the question: According to modern physics models, what is the lowest known temperature beyond which we can guarantee that any material will be past its melting point? We can consider an arbitrary material sample being heated under isobar conditions at 1 bar.
Can we in theory make a material that remains solid at 1 bar and 4500K? 6000K? 20000K?
| Using the Debye model leads to the Lindemann formula for the melting temperature (see the reference); for p = 1 bar there is an upper limit for a given material structure:
$T_m = \frac{4\pi^2 A\, r_0^2 k_B \eta^2 }{9N_Ah^2}\Theta_D^2\,$ in K, with $A$ the atomic mass, $r_0$ the interatomic distance, $\eta$ the Lindemann factor ($\approx 0.2$–$0.25$) and $\Theta_D$ the Debye temperature.
In the reference the highest calculated value of $T_m$ is for the element tungsten (W), at 3955 K. The only variables, $A$, $r_0$ and $\Theta_D$, can be altered, but you don't know them for the "theoretical melting temperature of any material", only for a specific one. Moreover, the whole Debye theory is an approximation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/674935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
How do we measure time? I'm having a little trouble trying to put my problem into words, and I apologize in advance for any difficulty in interpreting it.
We define periodic events as those events that occur over equal intervals of time. But, don't we use periodic events themselves to measure time (like a pendulum or the SI unit definition of transition frequency of Cesium)? Then how is it we know we have equal intervals of time?
Another way to put my problem would be:
We metaphorically describe time in terms of the physical idea of motion, i.e., 'time moves from a to b', but how do we deal with how fast it moves because to know how fast it moves, we must know its rate and to know its rate is like taking the ratio of time with time?
This is all very confusing. I apologize again for any problem in trying to understand.
| A professor of mine once defined time as follows:
Time is what a clock measures.
which I assume is an oft-quoted anglicization of Einstein's:
Zeit ist das, was man an der Uhr abliest
In other words, you build a clock (we all know what that is) and time is the thing whose change it measures.
Of course you may ask "Okay, what is a clock then?" and for that I will refer you to the other answers here.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/675075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "44",
"answer_count": 11,
"answer_id": 4
} |
How often is a non-coordinate and non-orthonormal basis used in GR? I wrote a program that takes as input the basis vectors if electing to use an orthonormal basis, or metric components if using the coordinate basis, and outputs non-zero Christoffel symbols and components of Riemann, Ricci, and Einstein tensors, as well as the Ricci scalar.
I could include functionality to support a non-coordinate and non-orthonormal basis, but I don’t want to waste my time if that’s something that no one ever uses in GR. I know that I don’t know enough about GR yet to make a call on this, so I’m asking you all!
| Coordinate bases are rarely orthonormal in GR. They're often orthogonal (in which case the metric is diagonal), but in general the basis vectors associated with each coordinate do not have a norm of $\pm1$. If you could truly establish an orthonormal set of coordinate basis vectors, then I'm pretty sure that your space would be flat.
Non-orthogonal coordinate bases are less common but are far from rare. Examples include "tortoise" or "Gullstrand-Painlevé" coordinates for Schwarzschild spacetime, or standard coordinates for Kerr spacetime (i.e., a rotating black hole).
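As an aside on the programming part of the question: a minimal sketch of computing Christoffel symbols from metric components with sympy might look like the following. The Schwarzschild $t$–$r$ block used here is just an illustrative choice, and none of the names reflect your program's actual interface.
```python
import sympy as sp

t, r, M = sp.symbols('t r M', positive=True)
coords = [t, r]
f = 1 - 2*M/r
g = sp.Matrix([[-f, 0],
               [0, 1/f]])       # t-r block of the Schwarzschild metric (G = c = 1)
g_inv = g.inv()
n = len(coords)

# Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(sum(g_inv[a, d] * (sp.diff(g[d, c], coords[b])
                                          + sp.diff(g[d, b], coords[c])
                                          - sp.diff(g[b, c], coords[d]))
                           for d in range(n)) / 2)
           for c in range(n)] for b in range(n)] for a in range(n)]

for a in range(n):
    for b in range(n):
        for c in range(n):
            if Gamma[a][b][c] != 0:
                print(f"Gamma^{coords[a]}_{coords[b]}{coords[c]} =", Gamma[a][b][c])
```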
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/675234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Contraction in SR vs GR I've always had a bit of fuzziness concerning relativistic contraction which I will try to put into words.
Iiuc in SR, moving objects contract in the direction of their travel, as measured by rulers at rest w.r.t. said objects. A traveling ruler when compared to the static one will appear shorter, and if we imagine a set of clocks in the moving frame spaced 1m apart in that frame, they will appear closer together in the rest frame. Thus in SR objects contract and if we take the spaced clocks as a metric then the moving frame is entirely contracted as well. To observers traveling in the moving frame however everything appears 'normal', with no contraction.
But in GR iiuc it is only space, and not objects, that contracts in the presence of a $g$-field. A ruler in the presence of (for instance) a constant $g$-field will not contract as compared to the same ruler when not in the $g$-field. But a set of meter-spaced clocks in a region of no $g$-field, will be closer together when in the presence of a $g$-field, as measured by the (non-contracting) ruler in the same g-field.
If objects were also contracted in GR, then (for instance) its hard for me to understand how LIGO could work, since the light between the mirrors would get squished just as much as the space between the mirrors was squished, and you wouldn't be able to measure any effect.
Have I got this right?
| Laser light is already relativistically contracted with respect to LIGO. Therefore, its contraction is not phase-locked to the apparatus reference frame sensing gravitational waves of much longer wavelength.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/675606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Transformation of field strength tensor in non-abelian gauge theory The field strength tensor is defined as
$$F_{\mu\nu}^a=\partial_\mu A^a_\nu-\partial_\nu A^a_\mu +g f^{abc} A_\mu^b A_\nu^c$$
where $f^{abc}$ are the antisymmetric structure constants and $A_\mu^a$ the gauge fields which transform as follows:
$$A_\mu^a\rightarrow A^a_\mu+\frac{1}{g}\partial_\mu \alpha^a-f^{abc}\alpha^b A_\mu^c$$
where $\alpha^a$ is infinitesimal and parameterizes the gauge transformation. For example a field transforms as $\psi\rightarrow U\psi$, where $U=\exp i\alpha^a T^a\approx 1 +i\alpha^a T^a$, where $T^a$ are the generators.
I want to calculate the transformation of $F^a_{\mu\nu}$ by plugging in the transformation of $A_\mu^a$:
$$F_{\mu\nu}^a\rightarrow \partial_\mu (A^a_\nu+\frac{1}{g}\partial_\nu \alpha^a-f^{abc}\alpha^b A_\nu^c)-\partial_\nu (A^a_\mu+\frac{1}{g}\partial_\mu \alpha^a-f^{abc}\alpha^b A_\mu^c) +g f^{abc} (A^b_\mu+\frac{1}{g}\partial_\mu \alpha^b-f^{bde}\alpha^d A_\mu^e) (A^c_\nu+\frac{1}{g}\partial_\nu \alpha^c-f^{chi}\alpha^h A_\nu^i)\\ = F_{\mu\nu}^a-f^{abc}\alpha^b(\partial_\mu A_\nu^c-\partial_\nu A_\mu^c)-f^{chi}\alpha^h gf^{abc}(A_\mu^b A_\nu^i-A_\mu^i A_\nu^b)$$
The last term cannot be correct, since I know that the correct answer is:
$$F_{\mu\nu}^a\rightarrow F_{\mu\nu}^a-f^{abc}\alpha^b(\partial_\mu A_\nu^c-\partial_\nu A_\mu^c+g f^{cde}A_\mu^dA_\nu^e)$$
Can you spot my mistake?
| Just turning my comment into an answer.
I haven't checked the algebra, but often in these kinds of calculations you need to use the Jacobi identity,
\begin{equation}
f^{ade}f^{bcd}+f^{bde}f^{cad}+f^{cde}f^{abd}=0.
\end{equation}
It would conceptually make sense if you end up needing to use it here, since the Jacobi identity is needed for the $f^{abc}$ to be the structure constants of a legitimate Lie algebra, which is necessary for $F^{a}_{\mu\nu}$ to transform properly.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/675715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Error Propagation for division Let's say I have a measurement $x$ with an uncertainty $\Delta x$. I also have a constant $C$ which has no uncertainty.
I want to find the uncertainty of $y$, which is defined as $C/x$. How do I find the uncertainty $\Delta y$? I know that IF I instead defined $y = C*x$ then $\Delta y = C*\Delta x$, but I'm not sure how it would work for division?
| For small $\Delta x$
$$y+\Delta y=\frac{C}{x+\Delta x} = \frac{C}{x(1+\Delta x/x)} = y(1+\Delta x/x)^{-1}\approx y(1-\Delta x/x) $$
So $\Delta y \approx -y\frac{\Delta x}{x}$
If $\Delta x$ is larger, for a specific $y$, a straightforward way is to work out $y$ in two cases, using $x + \Delta x$ and $x - \Delta x$ and see what $\Delta y$ results.
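A short numerical sanity check of both approaches (the numbers below are made up purely for illustration):
```python
C = 10.0             # exact constant
x, dx = 4.0, 0.1     # measured value and its uncertainty

y = C / x
dy_linear = abs(y) * dx / x                        # small-dx propagation: |y| * dx/x
dy_twosided = (C / (x - dx) - C / (x + dx)) / 2    # evaluate at x -/+ dx and average

print(y, dy_linear, dy_twosided)                   # 2.5, 0.0625, ~0.0625
```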
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/675823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If space is a vacuum, how do stars form? According to what I have read, stars are formed due to the accumulation of gas and dust, which collapses due to gravity and starts to form stars. But then, if space is a vacuum, what is that gas that gets accumulated?
| Space is not a full vacuum. It's mostly a vacuum, and it's a better vacuum than the best vacuums that can be achieved in a laboratory, but there's still matter in it. See interstellar medium.
In all phases, the interstellar medium is extremely tenuous by terrestrial standards. In cool, dense regions of the ISM, matter is primarily in molecular form, and reaches number densities of $10^6$ molecules per $\mathrm{cm}^3$ (1 million molecules per $\mathrm{cm}^3$). In hot, diffuse regions of the ISM, matter is primarily ionized, and the density may be as low as $10^{−4}$ ions per $\mathrm{cm}^3$. Compare this with a number density of roughly $10^{19}$ molecules per $\mathrm{cm}^3$ for air at sea level, and $10^{10}$ molecules per $\mathrm{cm}^3$ (10 billion molecules per $\mathrm{cm}^3$) for a laboratory high-vacuum chamber.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/676131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 6,
"answer_id": 0
} |
Propagator of harmonic oscillator at specific times It is well known that the propagator (kernel) of a simple harmonic oscillator is given by
$$
U\left(x_{b},T;x_{a},0\right)=\sqrt{\frac{m\omega}{2\pi i\hbar\sin\omega T}}\exp\left\{ \frac{im\omega}{2\hbar\sin\omega T}\left[\left(x_{a}^{2}+x_{b}^{2}\right)\cos\omega T-2x_{a}x_{b}\right]\right\}. \tag{1}
$$
I want to show explicitly that at times that are integer multiples of the period (i.e. $T=(2\pi/\omega) n$) the propagator becomes $\delta (x_b - x_a)$ while for odd multiples of the period (i.e. $T=(\pi/\omega)(2n+1)$) it's equal to $\delta(x_b + x_a)$.
Proving the first case seems straightforward. By defining
$$
\epsilon=\sqrt{-\frac{\hbar\sin \omega T}{im\omega}}
$$
we can rewrite the propagator as
$$
U\left(x_{b},T;x_{a},0\right)=\frac{1}{\epsilon\sqrt{2\pi}}\exp\left\{ -\frac{1}{2\epsilon^{2}}\left[\left(x_{a}^{2}+x_{b}^{2}\right)\cos\omega T-2x_{a}x_{b}\right]\right\}.
$$
Now, for $T\to2\pi n/\omega$ where $n$ is an integer, the propagator coincides with the definition of Dirac delta as a limit of a Gaussian:
$$
U\left(x_{b},T;x_{a},0\right)=\lim_{\epsilon\to0}\frac{1}{\epsilon\sqrt{2\pi}}\exp\left[-\frac{1}{2\epsilon^{2}}\left(x_{a}-x_{b}\right)^{2}\right]=\delta\left(x_{a}-x_{b}\right).
$$
So far, so good. However, when $T\to\frac{\pi}{\omega}\left(2n+1\right)$, we still have $\epsilon \to 0$ except now $\cos \omega T \to -1$ and thus
$$
U\left(x_{b},T;x_{a},0\right)=\lim_{\epsilon\to0}\frac{1}{\epsilon\sqrt{2\pi}}\exp\left\{ \frac{1}{2\epsilon^{2}}\left(x_{a}+x_{b}\right)^{2}\right\}.
$$
But now the exponent doesn't coincide with $\delta(x_a + x_b)$. Redefining $\epsilon$ such that it would behave appropriately inside the exponent leads to an overall imaginary phase, which isn't good either. What am I missing?
| OP's troubles are (partly?) caused by the fact that OP's eq. (1) only holds for $0<T<\frac{\pi}{\omega}$. OP's eq. (1) lacks the caustics/metaplectic correction/Maslov index. The corrected formula
$$\begin{align} K(x_b,T;x_a,0)~=~&\exp\left[-i\left(\frac{\pi}{4}+\frac{\pi}{2}\left[\frac{\omega T}{\pi}\right]\right)\right]\cr
& \sqrt{\frac{m\omega}{2\pi \hbar|\sin\omega T|}} \exp\left\{\frac{i}{\hbar}S_{\rm cl}\right\}\end{align} $$
is known as Feynman-Souriau formula.
References:
* W. Dittrich and M. Reuter, Classical and Quantum Dynamics, 6th ed., 2020; eqs. (19.71) and (20.40).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/676220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Squared Summation of Terms using Einstein's summation convention In working with QFT and Maxwell's equations, terms such as $$\left(\partial_\mu\,A^\mu\right)^{2}$$ often appear. Since I am new to this, I am not sure of the expansion. That is, is it the sum of four squared terms:
$$\left(\partial_0 A^0\right)^{2} + \left(\partial_1 A^1\right)^{2} +\left(\partial_2 A^{2}\right)^{2} +\left(\partial_3 A^3\right)^{2}$$Or,
$$\left(\partial_0 A^0 + \partial_1 A^1 +\partial_2 A^{2} +\partial_3 A^3\right)^{2}$$
| Here, standard rules of algebra should apply, i.e. the summation should be performed first and then squared (it is obvious once you write out the summation symbol instead of using Einstein's notation):
$$(\partial_\mu A^\mu)^2 = \left( \sum_{\mu=0}^3 \partial_\mu A^\mu \right)^2 = (\partial_0 A^0 + \dots)^2$$
Note that expressions using covariant notation (with valid use of Einstein's summation convention) are automatically Lorentz-invariant. You can quickly convince yourself that the first expression you propose is no longer covariant.
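A quick numerical illustration of the difference between the two readings (the sample values are arbitrary):
```python
import numpy as np

dA = np.array([1.0, 2.0, -1.0, 1.0])    # stand-in values for d_mu A^mu, mu = 0..3

square_of_sum = np.einsum('m->', dA) ** 2     # (d_0 A^0 + ... + d_3 A^3)^2      = 9.0
sum_of_squares = np.einsum('m,m->', dA, dA)   # (d_0 A^0)^2 + ... + (d_3 A^3)^2  = 7.0

# (partial_mu A^mu)^2 means the first of these: sum over the repeated index, then square.
print(square_of_sum, sum_of_squares)
```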
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/676503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Does the temperature of the gas inside a balloon change as it expands in a vacuum chamber, or does it remain constant? I've been trying to figure out whether a balloon expanding in a vacuum chamber undergoes an isothermal process, an adiabatic process, or a mixture of both, and I realised that my problem comes down to knowing whether the temperature of the gas inside the balloon remains constant or not during the whole process.
Let's say we reach 50% of vacuum pressure inside a vacuum chamber which has a balloon inside filled with a certain gas. On the one hand, if the air inside the balloon is properly insulated from the outside, then heat transfer is negligible, meaning Q = 0, which is one of the conditions for a process to be adiabatic.
Now, and here's where my confusion starts, I've read sources using this experiment to teach Boyle's law, claiming that for an ideal gas the temperature should stay the same and that there could be slight but negligible changes for a real gas like air, meaning the process is isothermal.
And I know that a process can be both isothermal and adiabatic, but I don't think this is the case, and crossed information confused me a lot.
Thank you in advance.
| If the expansion is rapid, it will be adiabatic. If the balloon is rubber which must be stretched, the pressure will be higher inside than outside.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/676830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is the double slit pattern a standing wave? This question is about terminology. The double slit pattern has nodal lines and antinodal lines, and therefore resembles a standing wave. However, the antinodal lines within the double slit pattern resemble travelling waves. Do the terms standing wave and travelling wave have a definition, and if so, are those definitions mutually exclusive?
-- Edit, for clarification: Naively, I would tend to think that only a Chladni pattern is a true standing wave, because its antinodal areas are standing waves, not travelling waves.
[Image derived from a Wikimedia Commons image]
| Yes, the interference pattern produced by two slits (or, equivalently, two oscillators with the same frequency that are in phase with each other) is a type of two dimensional standing wave.
The nodes, where the amplitude of the combined wave is zero, lie along lines where the difference in the distance from the two slits is an odd number of half-wavelengths. Along these lines the two waves are $180^\circ$ out of phase so they cancel each other out.
There are also lines of anti-nodes, where the difference in the distance from the two slits is a whole number of wavelengths. Along these lines the two waves are exactly in phase, so the amplitude of the combined wave is the sum of the amplitudes of the individual waves. The mid-line exactly half way between the two slits is one example of a line of anti-nodes.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/676936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does vacuum spacetime have an inherent curvature? I am a complete novice in physics beyond the high school level, so please excuse anything wrong in my question.
So I have recently read that according to General Relativity, the presence of mass in spacetime causes spacetime to become curved and that is how gravity arises.
Does vacuum spacetime have an inherent curvature? What I mean to say is that if we remove all kinds of matter and energy from the universe (somehow), and then try to determine the curvature of spacetime, will it be flat or will it be curved?
And if vacuum spacetime does have an inherent curvature, why or how does that curvature arise, given that nothing possessing energy or mass is present in the universe I have described above.
| A spacetime without matter or energy is called a vacuum spacetime. There are flat vacuum spacetimes as well as vacuum spacetimes with curvature.
The reason for this is that, like most differential equations, different solutions can be obtained for different boundary conditions, even given the same sources.
So just like vacuum solutions for Maxwell’s equations include both no field solutions and plane wave solutions, similarly vacuum solutions in GR include flat spacetime, gravitational waves, and other curved spacetimes.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/677041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 2
} |
What is the Hydrogen and Helium composition of the Sun in terms of their different states? What is the Hydrogen and Helium composition of the Sun in terms of:
Hydrogen: (1) molecular, (2) metallic and (3) ionized compositions?
and
Helium: (1) atomic, (2) metallic and (3) ionized compositions?
This seems difficult to find on the internet.
| Roughly speaking the hydrogen and helium in the Sun become fully ionised at depths of about $\sim 10^7$ m below the photosphere once the temperature rises above about $(2-3)\times 10^4$ K (the larger number is appropriate for helium).
Thus atomic hydrogen and helium only exists in the outer $10^7$ m of the Sun and even then, the ionisation fraction is significant below the outer $2\times 10^6$ m.
Of course the density also rapidly decreases towards the photosphere, so the fraction of the Sun's mass in atomic form is difficult to estimate without a detailed solar model. However, back of the envelope, if we assume the density in these outer layers is $<1$ kg/m$^3$ (e.g. Nordlund et al. 2009), then a fraction of $<3\times 10^{-5}$ of the Sun's mass is in atomic form.
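For what it's worth, here is that back-of-the-envelope estimate written out; the solar radius and mass are standard values, and the 1 kg/m³ density and $10^7$ m depth are the rough numbers quoted above:
```python
import math

R_sun = 6.96e8    # solar radius, m
M_sun = 1.99e30   # solar mass, kg
depth = 1.0e7     # thickness of the partially atomic outer layer, m
rho = 1.0         # assumed upper bound on the density in that layer, kg/m^3

shell_mass = 4 * math.pi * R_sun**2 * depth * rho   # thin-shell approximation
print(shell_mass / M_sun)                           # ~3e-5, the upper bound quoted above
```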
Molecular hydrogen is dissociated at much lower temperatures and is unlikely to be present in any significant quantities anywhere in the Sun.
Metallic hydrogen is not thought to be present in the Sun, the temperatures are way too high and the electrons are not degenerate. The requirements for metallic helium are even more extreme in terms of pressure and so it should not exist either inside the Sun.
To at least three significant figures, the answers to your questions are:
Hydrogen: 0.00% (molecular), 0.00% (metallic), 100% (ionized)
Helium: 0.00% (atomic), 0.00% (metallic), 100% (ionized)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/677160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why only the wavelength and speed of refracted light traveling inside a transparent material changes and not its frequency? When monochromatic light waves travel from one medium to another the frequency never changes.
A transition to a denser medium will result of a slow down of the propagation speed v of the light wave and its wavelength λ but not of its frequency f. Photons still travel at c speed from one atom to the next through the vacuum space between the atoms of the medium.
$$v=\lambda f$$
Should not both λ and f decrease proportionally to match the slower speed v?
Why does f not change? What is the physical explanation?
Also, if f does not change, and since no transparent material is perfect, so that apart from some reflection there will also be some absorption, how is the absorbed (lost) light energy then accounted for by the equation
$$E=hf$$
if f remains unchanged?
| The constant value of $f$ is easiest to understand by thinking about the wave model of light rather than the particle model.
If the frequency of the light wave inside and outside of a material had different values then there would be a discontinuity in the electric and magnetic fields at the boundary of the material. This is not physically realistic, so the frequency is constant, and speed and wavelength change in the same proportion as each other.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/677393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
High school physics problem - having trouble understanding This is a fairly straightforward problem which doesn't require the use of more than one or two formulas, but I find it hard to grasp the concept behind it.
Let's say we have two trains, one which moves at the speed of $45 \frac{km}{h}$ and the other at the speed of $60\frac{km}{h}$. Now, let the first train start moving, and let the second one start moving an hour after the first one. The question is: after how many hours will the second train catch up to the first one?
I have always had trouble visualizing these kinds of problems. I know that the second train starts with a delay of $1$ hour and that during that time the first train covers $45$ km. But how do I calculate this?
I know that $v_2 - v_1 = 15\frac{km}{h}$ which is the relative speed of the second train with respect to the first one. This probably means that in such a frame of reference, the $v_1$ is zero so we can imagine it as being static, under the condition that the new $v_2=15 \frac{km}{h}$.
But how do I calculate this? $t=\frac{s}{v}$, thus I need a length in order to calculate this. I can't simply plug in the $45$ km from above because that would be the time in which the second train got to the $45$km mark, but the first train would have moved away from that point. Could anyone explain?
| Let $t$ be the time elapsed by the second train since it started to move.
The positions of the first train and the second train relative to the same starting point are $x_1(t)$ and $x_2(t)$ as follows.
\begin{cases}
x_1(t)= S_\text{ref} + v_1 t\\
x_2(t)= v_2 t
\end{cases}
where $S_\text{ref}$ is the initial relative position.
At $t=T$ the second train catches the first one. So they must be at the same position.
\begin{align}
x_1(T) &= x_2(T)\\
S_\text{ref}+v_1 T &= v_2 T\\
S_\text{ref} &= (v_2-v_1)T\\
T&= \frac{S_\text{ref}}{v_2-v_1}
\end{align}
If we denote $v_2-v_1$ as $V_\text{ref}$ then
$$
T= \frac{S_\text{ref}}{V_\text{ref}}
$$
So we can say
The time needed by the second train to catch the first train is equal to the ratio of their initial spatial separation to their relative speed.
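Plugging in the numbers from the problem as a minimal check:
```python
v1, v2 = 45.0, 60.0       # speeds in km/h
head_start = 1.0          # hours the first train travels alone

S_ref = v1 * head_start   # initial separation: 45 km
V_ref = v2 - v1           # relative (closing) speed: 15 km/h
T = S_ref / V_ref
print(T)                  # 3.0 hours after the second train starts moving
```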
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/677589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 9,
"answer_id": 8
} |
Averaging over spin phase-space for a cross section In Peskin and Schroeder the Dirac equation is solved in the rest frame for solutions with positive frequency:
$$\psi(x) = u(p) e^{-ip\cdot x}$$
$$u(p_0) = \sqrt{m} \begin{pmatrix} \xi \\ \xi \end{pmatrix},$$
for any numerical two-component spinor $\xi.$ Boosting to any other frame yields the solution:
$$u(p) = \begin{pmatrix} \sqrt{p \cdot \sigma}\ \xi \\ \sqrt{p\cdot \bar{\sigma}}\ \xi \end{pmatrix},$$
where in taking the square root of a matrix, we take the positive root of each eigenvalue.
Then, they summarize:
The general solution of the Dirac equation can be written as a linear combination of plane waves. The positive frequency waves are of the form
$$\psi(x) = u(p)e^{-ip\cdot x}, \ \ \ p^2 = m^2, \ \ \ p^0 >0.$$
And there are two linearly independent solutions for $u(p),$
$$u^s (p) = \begin{pmatrix} \sqrt{p \cdot \sigma}\ \xi^s \\ \sqrt{p\cdot \bar{\sigma}}\ \xi^s \end{pmatrix}, \ \ \ s=1, 2, $$
which are normalized: $$\bar{u}^r (p) u^s (p) = 2m \delta^{rs}.$$
Next, we can consider the unpolarized cross section for $e^+e^{-} \to \mu^+ \mu^-$ to lowest order. The amplitude is given by:
$$\bar{v}^{s'} \left(p'\right) \left(-ie\gamma^{\mu}\right)u^s\left(p\right)\left(\frac{-ig_{\mu \nu}}{q^2}\right)\bar{u}^{r} \left(k\right) \left(-ie\gamma^{\nu}\right)v^{r'}\left(k'\right)$$
Then, I quote
In most experiments the electron and positron beams are unpolarized, so the measured cross section is an average over the electron and positron spins $s$ and $s'$. Muon detectors are normally blind to polarization, so the measured cross section is a sum over the muon spins $r$ and $r'.$
...
We want to compute
$$\frac{1}{2}\sum_s \frac{1}{2} \sum_{s'} \sum_r \sum_{r'}|M(s, s' \to r, r')|^2.$$
Why, in order to take the average and the sum, do we only need to sum, rather than integrate, over the spin phase space? Doesn't each incoming particle have an infinite number of spinors? Assuming it is unpolarized, the probability distribution will be uniform, but in principle it still seems like we should integrate over some $\theta$ for spinors $\xi = \begin{pmatrix} \cos \theta \\ \sin \theta \end{pmatrix}$. Instead, what has been done is to assume each incoming particle is in a definite state of "spin-up" or "spin-down" and assign a $50/50$ probability to each.
| Because the spin can only take two possible (discrete) values (+1/2 and -1/2), you sum over them; integration would be appropriate for a continuous spectrum.
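As a side check that is not part of the original answer: for the parametrization $\xi = (\cos\theta, \sin\theta)$ in the question, averaging the projector $\xi\xi^\dagger$ uniformly over $\theta$ gives exactly the same density matrix, $\tfrac12\mathbb{1}$, as the 50/50 sum over "spin-up" and "spin-down", which is why the discrete sum already captures the unpolarized average:
```python
import numpy as np

thetas = np.linspace(0.0, 2 * np.pi, 10000, endpoint=False)
xis = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)       # spinors xi(theta)

rho_integrated = np.mean(np.einsum('ni,nj->nij', xis, xis), axis=0)
rho_discrete = 0.5 * (np.outer([1, 0], [1, 0]) + np.outer([0, 1], [0, 1]))

print(np.allclose(rho_integrated, rho_discrete))   # True: both equal (1/2) * identity
```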
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/677709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Can lenses and mirrors be described in terms of beamwidth and directivity? As background, I am an electrical engineer with experience in antenna design for microwave bands.
Lately, I have become interested in optical devices, and I notice one strange phenomenon: when reading about a lens or a parabolic reflector for light, nobody talks about its beamwidth or gain, as we would when describing an antenna or reflector for microwave bands. Hence, my doubts:
1. Does it even make sense to talk about, for example, the beamwidth of a Fresnel lens or a parabolic reflector for a specific wavelength in the visible portion of the spectrum? For big enough apertures (far from the diffraction limit), is there any phenomenon (maybe diffraction) I am missing that would make this nonsense?
2. Do the main relationships for beamwidth and gain, e.g.
$$D= \frac{4 \pi A}{\lambda^2}$$
where $D$ is the directivity and $A$ is the area of the aperture of the antenna, or
$$D= \frac{4 \pi}{\theta_E \theta_H}$$
for the beamwidths $\theta$, still hold at visible light frequencies?
3. If 1 and 2 hold true, wouldn't the directivity of a lens then vary a lot depending on whether we are working in the lower or the upper part of the visible spectrum? Does this have any consequence in practice?
Thanks in advance!
| Yes, they do, most of the time! There are a few caveats, though. Optical "antennas" are also built for non-coherent signals such as thermal sunlight, for which RF engineers have little use. The higher the sidelobes, the less practical use these formulas obviously have, and note that the operation of optical systems is rarely limited by grating lobes the way a bad radar antenna might ruin an air-defense system, so the concept of beamwidth is less sharp there.
In the directivity formula $D=k \frac {4 \pi}{\theta_E \theta _A}$ the factor $k < 1$ and in RF it is rarely above $0.7$, but optical radiators can be much more efficient. When the directivity is written as $D=\frac{4\pi A_{eff}}{\lambda^2}$ the formula defines $A_{eff}$ but the approximation that $A_{eff} \approx A_{geom}$ assumes that the phase distribution in the radiating aperture is smooth. This is always reasonable for optics but is not true for phased arrays and/or superdirective antennas.
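To get a feel for the numbers (and for point 3 of the question), here is a quick evaluation of $D = 4\pi A/\lambda^2$ for an assumed 10 cm diameter aperture at two optical wavelengths and one microwave wavelength:
```python
import math

diameter = 0.10                       # aperture diameter in m (assumed example)
A = math.pi * (diameter / 2) ** 2     # aperture area

for wavelength in [500e-9, 700e-9, 0.03]:   # green light, red light, 10 GHz microwave
    D = 4 * math.pi * A / wavelength**2
    print(f"lambda = {wavelength:.1e} m  ->  D ~ {10 * math.log10(D):.0f} dBi")
# The same aperture gives ~116 dBi at 500 nm, ~113 dBi at 700 nm, and ~20 dBi at 3 cm,
# so the directivity does indeed shift noticeably across the visible band.
```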
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/677860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The change of mechanical into electromagnetic waves and vice versa I know that sound is a type of mechanical wave, so the human eardrum changes mechanical energy into electrical signals (nerve impulses) so that the information may be processed by the brain.
Question: Since satellites transfer information by electromagnetic waves, which are also electrical signals, can we change these mechanical waves into electromagnetic waves and vice versa?
| Amending the previous answer, which refers more to air pressure: there is also the piezoelectric effect of some materials, which can be used to transform mechanical waves and pressure from various media apart from air (solids, liquids, etc.) directly into electrical signals that can then be further processed.
Applications include piezoelectric sensors used for all kinds of industrial control and automation, but also entertainment devices like the piezoelectric microphone or even piezoelectric loudspeakers, since the process can also be run in reverse, with an electrical signal causing mechanical stress in the piezoelectric material.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/678207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Why do we need insulation material between two walls? Consider a slab made of two walls separated by air. Why do we need insulation material between the two walls. Air thermal conductivity is lower than most thermal conductivities of insulating material and convection cannot be an issue in the enclosed volume: hot air rises, so what? it won't go any further than the top of the cavity.
| If the air gap between the walls is wider than approximately 0.5 inches, the warm wall will heat the air, causing it to rise. The cold wall will cool the air, causing it to fall. This will set up a circulating air flow between the walls which transfers heat across the gap to a greater degree than expected. Insulation between the walls is a barrier to air circulation, which somewhat decreases the heat transfer across the gap.
Note that this is the exact reason that panes in double paned windows are spaced approximately 0.5 inches apart. At that spacing, the rising air and falling air in the gap between panes "fight" each other, and prevent air circulation from forming.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/678462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "37",
"answer_count": 4,
"answer_id": 2
} |
Would a high energy Hydrogen atom start emanating electromagnetic radiation? We know that the total energy of the hydrogen atom is proportional to the inverse of the square of the principal quantum number $n$:
$$E_n \propto -\frac{1}{n^2}$$
So at high quantum numbers the energy spectrum tends towards a continuum.
Shown below, a representation of one of the seven $\text{6f}$ orbitals (courtesy of The Orbitron):
However, due to the Correspondence Principle, high quantum number hydrogen atoms should show wave functions that tend toward Classical orbit-like (instead of orbital) shapes:
This is so, at least according to a video I watched yesterday (unfortunately I don't have the web address)
If this is true, high quantum number orbitals would become Bohrian in nature and thus emit electromagnetic radiation.
Is this true?
| High energy orbitals can in some respects be represented by classical Bohr orbits but, strictly speaking, still have to be described by quantum mechanical wave functions, especially when it comes to calculating the transition probabilities to other states. High energy orbitals of all neutral atoms become increasingly hydrogen-like for increasing n, though, which makes the transition probabilities easier to calculate.
In any case, radiation is only emitted when the electron makes a transition to a lower level. This is a result of excited atomic states being quantum mechanically unstable (whether n is large or not), not because of the electron radiating as a classical particle. A classically radiating electron would continuously lose energy (producing a continuous spectrum over a wide frequency range in the process) and eventually spiral into the nucleus. This is obviously not observed. One only observes the discrete lines resulting from quantum mechanical transitions from level n to lower states.
The plots below (produced at https://keisan.casio.com/exec/system/1224054805 ) show that, as mentioned above, the wave functions become relatively more sharply peaked with increasing n, but still are wave functions with a continuous spread and do not represent classical orbits.
[Plots of the wave functions for n=3 (l=2), n=10 (l=9), and n=100 (l=99)]
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/678824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Can we measure $10^{-12}\ \mathrm{N}$ force? I would be interested in measuring a very small force, say on the order of $10^{-12}\ \mathrm{N}$. Is this possible? What equipment is needed?
My setup
Assume that I have a relatively heavy machine, say between 5 and 10 kg, that I want to test for whether it produces this thrust, which according to calculations should be of this feeble magnitude. But (according to the predictions) the thrust should be periodic, with a frequency of about 200 Hz, and it should last for about a quarter of the time period. I should also mention that this apparatus is expected to vibrate (a little), since a disk inside it is supposed to be rotated at about 12k rpm.
My research
I have read about the torsion balance as a possible method. I am also thinking about some piezoelectric crystals. Would that be feasible?
What piezoelectric cells would be recommended? I read that atomic force microscopy devices are also implemented using piezoelectric materials.
| The question is, a $10^{-12}\rm\,N$ force applied to what? A force of $10^{-12}\rm\,N$ applied to a hydrogen atom, with mass $10^{-27}\rm\,kg$, would produce an acceleration $F/m = 10^{+15}\rm\,m/s^2$.
A torsion pendulum is absolutely a way to allow very feeble forces to cause observable, macroscopic motion.
My favorite underrated classic paper is Beth’s 1936 experiment which transferred angular momentum from a beam of circularly polarized light to a torsion pendulum. There was a parity-violation experiment in the 1960s that used a torsion pendulum as a detector for circular polarization in photons emitted from a parity-violating weak interaction process. And the 2001-ish proposal that gravity might be non-Newtonian at short distances has been mostly ruled out by torsion-pendulum measurements of gravitational attraction between coin-sized test masses.
In all of those cases, you accumulate the very small force by doing the experiment many times, repeating at a frequency near the resonant frequency of the pendulum.
For more direct measurements of very tiny forces, you might read about the operation of an atomic force microscope.
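For a sense of scale with the numbers in the question (a 5-10 kg apparatus, $10^{-12}$ N applied for a quarter of a 200 Hz period), here is a rough order-of-magnitude sketch of the motion such a force could produce if the apparatus were completely free to move:
```python
F = 1e-12          # force in N (from the question)
m = 7.5            # apparatus mass in kg (midpoint of the quoted 5-10 kg)
f = 200.0          # drive frequency in Hz
t = 0.25 / f       # force applied for a quarter period, ~1.25 ms

a = F / m                       # ~1.3e-13 m/s^2
displacement = 0.5 * a * t**2   # ~1e-19 m over one pulse
print(a, displacement)
```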
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/679182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why doesn't a parallel circuit violate conservation of energy? Let's imagine a hypothetical circuit where there are a large number of wires placed in parallel to each other, hooked up to a simple power source.
We know that voltage at each wire would be equal $V_{total}=V_1=V_2=...=V_n$ where $n$ approaches a large number; and that each wire is of some arbitrary constant length.
Next, assume that at the start of each wire there is a single charge of $+1C$, in each wire placed in parallel.
Since work done on a charge is $W=VQ$; where $W=$ work done, thus we apply the same voltage to each charge in each wire placed in parallel.
Since the voltage across each wire would be the same (say the resistance is negligible, but $\neq0$), the work done would be the same.
Additionally, we know $W=\vec{F}\cdot\vec{s}$; since the charge is displaced over a significant length (i.e., that of the wire), work is done even if we cannot easily quantify the force.
My question is this: if the number of parallel wires increases, the total $W$ increases. Thus, we could seemingly gain unlimited energy by placing more and more parallel wires, violating the conservation of energy:
\begin{equation}
\sum_{i=1}^{n}W_i = \sum_{i=1}^{n} V_i \times 1
\end{equation}
by moving the $+1C$ charge in each parallel wire.
How is that possible?
| It sounds like you'd have a circuit like this:
+----[ voltage source ]-----+
|                           |
+----[ resistor/wire ]------+
|                           |
+----[ resistor/wire ]------+
|            ...            |
From the electrical engineering 101 standpoint, adding more wires just decreases the total resistance between the terminals of the voltage source. The power provided by the source is $ P = U I = U^2 / R $, so with an ideal voltage source you could indeed get an arbitrarily large power output (and hence an arbitrarily large energy).
But real voltage sources aren't ideal. Raising the load often leads to the voltage dropping, possibly the source overheating (due to internal losses) and eventually shutting down or blowing up. (Probably so that some internal part overheats and blows up, resulting in the whole thing shutting down.)
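A small numerical sketch of that last point, modelling the non-ideal source as an EMF with an internal resistance $r$ (all values below are arbitrary assumptions): as more identical wires are added in parallel, the delivered power approaches $U^2/r$ instead of growing without bound.
```python
U = 12.0     # source EMF in volts (assumed)
r = 0.5      # internal resistance of the source in ohms (assumed)
R = 100.0    # resistance of each parallel wire in ohms (assumed)

for n in [1, 10, 100, 1000, 10_000]:
    R_load = R / n           # n identical wires in parallel
    I = U / (r + R_load)     # total current drawn from the source
    P_total = U * I          # total power supplied by the source
    print(f"n = {n:6d}   P = {P_total:7.1f} W   (limit U^2/r = {U**2 / r:.1f} W)")
```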
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/679464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 4
} |
Does the height a person jumps from onto a rod affect the rotational height of the rod? It was explained to me in a lecture that if, let's say, I jumped from height h and grabbed onto a vine, I would reach height y at the tip of the swing. But if I were to jump from 2h, I would still reach the same height (y) when swinging. This doesn't really make all that much sense to me... Won't my velocity increase if I jump from a greater height, thus making me swing higher?
| It might depend on the total length of the vine. Let's say the vine only has length $y$ and you swing all the way to the top when you jump from height $h$. Then, when you start with more energy because you jump from height $2h$, there's no higher you can go when you're still holding on to the vine, so you'll also make it to height $y$ but with a greater kinetic energy than the original scenario.
Otherwise, if $y$ is less than the maximum height the vine can reach, you are correct that jumping from higher should get you a higher swing on the vine. The thing that may be consistent between the two scenarios is the amount of time it takes to do one full swing (because that only depends on the length of the vine and the strength of gravity).
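In symbols (a small addition for clarity, writing $h_{\rm drop}$ for the height fallen before grabbing the vine and $h_{\rm swing}$ for the height gained on the swing, and ignoring losses), energy conservation gives
$$m g\, h_{\rm drop} = \tfrac12 m v^2 = m g\, h_{\rm swing} \quad\Rightarrow\quad h_{\rm swing} = h_{\rm drop},$$
so without a cap from the vine's length, dropping from $2h$ would let you swing up by $2h$; the swing only tops out at the same height in the first scenario above, where the vine itself cannot reach any higher.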
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/679738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can gravitational lenses change over human time scales? Gravitational lensing is caused by the chance alignment of the observer, the lens, and the source. Obviously these are not permanent events as the earth will move in and out of a focal point as the three objects move relative to each other.
My question is: over what sort of time scales does this typically occur?
As far as I understand the focal point is quite sensitive and requires a high degree of accuracy in the alignment. Is this alignment therefore typically seen over cosmic scales of thousands or millions of years, or perhaps less? Can a gravitational lens come into and out of focus over a human life time (I imagine the source and the lens would have to be in our galaxy and probably relatively nearby for this)?
| In general no. Typical gravitational lenses are clusters of galaxies. Distant clusters move at high speeds because of the expansion of the universe. But their peculiar velocities (deviation from a pure Hubble flow) are typically hundreds of km/sec, perhaps $10^{-3}$ c. In $100$ years, they might move $0.1$ light year.
Using our galaxy as an example, the thickness is around $1000$ light years and the diameter is about $100$ k light years. So the alignment of a galaxy might change by 1 part in a million over a lifetime. The alignment change of a cluster would be even less.
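A quick back-of-the-envelope check of these numbers (my own sketch):

    c_fraction = 1e-3                    # peculiar velocity ~ 1e-3 c
    years = 100.0                        # roughly a human lifetime
    drift_ly = c_fraction * years        # distance moved, in light-years
    galaxy_diameter_ly = 1e5             # ~100,000 light-years
    print(drift_ly, drift_ly / galaxy_diameter_ly)   # 0.1 ly, ~1e-6 of the diameter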
That said, there is one interesting gravitational lens that could change very significantly in our lifetimes. See The Solar Gravitational Lens will Map Exoplanets. Seriously.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/679854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Can current flow in a simple circuit if I enclose the battery in a faraday cage? So suppose I have a regular circuit with a battery connected to a resistor and a lightbulb.
Suppose now somehow the battery is inside a metal box (faraday cage) but the rest of the circuit is outside of it so the wire is maybe poked through a tiny hole in the box.
Since energy flow through a circuit is due to the electromagnetic field as described by the Poynting vector, since the field cannot penetrate through the faraday cage, will current flow through the circuit?
| Yes. From the Maxwell equation: $$\nabla \times \mathbf B = \mu_0 \mathbf J$$ we can find the direction of the $\mathbf B$ field using the right-hand rule. Inside the battery, the E-field points from + to - ($\mathbf E = - \nabla V $). In the external resistance it has the same direction as the current.
Using the right-hand rule for the Poynting vector ($\mathbf E \times \mathbf B$), it is easy to see that it points outward at the battery and inward at the resistor.
When we say that energy flows outward from the battery, it doesn't mean that some stuff really travels through the air and reaches the resistor (as would be the case for EM waves). It is only the expression of a vector field that exists inside the components. So the Faraday cage doesn't affect the energy flow.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Does anyone know of an adjustable focusing mirror? Does anyone know of an adjustable-focus mirror, allowing short-sighted and long-sighted people to see clearly in a mirror with no specs on? Is it even possible?
| The only adjustable optical component I know of is the lens in your eye. It changes focal length by being flexible. Muscles around the edge of the lens stretch it and change its shape.
As far as I know, all other lenses and mirrors are rigid. Their focal length is fixed.
Cameras achieve a variable focal length in one of two ways. They have multiple lenses and adjust the separation between them; this is a zoom lens. Or they do it in software: they take an image with whatever focal length they have, and then adjust the pixels to zoom in.
In principle, you could do the same thing with mirrors. There are a few systems that have multiple mirrors. Large telescopes, for example. Lasers for another. But these generally do not change the spacing between the mirrors.
For your application, use your phone. Zoom in if you want to see part of your face larger. Not ideal, but the best I can think of off hand.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Perspective on the renormalization group From reading the renormalization group description in high-energy theory texts like Peskin & Schroeder, one may be tempted to think it has to do with regulating infinities.
However, my impression of the renormalization group once you rotate to euclidean time is that it is a mapping transformation of probability weights under a change of variables.
With this perspective which of those statements are true?
*
*The Gaussian fixed point is just a statement of the central limit theorem (CLT). Under weak enough correlations, the CLT states that one "flows" to a Gaussian fixed point.
*"Interacting" fixed points are then more interesting limiting distributions, i.e. generalizations of the CLT. The anomalous critical exponents are just corrections to CLT-like behavior. They are therefore fractal dimensions. Studying CFT is the study of fractal dimensions arising from interactions.
*With this perspective, Brownian motion is a key example of a Gaussian fixed point (under rescaling, the standard deviation scales as a square root, hence the critical exponent is $\frac{1}{2}$).
| The high-energy physics point of view of the RG ("putting infinities under the rug") is now quite dated, but unfortunately it is usually still the first version of the RG that one encounters.
A more modern implementation a la Wilson (only 50 years old now...) can indeed be interpreted as a transformation of probability weights under coarse-graining (in HEP, this would correspond to effective theories).
This point of view was already quite present in the '70s, for instance in the papers by Jona-Lasinio (which tend to the more mathematical side). One possible reference is "Critical point behaviour and probability theory" by Cassandro and Jona-Lasinio (DOI:10.1080/00018737800101504).
Indeed, from this point of view, the Gaussian fixed point is just the statement that the CLT is valid asymptotically, while non-trivial fixed points correspond to a breaking of the standard CLT (corresponding to different asymptotic PDFs).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Why does a body accelerate when there is a force applied to it? Why does a body accelerate or change velocity when a force is applied to it?
How does a force act upon things to make them accelerate?
| As others have pointed out, $F=ma$ is a definition of force (and mass for that matter). The reason we invented the concept of force, as defined by this equation, is because it makes things very simple and elegant. We want to understand how things move. We note that objects usually move with constant velocity. The special circumstance is when an object deviates from constant motion. Therefore, whenever an object deviates from constant motion, we say that, by definition, it is being acted on by a force. Then, it turns out, in our universe, we can describe all kinds of phenomena with only a couple of simple fundamental forces.
Note, we could just as well try to define a "velocity force" by the equation $F_v=mv$. You could, technically speaking, build a complete theory of classical physics using "velocity forces" (just use $F_v=\int_{t_0}^t Fdt+mv_0$). However, such a system would be extremely inelegant. The "velocity force" would have to depend on the entire history of the interactions of the particle.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 9,
"answer_id": 0
} |
Is there any *global* timelike Killing vector in Schwarzschild geometry? I have been dealing with the following issue related to the Schwarzschild geometry recently. When expressed as:
$$
ds^{2}=-\left(1-\frac{2GM}{r}\right)dt^{2}+\frac{1}{1-\frac{2GM}{r}}dr^{2}+r^{2}d\Omega_{2}^{2}$$
one can find a Killing vector $\xi=\partial_{t}$, since there are no components of the metric depending on $t$. This Killing vector is timelike for $r>2GM$, but spacelike for $r<2GM$ (since $\xi^{\mu}\xi_{\mu}=-\left(1-\frac{2GM}{r}\right)$). My question is:
*
*Can we find any timelike vector for the region $r<2GM$?
*If not, this would imply that the Schwarzschild solution is not stationary for $r<2GM$. But it is usually referred to as a "static spacetime". This wouldn't be true for the region $r<2GM$. So is this an abuse of language?
| There are only four Killing vectors of Schwarzschild. They are $\partial_t$ and the three rotational Killing vectors. No linear combination of these is globally timelike within the horizon, so there is no global timelike Killing vector.
I suppose whether or not Schwarzschild is static depends on one's definition of "static." If you define it to mean that there is a global timelike Killing vector, then indeed Schwarzschild is not static. However I think the word is implicitly used to only refer to patches of spacetimes. So the region outside the horizon could indeed be called "static." This is also the case in de Sitter, where one often talks about the "static patch."
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
} |
Does capacitance between two point charges lead to a paradox? Is it possible to have a capacitance in a system of two point charges? Since there is a potential energy between them and they both have charges then we can divide the charge by the potential and get capacitance.
However, capacitance is supposed to depend only on geometry so should therefore be zero. How does one resolve this paradox?
|
How does one resolve this paradox?
As a general prelude to this answer, I would like to mention that it is well known that classical point charges lead to some unresolvable paradoxes in classical EM. Personally, I do not consider this an indication of an inconsistency in classical EM, but an indication that classical point particles themselves are inconsistent. So what remains here is to determine if this specific case is an instance of an inconsistency.
However, capacitance is supposed to depend only on geometry so should therefore be zero.
It is true that the capacitance depends only on the geometry, but that does not immediately imply that it should be zero. A pair of point charges does have some geometry, specifically the distance, $s$, between them. So all we can say from this is that the capacitance should be some function of the distance between them $C=C(s)$. While we could indeed have $C(s)=0$, that is by no means guaranteed.
Since there is a potential energy between them and they both have charges then we can divide the charge by the potential and get capacitance.
This is actually a little bit incorrect. The potential energy between two point charges is undefined. You can extract an infinite amount of energy from a system of two point charges simply by letting them get sufficiently close together. This is one of the major problems of classical point charges and this fact leads to many of the genuine paradoxes.
If we naively plug in infinity then we get $$C=\frac{Q}{V}=\frac{Q}{\infty}=0$$
Of course, since $\infty$ is not a real number, this method is more than a little suspect. But the voltage at the surface of a spherical charge $Q$ of radius $R$ is $$V=\frac{Q}{4\pi \epsilon_0 R}$$ so $$\lim_{R\rightarrow 0}\frac{Q}{V}=0$$ This result then gives a bit more confidence in the $C=0$ result.
So, although this particular aspect of classical point charges does reach close to the root of many paradoxes, it does seem that $C=0$ is not itself paradoxical.
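The limiting argument above can be checked symbolically; here is a small SymPy sketch (my own, not part of the original answer):

    import sympy as sp

    Q, R, eps0 = sp.symbols('Q R epsilon_0', positive=True)
    V = Q / (4 * sp.pi * eps0 * R)       # potential at the surface of a sphere of radius R
    print(sp.limit(Q / V, R, 0))         # capacitance Q/V -> 0 as the sphere shrinks to a point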
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 3
} |
Is electrostatic charge on floor connected with slippage? I have some foam puzzle mats for sports in my flat. I noticed that every time I remove them after some days, the floor underneath where the mats were is very slippery. How is that? This is valid for the wooden and also the stone part of my floor.
My guess is that the puzzle mats are causing some kind of electrostatic charge buildup, but in what way and how is that connected to slippage?
My assumption: the puzzle mats become discharged and deposit electrons on the floor, thus reducing physical-chemical attractions between the floor and my feet.
Does that make sense?
| Electrostatic charge buildup is unlikely to be the cause of the slipperiness. Even if the floor did have a static charge, I think it would actually increase friction by a tiny amount, since it should (very slightly) attract your feet to the floor. This is the same as when you charge a balloon by rubbing: it will stick to a wall or attract water due to polarization effects.
An increase in slipperiness when a puzzle mat is removed was mentioned on Reddit in 2017:
"Those interlocking foam mats when removed from the floor after a while leave the floor dangerously slippery. What is going on here? Microdirts? Oils?".
They reported that it took washing "with soapy water (3 passes) before danger level removed".
A second Reddit post in 2021 mentioned the same problem for a different kind of rubber mat:
"Bought this rubber mat and it left a weird slippery residue on the hardwood floor that makes walking around in socks super dangerous. Any idea on how to remove the residue. It's practically invisible and I can only detect it if I'm standing on the affected area."
They note how hard it is to see the residue.
It almost sounds as if the mats are polishing the floor. It is plausible that walking (or doing sports) on the mats causes them to rub against the floor, perhaps polishing into the floor a waxlike residue from the mat.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/680998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Work done in sliding a block across a table, as seen in different inertial frames Suppose, I'm pushing a block across a smooth table.
The length of the table is $d$, and the force that I applied is $F$.
According to an observer at rest, standing next to the table, the work done is $W=F.d$. Since there is no other force contributing to this, I could say this is also the net work done. In that case, if the block was initially at rest and had a velocity $v$ by the time it reached the edge of the table, I could say the total work done is the change in kinetic energy, i.e. $\frac{1}{2}mv^2$.
So we have established $F.d=\frac{1}{2}mv^2$
Now consider a second observer moving past the table at some velocity $u$ in the direction opposite to which I'm pushing the block. This person observes the force that I applied is still $F$ since he is also in an inertial frame. He also sees the length of the table as $d$. He too concludes that the total work done by me is $W=F.d$.
However, the velocity of the block is different with respect to him. According to him, the block starts at a velocity $u$, and at the end of the table, it has a velocity $v+u$. So the change in kinetic energy is not the same as in case of the stationary observer.
So, on one side of the equation, we have a kinetic energy difference which is different from the first observer's. This would suggest that the net work done should be different as well. However, according to both observers the force and the distance remain the same, so the work done must be $F.d$.
I can't seem to resolve this apparent paradox. How can the work according to both be the same, and yet the 'net work' or change in kinetic energy be different at the same time. There is no concept of potential energy here, and no other forces seem to work.
| Trula is right and here are the details.
Relative to the observer moving at $u$, the work done by the force is
$$W=F(d+ut)=F(d+u\times\frac{2d}{v}) = Fd(1+\frac{2u}{v})\tag1$$
since the time for the mass to cover the distance $d$ is $\frac{d}{v/2} = \frac{2d}{v}$ from the average speed of the mass.
The apparent gain in kinetic energy is $$\frac{1}{2}m(u+v)^2 -\frac{1}{2}m(u)^2 = \frac{1}{2}m(v^2+2uv) = \frac{1}{2}mv^2(1+\frac{2u}{v})\tag2$$
and (1) and (2) are consistent.
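For anyone who wants to verify the algebra, here is a small SymPy check (my own sketch) that (1) and (2) agree:

    import sympy as sp

    m, v, u, d = sp.symbols('m v u d', positive=True)
    F = m * v**2 / (2 * d)                                   # from F d = m v^2 / 2 in the table frame
    work = F * (d + u * 2 * d / v)                           # equation (1)
    delta_ke = sp.Rational(1, 2) * m * ((u + v)**2 - u**2)   # equation (2)
    print(sp.simplify(work - delta_ke))                      # prints 0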
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/681225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If you were invisible, would you also be cold? If you were invisible, would you also be cold? (Since light passes through you, so should thermal radiation.)
Additionally, I'd like to know if you were wearing invisible clothes, would they keep you warm? In my understanding, the heat radiation from the body would pass through the cloth.
Is it even necessary to be permeable for heat radiation in order to be invisible? Could there be a form of invisibility (hypothetically speaking, of course) that makes you permeable for light in the visible spectrum, but not for heat radiation? Can those two things be separated?
|
if you were wearing invisible clothes, would they keep you warm? In my understanding, the heat radiation from the body would pass through the cloth.
Clothes don't just block the radiated heat, they also stop conduction and convection with the ambient air. I haven't done the calculations but I suspect invisible clothes would only be marginally less warm.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/681335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 9,
"answer_id": 7
} |
Physical reasons for why systems are chaotic? Are there any reasons why a system would exhibit chaotic behavior? Or is this something only found through numerical modelling or experimental testing?
For example, the simple forced, damped pendulum or the duffing oscillator. Were these experimented on and it was found that they were sensitive to initial conditions, and then examined further to prove the 3 chaotic properties and finally deemed to be chaotic? Or is there something physical about them that gives away a possibility to chaos?
If it is the former, how would we determine chaotic systems? Just trial and error until all 3 properties are proven?
| You model the system with differential equations and evaluate the differential equations (without necessarily needing to use computer simulation or experimentation, although both, especially simulation, are powerful tools). By so doing you predict under what conditions the system will exhibit chaotic behavior and what the characteristics of the chaotic behavior will be.
I don't think the "how do we determine" question is answerable in a short forum post, even to someone with a math or physics degree. The answer is most of a university math course or textbook on nonlinear dynamical systems. The prerequisites of such a class would be the calculus sequence, differential equations, and linear algebra.
As with any course there are plenty of options. The one I've read is S. H. Strogatz's book Nonlinear Dynamics and Chaos, and I found it unusually clear and easy to read for a math textbook. Cornell has a lecture series by the author, following the book, available for free on youtube. I haven't listened to them, so I don't know how well Strogatz lectures, but the book was excellent and having a lecture series to go along with a book helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/681578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Can I conclude that acceleration happens a bit later after force is felt? We define forces like the electric force, magnetic force and gravitational force, etc., to be caused by fields such as the electric field, magnetic field and gravitational field respectively. Since these fields take time to reach the object on which the force is applied, the acceleration should occur after the force is applied. Also, does this apply to all cases, or are there any interactions that happen on contact?
What I think is that when object A applies a force on B, A first feels the force and then B feels it and accelerates. That would mean the force acts on B and B accelerates at the same moment, but A feels the force first.
| Mathematically, acceleration exists at time zero, before there is any displacement or velocity, since both involve a time integral.
However, because material objects are not infinitely stiff, forces between objects being pressed into contact build up according to Hooke's law for elastic solids, which means that accelerations in this case are also not instantaneous, but Newton's laws still hold.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/681734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Spacetime effects on human scale objects? For a human standing upright on the earth, gravity would have a different value at the feet than at the head, and gravity influences the flow of time. Does the difference in the flow of time cause any effects?
I was toying with the idea that gravitational acceleration is just nature trying to compensate for time flowing at different speeds with a preference for moving towards slower timeflow.
Highschool level question.
| This can be thought of the other way around. Suppose a spaceship in outer space with an acceleration $g$. The crew would feel 'gravity' normally, as on Earth.
But according to relativity, for the ship to keep the same distance between its parts (as it must to preserve its integrity), the 'bottom' portions must have a bigger acceleration than the 'top' parts. In particular, the head of a person standing up has a smaller acceleration than the feet, and a clock at the head ticks faster than one at the feet.
Of course the difference is completely negligible for all practical purposes. But it shows the correlation between time differences and acceleration. Both are linked, but I don't think that one causes the other.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/682058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
What does GR get right that QFT gets wrong, and vice versa? I wondering what precisely it was, in terms of predictions of observations, that General Relativity gets right, that QFT cannot explain. And what QFT gets right, that GR cannot explain.
I'm assuming GR cannot predict quantum effects, like wave-particle duality, but is there anything else? Or a more thorough list?
| GR is a theory meant to describe gravity on macroscopic scales, far larger than even those where Newtonian mechanics is usually applied. Thus it only describes macroscopic objects; as noted in a past answer, it is not a particle physics model and cannot describe microscopic particles in the way QM or QFT do.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/682234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 3
} |
Clarification on the displacement in the definition of Work I'd like to ask a question about work. The definition of work gives us a way to calculate the work done by a force along a path, but in practice it's not always clear which path to take into consideration. Moreover, the fact that work is defined along a path is not taken into consideration when applying the conservation of energy. Could someone clarify these points?
I'd like to give an example to make my position clearer. There's a ball in pure rolling down a slope ($v=\omega R$) with friction. I've been told that in this case friction doesn't do work because, although the ball (the object on which friction is applied) is moving, the point of contact, where friction is applied, is not moving relative to the slope. This makes me think that I have a problem understanding the definition of work :)
| The effect of the friction is to decrease the net force: $F_{net} = mg\sin(\theta) - F_{fric}$. $$F_{net} = ma \implies mg\sin(\theta) - F_{fric} = m\frac{dv}{dt} $$
When there is no slip, the friction force can be expressed in terms of the moment of inertia and the angular acceleration $$F_{fric} R = I\frac{d\omega}{dt} \implies F_{fric} = \frac{I}{R}\frac{d\omega}{dt}$$
As $v = \omega R$, and multiplying both sides by $dx$:
$$dw = mg\sin(\theta)dx = m\frac{dv}{dt}dx + \frac{I}{R^2}\frac{dv}{dt}dx = (m + \frac{I}{R^2})\frac{dx}{dt}dv = (m + \frac{I}{R^2})vdv $$
$$vdv = \frac{1}{2}d(v^2) \implies dw = \frac{1}{2}(m + \frac{I}{R^2})d(v^2)$$
The work of the gravity force results in an increase of the translational kinetic energy (first term) and the rotational kinetic energy (second term). If there were some slip, only part of the torque would result in angular acceleration: $$F_{fric} R - \Delta = I\frac{d\omega}{dt}$$ In this case, the final expression for $dw$ would be $mg\sin(\theta)dx - \Delta dx$, the latter term being the work of the friction force. Without slip, all the work of the gravitational force results in kinetic energy.
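The effective inertia $m + I/R^2$ above also gives the familiar rolling acceleration; here is a small numeric sketch (my own, assuming a solid sphere):

    import math

    g, theta = 9.81, math.radians(30)
    m, R = 1.0, 0.1                      # arbitrary mass and radius
    I = 2.0 / 5.0 * m * R**2             # moment of inertia of a solid sphere (assumed shape)
    a = m * g * math.sin(theta) / (m + I / R**2)
    print(a, 5.0 / 7.0 * g * math.sin(theta))   # both ~3.50 m/s^2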
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/682355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Where is quantum probability in macroscopic world? How can macroscopic objects in the real world have always-true cause-effect relationships when the underlying quantum world is probabilistic? How does it never produce results different from what is predicted by Newtonian physics, except in borderline cases?
| Statistical mechanics is a field that gives some good perspective on this. It's quite common that the math works out where you have an equation to describe your system, and there is a variable $N$ in your equation for the number of particles. If $N$ is a small value, the equation and your system still look very quantum. But once $N$ takes large values, or you take the limit of the equation as $N$ approaches infinity (called the thermodynamic limit), you then see the system's macroscopic laws. This article might be of interest: https://arxiv.org/abs/1402.7172
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/683695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Is there a notion of a "Majorana boson"? In a similar manner to how we can define Majorana fermionic operators $\gamma_j$ via
$$
c_j \propto \gamma_{2j+1} + i \gamma_{2j}^\dagger,
$$
where the $c$'s are fermionic creation/annihilation operators. These operators are super useful when dealing with fermionic systems. I'm wondering if one can define and meaningfully use bosonic Majorana operators, i.e.
$$
b_j \propto \tilde{\gamma}_{2j+1} + i \tilde{\gamma}_{2j}^\dagger,
$$
where the $b$'s are bosonic creation/annihilation operators.
Is there a way to legalize these Majoranas?
| The correspondingly defined objects for bosons are the position operator $x=(a+a^\dagger)/2$ and the momentum operator $p=(a-a^\dagger)/2i$, respectively.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/683818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Does light have mass or not? We know light is made of photons and so it should not have mass, but light is a form of energy (light has energy) and has velocity ($c$), so according to $E=mc^2$, light should have mass... So what is correct?
| When the value of the mass is given by $mc^2=hf$, or $m=hf/c^2$, this is the equivalent Newtonian mass, which appears in the momentum, for example.
Under special relativity, it has no (rest) mass.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/683919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Why does mass bend the temporal dimension more than the spatial dimensions of spacetime? From my (limited) understanding of general relativity, most of what we experience as gravity is a result of the distortion of the temporal dimension, and not the spatial dimensions. Therefore, most of the spacetime curvature caused by the earth (and most astronomic objects, with the exception of maybe black holes) occurs along the temporal dimension, with very little on the spatial dimensions. This is why the bent sheet analogy is misleading, if I am not mistaken. Why is this so? Why aren't all four dimensions distorted equally, or the spatial dimensions distorted more than the temporal?
|
Why aren't all four dimensions distorted equally
Although 'spacetime' is an often-used term, one should clearly understand that space and time aren't interchangeable with one another; similarly, when people talk about curvature of spacetime, it is by definition curvature of space relative to time.
Why aren't all four dimensions distorted equally
If they were all equally distorted, you wouldn't notice any "distortion", being an observer belonging to the same "equally distorted" spacetime.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/684074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 2
} |
At what speed would a wind affect a bullet? Suppose we fire a gun loaded with the fastest bullet (.220 Swift, 1,422 m/s, or any bullet that is super fast with excellent aerodynamics) at close range (2 cm) from the tip of an air blower. What would the speed of the air coming out of the air blower have to be to deflect the bullet 90 degrees off course?
| For the bullet traveling directly at the air blower, stopping it within $2\,\mathrm{cm}$ requires, from the equations of motion, a deceleration of $5\times10^{7}\,\mathrm{m/s^{2}}$.
Air resistance is $$F=\frac{1}{2}\rho ACv^2$$ see for example this website
For air $\rho = 1.2$, $C=0.2$ (estimate) and $A=\pi r^2$ with $r=2.8\times 10^{-3}m$
so from $F=ma$ with a mass of the bullet of $2g$
$$0.377r^2v^2 = 2\times 10^{-3}\times5\times10^{7}$$
$$v^2 = 3.4\times10^{10}$$
$$v = 184,000m/s$$
We could subtract the speed of the bullet from this, but it doesn't make much difference.
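Re-running the same arithmetic numerically (my own sketch of the estimate above):

    import math

    rho, C, r = 1.2, 0.2, 2.8e-3        # air density [kg/m^3], drag coeff (estimate), radius [m]
    m, a = 2e-3, 5e7                     # bullet mass [kg], required deceleration [m/s^2]
    A = math.pi * r**2
    v = math.sqrt(m * a / (0.5 * rho * C * A))
    print(v)                             # ~1.8e5 m/s, matching the estimate above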
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/684458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Temperature of 1kg of matter when squeezed to its event horizon Let’s say I have 1kg of matter at room temperature (300K), held in a spherical configuration. I symmetrically squeeze this until it forms a black hole. What formula (formulae) would I use to (or how can I) calculate the new temperature of the 1kg of matter once it reaches its event horizon?
| When the air in a room is at $300 \mathrm{K}$, objects placed in that room only acquire a temperature of $300 \mathrm{K}$ as well if they can come to equilibrium with it. This will not be the case for a black hole. The purely classical prediction is that all of the air molecules will eventually fall into the black hole making it an infinitely long lived object with temperature zero. An improved analysis (due to Hawking) shows that if a black hole sits around for a long enough time without colliding with anything, it will evaporate. This means it has a nonzero temperature but one determined entirely by its mass.
The order of magnitude for this temperature can be read off from a previous question. Since you said $1 \mathrm{kg}$ instead of $0.2 \mathrm{kg}$, the temperature will be $1.2 \cdot 10^{23} \mathrm{K}$.
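A quick numeric check of that figure (my own sketch), using the Hawking temperature $T = \hbar c^3 / (8\pi G M k_B)$:

    import math

    hbar, c, G, kB = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23
    M = 1.0                                          # black hole mass in kg
    T = hbar * c**3 / (8 * math.pi * G * M * kB)
    print(T)                                         # ~1.2e23 K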
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/684620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there a name for the type of boundary condition where the initial boundary values are known but are not held constant over time? I'm exploring the heat equation to model a particular 1D scenario, and I understood the Dirichlet and Neumann boundary conditions, but neither is sufficient for my scenario. Assuming a rod of length L, I want the boundaries to have a particular initial value ($U(0,0) = 400$, $U(L,0) = 300$), but the temperatures at the boundaries do not need to be constant across time ($U(0,0) \ne U(0,t)$, $U(L,0) \ne U(L,t)$). Heat does flow in and out of the boundary, but only towards the rod, not the air.
Now, my question is, is there any sort of name for this type of boundary condition, where the initial boundary values are known, and are not held constant over time?
I hope the explanation of my scenario was clear. Please drop a comment in case you need clarification on some point.
| If there are boundary conditions
$$
\phi_0(t) \equiv U(0,t),\\
\phi_1(t) \equiv U(L,t),
$$
and the initial condition
$$
f(x) \equiv U(x,0),
$$
then you have what is called time-dependent boundary conditions, see e.g. these lecture notes. The problem statement in your question represents a special case, where only the values of $f(x)$ at two points are specified. I don't know of a more specific name for this case (and I wonder if the problem is actually underdetermined, but that is probably a question for Math.SE).
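In case it helps, here is a minimal explicit finite-difference sketch of my reading of the scenario (my own illustration; the diffusivity, grid, and interior initial temperature are made-up values): the end values 400 and 300 are only initial data, and the ends are insulated from the air (zero outward flux), so they relax toward the interior over time:

    import numpy as np

    L, nx, alpha = 1.0, 51, 1e-4            # rod length, grid points, diffusivity (assumed)
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / alpha                # within the explicit stability limit
    u = np.full(nx, 350.0)                  # assumed interior initial temperature
    u[0], u[-1] = 400.0, 300.0              # U(0,0) = 400, U(L,0) = 300
    for _ in range(2000):
        u_new = u.copy()
        u_new[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        u_new[0], u_new[-1] = u_new[1], u_new[-2]   # insulated ends: zero temperature gradient
        u = u_new
    print(u[0], u[nx // 2], u[-1])          # the boundary values drift; they are not held fixed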
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/684914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How does intensity depend on slit-width regarding Fraunhofer diffraction? For a single slit, considered to be infinitely long, with size $b$ the intensity at any angle is given by:
\begin{equation}
I(\theta)=I(0) \bigg( \frac{\sin \beta}{\beta} \bigg)^2
\end{equation}
where,
\begin{align}
\beta=(\frac{\pi b}{\lambda})\sin \theta
\end{align}
However, this tells me nothing about how $I(0)$ varies with slit-width. What happens to the value of $I(0)$ when we double the slit width for example and how can this be derived? I am looking for a function like $I(0)=f(I_0,b,\lambda)$.
By dimensional analysis it looks like that $I(0) \propto I_0(\frac{b}{\lambda})^n$
| If we double the slit size, it is assumed $I(0)$ would double; the intent of the formula is to show the variation with angle that describes the observed pattern.
The formula does not care if one person uses a 1 watt laser and the next a 2 watt laser, or if one person exposes the image for 1 second while the next exposes for 1 hour.
If you want to be really precise you would need to know the shape of your laser beam, most have a Gaussian profile.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/685136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How come the number of wandering electrons is same as the number of the positive ions? My book mentions the following:
Cause of resistance: When an ion of a metal is formed, its atoms lose electrons from its outer orbit. A metal (or conductor) has a large number of wandering electrons and an equal number of fixed positive ions. The positive ions do not move, while the electrons move almost freely inside the metal. These electrons are called free electrons. They move at random, colliding amongst themselves and with the positive ions in any direction as shown
The book mentions that a metal has a large number of wandering electrons and an equal number of fixed positive ions. My doubt is this: let's say the metal is aluminium. Since aluminium has 3 valence electrons, a single atom will lose 3 electrons, which become free electrons in the metal. So, since an atom loses 3 electrons to form a cation, shouldn't the number of wandering electrons be three times the number of positive ions? How come the number of wandering electrons is the same as the number of positive ions?
| Strictly speaking, it should be charge of ions, rather than number of ions. However, the ionization energy of an atom increases the greater the charge of the atom. For instance, the first ionization energy of Al is $577.5 \text{kJ mol}^{-1}$, while the second ionization energy is $1816.7$, more than three times as much. So once one atom loses (note spelling) an electron, it's easier to take the next electron from an atom that hasn't lost any electrons. In the case of, say, a typical capacitor, the number of net electrons missing from one plate will be large in an absolute sense, but as a proportion of total atoms, it will be miniscule, so we can model the system in terms of ions with single charge each.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/685325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Variations in Refractive Index of Materials It's quite a common fact that different types of glass have different refractive indices. Most sites I've found attribute these differences to variations in the 'density' of the glass, which is not very satisfying of an answer.
My questions:
*
*What is the underlying physical mechanism that determines the refractive index and its variations in glass?
*For other materials, such as Silicon/Gold/Silver/etc., does the refractive index have a fixed and known value (by model or in principle) under the assumption that the material in question is 'pure' (i.e. no foreign materials)?
| Heavy glasses are mainly heavy because they use elements such as lead, with heavy nuclei. As a result, these materials also have more electrons per atom, especially in the higher (more loosely bound) orbitals.
The more loosely bound the electrons, the more they will react to the electric field of the wave. How exactly this increases the refractive index I don't know, but I will provide a handwaving argument that is hopefully useful to you. The electrons getting dragged around (i.e. a plasmon) have an effective mass, and thus the plasmon has kinetic energy, which takes away energy from the actual photon and reduces the propagation speed.
I hope this is not too inaccurate, but hopefully someone else can expand
Tables on refractive index and loss for metals. Metals mainly have a large loss term because of unbound electrons which couple to phonons and dissipate energy. When you dope or mix metals, surely their refractive index will change. Metals also have a real index of refraction, which is strongly frequency dependent as can be seen in the link. Unfortunately, the simple picture I draw here doesn't provide any clue as to why this is so.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/685482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is the sensation of apparent acceleration within the frame or visible force source enough to know if that frame is non-inertial? In Renato Brito's book Fundamentals of Mechanics, a property of the non-inertial frame is defined as follows:
A non-inertial reference frame is any frame that has an acceleration relative to an inertial reference frame. For this reason, non-inertial frames are also known as accelerated frames.
Considering this statement in the book, is it really necessary to compare two frames of reference to know whether each of them is inertial or not, or is it possible to determine whether a frame of reference is inertial just from the apparent acceleration, or from the perception that there is a source of force acting on the frame?
| Indeed you are correct, it is not necessary to refer to a second frame in order to determine if the first is inertial. You can simply use accelerometers. If the acceleration relative to the reference frame is not equal to the acceleration measured by the accelerometer (for all accelerometers) then the frame is non-inertial.
For example, say we are using a spinning space station as our reference frame. An accelerometer at rest on the space station is not accelerating relative to the reference frame, but the accelerometer measures centripetal acceleration. Therefore it is a non inertial frame. No comparisons to other frames are needed
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/685601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Rotation of a freely falling body Suppose a straight rod with one end denser than the other is dropped from a height at an angle. Will the rod hit the ground at the same angle or will air resistance cause it to straighten and hit the ground with the denser end? What will happen if the same thing is repeated in vacuum?
| The denser end has less buoyancy and will eventually start tilting downwards in a precessing motion. When damped, it will point straight down.
In a vacuum, the entire rod will have no buoyancy, so it will drop straight down without experiencing any torque.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/685681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can supercapacitors not implode? How can supercapacitors store $5\,\mathrm{coulombs}$ and not implode due to the enormous force between the plates ($10^{15}\,\mathrm{N}$ if the plates are $1\,\mathrm{cm}$ apart)?
| This is a good question. It comes down to two factors: The 'plates' have dielectric material separating them, and the effective size of the plates is large, relatively speaking.
The dielectric material has positive and negative charges that align themselves with the electric field of the electrodes. Fig 2 in this link shows a very simplified view of what goes on. The charges in the electrolyte move and align themselves with the electrodes, so the force each electrode experiences is actually due to the local charges near it, not to the opposite electrode all the way across. Also keep in mind that unlike this figure, the electrodes are surrounded on all sides by electrolyte. In other words, it's not a net force acting on each plate, but rather just a local force on each microscopic part of the plates.
This brings us to the next part. The electrodes are made up of a highly porous matrix of carbon and other materials, like this illustration. The effective surface area is high, on the order of 1000 $\mathrm{m}^2/\mathrm{g}$. So even though the stored charge is high, the surface charge density is low, or at least low enough for the materials to handle.
We can do some rough back-of-the-envelope calculations. Assume:
*
*1000 $\mathrm{m}^2/\mathrm{g}$ surface area.
*100 F/g capacitance.
*2.7 V breakdown voltage.
Then one obtains a charge of $q = Vc = 2.7 \times 100 = 270$ C/g. Using the naive formula for force between parallel plates:
$$
F = \frac{Q^2}{2A\epsilon_0}
$$
One obtains a pressure on the order of ~1 GPa. This is by using vacuum permittivity, which doesn't exactly apply here, but we can use it anyway for a rough estimate. 1 GPa is much below what would be required to e.g. tear apart the conductors at the molecular level.
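The same estimate in code (my own sketch of the arithmetic above):

    eps0 = 8.854e-12
    q_per_gram = 2.7 * 100.0            # C/g, from 2.7 V times 100 F/g
    area_per_gram = 1000.0              # m^2/g
    sigma = q_per_gram / area_per_gram  # surface charge density, C/m^2
    pressure = sigma**2 / (2 * eps0)    # electrostatic pressure between the charge layers
    print(pressure)                     # a few GPa -- the ~GPa order of magnitude quoted above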
Note: Capacitors with vacuum/air dielectric do exist, however their capacitances are very low, thus the amount of charge stored on them (and the force between the plates) is low.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/685924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 1,
"answer_id": 0
} |
Why in the first Friedmann equation quantity $ρ$ is directly proportional to Hubble's constant despite the fact that gravity counteracts expansion? Here is the first Friedmann equation:
$$H^2 = \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3}$$
We know that matter and energy, through gravity, slow down or reverse any expansion of the fabric of spacetime. Yet in some contexts, and especially here with this equation, I encounter the fact that the matter and energy content of the universe increases the expansion rate instead of the opposite, as if there were an anti-gravitational force in effect. How so?
| You ignored the $k$ term, but it's crucial here. $k$ is the curvature not of spacetime but of constant-$t$ spatial slices, so it depends not only on spacetime curvature (represented by $ρ$ and $Λ$) but also on the extrinsic curvature of the spatial slice in the spacetime (represented by $\dot a/a$). You can think of this equation as showing the relationship between the spatial curvature that appears in the metric and the physical spacetime curvature.
In the special case $Λ=ρ=0$, the equation becomes $\displaystyle H^2 = \frac{-kc^2}{a^2}$, which implies $R = c/|H|$, where $R=\sqrt{|k|}/a$ is the radius of curvature. ($k/a^2$ is the Gaussian curvature.)
If also $p=0$, then there is no Riemann curvature, the spacetime is Minkowski space, and this equation is easy to interpret: the radius of curvature equals the time since the big bang (or until the big crunch). This makes sense because surfaces of constant $t$ are surfaces of constant distance from the $t\to0$ limit of all comoving worldlines, which is a single point in Minkowski space. In Euclidean space, the points at a distance $R$ from a point form a sphere with curvature radius $R$; in Minkowski space the points at a timelike distance $R/c$ form a hyperbolic plane with curvature radius $R$.
The spatial curvature has, in this special case, no physical significance: you can cover the same region of spacetime with many different FLRW charts with different values of $k$ and $H$. It's a coordinate artifact.
When $ρ\ne 0$, the symmetry is broken by the presence of matter, and the FLRW coordinates are dictated by the broken symmetry, so it is not a pure coordinate artifact, but the equation still expresses the relationship between inherent curvature and the curvature of a particular coordinate system, and not a physical force.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/686048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
How mass of different components in a solution get affected in centrifugation process? The centrifugation technique is used in the laboratory, or even at home, to separate colloidal solutions whose constituents have different densities. For example, butter from milk.
So how does centrifugation help in separating components of different densities in a solution? Why do higher- and lower-density components form two different immiscible layers during this procedure? That is, how does the mass of the components affect their separation?
| It’s the density, not the mass, that matters.
In the rotating frame of reference, the “apparent gravity” that appears due to the (fictitious) centrifugal force will be many times higher than g.
Suspended particles are subject to gravity, buoyancy, and the individual impacts of other molecules in the liquid. For small particles in suspension, the tiny net force of gravity (gravity minus buoyancy) may be too small to overcome the effect of molecular impacts, so the particle remains suspended and subject to Brownian motion.
When you turn up the speed of centrifugation, you effectively “turn up gravity (and buoyancy)”. This makes it strong enough to cause drift (to either the inside or the outside, “top or bottom”) of particles. The higher the density difference compared to the suspending liquid, the greater the force moving the particles. Thus the heavier (denser) particles will get to the bottom first, while the lightest particles will drift to the top.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/686310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Will a planet rotate if it is the only being in the universe? As a senior student, I have been wondering what the word inertia really means. Does inertia lie in the interaction between all objects, or is it the nature of space itself, even without anything put into it? In everyday life it seems like the latter, since wherever you throw a stone into space it will go along a parabola. But that is not the whole story, for there are still the earth and the sun and all the distant galaxies that interact with the stone from outside its region of motion.
So if all the interactions are removed, and there's only a planet thrown into a universe of nothing, will it rotate? Or can we detect its rotation through, for example, a Foucault pendulum?
If not, can we conclude that inertia relies on the interaction of objects, and is thus a consequence of universal gravitation?
| @ummg's comment that you might want to read about Mach's Principle, https://en.wikipedia.org/wiki/Mach%27s_principle, is the right answer to your question.
Along the same lines, you might think about linear motion. Consider a universe with just two rigid bodies, say $m_1$ and $m_2$, and a compressed spring (of negligible mass so we needn't consider it a third body) placed between them. Release the spring, and $m_1$ and $m_2$ begin accelerating away from each other as the compressed spring expands. But all you can observe is their total acceleration $\ddot{\vec{r}}$ away from each other. There's no possible way to separately measure their individual accelerations $\ddot{\vec{r_1}}$ and $\ddot{\vec{r_2}}$, because (similarly to your example) there's no "rest of the universe" providing a fixed background reference frame. And so (in this example) there seems to be no empirical way to define "inertial mass".
And lots of similar "goofy" little problems emerge when you start considering the issues that Mach's principle tries to address.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/686772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 7,
"answer_id": 0
} |
Is energy really Conserved in rolling motion? Energy is conserved in pure rolling motion. Then why does the ball stop its motion after some time? I think it's not just a matter of air drag. Does all the work get transferred to the surroundings in the form of heat?
| In an ideal case, once rolling starts, friction stops acting as there is no relative motion at the point of contact and the ball keeps on rolling.
In a real-life scenario, there is some deformation at the point of contact, and the normal force shifts slightly and no longer passes through the centre. On performing torque analysis about the centre of mass, you will notice that the force of friction (acting opposite to the velocity of the centre of mass of the ball so as to slow it down) acts to increase the angular velocity of the ball; however, the slightly shifted normal force now provides a torque in a sense opposite to that of the angular velocity, therefore slowing the ball down eventually.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/687054",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
What mechanism will force mechanical watch to tick slower when go fast, due to relativistic effects? To make a mechanical watch tick slower, the watch's tick rate must be changed: the oscillation of the balance wheel must somehow be changed. How would speed change the oscillation of the balance wheel, due to relativistic effects?
I don't understand what mechanism connects speed to the parts inside a mechanical watch such that it would somehow mysteriously start ticking slower.
This video shows how a watch works.
| The tricky thing is: it is not the watch that ticks differently, it is time itself.
Let us consider the situation you proposed on the comments: you are on a fast rocket and there is a clock on Earth. What do you see? You see your watch ticking just as usual, while the clock on Earth (which you are looking at e.g. with a telescope) is ticking slower. However, if I am on Earth, I'll see the clock ticking as usual, while I'll see your watch ticking slower.
But we will not only see the clocks and watches ticking slowly. We'll see everything happening slowly. I'll see you moving slowly, I'll see things dropping to the ground slowly, you'll see me moving slowly; everything slows down.
It is not the watch that changes. It is time itself. Time is not something absolute that governs everything and everyone. It is relative. Time depends on where you are and what you are measuring.
I have written some other posts on these problems. You might be interested in this, this, and this.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/687214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
"The resultant of two forces of equal size, that form an angle, is lowered by 20% when one of the forces is turned in the opposite direction."
"The resultant of two forces of equal size, that form an angle, is lowered by 20% when one of the forces is turned in the opposite direction."
Does anyone know how one would go about trying to find the angle where this happens? I've been reading an old textbook on Mechanics and it has stumped me for quite some time now.
The book is originally written in Swedish so forgive my bad translation skills.
| In the first case the two forces are defined as:
$$\vec{F}_1 = F \hat{\imath} \quad \text{and} \quad \vec{F}_2 = F \cos\alpha \hat{\imath} + F \sin\alpha \hat{\jmath}$$
and their resultant force is:
$$\vec{F}_R = F (\cos\alpha + 1) \hat{\imath} + F \sin\alpha \hat{\jmath}$$
When you take force $F_1$ to point in the opposite direction, i.e. $\vec{F}_1 = -F \hat{\imath}$, the resultant force is:
$$\vec{F}_R' = F (\cos\alpha - 1) \hat{\imath} + F \sin\alpha \hat{\jmath}$$
The ratio of resultant force magnitudes is:
$$\frac{|\vec{F}_R'|}{|\vec{F}_R|} = \frac{1 - \cos\alpha}{\sin\alpha} \quad \text{and} \quad |\vec{F}_R'| = p \cdot |\vec{F}_R|$$
From the above equation it follows that
$$1 - \cos\alpha = \frac{p^2 + 1}{2} \sin^2\alpha$$
and the final solution is
$$\boxed{\alpha = \arcsin \Bigl( \frac{2p}{p^2 + 1} \Bigr)}$$
For $p = 0.8$ the angle is $\alpha = 77.3^\circ$.
Here is the detailed expansion for the magnitudes ratio:
$$\frac{|\vec{F}_R'|}{|\vec{F}_R|} = \frac{F \sqrt{\bigl(\cos\alpha - 1\big)^2 + \bigl(\sin\alpha\bigr)^2}}{F \sqrt{\bigl(\cos\alpha + 1\big)^2 + \bigl(\sin\alpha\bigr)^2}} = \frac{\sqrt{1 - \cos\alpha}}{\sqrt{1 + \cos\alpha}} \cdot \frac{\sqrt{1 - \cos\alpha}}{\sqrt{1 - \cos\alpha}} = \frac{1 - \cos\alpha}{\sin\alpha} = p$$
Here is the detailed expansion for the above trigonometric equation:
$$1 - \cos\alpha = p \sin\alpha \rightarrow 1 + \cos^2\alpha - 2\cos\alpha = p^2 \sin^2\alpha \rightarrow 2 - 2\cos\alpha = (p^2 + 1) \sin^2\alpha$$
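A quick numeric check of the boxed result (my own sketch):

    import math

    p = 0.8
    alpha = math.asin(2 * p / (p**2 + 1))
    print(math.degrees(alpha))                      # ~77.3 degrees
    print((1 - math.cos(alpha)) / math.sin(alpha))  # recovers p = 0.8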
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/687447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How did Ernest Sternglass’ phenomenologically incorrect model of the neutral pion predict its mass and lifetime so accurately? In 1961, Ernest Sternglass published a paper where, using what seems to be to be a combination of relativistic kinematics and Bohr’s old quantisation procedure, he looked at the energy levels of a set of metastable electron-positron states, and found the lowest of these to be a mass surprisingly close to the measured mass of the neutral pion. He also calculated its lifetime, through what looks to me to be a form of dimensional analysis, to be close to that of the neutral pion also.
We now know, of course, that this is not the correct model of the neutral pion, but how did his analysis manage to produce these curiously close results? Is it understandable in terms of our modern model of neutral pions, a mistake in the argument, a coincidence, or some combination of these?
| It is a coincidence. The claim in the article can be summarized as
$$
\frac{\alpha}{2}\frac{m_\pi}{m_e}=0.96
$$
which is close to $1$. This relation doesn't have a deep origin, it is just a coincidence of numerical factors. Particle physics has dozens of numerical parameters, and thousands of possible ways to combine them into dimensionless expressions. Eventually, you will find many combinations that are close to $1$.
Note that $\alpha$ and $m_e$ are parameters of electromagnetism, and $m_\pi$ is a parameter of the strong force. As far as we know, these two forces are completely independent (at least at the energies relevant to the calculation; they may unify at larger energies but that would not matter as far as the calculation of $m_\pi$ is concerned). There is no fundamental principle of nature that relates $m_\pi$ to $m_e$ or $\alpha$. The pion mass depends on the quark masses and $SU(3)$ interactions alone; it can be predicted using lattice QCD without even introducing the electromagnetic $U(1)$ sector (of course, the mass splitting of $\pi^0$ and $\pi^\pm$ does care about EM, but this is a tiny subleading effect).
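Plugging in measured values (my own sketch) reproduces the quoted number:

    alpha = 1 / 137.035999           # fine-structure constant
    m_pi0 = 134.9768                 # neutral pion mass, MeV/c^2
    m_e = 0.51099895                 # electron mass, MeV/c^2
    print(alpha / 2 * m_pi0 / m_e)   # ~0.96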
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/687820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Why do we need the concept of Gravitational and Electric Potential? I understand that we need potential energy for the concept of energy conservation. However, why would we come up with a definition like 'energy required per unit mass/charge to bring the mass/charge from point A to B. The part says 'per unit mass/charge' allegedly to avoid mass/charge dependence as the potential energy depends on the mass/charge. Why do we need to get rid of the mass/charge dependence and invent a new concept like 'potential' out of potential energy?
|
However, why would we come up with a definition like 'energy required
per unit mass/charge to bring the mass/charge from point A to B.
First of all, the concept of gravitational or electrical potential is that of an absolute quantity requiring the assignment of a value of zero potential to some point. It has no real physical significance because its value depends on the arbitrary selection of the point taken to be at zero potential. Griffiths, in his book "Introduction to Electrodynamics", makes the following statement:
"Evidently potential as such carries no real physical significance, for at any given point we can adjust its value at will by a suitable relocation of 0"
His statement applies equally to gravitational and electrical potential.
What really matters is potential difference, which is independent of the point where a potential of zero is assigned. The electrical potential difference, or voltage $V$, between two points is defined as the work per unit charge (Coulomb) to move the charge between the two points, which will be the same regardless of the point assigned a potential of zero. The same applies to gravitational potential difference, though it is seldom used.
The electrical potential difference, $V$, is an essential concept in electrical circuit analysis. It gives us the electrical potential energy gained or lost per unit charge in moving the charge between two points so that we can apply Kirchhoff's voltage law for an electrical circuit. Although the electrical potential at a given point may be different depending on where the potential of zero is defined, the potential difference between any two points in the circuit will be the same regardless of the point selected to be zero potential.
Hope this helps.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/687983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
How to draw the phase plane of this equation? Using various computational tools, it's possible to draw a phase plane from two first-order ODEs or a single second-order ODE. However, when there is a parameter in the equation and we don't know the value of the parameter, is there any way to draw the phase plane and see the changes with respect to the parameter? For example (e-print), if we have two first-order ODE
$$ \frac{dx}{dt} = \alpha T x - \beta xy$$
$$ \frac{dy}{dt} = \alpha T y - \beta xy$$
can we draw the $x$-$y$ phase plane? We are not given any value of $\alpha$ and $\beta$, but we are given a few constraints:
$$\gamma = \frac{x-y}{x+y}\;,\;\;\;\;\;\;\frac{dT}{dt} = -\left(\frac{dx}{dt}+ \frac{dy}{dt}\right)$$
$$\text{so,}\;\;\;\frac{d\gamma}{dt}=\frac{\beta}{2}(x-y)(1-\gamma^2).$$
| Comment masquerading as an answer, to avoid macaronic sequences.
Looks like $T$ is dross, to be eliminated as $T = c - (x+y)$, for
some constant $c$.
You may then divide your two ODEs by each other, and get $dx/dy$
as a function of $x$ and $y$, much less pretty than Lotka-Volterra, but straightforward to plot numerically for selected values of the parameters. $\alpha$ may of course be absorbed into $\beta$, so it too is dross,
$$
\frac{dx}{dy}=\frac{x}{y} ~~\frac{x+y-c +\frac{\beta}{\alpha} y} {x+y-c +\frac{\beta}{\alpha} x} ~.
$$
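For completeness, a minimal Python sketch (parameter values and initial conditions chosen arbitrarily for illustration; only $\beta/\alpha$ and $c$ actually matter) that integrates the original system with $T$ eliminated and traces a few trajectories in the $x$-$y$ plane:
```python
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

alpha, beta, c = 1.0, 0.5, 10.0   # illustrative values

def rhs(t, state):
    x, y = state
    T = c - (x + y)               # T eliminated via the conservation law
    return [alpha * T * x - beta * x * y,
            alpha * T * y - beta * x * y]

for x0, y0 in [(1, 2), (2, 1), (3, 4), (0.5, 4)]:
    sol = solve_ivp(rhs, (0, 20), [x0, y0], max_step=0.01)
    plt.plot(sol.y[0], sol.y[1])

plt.xlabel("x")
plt.ylabel("y")
plt.show()
```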
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/688077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Dirac-delta-distribution charge density Are the charge distributions $$\rho(\vec{r})=\frac{Q}{2\pi R^2}\delta(r-R)\delta(\vartheta-\pi/2)$$ and $$\rho(\vec{r})=\frac{Q}{2\pi r^2\sin(\vartheta)}\delta(r-R)\delta(\vartheta-\pi/2)$$ of a charged circle the same? I would say yes because integrating over them gives the same result but is this in general true?
| Yes, since $\delta(r-R)\delta(\vartheta-\pi/2)$ is zero everywhere except $\left(r,\vartheta\right)=\left(R,\pi/2\right)$, we can replace
$$
\frac{Q}{2\pi r^2\sin(\vartheta)}
$$
with its value at $\left(r,\vartheta\right)=\left(R,\pi/2\right)$:
$$
\frac{Q}{2\pi R^2}
$$
If you have a copy of Griffiths E&M he discusses this property of the Dirac delta function (section 1.5.2, equation 1.88, at least in the Third Edition).
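As a quick check, here is a minimal SymPy sketch (radius and total charge kept symbolic; it relies on SymPy's handling of DiracDelta with a positive symbolic radius) verifying that both densities integrate to $Q$ over all space:
```python
import sympy as sp

Q, R = sp.symbols('Q R', positive=True)
r, theta, phi = sp.symbols('r theta phi', positive=True)

rho1 = Q / (2 * sp.pi * R**2) * sp.DiracDelta(r - R) * sp.DiracDelta(theta - sp.pi / 2)
rho2 = Q / (2 * sp.pi * r**2 * sp.sin(theta)) * sp.DiracDelta(r - R) * sp.DiracDelta(theta - sp.pi / 2)

dV = r**2 * sp.sin(theta)   # spherical volume element (per dr dtheta dphi)

for rho in (rho1, rho2):
    total = sp.integrate(rho * dV, (r, 0, sp.oo), (theta, 0, sp.pi), (phi, 0, 2 * sp.pi))
    print(sp.simplify(total))   # Q in both cases
```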
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/688479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Would light bend the other way, if I use antimatter instead? Imagine the following setup: an antimatter straw, an antimatter glass filled with antimatter water and we have antimatter atmosphere just in case. My question is: does Snell's law still apply here as though they are regular matter, if I were to observe the straw inside the water?
| We think antimatter refracts light like “ordinary” matter, but we don't know for certain. As the Wikipedia article on antimatter says:
There are compelling theoretical reasons to believe that, aside from the fact that antiparticles have different signs on all charges (such as electric and baryon charges), matter and antimatter have exactly the same properties.
However, theory always needs to be experimentally verified. The Antiproton Decelerator at CERN is able to make and trap small numbers of antihydrogen atoms, and there are a series of ongoing experiments which are investigating the detailed physical properties of these antihydrogen atoms. We expect them to behave like "ordinary" hydrogen atoms, but any of these experiments could produce unexpected results which would open up whole new areas of physics (which is why vast amounts of money are being spent on them at CERN).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/688618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Electric field of an electron in motion in a wire How do I correctly model the electric field of an electron in motion in a wire? I could treat the electron as a point charge moving through the wire. If I use the Liénard-Wiechert equations, they will predict radiation if the wire turns, since the electron is being accelerated here. But we know that constant currents don't radiate like this.
Alternatively I could view the electron as a wave function distributed over the entire wire. Which equations would I then use to obtain the field? And would the wave function then be used as a charge distribution?
| The electrons in a conductor occupy quantum momentum and energy states in a band structure. I think that it is not possible to model them as classical moving charges as in the Liénard-Wiechert equations.
For example, even without any external E-field applied, each electron has a momentum, so they are all moving. As the conductor has boundaries, at some time they must change their momentum, otherwise they would escape outside. But those accelerations don't generate EM waves, of course.
It is similar to electrons in atomic orbitals. They can have angular momentum, but don't radiate.
Alternatively I could view the electron as a wave function distributed
over the entire wire
It is the band structure. A simplified model to get it is the Kronig-Penney model. Its main hypothesis is the electric field on the electrons from the atoms of the lattice, which results in a periodic potential. The effect of the electric field from each electron on its neighbours is not part of the calculation. I don't know a model that also includes it.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/688744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How would Newton explain mirages? Suppose we think of light as photon packets with tiny momentum, then with this picture in mind, go and see the refraction of light in mirages:
We see that the packets of light photons must be continuously under a nature of force since it must change natural path direction. How would Newton have explained this force since he had no knowledge of wave theory?
| This is not how photons are viewed. They are not moving particles in the Newtonian sense, and this has to do with the wave-particle duality of quantum theory. Light is absorbed by matter like a particle but propagates like a wave. Photons are the quanta of energy passed to matter. So refraction is an example of the wavelike nature of light, not the curved trajectories of light particles. It is interesting that Newton thought of light as literally particles, in the sense that you are implying here. But his conception differed greatly from the modern photon.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/688905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Particle density vs. Probability Density in Quantum Mechanics I am currently reading trough "Bose-Einstein Condensation and Superfluidity" by Pitaevksii and Stringari and noticed some inconsistencies in my reasoning.
In Chapter 5 (Non-uniform Bose gases at zero temperature) the authors introduce the condensate wave function $\Psi$.
It is further stated that the normalization of $\Psi$ is given by $N = \int d\vec{r} |\Psi(\vec{r})|^2$, where N is the total number of atoms in the condensate. Up until this point, I think of $\Psi$ as a probability density, as I have been doing when dealing with Quantum Mechanics for the past few years.
The following sentence then really confuses me:
The modulus $|\Psi(\vec{r})|$ determines the particle density $n(\vec{r}) = |\Psi(\vec{r})|^2$ of the condensate.
My question is: How can something that describes a probability density be a quantity that represents a particle density?
|
How can something that describes a probability density be a quantity
that represents a particle density?
That is open to extensive qm interpretations, and your question is related to "what is the physical property of a qm object before it's measured".
I know that in quantum chemistry a working assumption is that the probability density also is the particle density, and if relevant also the charge density. This gives a good picture of how things look on average in results, but the assumption is also used in midway calculations - contradicting the Copenhagen interpretation that the physical properties arise at measurement.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/689062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
What is the connection between mechanics and electrodynamics that makes it necessary for both of these to obey the same principle of relativity? Mechanics obeyed Newtonian relativity (faithful to Galilean transformations) before Einstein.
Einstein formulated Special relativity (faithful to Lorentz transformations), and Maxwell's equations became invariant under Special relativity. So, electrodynamics obeyed Special relativity. So far, so good.
Why could we not be happy to conclude that Mechanics obeys Newtonian relativity, and Electrodynamics obeys Special relativity? Why in his first postulate did Einstein emphasize that both Mechanics and Electrodynamics should obey Special relativity? What was the crucial connection between Mechanics and Electrodynamics that demanded that both should obey the same principle of relativity? Is the reason primarily based on experimental verification of Newton's laws for high velocity particles?
| The principle itself is the connection and not the other way around.
The principle of relativity is the idea that the state of constant speed of a reference frame must be impossible to detect from within, i.e., if you don't witness acceleration yourself, then you're doomed to be ignorant about who was accelerated among bodies moving with constant speed. In fact one could argue that the speed of light (Einstein's second postulate) was deemed to be constant because, if not, it could be used to verify whether one was accelerated in the past even if that person didn't witness it. As an intrinsic property of our universe, not only mechanics or electrodynamics must obey it: everything imaginable must obey this same idea, including your aging, pleasure, or whatever you can think of.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/689192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 9,
"answer_id": 8
} |
Uncertainty notation: I am unsure of how the parentheses notation works If I have a value of $5.868709...×10^{−7}$, and an uncertainty of $7.88431...×10^{−12}$, is it correct to write this as $5.86871(8)×10^{−7}$ or $5.8687(8)×10^{−7}$?
A problem I have with the first is the values used to calculate the $5.868709...×10^{−7}$ were to five significant figures, so is it wrong for me to have six significant figures in my answer?
| Your uncertainty of $7.88431 \times 10^{-12}$ can be written as $0.0000788431 \times 10^{-7}$. But the value itself, $5.868709 \times 10^{-7}$ is known only to 6 decimal places, so the uncertainty cannot sensibly be given to more than 6 decimal places when expressed with the same $10^{-7}$ multiplier. So we could give the result as
$$(5.868709±0.000079) \times 10^{-7}$$
I'd be more inclined to give your value as the safer
$$(5.86871±0.00008) \times 10^{-7}.$$
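A small Python sketch of this rounding (it follows the common convention of quoting the uncertainty to one significant figure; the variable names are just for illustration):
```python
import math

value = 5.868709e-7
sigma = 7.88431e-12

# Round the uncertainty to one significant figure ...
exp_sigma = math.floor(math.log10(sigma))     # -12
sigma_1sf = round(sigma, -exp_sigma)          # 8e-12
# ... and round the value to the same decimal place
value_rnd = round(value, -exp_sigma)          # 5.86871e-7

scale = 1e-7
print(f"({value_rnd / scale:.5f} +/- {sigma_1sf / scale:.5f}) x 10^-7")
# (5.86871 +/- 0.00008) x 10^-7
```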
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/689790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
How does potential energy increase with no work? If you're dragging an object up a hill at a constant velocity, work is technically 0 (as acceleration is 0), but potential energy constantly increases. How would you represent this situation mathematically, and how does the potential energy increase despite a lack of work?
| If the speed stays constant, the net work is zero, but the work done by the individual forces may not be. In your case,
\begin{align}
W_{\text{net}} = W_g + W_{\text{drag}} = 0
\end{align}
So both you and gravity are doing work, it's just that whatever work you do by dragging, gravity does minus that: $W_g = - W_{\text{drag}}$. By doing this work, you're storing energy as potential energy in the Earth-sled system, since
\begin{align}
\Delta U_g = -W_g = W_{\text{drag}}.
\end{align}
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/690037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
General Method for Calculating Excluded Volume In section 5.3 of Kardar's Statistical Physics of Particles, the van der Waals equation is given as:
$[P+\frac{u_0 \Omega}{2}(\frac{N}{V})^2][V-\frac{N\Omega}{2}]=Nk_BT$
The van der Waals parameters are identified as $a = \frac{u_0 \Omega}{2}$ and $b=\frac{\Omega}{2}$. Here, $\Omega$ is the volume excluded around each particle (to the centers of the other atoms). The parameter $b$ is interpreted as the effective excluded volume for low densities due to the fact that for low densities ($\Omega \ll V$) the contribution of coordinates to the partition function is:
$V(V-\Omega)(V-2\Omega)...(V-(N-1)\Omega) \approx (V-\frac{N\Omega}{2})^N$ (Equation 5.46)
which explains the factor of $\frac{1}{2}$ in $b$. However, the author goes on to say that:
Of course, the above result is only approximate since the effects of
excluded volume involving more than two particles are not correctly
taken into account. The relatively simple form of Eq. (5.46) is only
exact for spatial dimensions d = 1, and at infinity.
I have some questions:
1- What does the author exactly mean when he says the expression is exact for $d=1$ and at "infinity"? Does he mean infinite number of particles?
2- How to incorporate the effects of excluded volume involving more than two particles? As an example I would be glad if someone explained it for 3 particles.
3- Is there a general method or algorithm to find the excluded volume involving any number of particles for any dimension $d$? If so, I would like to see some sources explaining these methods.
| *
*He means the spatial dimension.
In $d=1$ if you consider a 3-particle hard-ball cluster (3 hard-ball particles each overlapping with the remaining 2) it can only happen by taking two overlapping particles and slapping the third one in the middle. The 3-particle overlap will be the same as the overlap between two initial particles so I believe it factors out.
In $d \rightarrow \infty$ I would expect the 3-particle (and higher-order contributions) to vanish. If you keep the particle volume constant and increase the spatial dimension, the particle radius has to decrease. It becomes progressively less likely for 3-particles to meet each other at the same time. In both cases, this is only an idea, but it should be quite easy to show through a direct calculation.
*Regarding 2 and 3: the cluster expansion seems to be an answer (although I am not sure, since I have only briefly heard about it). It is defined for any intermolecular potential but it should be quite simple for the hard-ball potential. I think it would boil down to the inclusion-exclusion principle.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/690217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Aurora Borealis As we know that during a solar flare, a large number of electrons and protons are ejected from the sun.
Some of them get trapped in the earth’s magnetic field and move in helical paths along the
field lines. The field lines come closer to each other near the magnetic poles.
Hence the density of charges increases near the poles. These particles collide with atoms
and molecules of the atmosphere. Excited oxygen atoms emit green light and excited
nitrogen atoms emit pink light.
So is it possible to (or a device already exists of which I am unaware) concentrate the magnetic fields of the earth say near your house for enjoying the view? (if you know what I mean) Or can we depict the MEC in any other form on such a large scale?
|
So is it possible to (or a device already exists of which I am unaware) concentrate the magnetic fields of the earth say near your house for enjoying the view?
In general, no. In principle, yes. The magnetic moment of the Earth's magnetic field is huge. While we can generate large magnetic fields for short periods of time (e.g., pulsed electromagnets), it's extremely difficult and energetically restrictive/expensive to generate a sustained magnetic moment of such large magnitude (not even sure it's possible, actually).
Or can we depict the MEC in any other form on such a large scale?
I am not sure as I do not know to what MEC refers.
These particles collide with atoms and molecules of the atmosphere. Excited oxygen atoms emit green light and excited nitrogen atoms emit pink light.
As I note in this answer https://physics.stackexchange.com/a/382414/59023, this is not quite correct. There are multiple excitation emission lines from diatomic and monatomic nitrogen and oxygen.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/690316",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Measurement on mixed states I have a conflict between my lecture notes on quantum mechanics, where it is stated that the probability of measuring an eigenvalue $a_i$ on a mixed state with desnsity matrix $\rho$ is
$$
\operatorname{Tr}(P_i \rho P_i)\ ,
$$
where $P_i$ is the projector for the subspace corresponding to $a_i$.
However, all resources out there state that the probability should be $\operatorname{Tr}(\rho P_i)$, and even the professor gave us a solved exam as an example where the latter formula was applied instead of the first one.
Which calculation for the probability is correct? Is it possible that both traces are the same because of $P_i$ being a projection operator?
| The latter is correct.
Using the cyclic property of the trace and writing $\rho = \sum_j p_j | \psi_j \rangle \langle \psi_j|$,
$$\operatorname{Tr}(\rho P_i) = \operatorname{Tr}\Bigl(P_i \sum_j p_j | \psi_j \rangle \langle \psi_j|\Bigr) = \sum_j p_j \langle \psi_j | P_i | \psi_j \rangle$$
This equals the expectation value $\langle P_i \rangle$ of the projector, i.e. the probability of the measurement outcome $a_i$. Note also that, since $P_i^2 = P_i$ and the trace is cyclic, $\operatorname{Tr}(P_i \rho P_i) = \operatorname{Tr}(\rho P_i^2) = \operatorname{Tr}(\rho P_i)$, so the two expressions in fact give the same number.
The Wikipedia article has a good explanation also https://en.wikipedia.org/wiki/Density_matrix#%3A%7E%3Atext%3DIn_quantum_mechanics%2C_a_density%2Cstate_of_a_physical_system.%26text%3DDensity_matrices_are_thus_crucial%2Cquantum_decoherence%2C_and_quantum_information.?wprov=sfla1
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/690484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Electric flux due to a point charge through an infinite plane using Gauss divergence theorem I'm learning the basics of vector calculus when I came across this problem:
A point charge +q is located at the origin of the coordinate system. Calculate the flux of the electric field due to this charge through the plane $z = +z_0$ by explicitly evaluating the surface integral. Convert the open surface integral into a closed one by adding a suitable surface(s) and then obtain the result using Gauss' divergence theorem.
I have no problem in solving the first part (i.e. by direct integration of the surface integral). I got the answer as $q/2\epsilon_0$, which I know is the correct answer as it can also be obtained using the solid angle formula.
But the problem is that when I proceed to calculate the divergence of the electric field and then do the volume integral, I run into an undefined answer. I converted the open surface into a closed volume by adding another plane at $z = -z_0$.
I'm attaching my work below:
Can someone help me out on where I made a mistake?
| The error in your original derivation is that
$$
\frac{\partial (\sin \theta)}{\partial r} = 0,
$$
and so $\vec{\nabla} \cdot \vec{E} = 0$ as well. (Except when $r = 0$, but that's another story.) A partial derivative implies that the other two coordinates ($\theta$ and $\phi$) are held constant. By looking at the derivative when $r$ is constrained to the surface (which is basically what you did when you substituted $\sin \theta = \sqrt{r^2 - z_0^2}/r$), you are no longer holding $\theta$ constant.
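A one-line symbolic check that the divergence of the point-charge field vanishes away from the origin (a minimal SymPy sketch with the constants dropped):
```python
import sympy as sp

r = sp.symbols('r', positive=True)
E_r = 1 / r**2                         # radial field of a point charge, constants dropped

div_E = sp.diff(r**2 * E_r, r) / r**2  # divergence of a purely radial field in spherical coordinates
print(div_E)                           # 0, valid for r != 0
```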
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/690654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
By how much do protons' dipole moments inside a nucleus attenuate the Coulomb force between them? As up quarks repel more than down quarks, the protons should be oriented with the positive side looking away from the centre of the nucleus. In that case the strong force has more of a job to do, with the dipole moment that wants to break the particle apart, whereas for the residual strong force the dipole moment is a 'friendly' force?
| The electric polarizabilities for the proton and neutron, catalogued for example by the Particle Data Group, are about a thousand times smaller than you would expect from doing dimensional analysis. In a hand-waving way, this is because strong interaction makes the "medium" inside of a proton "stiffer" than the "medium" within a hydrogen atom (where the dimensional-analysis result is okay).
In low-mass nuclei, the statement that "strong isospin is a good symmetry" is basically equivalent to "you have permission to neglect nucleon charge." If you can effectively predict the excitation spectrum for a nucleus without considering electric charge at all, it's probably also reasonable to neglect the small electromagnetic correction due to polarizability.
If you wanted to compute this, I'd use a mean-field approach. Choose a nucleus of interest and model it as a uniform-density sphere of charge, whose electric field is zero at the origin, linear in radius to the edge of the nucleus, then $1/r^2$ to infinity. Then convert your nucleon polarizability into an electric susceptibility for nuclear matter. The energy density of the electric field will shrink a little within the nucleus, as you move from $\vec E$ to $\vec D$. The difference in the integrated electric field energy is an order-of-magnitude estimate for the polarizability correction to the energies of nuclear states.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/691016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Relationship between angular and translational velocity on inclined surface I have been researching about rolling motion and I was calculating a way to predict the translational velocity of the object at the bottom of the incline. I know that the kinetic energy of a cylinder undergoing rolling motion is given as
$$E_k = \frac{1}{2} I \omega^2$$
Can angular velocity $\omega$ be replaced as $v/r$ even if the object is a partially filled cylinder?
| Linear tangential speed $v_t$ of a particle at radius $r$ from the axis of rotation is
$$v_t = r \omega$$
The fact that the cylinder is only partially filled does not affect the above equation, it affects only moment of inertia of the body. Please note that many equations for rotational motion assume that body is rigid! Total kinetic energy of a rolling cylinder must include both translational and rotational kinetic energy
$$\boxed{K = \frac{1}{2} I \omega^2 + \frac{1}{2} m v_t^2}$$
Moment of inertia of a full cylinder is $I = \frac{1}{2} m r^2$ and the above equation is simplified into
$$K_\text{full-cyl} = \frac{3}{4} m v_t^2$$
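For the original question of the speed at the bottom of the incline, here is a minimal SymPy sketch assuming a uniform solid cylinder dropping a height $h$ while rolling without slipping (a partially filled cylinder would only change the moment of inertia $I$):
```python
import sympy as sp

m, r, g, h, v = sp.symbols('m r g h v', positive=True)

I = sp.Rational(1, 2) * m * r**2            # uniform solid cylinder
omega = v / r                               # rolling without slipping
K = sp.Rational(1, 2) * I * omega**2 + sp.Rational(1, 2) * m * v**2   # = (3/4) m v^2

v_bottom = sp.solve(sp.Eq(m * g * h, K), v) # energy conservation down a height h
print(v_bottom)                             # positive root: v = sqrt(4*g*h/3)
```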
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/691151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What causes time warping in the space-time? I was reading through some blogs/articles and watching youtube videos that explained to non-physicists such as myself - how time warping or a gradient in time flow around any object can create gravity. I am able to understand the mechanics (minus the math, I'm not a physicist) of gravity according to this theory, but, a question bugs me:
What creates the time gradient in the first place? Why would the infinitesimally small clocks (or connected particles that move through time at different rates) have the different tick rates in the first place?
(Kindly be gentle on the math - I am not a mathematician either!)
A follow up question: if an object in space-time is massive enough, such as a black hole, can it stop moving through time... in a way bending time over itself and never letting it go, like light?
| Marco's answer is an excellent explanation of time dilation as different path lengths, and I do hope you've read it. But just to supplement it: the reason that paths through time can be "curved" is because matter (and energy, and everything else that exists) bends both space and time.
The common pop-sci picture of a bowling ball on a rubber sheet representing warping of space is very incomplete, because time is warped as well. And in fact in our everyday experience it's the warping of time that matters the most, because our "scale" for time is much more significant. That is, space and time are related by the speed of light. In one second light travels nearly 300,000 km. So on a scale of "natural" units, a second of time roughly corresponds to the distance from the Earth to the Moon.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/691293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 7,
"answer_id": 3
} |
Why "in" and "out" states $\Psi^\mp_\alpha$ are eigenstates of total Hamiltonian $H$? "in" and "out" states, $\Psi^\pm$, with reference to Weinberg Vol. 1 pages 109 and 110 could be defined by
$$\Psi_\alpha^\pm = \Omega(\mp \infty)\Phi_\alpha\tag{3.1.13}$$
where
$$\Omega(\tau) = \exp(+iH\tau)\exp(-iH_0\tau).\tag{3.1.14}$$
This by itself defines the $\Psi^\pm$s, so any further property of this object shouldn't be included in the definition but should be proven. Yet Weinberg includes in the definition that the $\Psi_\alpha^\pm$ are eigenvectors of the total Hamiltonian $H$. Something is missing! I am a mathematics graduate and have spent over forty hours trying to find a rigorous picture of these things, and I couldn't!
| This is an assumption of the Hamiltonian. Weinberg states
...suppose we can divide the time-translation generator $H$ into two terms, a free-particle Hamiltonian $H_0$ and an interaction $V$,
$$H=H_0+V$$
in such a way that $H_0$ has eigenstates $\Phi_{\alpha}$ that have the same appearance as the eigenstates $\Psi^+_{\alpha}$ and $\Psi^-_{\alpha}$ of the complete Hamiltonian
$$H_0\Phi_{\alpha}=E_{\alpha}\Phi_{\alpha}$$...
We are purely assuming that the full Hamiltonian has the same spectrum as the free Hamiltonian. This is generically the case in most physical circumstances in quantum field theory, where the spectrum of these two operators is continuous (it consists of unbounded particle momentum states). The only caveat though is within this assumption we are imposing that the particle states for the free Hamiltonian $\Phi_{\alpha}$ have the same masses as the ones for the full Hamiltonian. But this can be done trivially by redefining $V$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/691440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Lorentz transformation of annihilation operator In Srednicki's Quantum Field Theory, chapter 4, the author claims that the Lorentz transformation for given a scalar field $\varphi(x)$,
\begin{align}
U(\Lambda)^{-1} \varphi(x) U(\Lambda) = \varphi(\Lambda^{-1}x),
\end{align}
"implies that the particle creation and annihilation operators transform as"
\begin{align}
U(\Lambda)^{-1} a(\mathbf{k}) U(\Lambda) = a(\Lambda^{-1}\mathbf{k}).
\end{align}
I'm trying to prove that statement. My starting point is the expression for the $a$ operators given in the previous chapter:
\begin{align}
a(\mathbf{k}) = \int d^3x e^{-ikx} \left[ i\partial_0 \varphi(x) + \omega \varphi(x) \right].
\end{align}
I then applied $U(\Lambda)^{-1}$ and $U(\Lambda)$ to left and right of this equation and used the fact that these operators commute with the integral and derivative to obtain
\begin{align}
U(\Lambda)^{-1}a(\mathbf{k})U(\Lambda) = \int d^3x e^{-ikx} \left[ i\partial_0 \varphi(\Lambda^{-1}x) + \omega \varphi(\Lambda^{-1}x) \right].
\end{align}
I then want to make a variable change $x' = \Lambda^{-1}x$. For that I first put the integral measure in a Lorentz invariant form, in a similar manner to what the author does for the measure in $k$-space, by defining
\begin{align}
\tilde{dx} := \frac{d^3x}{2 \sqrt{s^2 + \mathbf{x}^2}}, \quad s = \sqrt{-(x^0)^2 + (\mathbf{x})^2}>0.
\end{align}
By making the variable change to $x'$ and using the Lorentz invariance of this new integration measure I then obtain
\begin{align}
U(\Lambda)^{-1}a(\mathbf{k})U(\Lambda)
&= \int \tilde{dx}' 2 \sqrt{s^2 + (\Lambda\mathbf{x}')^2} e^{-ik(\Lambda x')} \left[ i\partial_0 \varphi(x') + \omega_k \varphi(x') \right] \\
&= \int d^3x' \sqrt{\frac{(\Lambda^0_{~~\mu} x'^\mu)^2}{(x'^0)^2}} e^{-i(\Lambda^{-1}k)x'} \left[ i\partial_0 \varphi(x') + \omega_k \varphi(x') \right] \\
&= \int d^3x' \left(\frac{\Lambda^0_{~~\mu} x'^\mu}{x'^0}\right) e^{-ik'x'} \left[ i\left(\Lambda_0^{~~\nu}\partial'_\nu\right) \varphi(x') + \left(\omega_{k'} \frac{\Lambda^0_{~~\sigma} k'^\sigma}{k'^0}\right) \varphi(x') \right],
\end{align}
where $k' = \Lambda^{-1} k$. At this point the whole expression is in terms of the new integration variables $x'$, and the new momenta $k' =\Lambda^{-1} k$. However, it is not clear how to put this in the same form as the definition of $a$ (the third equation) and I don't know where to go from here.
My question is, can this last expression be simplified to obtain Eq. 3? If so, how?
Notes:
*
*As the author, I'm using the "mostly plus" metric, with $(x^\mu) = (t, \mathbf{x})$, $(x_\mu) = (-t, \mathbf{x})$.
*I believe the notation "$\Lambda^{-1}\mathbf{k}$" means the spatial part of $\Lambda^{-1}k$, where $k^0 = \sqrt{m^2 + \mathbf{k}^2}$.
| What Srednicki is trying to say with the first equation you have written (well, maybe with the second too) is that both the field $\varphi(x)$ and the creation/annihilation operators do not have a vector or a tensor nature. Rather, they are scalar quantities and as scalar quantities they should transform under Lorentz transformations. What you are trying to do is redundant, I think. The essence of a scalar quantity is that it transforms trivially under Lorentz transformations. You can think of it as a defining property. Also, the same way Lorentz transformations act on spatial vectors, they also act on momentum vectors, so there is no reason why the creation/annihilation operators do not obey the same "transformation rules" as the scalar field $\varphi(x)$...
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/691531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Analysing a system of pulleys?
I'm quite unsure on how to solve these questions.
For part (b), my approach was that the tension in the lower cable would have to be greater than or equal to $2m$ for both the masses to be lifted.
Using Newton's second law on the lower pulley, and assuming the pulleys to be massless, the tension in the upper cable is twice the tension in the lower cable.
Hence, the tension in the upper cable would have to be greater than $4m$ for the masses to be lifted, and so, M would have to be more than $4m$.
However, it is stated in the question that we expect $M \geq 8m$ as our answer.
Similarly, for part (c), if M = 4m, then the tension in the upper cable is $4mg$. Again, from Newton's second law, the tension in the lower cable is half this, $2mg$. And therefore, the vertical acceleration of mass A is given by:
$$2mg - mg = ma$$
In other words, $a = g$
However, the expected answer is $a = \frac{g}{2}$.
So in both cases, I have a factor of two error, so I'm clearly missing something. But I can't quite figure out what.
Any help would be greatly appreciated!
| As long as mass B is partially supported by the surface, then the acceleration of mass A is twice that of M. If the surface is not a factor, then the acceleration of M puts A and B into an accelerated frame, in which A has a relative acceleration of $a'$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/691715",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is my bluetooth signal able to exit my microwave? I was shopping for a bluetooth meat thermometer. Since this device would also be used in my combo (conventional and microwave) oven, which is shielded for microwaves, I expected the device to not work.
So, I decided to make the following test before ordering:
*
*play a song over bluetooth on my headset
*put my cell phone in the combo (conventional and microwave) oven
*close the door
I expected the bluetooth connectivity to drop, since the oven is shielded to the microwave spectrum. Yet, I could still hear the song just fine on my headset.
So, what physics principle explains the bluetooth signal being able to exit my microwave oven?
| Just some rough numbers: say the oven produces ~1 kW = 60 dBm of RF power, of which only 1 mW = 0 dBm is allowed to leak out; then the window's leakage is about -60 dB. If your Bluetooth is radiating about 1 mW = 0 dBm and your receiver has an operating threshold of about -90 dBm, it will receive 0 - 60 = -60 dBm, still leaving some 30 dB (1,000x) of margin above that threshold. A modern RF receiver is an amazingly sensitive device.
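The same link budget as a tiny arithmetic sketch (using the rough numbers above):
```python
tx_power_dbm    = 0     # ~1 mW Bluetooth transmitter
shield_loss_db  = 60    # inferred from 1 kW inside -> ~1 mW allowed leakage
sensitivity_dbm = -90   # typical receiver operating threshold

received_dbm = tx_power_dbm - shield_loss_db
margin_db = received_dbm - sensitivity_dbm
print(received_dbm, margin_db)   # -60 dBm received, 30 dB (~1000x) margin
```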
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/691958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
} |
Question about the Wave equation I have a question. I was looking for the Wave equation (first Eq. of this wikipedia page).
I saw a version of this equation for the first time during an Acoustics course, where we obtained it for sound waves by combining the Euler equation, the continuity equation, and the general gas equation.
So, how is a generic wave equation, like the one described on Wikipedia, derived? Is there a mathematical derivation behind it, or is it just a specific form of differential equation that was found to be the same for several scalar quantities, so we have to take it "as it is"?
Thank you in advance
| The wave equation is a "general" differential equation that describes waves in several contexts.
It is given by
$$\partial_t ^2 u = v^2 \Delta u$$
and has has general solution (in 1D)
$$u(x, t) = f(x-vt)+g(x+vt)$$
i.e. the sum of a profile $f$ "moving" to the right with velocity $v$ and a profile $g$ moving to the left. That is, waves that translate: whatever value $f$ has at position $x$ at the beginning, it will have at position $x_2$ such that $x_2 = x + vt$ after a time $t$ (and similarly for $g$, with the opposite sign).
You cannot "derive" it. What you can do is observe that several phenomena (electromagnetic fields, material waves, etc.) are described by an equation having this form, i.e. they admit a solution which "moves" like a wave. Roughly, what changes between different systems is the value of $v$, i.e. the speed of the wave.
The "shape" of the wave instead is given by initial conditions and geometrical/symmetry arguments, usually.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/692062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
What is causing this sign difference in the centrifugal term between Lagrangian and Hamiltonian formalism? Consider a central force problem of the form with the Lagrangian
$$
L(r, \theta, \dot{r}, \dot{\theta}) = \frac{1}{2} m \left( \dot{r}^2 + r^2 \dot{\theta}^2 \right) - V(r),
$$
where $r = |\vec{x}|$. Since $\theta$ is cyclic, we can show that $m r^2 \dot{\theta}$ is a constant of motion, and rewrite the Lagrangian as
$$
L(r, \dot{r}) = \frac{1}{2} m \dot{r}^2 + \frac{l^2}{2mr^2} - V(r).
$$
If I calculate the Hamiltonian from this, I get
$$
H_{1}(r, p_r) = \frac{p_r^2}{2m} - \frac{l^2}{2mr^2} + V(r)
$$
Taking another direction, I calculated first the Hamiltonian from the Lagrangian as
$$
H_{2}(r, \theta, p_r, p_{\theta}) = \frac{p_r^2}{2m} + \frac{p_{\theta}^2}{2mr^2} + V(r) = \frac{p_r^2}{2m} + \frac{l^2}{2mr^2} + V(r) = H_{2}(r, p_r),
$$
where I concluded that $p_\theta = m r^2 \dot{\theta} = l$ is a constant.
The problem is, that I get an apparent sign difference between the $\frac{l^2}{2mr^2}$ and $V(r)$ terms in $H_{1}$ and $H_{2}$, which I don't understand. I'm pretty sure that $H_1$ is wrong, but I don't know what kind of conceptual mistake did I make when calculating $H_1$.
Conceptual issue
Apparently, when I introduce the additional potential term in the Lagrangian formalism first, then calculate the Hamiltonian, I don't get the same Hamiltonian when I do it in reverse order. Why do I get different Hamiltonians?
| You can't insert the solution of the equation of motion back into the Lagrangian. You must eliminate the conserved quantity by using a Routhian. See the section on cyclic coordinates and on central forces in spherical coordinates.
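To make the point concrete, here is a minimal SymPy sketch (the symbol names are just for illustration) contrasting the full Legendre transform, which gives $H_2$, with the Routhian obtained by eliminating the cyclic angle:
```python
import sympy as sp

m, l, r, rdot, thdot, pr = sp.symbols('m l r rdot thetadot p_r', positive=True)
V = sp.Function('V')

L_full = sp.Rational(1, 2) * m * (rdot**2 + r**2 * thdot**2) - V(r)
p_theta = sp.diff(L_full, thdot)        # = m r^2 thetadot, conserved

# Legendre transform in both velocities first, then set p_theta = l:
H = (pr * rdot + p_theta * thdot - L_full).subs(rdot, pr / m).subs(thdot, l / (m * r**2))
print(sp.simplify(H))                   # p_r**2/(2m) + l**2/(2 m r**2) + V(r)

# Eliminating thetadot at the Lagrangian level requires the Routhian:
R_eff = (L_full - p_theta * thdot).subs(thdot, l / (m * r**2))
print(sp.simplify(R_eff))               # m rdot**2/2 - l**2/(2 m r**2) - V(r)
```
Treating the Routhian as the effective Lagrangian for $r$ and Legendre-transforming in $\dot r$ reproduces $H_2 = \frac{p_r^2}{2m} + \frac{l^2}{2mr^2} + V(r)$, not $H_1$.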
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/692349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
On an infinite plane, with gravity the same of that of Earth, how far could light at an arbitrary angle travel until bending to hit the plane? Now, I'm a complete idiot, so bear with me.
I've recently come across the idea that standing on an infinite flat Earth would in theory appear the same as standing inside a hollow Earth, since light would, due to gravity, bend towards the flat Earth.
Here illustrated like so:
However, I have yet to find any source that has an actual way of telling how far this distance would be. I have found calculations for the gravity of an infinite flat earth here and a formula for gravitational lensing here, but I'm not smart enough to understand the latter or how one would somehow combine the two.
So, as this has started to drive me insane, I've decided to turn to people who know more about this than I do.
Basically, there's an infinite flat plane with uniform gravity equivalent to that of earth. Is there any sort of formula or calculation that one could do to to figure out how far along the plane a ray of light would travel if casted at an arbitrary angle?
| For a proper calculation you will have to consider a spacetime metric that produces the same gravitational field as in your setup, and then find the null geodesics (paths of light) in this metric. I think this shouldn't be very difficult to calculate.
However we can probably also get an order-of-magnitude estimate just by considering the non-relativistic case of a ballistic trajectory and plugging in $c$ for the velocity. In such a case it is easy to see that the horizontal distance of a projectile shot at a 45 degree angle would be $v^2/g$, so if we plug in $v = c = 3 \times 10^8\ \mathrm{m/s}$ and $g = 9.8\ \mathrm{m/s^2}$, we get a distance of approximately $10^{16}$ meters, or 10 trillion kilometers (which is about 1 light year).
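The arithmetic, as a two-line sketch (the light-year conversion value is approximate):
```python
c = 3.0e8              # m/s
g = 9.8                # m/s^2
light_year = 9.46e15   # m (approximate)

d = c**2 / g
print(d, d / light_year)   # ~9.2e15 m, i.e. roughly one light year
```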
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/692546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
What Lorentz symmetries do electric and magnetic fields break? When we turn on an external (non-dynamical) electric or magnetic field in (3+1)-dimensional Minkowski space we break rotational invariance because they pick out a special direction in spacetime. Does this also break boost invariance?
What about in (2+1)-dimensions when the magnetic field is a scalar? Now the magnetic field does not seem to break rotations. Does it break boosts?
How can I show this?
| Under boosts, the fields transform into each other in a prescribed way. If we define the direction of the boost to be the $x$-direction, then we have
\begin{align*}
E'_x &= E_x & E'_y &= \gamma(E_y - \beta B_z) & E'_z &= \gamma(E_z + \beta B_y) \\
B'_x &= B_x & B'_y &= \gamma(B_y + \beta E_z) & B'_z &= \gamma(B_z - \beta E_y)
\end{align*}
It is not hard to see from the equations there that an electric or a magnetic field is invariant under boosts in the direction of the field — i.e., if the field is in the $x$-direction in one frame, then any new frame moving in the $x$-direction with respect to the first frame will also observe the same field. However, the fields change if they have any components perpendicular to the boost.
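As a quick numerical sanity check of these formulas (a minimal NumPy sketch in units with $c = 1$ and with arbitrary field values), the two Lorentz invariants $E^2 - B^2$ and $\mathbf{E}\cdot\mathbf{B}$ come out unchanged by the boost:
```python
import numpy as np

beta = 0.6
gamma = 1 / np.sqrt(1 - beta**2)

E = np.array([0.3, 1.2, -0.7])   # arbitrary field components, units with c = 1
B = np.array([-0.5, 0.4, 0.9])

Ep = np.array([E[0],
               gamma * (E[1] - beta * B[2]),
               gamma * (E[2] + beta * B[1])])
Bp = np.array([B[0],
               gamma * (B[1] + beta * E[2]),
               gamma * (B[2] - beta * E[1])])

print(E @ E - B @ B, Ep @ Ep - Bp @ Bp)   # equal: E^2 - B^2 is invariant
print(E @ B, Ep @ Bp)                     # equal: E.B is invariant
```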
Presumably one could write down a set of field transformations for electric and magnetic fields in 2+1 dimensions. These could be found by writing out the component-by-component transformation laws for the Faraday tensor in 2+1 dimensions:
$$
F'_{\mu \nu} = \Lambda_\mu {}^\rho \Lambda_\nu {}^\sigma F_{\rho \sigma}.
$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/692681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
What does Griffiths mean by adding a prime on integration variables? In the book "Introduction to Electrodynamics" by Griffiths, the author discusses the electric potential as a point function and writes the equation for the electric potential as
Then in a side note he writes "To avoid any possible ambiguity, I should perhaps put a prime on the integration variable"
To what 'ambiguity' is he referring, and how does adding the prime clarify it?
| When talking about integrals, the variable of integration is "dummy", in the following sense. Suppose $f:\Bbb{R}\to\Bbb{R}$ is a function, then for any $x\in\Bbb{R}$,
\begin{align}
\int_0^xf(t)\,dt=\int_0^xf(s)\,ds=\int_0^xf(\xi)\,d\xi=\int_0^xf(\ddot{\smile})\,d \ddot{\smile}=\int_0^xf(@)\,d@=\int_0^xf(\sharp)\,d\sharp,
\end{align}
and so on. The actual symbol used does not matter except for $x$: what is completely nonsense notation is
\begin{align}
\int_0^xf(x)\,dx,
\end{align}
because the $x$ is being used in two places with different meanings, so it's just confusing and wrong. We can keep going: if you want, you can write
\begin{align}
\int_0^xf(y)\,dy= \int_0^xf(x')\,dx'=\int_0^{x}f(\tilde{x})\,d\tilde{x}.
\end{align}
Literally, any other symbol than $x$ can be used as the integration symbol. Same thing with line integrals
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/692837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Speed of heat through an object According to the Heat equation (the PDE), heat can travel infinitely fast, which doesn't seem right to me. So I was wondering, at what speed does heat actually propogate through an object?
For example, if I have a really long iron rod at a constant temperature (say 0 Celsius), and one end of it instantaneously becomes hot (e.g. 1000 Celsius), how far down the rod will the temperature have changed in 1 second? I don't care how much the temperature changes, only how far a temperature change (however minuscule) happened.
Would changing the material (e.g. steel instead of iron) or the initial temperatures change the answer?
My gut tells me the answer should be the speed of sound for the material, because that's the speed at which movement in the atoms can affect each other.
| It seems to me that what you are looking for is the thermal diffusivity, as this is the coefficient that balances the rate and area of the temperature change:
$$\partial_tT=\alpha\nabla^2T\implies\alpha=\frac{\partial_tT}{\nabla^2T}
$$
and has units of area per time ($\partial_t$ has units of inverse time, $\nabla^2$ has units of inverse area, hence area per time).
It is also empirically measured by a method called laser flashing in which a material of thickness $d$ is heated by a laser on one side and the temperature measured on the other side. The thickness and the time to half-maximum temperature, are then used in the formula,
$$
\alpha=\eta\frac{d^2}{t_{1/2}}\tag{1}
$$
where $\eta$ is some small constant. An example I found online of such a test shows a very nice chart of a finite temperature change.
So it seems to me that one could invert the relationship in Eq (1) for the distance,
$$d\sim\left(\alpha\,t\right)^{1/2}$$
and find at least a first-order approximation. For the example given of the iron rod, using the Wikipedia entry for the diffusivity, after one second the heat should have traveled about 5 mm.
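The same estimate as a short sketch (the diffusivity value for iron is approximate, at room temperature):
```python
import math

alpha = 2.3e-5   # thermal diffusivity of iron in m^2/s (approximate)
t = 1.0          # seconds

print(1e3 * math.sqrt(alpha * t))   # ~4.8 mm
```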
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/693112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 7,
"answer_id": 4
} |
Can we derive Boyle's law out of nothing? My textbook states Boyle's law without a proof. I saw Feynman's proof of it but found it to be too handwavy and at the same time it uses Boltzmann's equipartition theorem from statistical mechanics which is too difficult for me now.
So to state roughly what Boyle's law is, it states that at a constant temperature and mass of gas,
$$PV=k$$
Where $P$ is pressure and $V$ is the volume and $k$ is constant in this case.
Is there a proof for this that isn't based on any other gas law, perhaps based on Newtonian mechanics?
| The pressure exerted by a gas on the walls of its container is proportional to the frequency with which the molecules strike the walls. The larger the container, the more time on average a molecule spends between collisions with the container walls and thus the lower the pressure.
Consider a cylindrical container. If the walls are smooth, then the magnitude of the molecule's velocity along the cylinder's symmetry axis is fixed and changes sign every time the molecule hits the top or bottom of the cylinder. The time it takes the molecule to travel down the cylinder and back up again is proportional to the height of the cylinder, and so is also, for a fixed cross sectional area, proportional to the volume. This proportionality gives you the constant $PV$, since time is inversely proportional to frequency.
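The argument can be packaged as a short symbolic sketch (a deliberately crude model that keeps a single axial speed $v_z$ for all $N$ molecules and ignores the spread of molecular speeds):
```python
import sympy as sp

N_mol, m, v_z, L, A = sp.symbols('N m v_z L A', positive=True)

rate_per_molecule = v_z / (2 * L)    # hits on one end of the cylinder per unit time
impulse_per_hit = 2 * m * v_z        # momentum transferred per collision
force = N_mol * rate_per_molecule * impulse_per_hit
P = force / A                        # pressure on that end

print(sp.simplify(P * (A * L)))      # N*m*v_z**2: P*V is constant at fixed speed (temperature)
```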
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/693255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 0
} |
Feynman diagrams for neutral pion decay into electron-positron pair The main Feynman diagram I've seen corresponding to this decay is the following:
What I don't understand is why it is not possible for the up-antiup quarks that form the pion to annihilate into a single virtual photon, and then have that virtual photon decay to a positron-electron pair, similar to the annihilation of electron-positron pairs into a muon-antimuon pair in the following fashion?
EDIT: My question was closed and flagged as a duplicate of a question that did not answer what I asked so I repost it. I repeat, it is NOT a duplicate, I am not asking why the loop contribution is suppressed, I am asking why a different Feynman diagram is not acceptable.
| Charge conjugation, C, contrasts even and odd numbers of photons in states and amplitudes.
The $\pi^0$ has C=$+$, but one photon has C=$-$, and hence two photons $+$.
By contrast, the $\rho^0$, with C=$-$, can and does couple to one photon, the heart of the Vector Dominance Model.
Note the Z has no well-defined C, as the weak interactions break C. Such a coupling to one Z exists, as the corresponding axial current of PCAC couples to the Z in its inimitable cockeyed way...
All charged fermions couple to an indefinite number of photons, in principle, of course. But the overall amplitude must preserve the C of the incoming and outgoing states, in QED which preserves C (unlike the weak interactions). The
daughter $e^+e^-$ of the triangle diagram above are in an even C state, +; unlike the $e^+e^-$ that annihilate to one virtual photon, and then resolve to $\mu^+\mu^-$ in your latter diagram.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/693363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Electric field of a point very far from uniformly charged rectangle sheet I was wondering what the electric field is at a point which is very far from a rectangular sheet and also above the center of the rectangle. So from a mathematical perspective, the electric field due to a finite rectangular sheet of charge on the surface $$ S = \left\{(x,y,z)\in \mathbb{R}^3 \mid -a/2< x < +a/2;\ -b/2< y < +b/2 ;\ z = 0 \right\} $$ is $$ E(0,0,r) = \frac{\sigma r}{4\pi\epsilon_0} \int_{x=-a/2}^{x=+a/2}\int_{y=-b/2}^{y=+b/2} \frac{dx\, dy}{(x^2+y^2+r^2)^{3/2}}, $$ so $$E(0,0,r) = \frac{\sigma}{\pi \epsilon_0} \arctan\left( \frac{ab}{4r\sqrt{(a/2)^2+(b/2)^2+r^2}} \right).$$ It seems very counterintuitive that for $r \gg a$ and $r \gg b$ the electric field is not $$E(0,0,r) = \frac{\sigma}{\pi \epsilon_0}\arctan\left( \frac{ab}{4r^2} \right)$$ but $E(0,0,r) = k_e\frac{q}{r^2}$ where $q=\sigma ab$. My question is: shouldn't it behave like a point charge if the point where I am calculating the electric field is very far away from the sheet? Why is that not so? What am I doing wrong?
| $\arctan(\theta)\approx \theta-\frac{\theta^3}{3}$ near $\theta=0$ so
\begin{align}
\frac{\sigma}{\pi\epsilon}\arctan\left(\frac{ab}{4r^2}\right)
\approx \frac{\sigma}{\pi\epsilon}\frac{ab}{4r^2}\tag{1}
\end{align}
and since $a\times b$ is the area, $\sigma\times a\times b=Q$, the charge
on your plate. At this level of approximation you then get
\begin{align}
E_z(0,0,r)\approx \frac{Q}{4\pi\epsilon r^2}
\end{align}
which is the field of a point charge.
The additional term $\theta^3/3$, which I did not include in (1), gives the leading correction due to the finite size of the plate. It is negative because, in $Q/4\pi\epsilon r^2$, you are concentrating all the charge at a single point whereas the actual field will be a little less since the charge is diluted over the entire area, and thus some of the charge is a slightly greater distance from $(0,0,r)$ than the centre of the plate, resulting in a slightly smaller contribution than if it was at the origin.
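A quick symbolic check of the leading far-field behaviour (a minimal SymPy sketch):
```python
import sympy as sp

sigma, eps, a, b, r = sp.symbols('sigma epsilon a b r', positive=True)

E_exact = sigma / (sp.pi * eps) * sp.atan(a * b / (4 * r * sp.sqrt((a / 2)**2 + (b / 2)**2 + r**2)))

lead = sp.limit(E_exact * r**2, r, sp.oo)
print(lead)   # a*b*sigma/(4*pi*epsilon): the point-charge field Q/(4*pi*eps*r**2) with Q = sigma*a*b
```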
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/693777",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Is there a physically meaningful example of a spacetime scalar potential? From Misner, Thorne and Wheeler, page 115.
0-Form or Scalar, $f$
An example in the context of 3-space and Newtonian physics is temperature $T\left(x,y,z\right),$ and in the context of spacetime, a scalar potential, $\phi\left(t,x,y,z\right).$
I'm trying to think of an example of such a scalar potential. Is there one? Electrostatic potential is the time component of the electromagnetic 4-vector potential, so it's really a vector with 0-valued space components.
| If you have a vector or tensor field, then you can get a scalar field by contraction.
Examples:
$J^\mu$ = 4-flux of some quantity. Scalar field: $\rho = \sqrt{J^\mu J_\mu}/c$. Interpretation: proper density.
$k^\mu$ = 4-wave vector; $x^\mu$ = 4-position. Scalar field: $\phi = k^\mu x_\mu$. Interpretation: phase of a plane wave.
Electromagnetic field tensor $F^{\mu\nu}$. Scalar fields: $F^\mu_\mu$ and $F^{\mu\nu} F_{\mu \nu}$ and $F^{\mu \nu} \tilde{F}_{\mu \nu}$. The first of these is zero, the second is $2(c^2 B^2 - E^2)/c^2$ and the third is $4 {\bf E} \cdot {\bf B}$.
The above are scalar fields, though not normally called 'potentials' because their gradient does not relate to a force. However we can introduce a potential which is by definition a scalar invariant, and then consider the gradient to be a 4-force. We thus obtain
$$
f^\mu = - \partial^\mu \phi.
$$
Such a 4-force is not the electromagnetic force, but it can be used to construct simple models of the strong force.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/693884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Can two waves be considered in phase if the phase angle is a multiple of 2$\pi$? Question is essentially what the title states. Wavefront is defined as the locus of points that are in phase. So I wanted to know if the locus would be the points of only a single circle or multiple circles whose points all have the same displacement? Or in other words can all the points that are at the peak at a specific time be considered as part of a single wavefront/inphase?
Can all the points in all the green circles be said to be in phase? Can they be said to be in the same wavefront?
| I believe that the term wavefront is used to refer to the pulse that was produced by the source at the same time. So when we define a wavefront we define it as a locus of points with the same phase, where points differing in phase by a multiple of $2\pi$ are not included in the same locus.
All the points on the green circles are in phase, but a single green circle is considered to be a wavefront.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/694127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |