Q | A | meta
---|---|---
One straw drinking from many containers of liquid One of my friends brought up a photo:
Which sparked a debate about whether the containers closest to the end of the straw would empty first. I was just wondering if someone could explain if the closest two containers would be empty before the furthest.
| Even with a Newtonian fluid like water, sucked very slowly and at a constant rate, the straw path lengths are not the same. Thus the pressure drop at the lips would be different for each container, making a very slight difference in the rate of suction. If the suction is not at a constant rate, the liquid would attempt to level out on the flow back, and with a Newtonian fluid it probably would. On the flow-back cycle you still have the pressure-drop difference, but now the drop favors refilling the closer containers more. With a Newtonian fluid you would hardly notice the difference, but there would be a difference. A non-Newtonian fluid like the ones in the drawing would exaggerate all of the above. The bottom line: the suck is the last thing you do, with no flow back afterwards, and because of the pressure-drop difference the closer containers will empty slightly first.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/283183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Why are hydrogenic levels used in writing electronic configuration? I recently started taking a course in Atomic and Molecular Physics. We learned about Hartree Fock approach to solving the many-electron atom problem. If I understand correctly, the electron orbitals that we refer to as 1s, 2s, 2p etc. are eigenstates of the hydrogenic Hamiltonian. When electron-electron repulsion is included these states are no longer the solutions to the problem, and the total wave function is an anti-symmetric wave function in N spatial/spin co-ordinates.
Then why do we use 1s, 2s, 2px etc. when writing the electron configuration of many electron atoms?
I also don't understand how we can use that description to write the spectroscopic term; I learned about LS and JJ coupling, where the calculations start by taking the L values of the unpaired electrons in the outermost shell. But how can we justify that these electrons have that L value? For example, how do we know that the outermost electron of sodium ([Ne] 3s1) actually has l=0, if it does not really reside in a hydrogenic energy level 3s1?
| The issue here is that although the atomic single-electron orbitals have the same quantum numbers as the hydrogenic orbitals ($n$, $l$, $m_l$, $m_s$), they are not hydrogenic orbitals, because they result from the self-consistent (central) HF potential. This is an approximation, but quite a good one for valence orbitals (the closed shells generate central potentials in the HF approximation). This permits the use of the hydrogenic names (1s, 2p, etc.), but they are not hydrogenic orbitals. As with many approximations in physics, the justification is that it works for many cases of interest.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/283288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Question about Oersted's experiment So I was studying my physics notes. They are on the magnetic field produced by a straight current-carrying wire. To demonstrate this there is Oersted's experiment. The description starts like this:
Insert a thick copper wire between 2 points, X and Y, in a circuit. The wire should be perpendicular to the plane of the paper. Place a compass horizontally near the wire. Switch on the current. The compass needle shows a deflection.
My question is - what does placing the wire XY perpendicular to the plane of paper mean?
And why would you do that? Please help.
| “The paper” probably just means the table. So let's put the wire vertically, going in the up-down direction. Using the right-hand rule we find that the magnetic field lines will be circles in the plane of the table/paper.
The compass will align itself with the magnetic field lines. Therefore the compass, which is free to rotate, should be positioned such that it points towards the wire or parallel to it. When you switch on the current, it can align itself with the magnetic field lines.
Take a look at a video of the experiment. There you can see that the magnet will align itself perpendicular to the wire and parallel to the plane that the wire is perpendicular to (that sentence is probably unnecessarily complex).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/283595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Theoretically, could there be different types of protons and electrons? Me and my friend were arguing. I think there could theoretically be different types of protons, but he says not. He says that if you have a different type of proton, it isn't a proton, it's something else. That doesn't make sense to me! There are different types of apples, but they're still called apples!
He says that's how protons work, but can we really know that?
| I believe you are arguing semantics. To make this clear, let's assume there are only 200 types of "particles," each with a unique set of properties. Once we give each one a name, there can't be any more of "the same thing" with a different name. For example, let's say that number 12 on this list is what we call "electron" and number 125 is what we call "proton"; then any particle that meets the properties of #12 must be an electron, and those that meet the properties of #125 must be protons. Since there are only 200 particles, if a given particle does not have the properties of an electron (#12) or a proton (#125), then it must meet the properties of some other particle on the list (neutron, positron, neutrino, etc.).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/283682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "32",
"answer_count": 11,
"answer_id": 10
} |
Could the gravitational field equations be formulated in terms of the Riemann curvature tensor (as opposed to the Ricci curvature tensor)? Most of the symmetries and identities in Riemannian geometry are expressed in terms of the Riemann curvature tensor. One may ask why the gravitational field equations are not in terms of this main tensor of (pseudo-)Riemannian geometry, i.e. without any contraction with the metric.
However, contraction with the covariant derivative, the torsion tensor, and the like is OK.
| Sure they can. The answer comes from the Ricci decomposition and the Einstein equations.
Let $T$ be the trace of the stress-energy tensor, let $S_{ab}$ be the traceless part of the stress-energy tensor, let $C_{abcd}$ be the Weyl tensor, and let $\kappa = 8\pi G/c^4$ be the proportionality constant of the Einstein equations; the result is
$$R_{abcd} = -\frac{\kappa T}{6} g_{a[c} g_{b]d} + C_{abcd} + \kappa \left( g_{a[c} S_{b]d} - g_{b[c} S_{d]a} \right)$$
The Weyl tensor $C_{abcd}$ holds all the gravitational radiation degrees of freedom. All the other terms stem directly from the stress-energy tensor.
In other words, the Einstein equations eliminate the gravitational radiation degrees of freedom to directly relate stress energy to curvature. Writing the equations in terms of the Riemann tensor merely makes those gravitational radiation terms explicit again.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/283752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
} |
Help with this geometrical approach to deriving the lens equation for weak lensing All images and quotations are from Schneider, Kochanek and Wambsganss.
Here is an image of a typical weak lensing setup. Since $D_{ds}$ and $D_s$ are much larger than the extent of the lens and source plane, we can model the curvature of the light ray as a kink at the point of the lens.
$\hat{\alpha}$ is the deflection angle.
$\eta$ is the 2d position of the source on a source plane.
$\xi$ is the ray impact parameter.
Small angle approximations apply to the deflection angle.
From the figure we can read off the geometric condition that $$\vec{\eta}=\frac{D_s}{D_d}\vec{\xi}-D_{ds}\vec{\hat{\alpha}}(\vec{\xi}).$$
I am struggling to understand where this has come from geometrically. Could someone please explain?
For completeness I will include the rest of the derivation in case it aids any explanations. We introduce angular coordinates by $$\vec{\eta}=D_s\vec{\beta}$$ and $$\vec{\xi}=D_d\vec{\theta}.$$ Now we transform the first equation to $$\vec{\beta}=\vec{\theta}-\frac{D_{ds}}{D_s}\vec{\hat{\alpha}}(D_d\vec{\theta})=\vec{\theta}-\vec{\alpha}(\vec{\theta}).$$
| I think this is where it comes from?
Triangles $ACB$ and $ECF$ are similar so
$\dfrac {\xi}{D_d} = \dfrac {AB}{D_s} \Rightarrow AB = \dfrac {D_s}{D_d} \xi = \eta + AG$
For small angles $AG \approx \alpha D_{ds}$ and the result follows
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/283873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How does an electron get deflected in a magnetic field while moving? I don't understand why the electron moves this way... e.g. a light object (crumpled paper) falling down, once it gets hit by the wind, will move parallel (at least for a few seconds) to the wind direction... why not the electron?
| For electrons the magnetic field is not like a "wind". The electron experiences a velocity dependent force, the Lorentz force, which is perpendicular to both the direction of the velocity and to the magnetic field direction. See, e.g., https://en.wikipedia.org/wiki/Lorentz_force .
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/284081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Finding the subsequent of motion using a gravitational field
A constant gravitational field points along the negative z-axis. If the
acceleration due to gravity is $g$, the force in the z-direction experienced
by a particle of mass $m$ is $F = −mg$. If the particle is released from
co-ordinate $z = z_0$ with the velocity $v_0$, find the subsequent motion $z = z(t)$.
Here is my work so far, not sure if I am on the right track.
Since the gravitational field $g$ around a mass $m$ is a vector field :
$g$ = $\frac{F}{m}$ = $-\frac{d^2 z}{d t^2}$ = $-GM\frac{\hat z}{z}$
Since $F = m\ddot z = -mg$
$\ddot z = GM\frac{\hat z}{z}$
to get the subsequent motion, do I integrate the right-hand side $GM\frac{\hat z}{z}$ twice?
Any help will be appreciated.
| Forget $GM$ etc.
In the question it states that there is a "constant gravitational field" and that the magnitude of that field is $g$.
So all you need to solve is the equation $\ddot z = -g$, which you have probably done many times before to get the constant-acceleration kinematic equations.
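As a quick check (a sketch assuming sympy is available; the symbol names are just illustrative), the constant-acceleration solution follows directly:

```python
import sympy as sp

t, g, z0, v0 = sp.symbols('t g z_0 v_0', real=True)
z = sp.Function('z')

# Solve z'' = -g with initial conditions z(0) = z0 and z'(0) = v0
sol = sp.dsolve(sp.Eq(z(t).diff(t, 2), -g), z(t),
                ics={z(0): z0, z(t).diff(t).subs(t, 0): v0})
print(sol)   # z(t) = z0 + v0*t - g*t**2/2
```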
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/284208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If energy is quantized, does that mean that there is a largest-possible wavelength? Given Planck's energy-frequency relation $E=hf$, since energy is quantized, presumably there exists some quantum of energy that is the smallest possible. Is there truly such a universally-minimum quantum of $E$, and does that also mean that there is a minimum-possible frequency (and thus, a maximum-possible wavelength)?
| You have to keep in mind that the relation
$$
E = hf
$$
holds only true for photons. Photons - generally - can have arbitrary energies, so they can have arbitrary frequencies as well.
When you think of quantization, you think of quantization of an observable for a specific system. A one-dimensional harmonic oscillator, for example, has the quantized energies
$$
E_\textrm{Osz} = h f_\textrm{Osz} \left(\frac{1}{2} +n\right)
$$
where $n \in \mathbb Z_{0}^{+}$, and $f_\textrm{Osz}$ is the frequency of the oscillator! So the energy has discrete values it can hold, but remember, we are talking about the energy of the oscillator, not of a photon.
If you now ask the question: What is the minimum energy (and therefore maximum wavelength) for a photon to get absorbed by the oscillator, the answer would be:
$$
E_\textrm{PhMin} = 1\cdot h f_\textrm{Osz} \ \rightarrow \ \lambda_\textrm{PhMax} = c/f_\textrm{Osz}
$$
because that is the difference between two niveaus of the oscillator.
If you look at another system your answer will be different. How observables (like energy) are quantized is dependent on the system.
tl;dr:
There is no maximum wavelength for photons.
Edit: At least not because of the Planck relation. If there is a maximum wavelength for photons, the reasons for it will have nothing to do with the implications of your question. It could be that - at sufficiently small energies - photons show non-trivial effects which restrict further loss of energy.
(Please note that I used simplified assumptions, for instance we are in vacuum etc.)
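As a small worked example of that last formula (the oscillator frequency below is an assumed, roughly molecular-vibration value, not taken from the question):

```python
h, c = 6.626e-34, 3.0e8      # Planck constant (J*s) and speed of light (m/s)
f_osz = 5.0e13               # Hz, assumed oscillator frequency

E_ph_min   = h * f_osz       # minimum photon energy the oscillator can absorb
lam_ph_max = c / f_osz       # corresponding maximum absorbed wavelength

print(E_ph_min / 1.602e-19, lam_ph_max)   # ~0.21 eV and ~6e-6 m
```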
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/284444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 3,
"answer_id": 0
} |
Promoting time to an operator In Mark Srednicki's QFT book, he talks about the fact that one of the problems with combining quantum mechanics with special relativity is that in QM, position is an operator and time is just a parameter. He then says there would be two natural ways to remedy this, either promote time to an operator or demote position to a parameter. Referring to the first option he says that we can indeed do so if we use the proper time as the parameter in our differential equation and promote the observed time to an operator. He then continues:
Relativistic quantum mechanics can indeed be developed along these lines, but it is surprisingly complicated to do so. (The many times are the problem; any monotonic function of τ is just as good a candidate as τ itself for the proper time, and this infinite redundancy of descriptions must be understood and accounted for.)
He then goes on to describe the second option, the one that I am familiar with, where we label our quantum field operators by a position label $x$.
My question is two-fold:
*
*What exactly does understanding the infinite redundancy in choices for the time parameter entail, as in how does one end up dealing with that mathematically?
*Can it be shown that these two formulations are completely equivalent?
EDIT:
Upon further reading, Srednicki asserts in fact that the two formulations are equivalent, so I would like to change to question 2 to a reference request to where I can find such a proof.
| For one thing, such a theory would automatically be a gauge theory, because the freedom to reparameterize the proper time in infinitely many ways would be a gauge freedom. So you'd always need to gauge fix your path integral, even for spin-0 and spin-1/2 theories, and that's always a huge mess.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/284565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Measuring very small temperature differences Can one use a thermometer with $\pm$5 mK accuracy to measure a temperature difference of 2 mK (the measurement is near 100 mK temperature on a sample on an ADR)? Using the same thermometer, I am thinking to measure temperature of the sample, heat the sample slightly, measure temperature again, and take the difference. Does the $\pm$5 mK uncertainty cancel out when I take the difference? My thermometer is sensitive enough, my AC resistance bridge is capable of resolving such small temperature differences, but I want to know if the $\pm$5 mK is really an issue here.
| In general it's a bad idea to take two absolute measurements and subtract them from each other to find a difference that's comparable to the uncertainty in the measurements; the fractional uncertainty in the difference is much larger than in either measurement. In computer science the problem is sometimes called catastrophic cancellation, but the problem is essentially the same if the imprecision is due to physical uncertainty rather than numerical truncation.
Whether your $\pm5\rm\,mK$ thermometer uncertainty is a systematic uncertainty or a random uncertainty is a question you can probably already answer by looking at the stability of your data when you're not changing the temperature. The extent to which you can subtract adjacent measurements depends on the random uncertainty, which causes independent measurements of the same quantity to be uncorrelated. If the uncertainty is mostly systematic, it's okay to compare adjacent measurements whose differences are larger than the random error.
If what you want is the difference in temperature between two heat reservoirs, you want a differential measurement. For instance, the operating principle of the thermocouple is a voltage difference between dissimilar conductors that depends on the temperature difference between their two junctions. In your case, perhaps your cryostat could contain a fairly large thermal mass controlled (i.e. by some pid-driven heater) to be near the target temperatures for your sample. Put a thermocouple junction on your sample, the reference junction at your reference temperature, and use a sensitive ammeter to measure the relatively large changes in the small current driven by the temperature difference between your sample and your reference.
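A small simulation makes the distinction concrete; the numbers below (a true 2 mK step, 5 mK per-reading error) are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_T1, true_T2 = 0.100, 0.102      # K: a true 2 mK difference
sigma = 0.005                        # K: 5 mK error per reading

# Uncorrelated (random) errors: they add in quadrature in the difference
T1 = true_T1 + rng.normal(0, sigma, n)
T2 = true_T2 + rng.normal(0, sigma, n)
print(np.std(T2 - T1))   # ~0.007 K = sqrt(2)*5 mK, swamping the 2 mK signal

# Fully correlated (systematic) offset: identical in both readings, cancels in the difference
offset = rng.normal(0, sigma, n)
print(np.std((true_T2 + offset) - (true_T1 + offset)))   # ~0 K
```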
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/284641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Confusion regarding the finite square well for a negative potential Consider the finite square well, where we take the potential to be $$V(x)=\begin{cases}
-V_0 & \text{for}\,\, |x| \le a \\
\,\,\,\,\,0 & \text{for}\,\, |x|\gt a
\end{cases}$$ for a positive constant $V_0$.
Within the square well the time-independent Schrödinger equation has the form $$-\frac{\hbar^2}{2m}\frac{d^2 u}{dx^2}=(E-V)u=(E+V_0)u\tag{1}$$
While outside the square well the equation is
$$-\frac{\hbar^2}{2m}\frac{d^2 u}{dx^2}=Eu\tag{2}$$ with $E$ being the total energy of the wavefunction $u$ where $u=u(x)$.
The graph of the potential function is shown below:
Rearranging $(1)$ I find that
$$\frac{d^2 u}{dx^2}=-\underbrace{\bbox[#FFA]{\frac{2m}{\hbar^2}(E+V_0)}}_{\bbox[#FFA]{=k^2}}u$$
$$\implies \frac{d^2 u}{dx^2}+k^2u=0\tag{3}$$
with
$$k=\frac{\sqrt{2m(E+V_0)}}{\hbar}\tag{A}$$
So equation $(3)$ implies that there will be oscillatory solutions (sines/cosines) within the well.
Rearranging $(2)$ I find that
$$\frac{d^2 u}{dx^2}=-\underbrace{\bbox[#AFA]{\frac{2m}{\hbar^2}E}}_{\bbox[#AFA]{=\gamma^2}}u$$
$$\implies\frac{d^2 u}{dx^2}+\gamma^2u=0\tag{4}$$
with
$$\gamma=\frac{\sqrt{2mE}}{\hbar}\tag{B}$$
But here is the problem: Equations $(4)$ and $(\mathrm{B})$ cannot be correct since I know that there must be an exponential fall-off outside the well.
I used the same mathematics to derive $(4)$ & $(\mathrm{B})$ as $(3)$ & $(\mathrm{A})$. After an online search I found that the correct equations are
$$\fbox{$\frac{d^2 u}{dx^2}-\gamma^2u=0$}$$
and
$$\fbox{$\gamma=\frac{\sqrt{-2mE}}{\hbar}$}$$
Looks like I am missing something very simple. If someone could point out my error or give me any hints on how I can reach the boxed equations shown above it would be greatly appreciated.
EDIT:
One answer mentions that the reason for the sign error is due to the fact that $E\lt 0$ inside the well, so I have included a graph showing the total energy (which is always less than zero inside or outside the well):
EDIT #2:
In response to the comment below. If I place $E\lt 0$ in equation $(4)$ (outside the well) I will have to also make $E\lt 0$ in equation $(3)$ (as $E\lt 0$ inside the well also) and so equation $(3)$ will become $$\frac{d^2 u}{dx^2}-k^2u=0$$ which is clearly a contradiction as this no longer gives oscillatory solutions (plane waves) inside the well.
| This is in principle correct. Take the limits for $E>0$ and $E<0$. If the latter obtains, you get a negative number under your square root (and $\gamma$ becomes imaginary), and $e^{i\gamma x}\rightarrow e^{-\gamma x}$, giving you the exponential solution. My guess is that there is a different sign convention in what you read, where it is assumed explicitly that $E<0$. Likewise, if $E>0$, then we expect to continue to get plane waves, and we do. Thus, as long as you remember that the energy of the bound state is negative, you will always get the same results.
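A quick numerical way to see this, with made-up bound-state numbers in units where $\hbar = m = 1$ (my own illustration, not part of the original answer):

```python
import numpy as np

hbar, m = 1.0, 1.0
V0 = 10.0
E = -3.0                                    # bound state: -V0 < E < 0

k     = np.sqrt(2*m*(E + V0) + 0j) / hbar   # inside the well
gamma = np.sqrt(2*m*E + 0j) / hbar          # outside the well

print(k)       # (3.74+0j): real, so exp(i*k*x) oscillates inside
print(gamma)   # 2.45j: imaginary, so exp(i*gamma*x) = exp(-2.45*x) decays outside
```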
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/284879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How can you experimentally determine intrinsic carrier density? I know the equation for intrinsic carrier density is
$$
n_i = BT^{3/2}e^{-E_g/2kT}
$$
Where B is a material dependent quantity. But how would you determine $n_i$ experimentally? Or if you were given an intrinsic semiconductor but no quantitative information about it, how would you go about finding $n_i$?
This is purely out of curiosity, so thanks for any help and suggestions!
| There are several experimental methods to determine the intrinsic carrier concentration of a semiconductor. Most of them are indirect. For example, you can measure the conductivity of the semiconductor at relatively high temperatures where it has intrinsic properties. Then you determine the electron and hole mobilities and obtain the intrinsic concentration from the conductivity. A second method uses measurements of the densities of states of the valence and conduction bands and of the band gap. Other methods use the characteristics of semiconductor devices. All these methods give errors in the range of several percent. A scientific paper which gives the most accurate recent results for silicon uses the injected minority carrier flow in a pn-diode. The abstract of the paper can be seen here http://scitation.aip.org/content/aip/journal/jap/70/2/10.1063/1.349645 (I don't know if you can also download it for free.) The references in this paper also describe other methods.
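For a feel for the numbers, here is the formula from the question evaluated with rough, illustrative silicon parameters (the values of $B$ and $E_g$ are assumptions, not measurements):

```python
import numpy as np

k_B = 8.617e-5          # eV/K
B   = 5.2e15            # cm^-3 K^-3/2, assumed prefactor for silicon
E_g = 1.12              # eV, assumed band gap of silicon
T   = 300.0             # K

n_i = B * T**1.5 * np.exp(-E_g / (2 * k_B * T))
print(n_i)              # ~1e10 cm^-3, the commonly quoted order of magnitude for Si
```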
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/284979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Young's experiment but with reflection from two thin wires? Is it possible to shine a laser beam on two thin metal wires that are really close to each other and observe a reflection interference pattern? I would like to confirm that the pattern is the same as the one obtained by transmission through a double slit, thus indirectly confirming Huygens' principle.
| Yes, I use guitar strings all the time to do these experiments. Young's original experiment was with one human hair. My guitar strings all have different gauges and the experiments work perfectly for the fringe pattern spacings. It just depends on the wavelength of the laser light, the distance from the wire to the wall, and the gauge of the wire.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/285118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the difference between operating on a wave function with the operator of an observable and measuring that observable? People say the operator of an observable helps in measuring that observable. We also know that measurement leads to collapse of the wave function. But the operator acting on the wave function gives a number times the same wave function (which of course is not a collapsed wave function!). All the intuitions I have built about operators, wave functions, measurements and collapse seem to be inconsistent. If the operator doesn't collapse a wave function, then what is it for? Is it just for calculating the expectation value of the observable? What is it in a physical sense?
| You are right in pointing out that operating with an operator on a wavefunction gives a number times the wavefunction (assuming that it is indeed an eigenfunction of the said operator). Measuring the observable collapses the wavefunction. How a wavefunction collapses is still an open question in quantum physics. The job of the operator is to give the possible eigenvalues, i.e. the possible outcomes of a measurement on the wavefunction. This paper might help you with the progress that has been made in solving that open problem.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/285224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is Del (or Nabla) an operator or a vector? Is Del (or Nabla, $\nabla$) an operator or a vector ?
\begin{equation*}
\nabla\equiv\frac{\partial}{\partial x}\vec{i}+\frac{\partial}{\partial y}\vec{j}+\frac{\partial}{\partial z}\vec{k}
\end{equation*}
In some references of vector analysis and electromagnetism, it is considered as an operator (and noted as $\nabla$), and in other ones, it is considered as a vector (and noted as $\vec\nabla$).
| I hate to play this card, but it depends on the object it acts on (and sometimes who you ask.) Example: many (professors, colleagues, etc.) will insist on differentiating between writing $\vec{\nabla}$ and $\nabla$ (consider obliging if your grade/income depends on it.) In reality, however, $\nabla$ is NOT a specific operator, but a convenient mathematical notation. For instance, one may write $\vec{\nabla}\cdot\vec{j}$ or $\nabla\cdot \vec{j}$ and it "should" be obvious from the notation that the meaning of $\nabla$ in this case is a vector operation, whether or not the vector symbol is included over it. Another example: one may write $(\vec{v}\cdot\vec\nabla) \vec{j}$ or $\vec{v}\cdot\nabla{\vec {j}}$. In either case the same quantity is produced. I appreciate the latter notation, however, because it highlights the freedom to act the $\nabla$ upon $\vec{j}$ first (producing a matrix) and then act on $\vec{v}$ to get a vector, or to act the $\vec{v}$ on $\nabla$ first (producing a scalar operator) and then act on $\vec{j}$, producing an identical vector.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/285317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
} |
Why can (heat-related) energy $E$ be considered as the product of temperature $T$ and thermal capacity $C$?
I.e.
$$E=CT$$
I've seen this definition in one answer to an exercise in a course, but no explanation of the above is given.
| Heat capacity is the increment in heat you need to increase the system's temperature by one degree; in other words, it measures the system's ability to accept energy as heat:
$C\equiv\frac{Q}{\Delta T}$
where $Q$ is the amount of heat absorbed by the system. The definition makes sense: the more heat you need to apply to a system to increase its temperature, the higher the heat capacity (it is directly proportional), and the higher the increase in temperature for a fixed $Q$, the lower the heat capacity (inversely proportional). But as I said, it is just an arbitrary definition that turns out to be useful.
It is also useful because it is related with specific heat capacity (which you can usually find tabulated in books)
$C=mC_s$
where $m$ is the mass and $C_s$ is the specific heat capacity of a determined substance.
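A tiny worked example of $E = CT$ in its incremental form $Q = C\,\Delta T$ with $C = mC_s$ (the specific heat of water is a standard tabulated value; the rest of the numbers are made up):

```python
m   = 0.5        # kg of water
C_s = 4186.0     # J/(kg K), specific heat capacity of water
dT  = 20.0       # K, temperature rise

C = m * C_s      # heat capacity of this particular sample, J/K
Q = C * dT       # heat absorbed, J
print(C, Q)      # 2093.0 J/K, 41860.0 J
```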
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/285789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does the clock at rest run faster, while the moving clock runs slower? I have observed from my first question that it is hard for me to study special relativity from every frame of reference. But there is one important question in my head right now: time runs slower for the moving body if observed from rest, and time runs faster on the clock at rest if observed from that moving body. But the rate at which the clock ticks slower for one and faster for the other is different. Why is it not the same rate? Please answer in brief and simple language.
| The situation is completely symmetric. Let the velocity of a frame A w.r.t. another frame B be $\textbf{v}$. Then from the perspective of A, the frame B has a relative velocity $-\textbf{v}$. From the perspective of the A-observer, the clock of the B-observer is slowed down, and vice versa. Note that the dilation factor depends upon the square of the relative velocity, i.e., $\gamma(v)=1/\sqrt{1-v^2/c^2}$.
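Since the dilation factor depends only on $v^2$, both observers compute exactly the same $\gamma$; a one-line check (the speed is just an illustrative value):

```python
import numpy as np

def gamma(v, c=1.0):
    """Time-dilation factor: depends only on v**2, so it is the same for +v and -v."""
    return 1.0 / np.sqrt(1.0 - (v / c)**2)

v = 0.6                       # relative speed in units of c
print(gamma(v), gamma(-v))    # 1.25 1.25 -> each observer sees the *other* clock run slow by 1.25
```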
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
bubble/drop Reynolds number The bubble/drop Reynolds number makes me confused and I hope someone can help me on this please!
Normally (as I read in every books and papers) that when a bubble or drop rises in a fluid, the bubble/drop Reynolds number is calculated by:
Re = ρUD/μ
where U is particle velocity, D can be particle diameter, and ρ and μ are density and viscosity of continuous fluid
my question is: why don't we use the ρ and μ of the bubble/drop? why use the values of the surrounding fluid?
what is the physical meaning of this Re?
Thanks in advance.
| The Reynolds number is the ratio between forces of inertia and forces of viscosity. The forces of viscosity are represented by the density and viscosity of the surrounding fluid.
Bodies with the same Reynolds number will have a similar turbulence behavior. You can also define a critical Reynolds number which is related to the actual problem you are observing.
Below the critical Reynolds number you will have laminar flow. When $\mathrm{Re} > \mathrm{Re_K}$ you will see a transition to turbulent flow.
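For scale, here is the bubble Reynolds number evaluated with the continuous-phase (water) properties; all of the numbers are illustrative assumptions:

```python
rho = 1000.0     # kg/m^3, density of the surrounding water
mu  = 1.0e-3     # Pa*s, dynamic viscosity of water
U   = 0.2        # m/s, rise velocity of the bubble
D   = 1.0e-3     # m, bubble diameter

Re = rho * U * D / mu
print(Re)        # 200.0 -- built from the continuous phase, since that is the fluid being sheared
```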
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Inclined Plane and Center of Mass Say there is a block sliding down an inclined plane that rests on a frictionless table. There is kinetic friction between the block and inclined plane. If the block slides downhill, then the kinetic friction acting on it points uphill. By Newton’s third law, the inclined plane will experience a friction force pointing down hill, in the direction of the block’s velocity/acceleration. Shouldn’t the plane want to move with the block, then? If it does, then wouldn’t the center of mass move with the block and plane too? There’s no friction on the block–plane system, however, so the center of mass should not move, but my analysis claims that it does. Where have I gone wrong?
| Treat the plane and the block as a system
When you treat the block and the inclined plane as a system, you find that the only external forces acting on the system are in the vertical direction. Therefore the center of mass of the system will accelerate (if it does) along a vertical line only. Also, the net force vector in this case points downwards, so the center of mass will accelerate downwards.
Or, to simplify: the center of mass won't be accelerated in the horizontal direction.
The inclined plane does want to move with the block
The friction which acts on the inclined plane definitely opposes the relative motion between them, but what I think you forgot to count is the normal force from the smooth frictionless table. The table may be smooth, but it can still exert a normal force on any body that presses on it.
Also, as a representation of the forces, you may look at the diagram I have attached
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why do odd numbers of either of the nucleons in a nucleus make it relatively unstable compared to a nucleus having even numbers of both the nucleons? Try to keep it as simple as possible, as I am still completing school. Just wanted to get an explanation.
| There is no solid theory on why nuclei with even numbers of either kind of nucleon are more stable. Only through experimental data have we been able to observe this phenomenon and the concept of magic numbers.
Even some approximations and theories, such as the semi-empirical mass formula, fail to explain this occurrence.
But we cannot generalize this observation to all elements, as some stable odd-odd isotopes are also present, for example:
$^{2}_{1}\mathrm{H}$, $^{6}_{3}\mathrm{Li}$, $^{10}_{5}\mathrm{B}$, $^{14}_{7}\mathrm{N}$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What would be the atomic number of the atom whose 1s electron moves nearly at the speed of light? What would be the atomic number of the atom (may be hypothetical) whose $1s$ electron moves at $0.99c$ (the speed of light)?
Quantum mechanics might have an answer, but I do not know the necessary maths to calculate. I am interested in the answer.
In this article they say that the speed of the electron defines gold's property through relativistic quantum mechanics.
| You can get a back of the envelope notion of the energy of a inner-most orbital by just treating the problem as a hydrogen-like atom (not entirely fair and almost certainly a slight over-estimate but at least it is easy). You get
\begin{align*}
E_{1s} \approx \mathrm{Ry} \cdot Z^2 = (13.6\,\mathrm{eV}) \cdot Z^2 \;.
\end{align*}
Where $Z$ is the atomic number of the atom in question and $\mathrm{Ry} = 13.6 \,\mathrm{eV}$ is the Rydberg constant.
Then we can pretend this is kinetic energy and compute some kind of speed on that basis. (This is simpler but less exact than the computation suggested by Ruslan in the comments. Nor does it really mean that there are little ball-like object in there whizzing around along classical paths.)
If you are asking for a speed of $\beta = 0.99$ ($\gamma = 7.1$) then you are suggesting a kinetic energy of about $T = (\gamma - 1) m_e c^2 = 6.1 (5.11 \times 10^5\,\mathrm{eV}) = 3.1 \times 10^6\,\mathrm{eV}$.
Which suggests:
\begin{align*}
Z^2
&= \frac{(\gamma -1) m_e c^2}{\mathrm{Ry}}\\
&\approx \frac{6.1 (5.11 \times 10^5 \,\mathrm{eV})}{13.6\,\mathrm{eV}} \\
&= 2.3 \times 10^5 \\
Z &\approx 480 \;,
\end{align*}
give or take a small factor.
Even for $\beta = 0.9$ ($\gamma = 2.3$) you get $Z \approx 220$.
*
*For $\beta = 0.75$ ($\gamma = 1.5$) I find $Z \approx 140$.
*For $\beta = 0.65$ ($\gamma = 1.3$) I find $Z \approx 110$.
*For $\beta = 0.55$ ($\gamma = 1.2$) I find $Z \approx 86$.
All of these values are thoroughly relativistic, but as you can see the ultra relativistic regime requires unreasonable heavy nuclei.
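The estimate is easy to reproduce; a short sketch using the same hydrogen-like assumption gives essentially the same table:

```python
import numpy as np

Ry    = 13.6       # eV, Rydberg energy
m_ec2 = 5.11e5     # eV, electron rest energy

def Z_for_speed(beta):
    """Set Ry*Z**2 equal to the relativistic kinetic energy (gamma - 1)*m_e*c^2."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return np.sqrt((gamma - 1.0) * m_ec2 / Ry)

for beta in (0.55, 0.65, 0.75, 0.90, 0.99):
    print(f"beta = {beta}: Z ~ {Z_for_speed(beta):.0f}")
# Z ~ 86, 109, 139, 220, 478 -- matching the numbers quoted above
```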
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
If I evaluate degree of freedom and got some number $n$, then how can I know what are those $n$ independent coordinates? Using $3N-f=d$ we can evaluate the degree of freedom or independent coordinates of a system.
But how can we know which coordinates are actually independent?
(Here $N$ = number of particles, $f$ = number of constraint equations and $d$ = degrees of freedom, i.e. the number of independent coordinates.)
If we take the case of double Atwood machine, we get 2 dof. So which two coordinates should be said to be independent? $x$ and $y$?
Update:
If I take the case of, "A particle falling under gravity", the dof will be 1. So there should be only one independent coordinate with which we can describe the situation. If I take the fall of the particle in $y$ direction, then that one independent coordinate will be $y$?
| Technically, when you choose your $n$ generalized coordinates $q^1,\ldots,q^n,$ among the $3N$ position coordinates ${\bf r}_1,\ldots,{\bf r}_N,$ of $N$ point particles, with $n\leq 3N$, you should make sure that the $3N\times n$ rectangular matrix
$$ \frac{\partial {\bf r}_i}{\partial q^j}, \qquad i\in\{1,\ldots N\},\qquad j\in \{1,\ldots n\}, $$
has maximal rank, i.e. has rank $n$.
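A minimal example of that rank condition (my own toy case, checked with sympy): a planar pendulum of length $l$ with the single generalized coordinate $\theta$.

```python
import sympy as sp

theta, l = sp.symbols('theta l', positive=True)
r = sp.Matrix([l * sp.sin(theta), -l * sp.cos(theta)])   # x(theta), y(theta)

J = r.jacobian([theta])     # the 2x1 matrix d(r)/d(theta)
print(J, J.rank())          # Matrix([[l*cos(theta)], [l*sin(theta)]]), rank 1 -> theta is a valid coordinate
```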
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Should zero be followed by units? Today at a teachers' seminar, one of the teachers asked for fun whether zero should be followed by units (e.g. 0 metres/second or 0 metre or 0 moles). This question became a hot topic, and some teachers were saying that, yes, it should be while others were saying that it shouldn't be under certain conditions. When I came home I tried to find the answer on the Internet, but I got nothing.
Should zero be followed by units?
EDIT For Reopening: My question is not just about whether there is a dimensional analysis justification for dropping the unit after a zero (as a positive answer to Is 0m dimensionless would imply), but whether and in which cases it is a good idea to do so. That you can in principle replace $0\:\mathrm{m}$ with $0$ doesn't mean that you should do so in all circumstances.
| If you formalize dimensional analysis, you end up with the set-wise product of a scalar from $\mathbb{R}$ with a free abelian group on $n$ generators, where $n$ is the number of "base units" you can talk about.
So one of your unit generators might be mass, another distance, another time, etc.
In this structure, addition is only defined when the group portions align exactly, and it does nothing to them; it just adds the scalars.
Multiplication multiplies both the scalar and the units together.
Now, some "units" may be a scalar multiple times some "base unit", but that is ok.
Once you have generated this abstraction, it becomes clear that 0 m/s is a different thing than 0 kg, but 0 g is the same as 0 kg, and 1000 g equals 1 kg.
While not definitive, a solid abstraction that leads you to treat the zero values differently is a strong reason to do so.
This structure is no longer a field, but that is ok. Not everything is a field.
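Here is a minimal sketch of that structure in code (not a real units library; the base-unit ordering and class name are arbitrary choices): addition demands identical unit exponents, while multiplication adds them.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    value: float
    units: tuple   # exponents of the base units (m, kg, s) -- the "free abelian group" part

    def __add__(self, other):
        if self.units != other.units:
            raise TypeError("cannot add quantities with different units")
        return Quantity(self.value + other.value, self.units)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.units, other.units)))

zero_speed = Quantity(0.0, (1, 0, -1))           # 0 m/s
zero_mass  = Quantity(0.0, (0, 1, 0))            # 0 kg
print(zero_speed + Quantity(3.0, (1, 0, -1)))    # fine: Quantity(value=3.0, units=(1, 0, -1))
# zero_speed + zero_mass                          # TypeError: the two zeros are different things
```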
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/286964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "72",
"answer_count": 10,
"answer_id": 8
} |
What is the evidence that distant galaxies are moving away from us with speeds greater than $c$, due to space expansion? I came up with this query after @Rob Jeffries's answer to a previous question of mine.
So, is there any evidence that distant galaxies are moving away from us with speeds greater than $c$, due to the expansion of space, or is it just an artifact of Hubble's equation, $v=H_0D$?
If indeed this is a fact, does it determine the shape/geometry of our universe?
| If the galaxy were traveling away faster than the speed of light, then we wouldn't be able to see it. (obviously) By very definition, it exists outside of the "observable universe". Not only can we not "observe" it with our eyes, but no information can reach us at all (similar to the inside of a black hole). It cannot affect us in any way.
Asking whether things outside of the observable universe "exist" is somewhat of a "Zen riddle". It opens a very deep philosophical or metaphysical debate about the definition of the word "exist".
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/287286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Field of a Polarized Object In Griffith's Electrodynamics, in the section 4.2, just after the equation 4.9, he writes "sleight-of-hand casts this integral into a much more illuminating form"...
I have a doubt about that. If the gradient (or the differentiation, if carried out) is with respect to the primed coordinates, how can the variable $r$ be differentiated as $r'$? It would be of great help if someone clarifies this point.
| We need to shift from $\mathscr R$ to $r'$ because otherwise the coordinate system would keep changing as we integrate over the whole volume. Now, about that sleight of hand.
Gradient depends upon the coordinate system.
By the simple definition of the gradient we have:
$dT= \nabla T.\boldsymbol{dl}$, where $\boldsymbol{dl}$ is the infinitesimal displacement in the coordinate system.
Since, $\mathscr R = \boldsymbol r - \boldsymbol r' $ it implies that $d\mathscr R = -d\boldsymbol r'$ as $\boldsymbol r$ is the constant position vector of the point of interest where we wish to calculate electric field by the polarized object, in the source coordinate system.
Now, $d \left( {\frac {1}{\mathscr R}} \right) = \nabla \left( {\frac {1}{\mathscr R}} \right).d\mathscr R = \nabla'\left( {\frac {1}{\mathscr R}} \right).d\boldsymbol r'$, as $d\mathscr R = -d\boldsymbol r'$ this implies
$\nabla' \left( {\frac {1}{\mathscr R}} \right)$ = $-\nabla \left( {\frac {1}{\mathscr R}} \right)$ which simply means that gradient in source coordinate system is just the negative of the gradient in the coordinate system of that differential dipole in consideration, which is what Griffiths touched upon.
PS : When I was having trouble with this, I was incorrectly assuming $d|\mathscr R| \hat{ \mathscr R} = -dr' \hat{r'} $, this was incorrect because I was incorrectly assuming $d\hat{ \mathscr R} = d\hat{r'}= 0$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/287383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Solving a problem using Newtonian mechanics and D'Alembert principle I have to solve this problem with two methods (applying Newtonian mechanics and the D'Alembert principle).
The problem consists of two balls inside a cylindrical tube; it consists in determining the minimum value of $M$ for which the tube does not tip over (where $M$ is the mass of the cylinder and $m$ the masses of the two spheres).
I have issues with both methods. With the Newtonian method, I don't know what influence $M$ has on the problem, because I can choose a reference point at the center of the cylinder and there will be no torque.
With D'Alembert principle, the problem is I have no idea what virtual displacement I have to choose.
The Newtonian process brings me to this meaningless expression if the normal force acts on the lower right corner.
| You have 2 couples countering each other.
One is the couple from the two balls: the cosine (of the angle defined below) times the mass of the two balls times $2r$, divided by the difference of their contact points' heights on the wall of the cylinder. The other is the overturning moment of the cylinder.
Let's call the angle of the line connecting the centers of the 2 balls $A$.
$A = \arccos\big(2(R-r)/2R\big)$.
Therefore the couple that the reaction of the walls creates is
$\cos\big(\arccos((R-r)/R)\big)\cdot m\cdot 2r/(2Rr-R^2)$
$= m\cdot 2r\,\big((R-r)/R\big)/(2Rr-R^2) = m\cdot 2r(R-r)/\big((2Rr-R^2)R\big)$
Hopefully, I got the arithmetic right on my cell phone.
This should be smaller than $M\cdot R$ - which is the overturning moment of the cylinder.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/287465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why are planets not crushed by gravity? Stars can be crushed by gravity and create black holes or neutron stars. Why doesn't the same happen with any planet if it is in the same space time?
Please explain it in simple way. Note: I am not a physicist but have some interest in physics.
| There have been several answers already, but here is a synthesis attempt:
Gravity is attractive, and in absence of repulsive counter force it causes the collapse of a massive object. The order of magnitude of the pressure needed to resist against gravitational collapse is roughly of the order of $GM^2/R^4$ where $M$ is the mass of the object and $R$ its radius.
In the case of a planet such as the Earth, the repulsive forces are of electrostatic nature (their electrons tend to repel).
For the earth, $GM^2/R^4 \sim $ 1000 GPa.
If the mass is much larger, gravity is too strong and electrostatic forces are too weak to counter it. When the density is high enough, nuclear reactions can occur, emitting a high amount of radiation. In this case the object is a star and it is held by thermal pressure. For the sun, $GM_{\odot}^2/R_{\odot}^4 \sim 10^{6} $ GPa, but this pressure can vary a lot from one star to another.
After some time, nuclear reactions no longer release enough energy, for example when iron starts being produced (iron is the most stable nucleus, and reactions that transform it would be endothermic). In this case, the object can collapse to a higher density form of matter, this time stabilized thanks to Pauli's exclusion principle.
This principle states that two fermions cannot occupy the same quantum state, resulting in a very strong repulsive force between them. In white-dwarfs, these fermions are electrons. In neutron stars, they are mostly neutrons. The strong force also contributes to resisting gravity in neutron stars. In these cases, the pressure can be extreme. A neutron star mass is usually $\gtrsim 1.2 M_{\odot}$, and its radius of the order of 10 km. This yields $P \sim 10^{25}$ GPa.
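The quoted orders of magnitude are easy to verify from $GM^2/R^4$; a short check with rounded, illustrative masses and radii:

```python
G = 6.674e-11                                # m^3 kg^-1 s^-2

bodies = {                                   # rough masses (kg) and radii (m)
    "Earth":        (5.97e24, 6.37e6),
    "Sun":          (1.99e30, 6.96e8),
    "neutron star": (1.4 * 1.99e30, 1.0e4),
}

for name, (M, R) in bodies.items():
    P = G * M**2 / R**4                      # the pressure scale used above, in Pa
    print(f"{name}: ~{P/1e9:.1e} GPa")
# Earth ~1e3 GPa, Sun ~1e6 GPa, neutron star ~5e25 GPa
```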
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/287622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42",
"answer_count": 6,
"answer_id": 2
} |
Why doesn't the heat of the Earth's core diffuse to the surface? The Earth has a crust, mantle, outer core and the inner core with each one getting hotter than the next. How come, over millions and millions of years, the heat that is at the center of the Earth hasn't conducted throughout the planet's material so that the entire planet is one even temperature?
This always bothered me because we all learn that temperature diffuses from high areas to low areas, yet the Earth's center is super hot while if you dig a one foot hole, the ground feels quite cold. I never understood this. Thoughts?
| It's a bit like when you put a thick jumper on. The inside of your clothing ends up being warmer than the outside of your clothing.
Most of the heat within the earth can be attributed to radioactive decay (of long lived isotopes like potassium). This heat is constantly being conducted out to the surface. (Yes, if you go down into a deep mine, you will get hotter.) It turns out that kilometres of rock works as a reasonably good insulator.
Remember that the difference in temperature affects how quickly heat is transferred. If the surface was nearly as hot as the interior (like when it originally formed) then the surface would radiate heat into the cold night sky much faster, and the crust would conduct internal heat away from the core to the surface even slower, and this imbalance would cause the surface to lose net thermal energy and cool down (while the core heats up even further); this process continues until an equilibrium is reached (where each layer of the earth has its own roughly stable temperature, and each layer is getting rid of excess thermal energy at just the same rate as it acquires it).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/287980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 0
} |
Where do symmetries in atomic orbitals come from? It is well established that:
'In quantum mechanics, the behavior of an electron in an atom is described by an orbital, which is a probability distribution rather than an orbit.
There are also many graphs describing this fact:
http://en.wikipedia.org/wiki/Electron:
(hydrogen atomic orbital - one electron)
In the figure, the shading indicates the relative probability to "find" the electron, having the energy corresponding to the given quantum numbers, at that point.
My question is: How do these symmetries shown in the above article occur?
What about the 'preferred' axes of symmetry? Why these?
| The hydrogen atom is spherically symmetric, so for any solution of the Schrödinger equation for the hydrogen atom, any rotation of that solution must also be a solution. If you do the math on how to rotate a solution, it turns out that the solutions with a particular energy $E_n$ fall into groups labeled by an integer $l < n$. The integer $l$ is physical: $\hbar^2 l(l+1)$ is the magnitude squared of the angular momentum. Within each group, rotating the solution gives you a new solution in the same group. These two facts are of course connected: a rotation can't change the length of a vector.
One can show that each group contains $2l+1$ independent solutions, in that any solution $|n,l\rangle$ where the energy is $E_n$ and the angular momentum $\hbar^2 l(l+1)$ can be written as a sum $$|n,l\rangle = \sum_{m=-l}^l c_m |n,l,m\rangle$$ (I apologize for the somewhat poor notation.)
This decomposition is based on choosing a particular axis, and taking each state to depend on the angle $\varphi$ around this axis as $e^{im\varphi}$. The appearance of axes of symmetry in these plots is due to this choice of axis and particular decomposition. With another choice of axis, which is the same as a rotation, the states will be mixed.
The bottom line is that it's not each solution -- wavefunction -- that needs to be spherically symmetric, but the total set of solutions.
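This last point can be checked numerically: for a fixed $l$, summing $|Y_{lm}|^2$ over $m$ gives an angle-independent constant (Unsöld's theorem), so the full set of orbitals with given $(n, l)$ is spherically symmetric even though individual orbitals are not. A sketch, assuming scipy is available:

```python
import numpy as np
from scipy.special import sph_harm

l = 2
theta = np.linspace(0.0, 2 * np.pi, 5)      # azimuthal angle (scipy's first angle argument)
phi   = np.linspace(0.1, np.pi - 0.1, 5)    # polar angle

total = sum(np.abs(sph_harm(m, l, theta, phi))**2 for m in range(-l, l + 1))
print(total)                                # constant array: every entry equals (2l+1)/(4*pi)
print((2 * l + 1) / (4 * np.pi))            # ~0.3979
```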
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/288468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 0
} |
Why is energy not conserved in this situation Suppose there are three masses that are at rest relative to each other in space. They are positioned in an equilateral triangle. Let's accelerate one mass towards the other two with a force. The energy added to this system should be $F\cdot{ds}$. However, according to the particle that has been accelerated, the work done is double this amount, assuming that the three particles are of the same mass. I don't think that I fully understand how conservation of energy really works.
| Conservation of energy occurs within a given reference frame. If you change reference frames, you cannot use those rules.
A clear example of this occurs if you consider the energy of the system consisting of the Earth and an airplane flying through the air. From the perspective of an observer on the ground, the airplane has kinetic energy $\frac{1}{2}m_{plane}v^2$, and the earth has 0 kinetic energy. From the perspective of an observer on the plane, it is the plane that has 0 kinetic energy, and the earth has kinetic energy to the tune of $\frac{1}{2}m_{earth}v^2$. Needless to say, given that $m_{earth}\gg m_{plane}$, the two observers will disagree greatly on the numeric value for the system's kinetic energy. However, if we consider changes in kinetic energy, both observers will find that energy is conserved (from their own perspective).
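A toy illustration of this (my own numbers): a one-dimensional elastic collision between two equal masses, viewed from two frames. Kinetic energy is conserved within each frame, but its numerical value differs between frames.

```python
m = 1.0
before = (2.0, 0.0)    # equal masses in a 1-D elastic collision simply exchange velocities
after  = (0.0, 2.0)

def kinetic(vs, boost=0.0):
    """Total kinetic energy as seen from a frame moving at `boost` (Galilean shift)."""
    return sum(0.5 * m * (v - boost)**2 for v in vs)

for boost in (0.0, 1.0):
    print(boost, kinetic(before, boost), kinetic(after, boost))
# frame 0.0: 2.0 -> 2.0 (conserved); frame 1.0: 1.0 -> 1.0 (conserved, but a different value)
```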
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/288587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
BCS state and its superconductivity I've learned about the BCS ground state in BCS theory, defined by the condition that the Bogoliubov annihilation operators acting on it give zero; however, in the textbook the total momentum of the electrons is set to be zero. It's okay to me for this state to be a ground state of the effective Hamiltonian; however, I cannot understand why this state exhibits superconductivity. I was considering applying a perturbation, say a constant electric field $E=U/L$, to the system and calculating some kind of linear response. However, I'm not sure about the results I have derived so far.
| The BCS state is superconducting because the excitation spectrum has a gap, which means that to create a quasiparticle on top of the ground state you need a non-zero energy.
Creating a quasiparticle can be interpreted as exciting (breaking) a Cooper pair, and Cooper pairs can be excited by scattering in the lattice.
So lattice scattering of your charge carriers needs energy; but this scattering is what causes resistance. Hence having a resistance would cost energy. That's why you don't have resistance in the BCS state.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/288688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is the potential in a circuit? I have learnt that the potential at a point in an electric field is defined as being numerically equal to the work done in bringing a unit positive charge from infinity to the point. However, this is in the case of an electric field. What is the potential in a circuit, say one consisting of a battery and a simple capacitor, at one of the plates? Is it numerically equal to the work done in moving a unit charge from the 'positive' plate to the positive pole of the battery (by having to do work in overcoming the attractive forces of the nucleus on the electrons of that plate)? But from the definitions this charge is a unit positive charge. This is all confusing to me and simple explanations would help.
| In a circuit, you usually define potential differences with respect to a chosen (arbitrary) electrode. This is often the "mass" electrode which is connected to earth. When you measure potential differences in a circuit, you actually measure differences in electrochemical potential, not the difference in electrical potential. This can be easily recognized when measuring the voltage between two connected wires of different metals, e.g. copper and tin. Already without any applied voltages, there is an electrical potential difference, the contact voltage, which, however, cannot be measured with a voltage meter because both wires have the same electrochemical potential.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/288888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Difference between "Periodic motion" and "Oscillating Motion" So far I know one of them is a special case of the other: The Oscillating motion being the special case of Periodic motion. But I don't know the precise "Kinematical definition" of each one. I mean when you have an "Equation of motion" for a particle, how will you determine it's either a "Periodic motion" or an "Oscillating motion"? If some periodic functions appear in an equation of motion, can we call it a "Periodic motion"? If so how can we recognize it from "Oscillating motion"?
| From what I understand, periodic motion from the physical point of view is quite general, in the sense that any type of motion that repeats itself after some period of time would be termed periodic.
In the case of an oscillation, the period of the periodic motion is comparatively large, like the motion of a pendulum, up to a scale of about a second and above.
Motions with much smaller periods I would associate with vibrations.
Represented on a graph, a periodic motion would be a curve repeating at some constant interval, the interval being in time.
Repetitions slow enough to be comparable to our senses would be called oscillations, and those with very high frequency would be called vibrations, where both are periodic motions in general.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/289231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Does "Excitation" mean the value of a field rapidly changes over time at that place? I know that the definition of something called a "field" is formally defined as the presence of a quantity at every point in space. In quantum field theory, does the excitation of a field mean that the value of the field is changing over time? What the heck is the meaning of "excitation"?!
| The term excite in this context generally means something like to add energy to. So if we excite a field that means we add energy to it.
This can be used in either a classical or quantum context, though I'd guess it is most commonly used in quantum mechanics where it means changing a system from a lower to a higher energy quantum state. It's most commonly used for systems that have discrete states, e.g. exciting an electron in an atom, though it could be used for continuous states e.g. adding energy to an electron within a conduction band.
You need to be cautious about extrapolating this to quantum field theory. A classical field is generally an intuitively obvious object and adding energy to it is easily understandable. However a quantum field is an operator field not a physical object - a quantum field does not have energy and you cannot excite a quantum field. However you can excite the field in the sense that you add energy to modes described by the quantum field. Having said this, many of us routinely refer to exciting the quantum field and understand that is just a shorthand and not to be taken literally.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/289331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why is the work done by some forces path independent while for others it is path dependent? I know that by definition, forces for which the work done is independent of the path taken are known as conservative forces, while the forces for which the work done is path dependent are known as non-conservative forces. My question is: why is the work done by some forces path independent while for others it is path dependent, or, simply put, why do some forces not dissipate energy while others do? What makes them do so?
| You can see for yourself that if a force depends only on position (not on time and speed), then it is conservative, while if it depends on speed it may not be.
Simple examples should do: take the weight force, $\vec f = m \vec g$: when something falls down from a height $h$, you have $\ W = \vec f \cdot \vec s = mgh.$ Now suppose you pick that thing up and put it back in its former place. Now your work is $\ W = \vec f \cdot \vec s = - mgh$. The minus comes from the fact that you are pushing up, while the weight pushes down. The total work is zero, the same as for the null path.
Now imagine pushing your cell phone on your desk, from point $A$ to point $B$, say $ |AB| = l$ and examine the work done by friction: now $\vec f = - \mu_d mg \hat v, \ $ where $\hat v$ is the unit vector parallel to $\vec v$: work is
$\ W = \vec f \cdot \vec s = - \mu_d mgl$. Now you push your phone back to point $A$: again, friction is opposite to the motion, so once more $\vec f = - \mu_d mg \hat v, \ $ and the work is $\ W = \vec f \cdot \vec s = - \mu_d mgl$. This time the total work along the closed path is not zero, so you see that the work depends on the path.
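If it helps to see the path (in)dependence numerically, here is a small sketch in Python (the mass, friction coefficient and the two paths are made-up choices, not from the examples above) that sums $\vec f \cdot d\vec s$ along two different paths with the same endpoints:
```python
import numpy as np

m, g, mu = 1.0, 9.8, 0.3                      # made-up values

def work_constant_force(path, F):
    """Numerically sum F.ds along a polyline path (rows are points)."""
    return np.sum(np.diff(path, axis=0) @ F)

def work_friction(path):
    """Kinetic friction of magnitude mu*m*g opposing every step of the motion."""
    steps = np.diff(path, axis=0)
    return -mu * m * g * np.sum(np.linalg.norm(steps, axis=1))

t = np.linspace(0.0, 1.0, 2001)
straight = np.c_[t, 1.0 - t]                                  # from (0, 1) to (1, 0)
wiggly   = np.c_[t, 1.0 - t + 0.2 * np.sin(6 * np.pi * t)]    # same endpoints, longer path

F_gravity = np.array([0.0, -m * g])   # paths read as (x, height) in a vertical plane
print(work_constant_force(straight, F_gravity), work_constant_force(wiggly, F_gravity))  # equal
print(work_friction(straight), work_friction(wiggly))  # paths read as (x, y) on a table: different
```
Gravity gives the same work for both paths, while friction gives a more negative work for the longer path, which is exactly the path dependence described above.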
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/289616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Deriving a formula for the moment of inertia of a pie slice of uniform density Say you have a right cylinder of radius $R$, and you take a pie slice of angle $\theta$ at the origin with mass $M$. How can you determine the moment of inertia?
My teacher says it is impossible to derive its moment of inertia given those two variables, but this problem was in our textbook.
| This comes down to a trivial integral, assuming that the relevant axis is the centre of the cylinder:
\begin{align}
I
& = \int_\Omega \rho\:r^2\:\mathrm dV
=\int_0^L\mathrm dz \int_0^\theta\mathrm d\varphi\int_0^Rr\mathrm dr \: \frac{M}{L\theta R^2/2}r^2
\\ & = \frac{2M}{R^2}\int_0^Rr^3\mathrm dr
\\ & = \frac12MR^2.
\end{align}
Note that it is independent of $\theta$ and $L$ (with the only dependence coming if you want to regard $\rho$ instead of $M$ as fixed), as it should be: the relationship between $I$ and $M$ is fixed by the radius of gyration, and this is only a function of the radial density profile.
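If you want a sanity check that the $\theta$-independence is real, here is a quick Monte-Carlo sketch in Python (the values of $M$, $R$ and the sampled angles are arbitrary choices):
```python
import numpy as np

rng = np.random.default_rng(0)

def pie_slice_inertia(M, R, theta, n=1_000_000):
    """Monte-Carlo estimate of I about the cylinder axis for a uniform pie slice."""
    # sample points uniformly over the slice's cross-section: r has density ~ r
    r = R * np.sqrt(rng.uniform(0.0, 1.0, n))
    phi = rng.uniform(0.0, theta, n)          # plays no role in r^2, as expected
    return M * np.mean(r**2)                  # I = integral of r^2 dm = M * <r^2>

M, R = 2.0, 0.5
for theta in (np.pi / 6, np.pi, 2 * np.pi):
    print(f"theta = {theta:.3f}: I = {pie_slice_inertia(M, R, theta):.4f}  (exact {0.5 * M * R**2})")
```
All three estimates come out close to $MR^2/2$, whatever the opening angle.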
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/289827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Non-glassy amorphous solids According to Wikipedia:
*
*A glass is any "solid that possesses a non-crystalline (that is, amorphous) structure at the atomic scale and that exhibits a glass transition when heated towards the liquid state".
*A glass transition is "the reversible transition in amorphous materials (or in amorphous regions within semicrystalline materials) from a hard and relatively brittle 'glassy' state into a viscous or rubbery state as the temperature is increased."
It seems like most familiar non-crystalline solids (e.g. household plastics, cheese) become more ductile and less brittle when heated. Does this mean these are all considered glasses? What is an example of an amorphous solid that is not a glass?
| Plastics are made of polymers, which are chains of molecules, therefore I don't think we can call them glasses even though some of them are amorphous, because they are ordered at the atomic scale.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/289916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How does the Hubbard Stratonovich transformation decouple interactions? I'm having trouble understanding how the Hubbard Stratonovich (HS) transformation decouples equations via the introduction of a field variable. The particular problem I'm facing is a derivation in Phys Rev E, 81, 021501 (2010) equations 2.4 -> 2.7, where the author suggests that
$$ e^{-\beta H} = \exp\left(-\beta\frac{e^2}{2}\int dr\;dr'\;\rho(r)C(r-r')\rho(r')\right) = \int D\phi\; \exp\left(-\beta\int dr\; \left[\frac{1}{2}\epsilon (\nabla\phi)^2 + i\rho e \phi\right]\right) $$
Here, $C(r-r')$ is named the "Coulomb" operator, but is defined as the Greens function of the Poisson equation:
$$ \nabla\cdot[\epsilon \nabla C(r-r')] = \delta(r-r')$$
The other terms are what you'd expect: $\rho$ is a charge density, $\epsilon(r)$ is a dielectric function, $e$ is elementary charge and $\beta$ is $\frac{1}{kT}$.
My stupid question:
Apparently I'm bad at math and simply can't complete the square in the right hand side to get the left hand side. How... do you do this?
My (hopefully) interesting question:
What does it mean, conceptually, to decouple an interaction by introducing a field? I interpret the left hand side of the equation above as "$\rho(r)$ communicates with $\rho(r')$ through the $C(r-r')$ propagator" -- does this new field somehow contain all of this 'communication' information? Can I express the field $\phi$ in terms of $C$?
| Probably you have found some answers by now.
For your first question: integrating by parts on $(\nabla \phi)^2$, you can change it to $\phi \Delta \phi$ (up to a sign and a boundary term), right? Then you can complete the square as usual.
You will have something like $(\phi + A\rho) M (\phi + A\rho) + B\rho M^{-1} \rho$ with A,B some coefficients and $M = \Delta$ (you might want to do it in momentum space)
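For reference, the generic Gaussian identity being used here is (written schematically, dropping the field-independent normalisation and assuming $M$ is symmetric and invertible)
$$\int D\phi\; \exp\left(-\tfrac{1}{2}\int \phi\, M\, \phi \;-\; i\int J\,\phi\right) \;\propto\; \exp\left(-\tfrac{1}{2}\int J\, M^{-1} J\right),$$
which follows from the shift $\phi \to \phi - i M^{-1} J$. In your case $M$ is the operator coming from $\tfrac{\beta}{2}\,\epsilon(\nabla\phi)^2$ after the integration by parts and $J = \beta e\rho$, so matching the two sides identifies $M^{-1}$, up to the factors of $\beta$ and the sign convention chosen for the Green's function, with the Coulomb operator $C(r-r')$.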
For your second question, $C(r - r')$ in the left-hand side of your first equation will be the inverse of $M$, which means it is a propagator (Green's function).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/290016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What does the square root of Laplacian mean? There is a relation in the textbook, "Quantum Field Theory and the Standard Model, Schwartz"
$$\left \langle 0\left | \sqrt{m^2-\vec{\bigtriangledown }^2}\phi _0(\vec{x},t) \right |\psi \right \rangle=\left \langle 0\left | \int \frac{d^3p}{(2 \pi)^3} \frac{\sqrt{\vec{p}^2+m^2}}{\sqrt{2\omega _p}}\left ( a_pe^{-ipx}-a_p^\dagger e^{ipx} \right )\right |\psi \right \rangle, \tag{2.85}$$
where $$\phi _0(\vec{x},t)=\int \frac{d^3p}{(2\pi)^3}\frac{1}{\sqrt{2\omega _p}}\left ( a_pe^{-ipx}+a^\dagger _pe^{ipx} \right ).\tag{2.78}$$
I don't know why there is a minus sign appearing in $a_pe^{-ipx}-a_p^\dagger e^{ipx}$ instead of a plus sign.
| The square root of a differential operator indicates that the Fourier factors of that operator are taken as square roots. In this case,
$$\text{FT}(\nabla^2 \varphi) \propto p^2 \widetilde \varphi$$
$$\text{FT}(\sqrt{\nabla^2} \varphi) \propto \sqrt{p^2} \widetilde \varphi$$
The operator will then be equal to something like
$$\sqrt{\nabla^2} \varphi \propto \int d^3p \ \sqrt{p^2} \widetilde \varphi e^{ipx}$$
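If it helps to see this operator in action, here is a small numerical sketch in Python (the 1D grid, the mass and the Gaussian test function are arbitrary choices) that applies $\sqrt{m^2-\nabla^2}$ via the Fourier transform and checks that applying it twice reproduces $m^2-\partial_x^2$:
```python
import numpy as np

N, L_box, m = 1024, 40.0, 1.3                    # grid points, box length, mass (arbitrary)
x = np.linspace(0.0, L_box, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L_box / N)   # angular wavenumbers

f = np.exp(-(x - L_box / 2) ** 2)                # a smooth, well-localized test function

def sqrt_op(f):
    """Apply sqrt(m^2 - d^2/dx^2): multiply by sqrt(m^2 + k^2) in Fourier space."""
    return np.fft.ifft(np.sqrt(m**2 + k**2) * np.fft.fft(f)).real

twice  = sqrt_op(sqrt_op(f))                               # square-root operator applied twice
direct = np.fft.ifft((m**2 + k**2) * np.fft.fft(f)).real   # (m^2 - d^2/dx^2) f directly

print(np.max(np.abs(twice - direct)))            # ~1e-13: agrees to machine precision
```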
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/290359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Does infinity exist in the structure of physical systems? Sometimes people fail at asking a question by being too broad, unclear like here. So I'll take a stab at what I believe to be the same question, but more concise and clearly stated:
Does infinity exist in the structure of physical systems?
To be clear I'm referring to, systems in the real world, NOT models of systems.
Can the mathematical concept of infinity have any real connection with reality?
Or is infinity purely a mathematical concept just used by physicists as a convenient way to describe the very large, an approximation?
I have heard that if you model a physical system (recently Brian Greene posted a video on YouTube regarding infinity), and you run into infinity as a solution, then you have either made an error in your calculations or your model is wrong.
| Infinity exists in physical theory. There are infinite-dimensional Hilbert spaces, there are cosmological solutions that are infinite in space or time, there is the infinite divisibility of space and time as modelled by the continuum of real numbers.
However, there is always the possibility that the end point of the quantum revolution will be an outlook in which everything about the universe is finite. So I think the answer to the question is: maybe.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/290445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 9,
"answer_id": 8
} |
A paradox about distant galaxies When we observe a galaxy farther than 13 billion light years away, we see that galaxy as it was 13 billion years ago. But back then, that galaxy was much closer to us, if indeed we live in an expanding and accelerating universe. The question is: why do we see it so far away when in fact it was very close to us and the time for the light to reach us was much shorter?
| Have a look at this timeline of the universe:
The x axis is the time axis. After the "dark ages" there are galaxies formed, which become diluted in space as time grows.
The question is, why we see it so far when in fact it was very close to us and the time for the light to reach us was much shorter?
Because light has to travel a larger distance than it would have when the universe was smaller. If we had existed at the formation of the two galaxies, it would have taken less time to see the galaxy. We exist now, and the photons that reach us now have had to travel the larger distance, even though they left at a time when the galaxy was close to ours, because the distance was expanding.
Take the expanding balloon with dots on it. The distance between dots changes with time as the balloon expands. The photons are just a ruler measuring the expanding distance, imo.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/290545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Understanding rocket problem intuitively
A rocket is trying to land on a planet. The mass of the rocket is $1\,\rm kg$, and the gravitational acceleration of the planet is $1\,\rm m/s^2$. The rocket starts the free fall at $20\,\rm m$ above the surface of the planet (initial velocity is $0$), and can use the thrust for $2\,\rm s$ (the force of thrust is $1\,\rm N$).
What is the most reasonable height at which the rocket should start using its thrust for two seconds? (By the way, we ignore the loss of mass due to the use of the thrust.)
I solved the question, but I'm not satisfied. I don't quite understand it intuitively.
Someone said $W = Fs$, and since $F$ (thrust) is the same, when $s$ (distance moved) is the greatest, the work done by the thrust to counteract gravity would be the greatest. Therefore, the most reasonable choice is to start the thrust at the height such that the burn ends exactly when the rocket reaches the ground (the calculation of the actual value of that height is complicated, so I'll skip it; it's not the main point of my question).
When I first tried to solve this, I thought that the chemical energy of the thrust would be used to counteract gravity, and since the chemical energy of the thrust does not change with the velocity at which the rocket moves, I thought that the height at which the thrust is used does not matter: as the total energy (potential energy due to gravity + kinetic energy of the rocket - chemical energy of the thrust) stays the same, the final velocity would be the same. But this is not the answer.
Can anyone please help me understand why I may be wrong?
| Assuming the goal is to minimise impact speed, you should time your burn so that it ends when you land.
Why? Because all the time you are in flight you are gaining speed due to gravity. Burning earlier makes your flight longer and thus increases your impact speed.
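A quick simulation backs this up. The sketch below (Python; the time step and the list of burn heights are my own choices, the rest are the numbers from the problem) drops the rocket from 20 m and fires the 1 N thruster for at most 2 s once it falls below a chosen height:
```python
g, F, m, fuel_total = 1.0, 1.0, 1.0, 2.0          # problem data (SI units)
dt = 1e-4                                         # integration time step (my choice)

def impact_speed(burn_height):
    """Free fall from 20 m; thrust (while fuel lasts) once the rocket is below burn_height."""
    h, v, fuel = 20.0, 0.0, fuel_total
    while h > 0.0:
        thrust_on = (h <= burn_height) and (fuel > 0.0)
        a = -g + (F / m if thrust_on else 0.0)    # here F/m = g, so a = 0 while burning
        v += a * dt
        h += v * dt
        if thrust_on:
            fuel -= dt
    return abs(v)

for hb in (20.0, 15.0, 10.0, 9.27, 5.0, 2.0):
    print(f"start burn at {hb:5.2f} m  ->  impact speed {impact_speed(hb):.2f} m/s")
```
The impact speed is smallest when the burn is started at roughly 9.3 m, which is exactly the height for which the 2 s burn ends at touchdown.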
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/290616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 5
} |
Solenoidal electric field In electrostatics the electric field of a system is always irrotational, ∇×E = 0. The divergence of the electric field is in general non-zero, ∇·E = ρ/ε, but in some cases the divergence is also zero, ∇·E = 0, such as for a dipole: I calculated and got ∇·E = 0 for a dipole.
So in the case of this dipole the divergence and the curl are both zero.
What does it mean when a vector field neither diverges nor rotates at all?
What kind of nature does it have?
∇×E = 0, ∇·E = 0.
It means the electric field is both solenoidal and irrotational, but how can these two conditions be satisfied simultaneously? If a vector field is solenoidal then it has to rotate, it must have some curliness.
But in the picture of a dipole I can see that the electric field is bending or rotating.
Then what does zero curl (∇×E = 0) mean?
I can see the electric field is rotational.
| div E does not vanish everywhere for the dipole; you should get delta functions at the points where the charges sit.
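If you want to check this explicitly, here is a small symbolic sketch in Python with SymPy (the dipole is taken along $z$ with unit moment and all prefactors dropped); both the divergence and the curl simplify to zero away from the origin, which is where the delta-function sources hide:
```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# E-field of a point dipole p = z_hat (prefactors dropped): (3(p.r_hat)r_hat - p)/r^3
Ex = 3*x*z/r**5
Ey = 3*y*z/r**5
Ez = 3*z**2/r**5 - 1/r**3

div_E = sp.simplify(sp.diff(Ex, x) + sp.diff(Ey, y) + sp.diff(Ez, z))
curl_E = [sp.simplify(sp.diff(Ez, y) - sp.diff(Ey, z)),
          sp.simplify(sp.diff(Ex, z) - sp.diff(Ez, x)),
          sp.simplify(sp.diff(Ey, x) - sp.diff(Ex, y))]

print(div_E, curl_E)   # 0 and [0, 0, 0] everywhere except r = 0
```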
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/290724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Line integral of a vector potential From the theory of electromagnetism, the line integral $\int {\bf A}\cdot{d{\bf s}}$ is independent of paths, that is, it is dependent only on the endpoints, as long as the loop formed by pair of different paths does not enclose a magnetic flux.
Why is this true?
| From a slightly different, though equivalent, view...
If the line integral of a vector field is path independent, the vector field is conservative, i.e., the vector field is the gradient of a scalar field.
Thus, if $\int \mathbf{A}\cdot \mathrm{d}\mathbf{s}$ is path independent, it is the case that $\mathbf{A} = \nabla \phi$.
Now, recall that the curl of a gradient is identically zero:
$$\nabla \times \nabla\phi = \mathbf{0}$$
But, the magnetic field is $\mathbf{B} = \nabla \times \mathbf{A}$ and thus, in this case, $\mathbf{B} =0$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/291103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How much information can you obtain from a pulsar-black hole system? Imagine that we have detected an interesting source in the sky that we believe is generated by a pulsar orbiting a black hole.
The challenge here is the following:
What physically relevant information could you extract from the observation of this system?
Note: I am posting an answer with some possible information that we could obtain, but I will NOT mark my answer as the correct one.
| First of all, if the pulsar is not extremely close to the black hole, we should observe almost the usual pattern of a pulsar. Let's start obtaining the distance.
The light that comes from the pulsar may encounter regions with free electrons in the interstellar medium. Those regions introduce a dispersion relation that makes lower-frequency waves travel slower than higher-frequency ones. If we observe the pulsar at two different frequencies $\nu_1$ and $\nu_2$, we will observe that there is a difference in the observation times, $\Delta t = t_2 - t_1$.
Now, let $DM$ be the dispersion measure, i.e. the free electron density integrated to the line of sight: $$DM=\int_0^d n_e(l) ~dl$$
Light travels for a time $t\propto \left( DM\over\nu^2 \right)$, so we get $$\Delta t= k \; DM \left( \frac{1}{\nu_2^2} - \frac{1}{\nu_1^2} \right) $$ where $k$ is a known constant.
That means that if we know the electron density along the line of sight (for example, if we have other pulsars in the same region of the sky at known distance), we can obtain the distance to the object. Similarly, if we know the distance to our system by other means, we can get the electron density of the interstellar medium integrated along the line of sight.
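As a toy numerical illustration (Python): the delay, the observing frequencies and the assumed mean electron density below are invented numbers, and the dispersion constant $k \approx 4.15\ \mathrm{ms\,GHz^2\,cm^3\,pc^{-1}}$ is quoted from memory, so treat the whole thing as order-of-magnitude only:
```python
# Toy estimate: distance from the frequency-dependent arrival delay of a pulsar
K_DM = 4.15          # ms GHz^2 cm^3 / pc  (dispersion constant, approximate)

nu1, nu2 = 1.6, 1.4  # observing frequencies in GHz (assumed)
delta_t = 25.0       # measured extra delay of the low-frequency pulse, in ms (assumed)

# delta_t = K_DM * DM * (1/nu2^2 - 1/nu1^2)  ->  solve for the dispersion measure
DM = delta_t / (K_DM * (1.0 / nu2**2 - 1.0 / nu1**2))   # in pc cm^-3

n_e = 0.03           # assumed mean electron density along the line of sight, cm^-3
d = DM / n_e         # distance in pc, since DM = integral of n_e dl

print(f"DM = {DM:.0f} pc/cm^3, distance = {d:.0f} pc")
```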
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/291188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why are angular frequencies $\omega=2\pi f$ used over normal frequencies $f$? When we first studying vibrations in crystals we begin by studying the monoatomic chain, and then go onto the diatomic chain with a series of alternating masses. In studying these we look to calculate the dispersion relation, which is the angular frequency as a function of the wave vector.
For example, in the monoatomic chain we can derive the dispersion relation as $$\omega=\sqrt{\frac{4C}{M}}\,\Big|\sin\Big(\frac{ka}{2}\Big)\Big|,$$ where $C$ is a 'spring' constant inherent in the crystal structure, $M$ is the mass of the atoms on the chain, $k$ is the wave vector and $a$ is the atomic spacing in the chain.
When studying the diatomic chain, we get two solutions corresponding to the optical (diatomic only) and acoustic (diatomic and monoatomic) waves.
What I don't understand is exactly why we are concerned with an angular frequency. What has the property of angular frequency? As far as I know there is no rotational motion, and the intrinsic frequency of a wave is surely more useful?
In addition to this question, how can we calculate the frequency, $f$ of, say, an optical wave of a diatomic chain given the angular frequency from the dispersion relation, $\omega$?
| Mainly because
$$\frac{\rm{d}}{\rm{d}t}\sin\left(\omega t\right) = \omega\cos\left(\omega t\right)$$
but
$$\frac{\rm{d}}{\rm{d}t}\sin\left(2 \pi f t\right) = 2 \pi f\cos\left(2 \pi f t\right)$$
All those pre-factors every time you take a derivative or an integral get to be a pain to keep track of.
As you get even more mathematical in your physics, you might prefer to work with complex exponentials, $e^{i\omega t}$ instead of sine and cosine, because that makes dealing with differential equations even easier (no flipping back and forth between sine and cosine every time you take a derivative).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/291481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Excited State of an Electron in a 2D Box An electron in a 2D infinite potential well needs to absorb an EM wave with wavelength 4040 nm to be excited from $n=2$ to $n=3$. What is the length of the box if this potential well is a square ($L_x=L_y$)?
My solution:
$$E_{n_x,n_y}=\frac{\pi^2\hbar^2}{2mL^2}(n_x^2+n_y^2)$$
For $n=2$, the energy should be:
$$\frac{\pi^2\hbar^2}{2mL^2}\times(2^2+0^2)$$
, and for $n=3$, the energy should be:
$$\frac{\pi^2\hbar^2}{2mL^2}\times(3^2+0^2)$$
So:
$$\frac{hc}{\lambda}=\Delta E=\frac{5\pi^2\hbar^2}{2mL^2}$$
$$L=\sqrt{\frac{5\pi\hbar}{4mc}\lambda}=2.47 nm$$
What's wrong with my solution? The answer is supposed to be 3.5 nm.
| For a 2-D well the energy is given by the following expression:
$$\boxed{E=\frac{\hbar^{2}\pi^{2}}{2m}\left(\frac{n_{x}^{2}}{L_{x}^{2}}+\frac{n_{y}^{2}}{L_{y}^{2}}\right)}$$
Since this is a case of a square well $L_{x}=L_{y}=L$.
When the electron absorbs an EM wave and gets excited, it jumps from its ground state of $n=2$ to an excited state of $n=3$, as mentioned in the question. What is important to know is that, from the question's perspective, the electron in the $n=2$ state is described by the quantum numbers $n_{x}=n_{y}=2$. An electron needs four quantum numbers to describe it, and as you may know, even if $n_{x}=n_{y}=n_{z}$ coincide for two electrons, their spin quantum numbers must be different, i.e. $s=+\frac{1}{2}$ or $-\frac{1}{2}$, due to the Pauli exclusion principle.
So the answer:
For $n=2$: $$E=\frac{\hbar^{2}\pi^{2}}{2mL^2}(2^2+2^2)\tag{1}$$
For $n=3$: $$E=\frac{\hbar^{2}\pi^{2}}{2mL^2}(3^2+3^2)\tag{2}$$
$$(2)-(1)=\frac{\hbar^{2}\pi^{2}}{2mL^2}(10)=\Delta E=\frac{hc}{\lambda}$$
Hence, $$L=3.499198\times10^{-9}\,\mathrm{m}\approx 3.5\,\mathrm{nm}$$
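A quick numerical check of that value (Python, with standard physical constants; the only physics inputs are $\lambda = 4040$ nm and the level-spacing factor $18-8=10$):
```python
import numpy as np

hbar = 1.054571817e-34   # J s
h    = 6.62607015e-34    # J s
c    = 2.99792458e8      # m/s
m_e  = 9.1093837015e-31  # kg
lam  = 4040e-9           # m

dE = h * c / lam                          # photon energy
# (3^2 + 3^2) - (2^2 + 2^2) = 10, so dE = 10 * pi^2 hbar^2 / (2 m L^2)
L = np.sqrt(10 * np.pi**2 * hbar**2 / (2 * m_e * dE))

print(L)   # ~3.5e-9 m, i.e. about 3.5 nm
```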
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/291569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How does one show that the Pauli Matrices together with the Unit matrix form a basis in the space of complex 2 x 2 matrices? In other words, show that a complex 2 x 2 Matrix can in a unique way be written as
$$
M = \lambda _ 0 I+\lambda _1 \sigma _ x + \lambda _2 \sigma _y + \lambda _ 3 \sigma_z
$$
If$$M = \Big(\begin{matrix}
m_{11} & m_{12} \\
m_{21} & m_{22}
\end{matrix}\Big)= \lambda _ 0 I+\lambda _1 \sigma _ x + \lambda _2 \sigma _y + \lambda _ 3 \sigma_z $$
I get the following equations
$$
m_{11}=\lambda_0+\lambda_3 \\ m_{12}=\lambda_1-i\lambda_2 \\ m_{21}=\lambda_1+i\lambda_2 \\ m_{22}=\lambda_0-\lambda_3
$$
| Even though the question has already been sufficiently answered, I would like to offer the sketch of another "elegant" proof:
The space of complex $2\times 2$ matrices, denoted $M_{2\times 2}(\mathbb{C})$, is isomorphic to $\mathbb{R}^8$ via
\begin{equation}
\begin{pmatrix}
z_{11} & z_{12} \\
z_{21} & z_{22}
\end{pmatrix} \mapsto
\begin{pmatrix}
\Re z_{11} & \Im z_{11} & \Re z_{12} & \Im z_{12} & \Re z_{21} & \Im z_{21} & \Re z_{22} & \Im z_{22}
\end{pmatrix}^\top
\end{equation}
where $\Re$ and $\Im$ denote real and imaginary parts.
Now you want to show that $(I,\sigma_i)$ is a basis of $M_{2\times 2}(\mathbb{C})$ as complex vector space, which is equivalent to $(I,\sigma_i, iI,i\sigma_i)$ being a basis of $M_{2\times 2}(\mathbb{C})$ as real vector space.
The above isomorphism maps the identity and Pauli matrices like:
\begin{align*}
I &\mapsto \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix}^\top\\
\sigma_1 &\mapsto \begin{pmatrix} 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 \end{pmatrix}^\top\\
\sigma_2 &\mapsto \begin{pmatrix} 0 & 0 & 0 & -1 & 0 & 1 & 0 & 0 \end{pmatrix}^\top\\
\sigma_3 &\mapsto \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 \end{pmatrix}^\top\\
iI &\mapsto \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}^\top\\
i\sigma_1 &\mapsto \begin{pmatrix} 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 \end{pmatrix}^\top\\
i\sigma_2 &\mapsto \begin{pmatrix} 0 & 0 & 1 & 0 & -1 & 0 & 0 & 0 \end{pmatrix}^\top\\
i\sigma_3 &\mapsto \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 & 0 & -1 \end{pmatrix}^\top
\end{align*}
which can trivially be seen to be a basis of $\mathbb{R}^8$ as a real vector space.
Therefore, since isomorphisms map bases to bases (in both directions), we are done.
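If you just want to verify the linear-independence claim numerically, a short sketch in Python with NumPy is to stack the eight real 8-vectors above as rows of a matrix and check that it has full rank:
```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def to_R8(M):
    """Flatten a complex 2x2 matrix into (Re z11, Im z11, ..., Re z22, Im z22)."""
    return np.concatenate([[z.real, z.imag] for z in M.flatten()])

basis = [I2, sx, sy, sz, 1j * I2, 1j * sx, 1j * sy, 1j * sz]
A = np.array([to_R8(M) for M in basis])

print(np.linalg.matrix_rank(A))   # 8 -> the eight matrices are linearly independent over the reals
```
Full rank means the eight matrices span the 8-dimensional real space, hence form a basis.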
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 2
} |
Is acceleration continuous? The extrapolation of this Phys.SE post.
It's obvious to me that velocity can't be discontinuous, as nothing can have infinite acceleration.
And it seems pretty likely that acceleration can't be discontinuous either - that jerk must also be finite.
All 4 fundamental forces are functions of distance so as the thing exerting the force approaches, the acceleration must gradually increase (even if that approach/increase is at an atomic, or sub-atomic level)
e.g. in a Newton's cradle, the acceleration is still due to electromagnetic repulsion, so it's a function of distance, and thus it's not changing instantaneously, however much we perceive the contact to be instantaneous. (Even if we ignore the non-rigidity of objects.)
Equally I suspect that a force can't truly "appear" at a fixed level. Suppose you switch on an electromagnet, if you take the scale down far enough, does the strength of the EM field "build up" from 0 to (not-0) continuously? or does it appear at the expected force?
Assuming I'm right, and acceleration is continuous, then jump straight to the infinite level of extrapolation ...
Is motion mathematically smooth?
Smooth (smoothness): being infinitely differentiable at all points.
| Not a physicist, but I think acceleration can be discontinuous. Consider a car travelling at constant velocity (acceleration = 0) that hits a wall. Deceleration (negative acceleration) commences until the car comes to a complete stop. For all intents and purposes, over time $t$ the acceleration starts at zero, drops to a negative value (because of the deceleration), and then instantaneously jumps back to zero.
My 2 cents.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
Can Newton's laws of motion be proved (mathematically or analytically) or are they just axioms? Today I was watching Professor Walter Lewin's lecture on Newton's laws of motion. While defining Newton's first, second and third law he asked "Can Newton's laws of motion be proved?" and according to him the answer was NO!
He said that these laws are in agreement with nature and experiments follow these laws whenever done. You will find that these laws are always obeyed (to an extent). You can certainly say that a ball moving with constant velocity on a frictionless surface will never stop unless you apply some force on it, yet you cannot prove it.
My question is: if Newton's laws of motion can't be proved, then what about those proofs which we do in high school (see this, this)?
I tried to get the answer from previously asked question on this site but unfortunately none of the answers are what I am hoping to get. Finally, the question I'm asking is: Can Newton's laws of motion be proved?
| They are an approximation to General Relativity, so yes, they can be proven using general relativity.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 12,
"answer_id": 0
} |
4-momentum of meson in nucleon scattering Consider the nucleon scattering in scalar Yukawa theory.
Suppose that we are NOT using Feynman diagrams (rules) and instead use the more tedious, formal Dyson-Wick method.
How do we establish or derive this relationship between 4-momentum of meson and 4-momentum of nucleon:
$$
k=p_{1}-p_{1}^\prime
\tag1
$$
In particular why not
$$
k=p_{1}+p_{2}-p_{1}^\prime-p_{2}^\prime
\tag2
$$
| I don't know if this answer will satisfy you completely since I'm not familiar with the Dyson-Wick formalism. But if I have scattering between two nucleons, with initial momenta $p_1, p_2$ and final momenta $p_1', p_2'$, and no additional particles in the final state, the conservation of momentum gives me
\begin{align}
\text{initial momentum } p &= p_1 + p_2
\\
\text{final momentum } p' &= p_1' + p_2'
\\
p-p' &= (p_1 + p_2) - (p_1' + p_2') \equiv 0.
\end{align}
So your proposed definition for $k$ vanishes identically.
Because momentum is conserved in elastic scattering, we can determine the momentum transferred between the two nucleons by looking at either of them in both the initial and the final state.
What characterizes the interaction, therefore, is the change in the momentum of either particle:
$$
k \equiv p_1'- p_1 = -(p_2'- p_2)
$$
The Feynman diagram suggests very strongly that we should assign this momentum to some virtual particle. In a scattering-matrix approach, we declare our ignorance of what's happening at the scattering vertex and look for some operator that transforms our initial momenta $(p_1,p_2)$ to our final momenta $(p_1', p_2')$.
The only parameter that's available to characterize this matrix is the momentum transfer $k$.
If the matrix is associated with some scalar field, then $k$ must be the four-momentum associated with that field.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is a pendulum in dynamic equilibrium? When obtaining the equation of a pendulum following classical mechanics (Virtual Work) we state that:
The system is in equilibrium, therefore $\textbf{F} = 0$ and the Virtual Work is
$$\textbf{F} · \delta \textbf{r} = 0\tag{1}$$
But is a pendulum in equilibrium? I mean, the velocity of the pendulum changes with time, so how can we say that the pendulum is in equilibrium?
Often the following expression is also used
$$\textbf{F} - m \ddot{\textbf{r}} = 0\tag{2}$$
to express this equilibrium, but it isn't an equilibrium at all, since the only thing we do is move the inertial force to the left-hand side of Newton's second equation $\textbf{F} = m \ddot{\textbf{r}}$.
Goldstein says in his book that equation (2) means: that the particles in the system will be in equilibrium under a force equal to the actual force plus a "reversed effective force" $- m \ddot{\textbf{r}}$.
What does this mean, and how does it apply to the pendulum?
| Equilibrium is not a well-defined term. Loosely it means "a state in which opposing forces or influences are balanced."
Dynamic equilibrium usually means that two (or possibly more) processes are going on simultaneously but their net effect is zero. For example : in a population if the rates of births and deaths are equal then the population number is stable. The individuals in the population are changing, but the total number remains constant.
A pendulum at rest is in a position of stable equilibrium. But a swinging pendulum is not in equilibrium at all. The force on it varies periodically, becoming zero when it passes through the equilibrium point. The bob is not stationary, it is moving. This is not dynamic equilibrium.
Perhaps what you are trying to say is that the pendulum is a conservative system : energy is conserved. It changes back and forth between potential and kinetic.
I do not have access to Goldstein's book so I cannot tell what he means or how his remarks apply to the pendulum. You need to examine the quote in the context of the chapter or section.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Can we think of the EM tensor as an infinitesimal generator of Lorentz transformations? I'm asking this question because I'm feeling a bit confused about how Lorentz transformations relate to the electromagnetic tensor, and hope someone can help me clear out my possible misunderstandings. Please excuse me if the answer is obvious.
In special relativity, the EM field is described by the tensor
$$F^{\mu\nu} = \begin{pmatrix}0 & -E_{x} & -E_{y} & -E_{z}\\
E_{x} & 0 & -B_{z} & B_{y}\\
E_{y} & B_{z} & 0 & -B_{x}\\
E_{z} & -B_{y} & B_{x} & 0
\end{pmatrix}$$
which is an anti-symmetric matrix. Then, recalling the one-to-one correspondence between skew-symmetric matrices and orthogonal matrices established by Cayley's transformation, one could view this tensor as an infinitesimal rotation matrix, that is, a generator of 4-dim pseudo-rotations. This seems at first natural: given that space-time 4-velocities and 4-momenta for a fixed-mass particle have fixed 4-vector norms, all forces (including EM) and accelerations on the particle will be Lorentz transformations. However, this page is the only reference I've found that states such a relationship (and I don't fully understand the discussion which follows, which I find somewhat disconcerting).
*
*Is this line of reasoning correct?
On the other hand, according to Wikipedia, a general Lorentz transformation can be written as an exponential,
$$\mathbf \Lambda(\mathbf ζ,\mathbf θ) = e^{-\mathbf ζ \cdot \mathbf K + \mathbf θ \cdot \mathbf J}$$
where (I'm quoting) $\mathbf J$ are the rotation generators which correspond to angular momentum, $\mathbf K$ are the boost generators which correspond to the motion of the system in spacetime, and the axis-angle vector $\mathbf θ$ and rapidity vector $\mathbf ζ$ are altogether six continuous variables which make up the group parameters in this particular representation (here the group is the Lie group $SO^+(3,1)$). Then, the generator for a general Lorentz transformation can be written as
$$-\mathbf ζ \cdot \mathbf K + \mathbf θ \cdot \mathbf J = -ζ_xK_x - ζ_yK_y - ζ_zK_z + θ_xJ_x + θ_yJ_y +θ_zJ_z = \begin{pmatrix}0&-\zeta_x&-\zeta_y&-\zeta_z\\ \zeta_x&0&-\theta_z&\theta_y\\ \zeta_y&\theta_z&0&-\theta_x\\ \zeta_z&-\theta_y&\theta_x&0\end{pmatrix}.$$
*
*How does this matrix relate with the EM tensor? By comparison between the two matrices, it would appear that the components of the electric and magnetic field ($\mathbf E$ and $\mathbf B$) should be linked, respectively, with $\mathbf ζ$ and $\mathbf θ$. I'm missing what the physical interpretation of this would be.
| The electromagnetic field strength tensor is not a Lorentz generator.
First, even when written in matrix form, the signs are wrong. The boost generators are of the form
$$ \begin{pmatrix} 0 & v_x & v_y & v_z \\ v_x & 0 & 0 & 0 \\ v_y & 0 & 0 & 0 \\ v_z & 0 & 0 & 0 \end{pmatrix},$$
which are not antisymmetric.
Second, the EM tensor is not a matrix, it's a 2-form $F = F_{\mu\nu}\mathrm{d}x^\mu \wedge\mathrm{d}x^\nu$, while the Lorentz generators are actual matrices, not coefficients of a form. Writing $F_{\mu\nu}$ as a matrix does not reflect its geometric nature. It does not generate Lorentz transformations, it itself transforms under them as an ordinary 2-tensor.
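To make the first point concrete, here is a small numerical sketch (Python with NumPy/SciPy; the metric signature and the rapidity value are my choices): exponentiating the symmetric, non-antisymmetric boost generator gives the familiar cosh/sinh boost, which preserves the Minkowski metric:
```python
import numpy as np
from scipy.linalg import expm

eta = np.diag([-1.0, 1.0, 1.0, 1.0])          # Minkowski metric (a signature choice)

Kx = np.zeros((4, 4))
Kx[0, 1] = Kx[1, 0] = 1.0                      # boost generator along x: symmetric, not antisymmetric

zeta = 0.7                                     # an arbitrary rapidity
Lam = expm(zeta * Kx)

print(np.round(Lam, 4))                        # cosh/sinh block in the t-x plane
print(np.allclose(Lam.T @ eta @ Lam, eta))     # True: it is a Lorentz transformation
```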
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 2,
"answer_id": 0
} |
Magnetic effect on AC circuits? We know that when the currents in two wires flow parallel to each other the wires attract each other, and when they flow anti-parallel they repel each other, but we cannot observe this in our daily routine; why?
I know this experiment is based on a DC circuit, and we cannot observe this in electric transmission lines because in our homes we usually use AC.
But what is the actual reason that we cannot observe an attractive or repulsive force in everyday circuits? If it is actually because of AC currents, then why does AC current not show a magnetic effect?
| We absolutely observe this in daily life. Every time you see a motor running, you see the effect of the force of currents on each other. I recommend the following experiment:
Create an apparatus with two parallel conductors (fairly close together) where you can adjust the tension in the wires (like a guitar - but make sure you take care of insulation). "Tune" the tension in the wires to twice the frequency of the mains power in your country (50 Hz or 60 Hz). Then make the wires part of an electrical circuit. You will see them starting to vibrate - confirming that there is a force (with a frequency equal to the frequency of the mains power) between the wires. Why twice? Because if the currents in the two wires are in phase, the direction of force will be the same twice per cycle - so a 50 Hz mains frequency will excite a string tuned to 100 Hz.
Usually, conductors run in pairs, with a fixed distance (insulator) between them. Such a configuration prevents you from noticing the force between them. Note also that the force is not large - about $2\times 10^{-7}~\rm{N}$ per metre of wire for two wires that are 1 m apart and carrying 1 A of current - which makes it hard to notice unless the currents are very large, or you set up a sensitive experiment.
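For the size of the effect, a one-line estimate in Python using $F/L = \mu_0 I_1 I_2/(2\pi d)$ (the 10 A / 2 mm household case at the end is just an example value I picked):
```python
import numpy as np

mu0 = 4e-7 * np.pi              # vacuum permeability, T m / A
I1, I2, d = 1.0, 1.0, 1.0       # amps, amps, metres (example values)

F_per_m = mu0 * I1 * I2 / (2 * np.pi * d)
print(F_per_m)                  # 2e-7 N per metre of wire, as quoted above

# a household-style example: 10 A in two wires 2 mm apart
print(mu0 * 10 * 10 / (2 * np.pi * 2e-3))   # ~0.01 N per metre
```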
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is nuclear waste more dangerous than the original nuclear fuel? I know the spent fuel is still radioactive. But it has to be more stable than what was put in and thus safer than the uranium that we started with. That is to say, is storage of the waste such a big deal? If I mine the uranium, use it, and then bury the waste back in the mine (or any other hole) should I encounter any problems? Am I not doing the inhabitants of that area a favor as they will have less radiation to deal with than before?
| First, the output of a reaction is not necessarily less dangerous than, or at most as dangerous as, its input. Take dynamite for example(*): glycerin is a rather harmless material; nitric acid is a strong acid for sure, but still not as dangerous as the nitroglycerin (the active ingredient of dynamite) that results from the reaction of those two.
In a nuclear reactor, the input fuel is a mixture of mostly uranium 238 ($\rm ^{238}U$, a very mildly radioactive material), 2-3% uranium 235 ($\rm ^{235}U$, which is more radioactive than $\rm ^{238}U$, though still very mild compared with other radioactive materials, plenty of which will result from the fission, or split, of this nucleus), and others.
To produce energy, a nuclear reactor splits $\rm ^{235}U$ nuclei into some lighter elements (this is the source of power, not its radioactivity). Almost all of the resulting elements are radioactive themselves, with their own radioactive properties. This is only part of the origin of the radioactive materials of a reactor’s waste.
The other part appears from a process known as activation. By this process, previously non-radioactive materials from the fuel rod will also become radioactive.
Combined, the waste result of a nuclear reactor is far more dangerous than the input fuel. As a matter of fact, when the fuel is inserted into the reactor, workers handle it directly, just using special gloves (not necessarily too thick or with a lot of protective material as lead). However, removing it from the reactor must be done remotely.
(*) This is just an analogy. nuclear reactions are a totally different process from chemical reactions. Still, the point is, products are not necessarily safer than inputs.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/292958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "150",
"answer_count": 8,
"answer_id": 4
} |
Stagnation points (Body vs Tube) Can anyone please help me understand the stagnation points? If we look at the comparison between the flow of air over a wing and flow in a pitot tube, the theory says that the velocity is 0 (or very close to 0) for both cases. Having a stream hitting the wing, all the kinetic energy goes into internal energy. This is very intuitive! But why is the velocity considered 0 at the entrance of a pitot tube? It's very counterintuitive for me...
Can anyone help me to understand this? How can the velocity be 0 at the entrance of the pitot tube? This issue has been following me for days.
Thank you!
| From what I read on Wikipedia (sorry, I'm not an expert), air doesn't flow through the tube. So its velocity is 0.
... the moving fluid is brought to rest (stagnates) as there is no outlet to allow flow to continue
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/293040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can localized wavepackets have mass? Page 31 of David Tong's notes on QFT (also in Srednicki's book while discussing LSZ reduction formula), talks about Gaussian wavepackets $$|\varphi\rangle=\int \frac{d^3\textbf{p}}{(2\pi)^{3}}e^{-i\textbf{p}\cdot\textbf{x}}\varphi(\textbf{p})|\textbf{p}\rangle$$ with $\varphi(\textbf{p})=\exp[-\textbf{p}^2/2m^2]$ such that the state is somewhat localized in position space and somewhat localized in momentum space. My question is whether such state satisfy relativistic dispersion relation (RDR) $E^2-\textbf{p}^2=m^2$, if the one-particle Fock states $|\textbf{p}\rangle$ satisfy, $E^2-\textbf{p}^2=m^2$. If not, can it faithfully represent a real physical particle?
EDIT: Is it possible to consider a different function than $\varphi(\textbf{p})=\exp[-\textbf{p}^2/2m^2]$ so that the state is at the same time somewhat localized and also has a mass $m$?
| $$P^2 \, \int \text d ^3 \mathbf p f(\mathbf p ) \vert \mathbf p \rangle = \int \text d ^3 \mathbf p f(\mathbf p ) P^2 \vert \mathbf p \rangle = m^2\int \text d ^3 \mathbf p f(\mathbf p ) \vert \mathbf p \rangle$$
All these states are by definition on the mass shell (for each wavefunction $f$). Note that the localization in position is just a heuristic concept, if they have not introduced a relativistic position operator. It means that $$\intop \text d ^3 \mathbf p f_1(\mathbf p )^* f_2(\mathbf p)\approx 0$$ irrespective of the momentum distributions $\vert f_{1,2}(\mathbf p)\vert ^2$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/293256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the role of pillars in bridges?
As I can see in the picture, there are many pillars holding up the bridge. This picture raised a question for me: what are these pillars doing below the bridge? An appropriate answer could be "they are providing support to the bridge".
I tried to get the answer as follows:
In the first image there are two pillars holding a bridge of mass $M$; since the gravitational force acts downwards, each pillar bears a force of $\frac{1}{2}Mg$.
In the second image there are four pillars, each bearing a force of $\frac{1}{4}Mg$. I'm assuming that the mass of the bridge is uniformly distributed and each pillar bears an equal share of the load.
Now the question is: since the pillars are bearing the force, if we make pillars strong enough to bear a large force, then there will be no need for so many pillars.
But that is not the case; we see a large number of pillars holding a bridge. What is wrong with the work I did? Shouldn't the number of pillars depend upon the strength of the pillars we make rather than the length of the bridge?
I shall be thankful if you can provide more information about this topic.
| There's another reason for these pillars that is yet to be mentioned in these answers.
If you look at the picture you can see that the pillars don't hold the bridge up on their own. They extend well above the bridge deck and have many cables coming off of them for suspension.
These wires provide some force to hold the bridge up away from the pillars, supporting the span.
As you can see from the image, as the bridge deck gets further from the pillar, the angle of the suspension cable becomes more horizontal.
If you want the cable to actually provide suspension force, it must be angled towards the vertical as much as possible. With more pillars, the maximum deviation of the cables from the vertical will decrease, as every point of the deck is close to a pillar. This means that closer pillars don't only provide more support against moments from below, but also from above.
Although the logic of multiple pillars can apply to every bridge, in suspension bridges this effect is especially important so that the cables can get the required vertical component of the tension.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/293451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 4
} |
If an electron is in ground state, why can't it lose any more energy? As far as I know, an electron can't go below what is known as the ground state, which has an energy of -13.6 eV, but why can't it lose any more energy? is there a deeper explanation or is this supposed to be accepted the way it is?
| The energy of bound particles (in quantum systems) has two characteristics:
*
*The energy is quantized.
*The lowest allowable energy level is non-zero.
This is true of all bound particle systems, whether atoms, quantum oscillators or other.
Let us have a look at the simplest quantum system of all: the single particle in a 1D box, with zero potential. In this system a particle is trapped in a 1D box (or tube), confined by infinitely high potential energy walls but with zero potential energy within the confines of the box, also known as an infinite potential well. As per the link above the wave function is given by:
$$\psi_n(x)=\sqrt{\frac2a}\sin\Big(\frac{n\pi x}{a}\Big)$$
Where $a$ is the length of the box. The derivation shows that the only allowable values for $n$ are positive integers:
$$n=1,2,3,...$$
This makes sense, as for $n=0\implies\psi_0(x)=0$. As the probability density is given by:
$$\rho(x)=[\psi(x)]^2,$$
for $n=0$, the probability density becomes zero and this would mean the particle
has escaped, which is impossible due to the infinitely high potential walls.
Also per the link above the only allowed energy levels are given by:
$$E_n=\frac{\pi^2\hbar^2}{2ma^2}n^2$$
The lowest allowable energy (the ground state) is for $n=1$:
$$E_1=\frac{\pi^2\hbar^2}{2ma^2}$$
For any 'lower' (or non-integer values) of $n$ the Schrödinger equation of the 1D box:
$$-\frac{\hbar^2}{2m}\frac{\mathrm{d^2}}{\mathrm{d}x^2}\psi_n(x)=E_n\psi_n(x),$$
is no longer satisfied.
For a hydrogen atom the situation is remarkably similar (the potential well is shaped as shown in one of the answers above). The energy is quantised as:
$$E_n=\frac{-13.6\mathrm{eV}}{n^2},$$
for $n=1,2,3,...$. From the hydrogen wave functions it can be seen that $n=0$ would correspond to the electron residing in the nucleus! Obviously that does not constitute an atom.
The lowest allowable energy level (or ground state) is $-13.6\mathrm{eV}$ and the atom cannot lose any more energy.
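For concreteness, here is a short numerical sketch in Python of the two quantized spectra used above; the 1 nm well width is an arbitrary example:
```python
import numpy as np

hbar = 1.054571817e-34      # J s
m_e  = 9.1093837015e-31     # kg
eV   = 1.602176634e-19      # J
a    = 1e-9                 # well width: 1 nm (arbitrary example)

n = np.arange(1, 5)

E_box = (np.pi**2 * hbar**2 / (2 * m_e * a**2)) * n**2 / eV   # in eV
E_hyd = -13.6 / n**2                                          # in eV

print("infinite well (eV):", np.round(E_box, 3))   # ~0.376, 1.504, 3.384, 6.016
print("hydrogen      (eV):", np.round(E_hyd, 3))   # -13.6, -3.4, -1.511, -0.85
# in both cases n = 1 is the lowest allowed state; there is no n = 0 level to fall to
```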
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/293543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 2
} |
Symmetric limit in Peskin and Schroeder I have a question on page 655 of Peskin and Schroeder.
The second equation of (19.23) is discussed here.
But the first equation of (19.23) is still a mystery.
$$ \underset{\epsilon \to 0}{\text{symm lim}}\left\{\frac{\epsilon^{\mu}}{\epsilon^2}\right\} =0 $$
How can we understand this?
| Look at (19.27).
$$ \bar\psi(x+\varepsilon/2)\,\Gamma\,\psi(x-\varepsilon/2) = \frac{-i}{2\pi} \mathrm{tr} \left[ \frac{\gamma^{\alpha}\epsilon_{\alpha}}{\epsilon^2} \Gamma \right]\tag{19.27} $$
where the two fermion fields are contracted.
And note the first sentence of the paragraph just below (19.27) :
Because the contraction of fermion fields is singular as $\epsilon \to 0$, the terms of order $\epsilon$ in the last line of (19.25) can give a finite contribution.
i.e. when one puts $\Gamma =I$ in (19.27), one should get a divergent quantity.
So the first expression in (19.23) is misprinted. It should be replaced by
$$ \underset{\epsilon \to 0}{\text{symm lim}}\Bigl\{\frac{\epsilon^{\mu}}{\epsilon^2}\Bigr\} \to \infty$$
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/294126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why will we never run into a magnetic field that falls off as $\frac 1 {r^2}$? For example, Walter Lewin says in many lectures that we will never find a magnetic field $B\propto \frac 1 {r^2}$ - why is this?
I believe it must be related to $\nabla \times E= -\partial_t B$, but I don't see why this would make the previous impossible.
| I believe what he is implying is that there are no magnetic monopoles (that we know of), at least in classical electrodynamics. A magnet has a south and a north pole (a dipole), which produces a field (the vector potential)
$$
\mathbf{A} (\mathbf{r}) = \frac{\mu_0}{4\pi r^2} \frac{\mathbf{m} \times \mathbf{r}}{r} = \frac{\mu_0}{4\pi} \frac{\mathbf{m} \times \mathbf{r}}{r^3}
$$
where $\mathbf{m}$ (vectorial quantities are bolded) is the magnetic moment of this N-S-dipole that is kept constant while the source shrinks to a point. This is the limit of a dipole field. You can read more here on Wikipedia.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/294640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What does antimatter look like? I have seen simulations of antimatter on TV. Has antimatter ever been photographed?
| Antimatter-antimatter interaction is, to the best of our knowledge of physics, chemically identical to matter-matter interactions. Any symmetry breaking is so small that it would have no observable effects at human scales.
Its interaction with photons is also identical.
The only important way it interacts differently is the matter-antimatter reaction, where it annihilates and releases a large amount of energy.
So the short answer is, it looks like matter. But it only looks like matter if it is completely isolated from matter.
Doing so is very hard.
Suppose we had a 1 kg block of antimatter gold floating in interstellar space, in hard vacuum, with a particle density of 10 hydrogen atoms per cm^3 at 100 K (in a "filament" of gas in interstellar space).
It would form a cube just under 4 cm on a side, with a surface area of about 100 cm^2.
The speed of sound in space is about 100 km/s. This is roughly how fast the atoms in the interstellar medium are traveling.
This gives us:
100 km/s * 100 cm^2 * 1.7 * 10^-24 g/cm^3 * c^2
which is 0.15 watts.
So a 1 kg cube of anti-gold glows with the heat of 0.15 Watts in a hard vacuum. In near-Earth space, it would be a few times brighter due to the solar wind.
On Earth or in a pressurized atmosphere, it is a bit brighter: 3 * 10^17 Watts.
So a block of anti-gold floating in space would look mostly like gold. At least until you disturbed it with the residue of your rocket thrusters.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/294966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "63",
"answer_count": 6,
"answer_id": 0
} |
Confusion between two different definitions of work? I'm doing physics at high school for the first time this year. My teacher asked us this question: if a box is slowly raised from the ground to 1m, how much work was done? (the system is only the box)
Using the standard definition, $W = Fd\cos(\theta)$, the work should be 0, because the sum of the forces, the force due to gravity and the force of the person, is 0.
However, using the other definition he gave us, $W = \Delta E$, work is nonzero. $\Delta E = E_f - E_i$ , so that would be the box's gravitational potential energy minus zero.
My teacher might have figured it out but class ended. Does anyone have any insight?
| You have a teacher who knows his/her Physics.
the system is only the box
That statement made by your teacher immediately means that there can be no mention of gravitational potential energy, as it is the system comprising the box and the Earth which has gravitational potential energy.
A system comprising the box alone cannot have gravitational potential energy.
Acting on the box there are two equal in magnitude but opposite in direction (external) forces: the gravitational attractive force of the Earth on the box and the force exerted on the box due to the person.
The second equation $W=\Delta E$ is the work-energy theorem which states that the work done on a system is equal to the change in kinetic energy of the system.
In the example given the work done on the box is zero and the change in kinetic energy of the box is zero just the result found using the first equation.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/295245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 2
} |
Why do excited states decay if they are eigenstates of Hamiltonian and should not change in time? Quantum mechanics says that if a system is in an eigenstate of the Hamiltonian, then the state ket representing the system will not evolve with time. So if the electron is in, say, the first excited state then why does it change its state and relax to the ground state (since it was in a Hamiltonian eigenstate it should not change with time)?
| One way to look at this is the Fermi's Golden Rule:
If we add a time dependent perturbation to the time independent Hamiltonian,
$$
H=H_{0}+H_{1}
$$
plug it in the schrodinger's equation,
$$
(H_{0}+H_{1})|\phi(t)\rangle=i\hbar\partial_{t}|\phi(t)\rangle
$$
calculate the transition coefficients,
$$
c_{f\to i}(t)=\frac{1}{i\hbar}\int_{0}^{t}\langle f|H_{1}|i\rangle e^{i\omega_{fi}t}dt
$$
add an approximation like the dipole approximation for atoms
$$
H_{1} = q\,\vec{r}\cdot\vec{E}
$$
and compute for an actual atom, we get that the transition rate between the excited state f and ground state i is non-zero even if these are time-independent states.
As others have said, you can't get rid of EM fields in the "real world". EM fields, which are waves of electric and magnetic fields generated by charged particles, are omnipresent because electric fields decay as $$E = \frac{\text{constant}}{r^2}$$ and disappear only at infinity. So the transition rates would always be non-zero.
If the transition rates are non-zero, the atom will de-excite even if there is no external time-dependent disturbance.
Further Reading
http://staff.ustc.edu.cn/~yuanzs/teaching/Fermi-Golden-Rule-No-II.pdf
http://www.chemie.unibas.ch/~tulej/Spectroscopy_related_aspects/Lecture25_Spec_Rel_Asp.pdf
http://farside.ph.utexas.edu/teaching/qmech/Quantum/node117.html
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/295365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50",
"answer_count": 3,
"answer_id": 1
} |
Degeneracy of two electrons on a ring The one-particle solution to the particle-on-a-ring problem is $\psi_m(\phi_j) = \frac{1}{\sqrt{2\pi}}\exp\left(-im \phi_j\right)$ for $m=0, \pm 1, \pm 2, \cdots$ corresponding to energies $E_m = \frac{m^2\hbar^2}{2I}$ where $I=MR^2$ is the moment of inertia.
I'm interested in the spatial wavefunction for two electrons on this ring. For the ground state, both can occupy the $m=0$ state:
$$
\Psi_0 = \psi_0(\phi_1)\psi_0(\phi_2) = \frac{1}{2\pi}.
$$
This state is, by my understanding, non-degenerate.
My question is: what is the degeneracy of the first excited energy level (ignoring spin)? My first thinking was that it should be 4, since each of the two electrons can have $m=\pm 1$ whilst the other has $m=0$.
$$
\Psi_1^{(a)} = \frac{1}{\sqrt{2}}\left[\psi_{0}(\phi_1)\psi_{+1}(\phi_2) + \psi_{+1}(\phi_1)\psi_{0}(\phi_2)\right]\\
\Psi_1^{(b)} = \frac{1}{\sqrt{2}}\left[\psi_{0}(\phi_1)\psi_{+1}(\phi_2) - \psi_{+1}(\phi_1)\psi_{0}(\phi_2)\right]\\
\Psi_1^{(c)} = \frac{1}{\sqrt{2}}\left[\psi_{0}(\phi_1)\psi_{-1}(\phi_2) + \psi_{-1}(\phi_1)\psi_{0}(\phi_2)\right]\\
\Psi_1^{(d)} = \frac{1}{\sqrt{2}}\left[\psi_{0}(\phi_1)\psi_{-1}(\phi_2) - \psi_{-1}(\phi_1)\psi_{0}(\phi_2)\right]\\
$$
But the answer is apparently not 4 and I have fallen into the trap of "failing to account for the indistinguishability of electrons". I thought that my symmetrized and antisymmetrized products did just that, though. Are these four states not distinct?
What is the correct way of thinking about this?
| The electrons can have different energies, so they don't both need to be in the one-particle excited state.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/295442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
If a planet's core is heated by its gravitational pressure, where does that energy come from? I just saw a show about the theorized "Planet 9." One possibility is that it's an ice planet. But it could have a liquid water core from its gravitational energy crushing in on the core and making it heat up. Where does the energy for that heat come from? If the gravitational energy comes from the planet's mass, shouldn't the energy be constant? Unless some other part of the process makes the planet lose mass.
| There are two sources of the heat of a planet's core:
There is the original potential energy of the asteroids that fell together to form the planet.
The material of the core is such a good insulator that the small amount of radioactivity that occurs naturally in the core material is enough to keep the core molten after the planet has formed.
See
https://www.scientificamerican.com/article/why-is-the-earths-core-so/
http://phys.org/news/2006-03-probing-earth-core.html
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/295721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is general relativity a background dependent theory in five dimensions? I read the article What is a background-free theory? by John Baez and was wondering that if I add a fifth dimension to a background independent theory like general relativity I get a background dependent theory like the Maxwell's equations. The only difference: In Maxwell's equations you have electromagnetic fields. In five dimensions you have spacetime fields,- or spacetime-fluidflows or whatever you want to call it.
I couldn't find good arguments against or in favor of this viewpoint.
| A background consists of non-dynamical data for a theory. E.g. for field theories in curved spacetimes, the metric $g_{\mu\nu}$ is a non-dynamical fixed background. In contrast, for general relativity in any spacetime dimension, the metric $g_{\mu\nu}$ is a dynamically active field, and hence not a background.
For the notion of background-independence, see e.g. Wikipedia.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/296157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Charge determines masses? Why are the masses of the $\Sigma^-$ ($1197\ \mathrm{MeV}$) and $\Sigma^+$ ($1189\ \mathrm{MeV}$) particles are not exactly equal?
$\Sigma^-$ has quark content $\rm dds$ and $\Sigma^+$ has $\rm uus$... I have been thinking that this has to do with their charge, but I am not sure how that directly relates to mass?
| The $\Sigma^\pm$ are not antiparticles of each other; both are baryons with a single strange quark and nonzero isospin. If isospin were an exact symmetry then they would have the same mass — but if isospin were an exact symmetry, the proton and neutron would have the same mass as well, and our universe would be very different.
The antiparticle of the $\Sigma^+$ is an antibaryon with quark content $\rm\bar u\bar u\bar s$, and similarly for the $\Sigma^-$.
I'm actually not 100% sure how to write that particle's name; in the first draft of that sentence I made a different choice than in this old answer.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/296557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why is bench pressing your bodyweight harder than doing a pushup? Why does bench pressing your own bodyweight feel so much harder than doing a push-up?
I have my own theories about the weight being distributed over multiple points (like in a push-up) but would just like to get a definite answer.
| When doing push-ups, you are making your body into a lever! Your feet are the fulcrum. So you get the mechanical advantage that makes levers useful. It's just like how lifting the handles of a wheelbarrow (pivoting on the wheel) is a lot easier than simply picking up the contents of the wheelbarrow.
For similar reasons, doing dips with your legs freely dangling is a lot harder than doing a bench dip where your legs are extended with heels on the ground.
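To put a rough number on it (the centre-of-mass position below is an assumption, not a measurement): balancing torques about the feet, the hands carry
$$
F_{\text{hands}}=W\,\frac{d_{\text{CoM}}}{d_{\text{hands}}},
$$
where $d_{\text{CoM}}$ and $d_{\text{hands}}$ are the horizontal distances from the feet to the centre of mass and to the hands. If the centre of mass sits roughly two thirds of the way from feet to hands, a push-up loads the arms with only about two thirds of body weight, while a bench press at body weight loads them with all of it.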
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/296650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "49",
"answer_count": 6,
"answer_id": 4
} |
Covariant gamma matrices Covariant gamma matrices are defined by
$$\gamma_{\mu}=\eta_{\mu\nu}\gamma^{\nu}=\{\gamma^{0},-\gamma^{1},-\gamma^{2},-\gamma^{3}\}.$$
The gamma matrix $\gamma^{5}$ is defined by
$$\gamma^{5}\equiv i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}.$$
Is the covariant matrix $\gamma_{5}$ then defined by
$$\gamma_{5} = i\gamma_{0}(-\gamma_{1})(-\gamma_{2})(-\gamma_{3})?$$
| Indeed geometric interpretation of $\gamma_5$ is related to the volume form
$$
V=\frac 1 {4!} \epsilon_{\mu\nu\alpha\beta} dx^\mu \wedge dx^\nu \wedge dx^\alpha \wedge dx^\beta = \frac 1 {4!} \sqrt{-g} \varepsilon_{\mu\nu\alpha\beta} dx^\mu \wedge dx^\nu \wedge dx^\alpha \wedge dx^\beta = \sqrt{-g} dx^0 \wedge dx^1 \wedge dx^2 \wedge dx^3 =\sqrt{-g} d^4x
$$
Your $\gamma^5$ can be written as
$$
\gamma^5 := \frac i {4!} \epsilon_{\mu\nu\alpha\beta} \;\gamma^\mu \gamma^\nu \gamma^\alpha \gamma^\beta \;,
$$
which can be shown to be equivalent to
\begin{eqnarray}
\gamma^5 &=& i\;\sqrt{-\eta}\;\varepsilon_{0123}\; \gamma^0 \gamma^1\gamma^2 \gamma^3\;,\\
&=& i \gamma^0 \gamma^1\gamma^2 \gamma^3\;.
\end{eqnarray}
So, the most natural way to define $\gamma_5$ must be
$$
\gamma_5 := \frac i {4!} \epsilon^{\mu\nu\alpha\beta} \;\gamma_\mu \gamma_\nu \gamma_\alpha \gamma_\beta \;,
$$
Consequently, we have
\begin{eqnarray}
\gamma_5 &=& i \Big( \frac{-1}{\sqrt{-\eta}} \Big) \varepsilon^{0123} \;\gamma_0 \gamma_1 \gamma_2 \gamma_3 \;,\\
&=& -i \;\gamma_0 \gamma_1 \gamma_2 \gamma_3 \;,\\
&=& -i\; \gamma^0(-\gamma^1)(-\gamma^2)(-\gamma^3)\;,\\
&=& i \gamma^0 \gamma^1\gamma^2 \gamma^3\;\\
&=& \gamma^5
\end{eqnarray}
So the position of 5 does not matter.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/296772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Why is there a Cardy formula in 2D CFT? In 2d CFTs, we have the Cardy formula which tells us the number of states, which can be derived from the partition function by using modular invariance. What special property of 2D CFTs makes it possible to derive such a formula?
| Firstly, it should be pointed out that there exist Cardy formulae for other dimensional CFT's too. See https://arxiv.org/abs/1407.6061 by Komargodski and Di Pietro and this paper by Verlinde https://arxiv.org/abs/hep-th/0008140.
But let us come to your question in 1+1 dimensional theories. The primary principle is that for unitary conformal field theories, the conserved charges which label the states (i.e. momentum and energy) are both functions of the $L_0$ and $\bar{L}_0$ generators only. So the total density of states will depend up to leading order on $L_0$ and $\bar{L}_0$, from the principles of statistical mechanics. The final form of the partition function can then be derived by demanding modular invariance, i.e. the number of states should not depend on the way you parametrize the lattice on which the theory lives.
Edit: One thing I want to add (which may not directly contribute to the answer) is that the Cardy formula is valid for unitary theories in 2d CFT's at large central charge. Usually, it is not too difficult to obtain constraints on unitary representations of a CFT in terms of the conformal weight and central charge and, in principle, you can find unitary field theories at large central charge. In other dimensions, constraints on unitarity can be more strict owing to the presence of more than one central charge, as in the case of 4d CFT's. This is the fundamental difference between Cardy formulae in 2d and other dimensions, i.e. constraints on unitarity. If you look at how one would count states in 4d CFT's for example, you will see that they depend on the difference between the two central charges $(c-a)$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/297038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Eigenstates of Conical Potential in 3-dimensions? If we take an ordinary time-invariant Schödinger equation:
$$H|\psi\rangle = E|\psi\rangle,$$
and use a conical potential $V(r) = A r$ we get a differential equation:
$$\left[-\left(\frac{\hbar^2}{2m}\right)\nabla^2 + A r\right]\psi\left(\vec{r}\right) = E\psi\left(\vec{r}\right).$$
In one dimension this becomes the Airy differential equation, with the Airy function, $\psi_n(x) = N [\operatorname{sgn}(x-x_n)]^n \operatorname{Ai}(k|x-x_n|)$, giving normalizable solutions for values of $x_n$ fixed by the energy and inverse scale $k=\sqrt[3]{\frac{2Am}{\hbar^2}}$.
Are the eigenvalues and eigenstates known for the 2 and, especially, 3 dimensional cases? Even for the zero angular momentum states? I've asked about whether the resulting differential equation can be related to a standard one with known solutions over at math.stackexchange and don't have a response there, but I thought that someone here at physics.stackexchange might have a greater familiarity with this particular problem.
| See this answer of mine. In $d$ spatial dimensions, your equation reads
$$
u''(r)+2m[E-V_\ell(r)]u(r)=0
$$
where the effective potential is
$$
V_\ell=V(r)+\frac{1}{2m}\frac{\ell_d(\ell_d+1)}{r^2}
$$
with $\ell_d=\ell+(d-3)/2$. The zero-angular-momentum state has $\ell=0$, and therefore in $d=3$ dimensions the equation for $u(r)$ is identical to the 1D Airy equation, whose solution you already know. For $\ell_d\neq 0$ there do not seem to be analytical solutions. The asymptotic behaviour at $r\to\infty$ should be easy to calculate, inasmuch as the centrifugal term is negligible compared to the linear term $Ar$. Other properties of the system are not as easily estimated using analytical methods, but one can always resort to numerical methods.
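If one does want numbers for $\ell\neq 0$, here is a minimal finite-difference sketch (my own, in three dimensions where $\ell_d=\ell$, and in units with $\hbar^2/2m=1$ and $A=1$, an arbitrary choice for illustration):

```python
import numpy as np

# Radial equation  -u''(r) + [r + l(l+1)/r^2] u(r) = E u(r),
# i.e. the conical potential in units with hbar^2/(2m) = 1 and A = 1.
# Diagonalise the finite-difference Hamiltonian with u(0) = u(rmax) = 0.
N, rmax = 2000, 40.0
r = np.linspace(rmax / N, rmax, N)   # grid excluding r = 0
h = r[1] - r[0]

def lowest_levels(l, k=4):
    V = r + l * (l + 1) / r**2       # linear potential + centrifugal barrier
    H = (np.diag(2.0 / h**2 + V)
         - np.diag(np.ones(N - 1) / h**2, 1)
         - np.diag(np.ones(N - 1) / h**2, -1))
    return np.linalg.eigvalsh(H)[:k]

print(lowest_levels(0))  # ~2.338, 4.088, 5.521, 6.787 = minus the zeros of Ai
print(lowest_levels(1))  # l = 1 levels, for which no closed form is expected
```

For $\ell=0$ the printed eigenvalues should reproduce the negatives of the zeros of $\operatorname{Ai}$, which is a useful check of the grid before trusting the $\ell\neq 0$ output.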
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/297131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Momentum and energy as a function of time If a constant force $F$ acts on a particle of rest-mass $m_0$, starting from rest at $t=0$, then what is its total momentum $p$ as a function of time? What is the corresponding energy $E$ as a function of time?
So I know $p=\gamma mu$ and $E=\gamma mc^2$
I know that $t'=\gamma (t-(v/c^2)x)$
I rearranged to get $\gamma$ by itself and setting $t=0$ I get $\gamma = t'/(-(v/c^2)x)$
My new equations are $p=t'mu/((-u/c^2)x)$ and $E=t'mc^2/(-1/c)x$
Are these new equations correct? I'm hesitant about this as no part of this equation mentions point of reference, but I couldn't find any other way to relate momentum and energy to time.
| Your confusion is understandable. Most courses (the ones I've seen anyway) dealing with special relativity spend so much time on the Lorentz Transformation that they don't really get to much else. It was only when I took the second semester of the Electricity and Magnetism grad course that I started to feel like I had the beginning of a grasp of special relativity.
That being said, if we take the equation from the previous answer
$v=\frac{at}{\sqrt{1+(\frac{at}{c})^2}}$
It's not too hard to find gamma.
$\gamma=\frac{1}{\sqrt{1-(\frac{v}{c})^2}}$
$\frac{1}{\gamma^2}=1-(\frac{v}{c})^2$
From the first equation we get
$\left(\frac{v}{c}\right)^2=\frac{(\frac{at}{c})^2}{1+(\frac{at}{c})^2}$
Finish the work and you'll have $\gamma$.
For energy, we have
$E=\gamma m c^2$
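Completing the algebra (with $a\equiv F/m$, $m$ the rest mass, as in the quoted expression for $v$):
$$
\frac{1}{\gamma^{2}}=1-\frac{(at/c)^{2}}{1+(at/c)^{2}}=\frac{1}{1+(at/c)^{2}}
\qquad\Longrightarrow\qquad
\gamma=\sqrt{1+\left(\frac{at}{c}\right)^{2}},
$$
so that
$$
p=\gamma m v=mat=Ft,\qquad E=\gamma mc^{2}=mc^{2}\sqrt{1+\left(\frac{at}{c}\right)^{2}},
$$
consistent with $dp/dt=F$ for a constant force.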
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/297239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
General Relativity and the barycenter I'm self training in physics, trying to understand as much physics as possible despite having very basic math skills and understanding. Until recently I thought I had understood the basics of gravity: A given configuration of matter distorts spacetime geometry. This distorted geometry makes matter move in certain ways. The movement changes the matter configuration as the sources of gravity change their locations.
If Einstein is right and curvature of spacetime gives rise to what we call gravity, I'm struggling to understand how the barycentre fits into the picture.
When 2 bodies orbit each other they each follow the shortest path through spacetime, but just how does that lead to a barycentre?
Each space time distortion is centred at the body center of mass but how does the interaction between the 2 space time distortions lead to the barycentre?
I would love to hear your answers on this since google results seem to give me anything but the answers to these questions (I get lost in results).
| The concept of a barycenter, derived from Newtonian mechanics, does not generalize well to general relativity. In particular, the closed orbits you get by solving Lagrange's equation for two bodies are not solutions for full general relativity.
The most dramatic effect you get is that the orbiting bodies emit gravitational radiation, which will cause an inspiral of the bodies toward each other as the orbit loses energy. If the bodies have different masses, this radiation will have net momentum, which will create a net acceleration of the system. Typically, this will be quite small, but during the late stages of a black hole merger, you can "kick" up enough momentum on a black hole to accelerate it past escape velocity for a galaxy.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/297333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Can a telescope look into the future? If a telescope can see the past, can it look into the opposite direction and see the future?
I suppose I am trying to put time onto a single line (a timeline) with a beginning and an end, and we are in the middle.
If I can look out in any direction and see the photons that are billions of years old. That would mean the past is surrounding me in every direction. I'm in the present. It seems like that puts me in the center.
| I don't think so.
*
*Looking into the past means seeing light rays that were emitted many many years ago. But you can't see light rays that are going to be emitted from some source.
*I don't completely follow your logic in the second statement: how can seeing the past put you in the center of the universe?
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/297634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How does an ElectroDynamic Tether (EDT) clear space debris? Earlier today (9 December 2016), the Japan Aerospace Exploration Agency (JAXA) launched their Kounotori Integrated Tether Experiments (KITE) into orbit. What I understand from the description is that it will have a 20 kg weight at the end of a 700 m tether. If I understand correctly, the current mission is one of measurement (of induced current and voltage) rather than an attempt at actually clearing space debris.
However, the technology is touted as a promising candidate to deorbit space debris at low cost. In doing some searching, I have not yet found a clear explanation for how that would work.
My question: How would this actually work for that purpose?
|
The system comprises a long tether which is electrically conductive with devices for electron emission and collection. The system generates drag for reentry of debris by inducing an EMF along the length of the tether, caused by crossing Earth's geomagnetic field.
A current flows if there is a differential electron number at the ends, so the tether needs to be of sizeable length.
So far, so good.
The interaction between the geomagnetic field and the tether-current generates J×B force against the orbital motion.
No, I don't 100 percent follow that last part either, except to say I would guess that the debris has collected its own charge over the time it's been up there. I will amend this post if I think of a better idea.
Nope, I am wrong re the above. Version 2 is that they create a plasma by ejecting electrons around the debris and slow it down through the constant flow of electrons against it.
The current along the wire opposes the orbital motion, and slows the craft down until it starts to feel the atmosphere; then that's that for the debris.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/297712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Resonance- sound Can someone please explain resonance to me? I thought it was when an external driving force has the same frequency as an object's natural frequency, so through constructive interference the amplitude is intensified. And maybe this has something to do with standing waves, I don't know. But can someone please further explain this? Thanks.
| Resonance is a property that some systems exhibit due to their specific structure, but fundamentally involves the trapping and a structural means by which energy can flow within the system between different states of energy.
The trapping of energy means that the structure of the system lets it take in at least as much energy as it loses. The flow between internal states means that the energy may exist at some times as potential energy and at other times as kinetic energy, or, in other types of systems, as electric and magnetic fields.
Resonant systems are defined as having modes or natural frequencies and these are states of internal energy flow at which there is little energy loss relative to energy intake. Some resonant systems reach an equilibrium of energy flow and maintain a steady resonant state, but others may take in more energy than what can be lost, and in this case may reach a structural limit with regards to how much energy they can contain which can cause the system to break. A classic example is the Tacoma Narrows bridge collapse which was a mechanical resonance of the bridge structure driven by aeroelastic flutter.
Resonance exists in all types of systems (mechanical, electrical, geophysical, etc.), natural as well as man-made, and at all scales, from within the nucleus of the atom to galactic structures. At human scales resonance can be used to our advantage or can lead to destructive events. Resonance is all around us.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/297973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
p+n reaction in nuclear fission reactors There is a recent question (Proton - neutron fusion?) about the possibility of the fusion reaction $p+n$. According to @dmckee's answer, the reaction is possible, but not useful for fusion power generation as supplying free neutrons is problematic.
My question: can this reaction be observed/useful/undesirable in nuclear fission reactors, where free neutrons are aplenty? I do not have in mind reactions including protons of heavy nuclei in the reactors, but reactions with nuclei of, say, hydrogen (if for some reason there is some hydrogen inside the reactor).
| I found an interesting answer by @Poutnik. While it does not provide a direct answer to my question here, as it discusses reactions of neutrons in a reactor with protons in the water moderator rather than in hydrogen, it suggests that the p+n reaction in a nuclear reactor should negatively affect the performance of the reactor by removing neutrons from the chain reaction.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/298077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Would an Electron Gun create thrust in space? Using solar panels, and the resulting electrical energy, could an electron gun provide a suitable level of renewable thrust, better than an Ion thruster? If it would even create thrust at all that is.
| Partial answer:
Incident light on the solar panels will impart a momentum of its own (radiation pressure).
The momentum of a departing electron will be
$$p = m_e v$$
Ion thrusters use heavy ions (such as xenon ions which are 235,000 times as massive as electrons) to get a greater impulse. From this figure alone, one can see that the propulsion from electrons alone would be quite small.
Also, keep in mind that when an electron is ejected from the gun, it leaves the rest of the device with a positive charge, which makes it progressively harder (energetically) to eject further electrons. Ion thrusters get around this by emitting low-mass electrons from a neutralizer alongside the high-mass positive ions, so that the spacecraft retains a constant overall charge.
I know that's not a definitive answer to your question, since you're asking whether over very long time scales, such a device could eventually overcome this limitation.
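To put rough numbers on it (my own estimate; the 1 kV accelerating voltage is an arbitrary illustrative choice): for a beam of particles of charge $q$ and mass $m$ accelerated through a potential $U$, the exhaust speed is $v=\sqrt{2qU/m}$ and the thrust per unit beam power is $F/P=2/v$, so thrust per watt scales as $\sqrt{m}$.

```python
import numpy as np

q   = 1.602e-19            # elementary charge [C]
me  = 9.109e-31            # electron mass [kg]
mXe = 131.3 * 1.661e-27    # xenon ion mass [kg]
U   = 1000.0               # accelerating voltage [V], an illustrative choice

def thrust_per_watt(m):
    v = np.sqrt(2 * q * U / m)   # exhaust speed from q*U = (1/2) m v^2
    return 2.0 / v               # F/P = 2/v for a mono-energetic beam

print("electron :", thrust_per_watt(me), "N/W")
print("xenon ion:", thrust_per_watt(mXe), "N/W")
print("ratio Xe/e- :", thrust_per_watt(mXe) / thrust_per_watt(me))  # ~490
```

The ratio comes out to roughly 490, i.e. a xenon-ion beam delivers a few hundred times more thrust per watt than an electron beam at the same voltage, which is essentially why ion thrusters use heavy ions.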
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/298167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Concept of strain as applied to time What if we were to measure gravitational force as a function of strain in time $S_t$ as defined by $S_t=\frac{T_\mathrm{ref}-T_\mathrm{local}}{T_\mathrm{ref}}$ where $T_\mathrm{ref}$ is the rate of time at a massless reference clock at infinite distance from mass and $T_\mathrm{local}$ would be the rate of time in the local gravitational field. This would be the equivalent of strain measurement of a solid specimen under tension where we are looking at % elongation.
Has anyone done any work in this direction, that is looking at changing the units of measure of distance from meters from a singularity to a unit of the warp of spacetime for the purpose of orbital mechanics calculations?
| The quantity you describe:
$$ S_t = \frac{T_{ref}-T_{local}}{T_{ref}} $$
is effectively just the time dilation. This is related to the spacetime geometry but does not fully describe it so time dilation alone cannot be used to calculate what happens in a gravitational field.
To see this let's take the specific example of the spacetime round a static black hole. This is described by the metric:
$$ d\tau^2 = \left(1-\frac{r_s}{r}\right)dt^2 - \frac{dr^2}{\left(1-\frac{r_s}{r}\right)} - r^2d\theta^2 - r^2\sin^2\theta d\phi^2 \tag{1} $$
This equation is simpler than it looks at first sight. Suppose we are watching an observer moving in a gravitational field, and we see that observer move by a distance $dr$, $d\theta$ and $d\phi$ in a time $dt$; then the equation calculates the time that passes for that observer, $d\tau$. If the observer isn't moving, so $dr = d\theta = d\phi = 0$, then the equation simplifies to:
$$ d\tau^2 = \left(1-\frac{r_s}{r}\right)dt^2 $$
and that gives us the time dilation i.e. the ratio of the observer's time to the time we measure:
$$ \frac{d\tau}{dt} = \sqrt{1-\frac{r_s}{r}} $$
And this is almost the quantity you describe (your quantity is $1-d\tau/dt$).
The problem is that we get length changes as well as time changes. Suppose our observer measures a small distance $dR$ then we can use equation (1) to find out what we see this distance as. I won't go through the maths but it comes out as:
$$ \frac{dR}{dr} = \frac{1}{\sqrt{1-\frac{r_s}{r}}} $$
That is the distance our observer measures, $dR$, is greater than the distance we measure, $dr$.
And this is why it isn't enough to just consider the time dilation. We need to also consider the changes in the distance otherwise we'll get the wrong result.
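For completeness, the step skipped above is short: setting $dt=d\theta=d\phi=0$ in the metric (1) and taking the magnitude of the resulting (spacelike) interval gives the proper radial distance measured by the local observer,
$$
dR^{2}=\frac{dr^{2}}{1-\frac{r_s}{r}}\qquad\Longrightarrow\qquad \frac{dR}{dr}=\frac{1}{\sqrt{1-\frac{r_s}{r}}}.
$$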
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/298389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why center of mass formula is $m_1 r_1 = m_2 r_2$ for a two particles system? In this website, it states that if we have a two particles system and measure from centre of mass, then the following equation holds:
$$m_1 r_1 = m_2 r_2$$
where $m_1, m_2$ are masses of the two objects and $r_1, r_2$ are distances from centre of mass to the two objects.
Question: How to obtain the above equation?
Centre of mass is defined to be the weighted sum of all moments. So it is not surprising that centre of mass can be expressed as follows:
$$x_{cm}= \frac{m_1 x_1 + m_2 x_2}{m_1 + m_2}$$
where $x_1, x_2$ are distances from a reference point to the two masses.
However, I have no idea on how to obtain $m_1 r_1 = m_2 r_2$. It seems to me that ratio of masses equals to ratio of distances.
| I think the existing answers are making this a lot more complicated than it needs to be. You are correct that the equation for the position of the center of mass is,
$$x_{cm}= \frac{m_1 x_1 + m_2 x_2}{m_1 + m_2}$$
If you then take the center of mass as the origin, you set $x_{cm} = 0$, and thus $$m_1 |x_1| = m_2 |x_2|.$$
In the context of your question, it seems $r_1$ and $r_2$ refer to the distance specifically (i.e. an always positive number).
If you were dealing with vectors and displacement, then you would want:
$$m_1 x_1 = - m_2 x_2$$ which shows that one mass needs to be placed on the opposite end of the first.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/298478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Can we get Pauli Exclusion Principle from QFT? I am learning QFT and fermion statistics.
I am confused about whether the Pauli Exclusion Principle is a fundamental rule or it can be deduced from QFT?
I saw a sentence on the wiki but I don't understand it.
In relativistic quantum field theory, the Pauli principle follows from applying a rotation operator in imaginary time to particles of half-integer spin.
| The spin-statistics theorem is a consequence of causality in a relativistic QFT. In order for the theory to be causal, the commutator of the fields
$\left[ \Phi (x), \Phi(x^{\prime}) \right ]$
must vanish outside the light-cone, namely, for $(x-x^{\prime})^2>0$.
See the detailed discussion of the subject in Ch. 5 of Weinberg's first volume.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/298617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
In a "universe" where time runs backwards, is cause/effect preserved? Caveat: I'm a layman not a physicist, and this may be more semantics than physics.
In a universe where time runs backwards (think of a movie run backwards, where a vase that is shattered on the floor assembles itself and jumps up to the countertop), is cause and effect also reversed, or is it invariant? After all, the assembled vase is the result of the action that got it there...
| If one considers only reversible processes or few-particle physics (that is, any case in which one can forget about the 2nd principle) the answer is definitely yes, cause and effect are preserved.
Classical mechanics (including relativity) as well as quantum mechanics are indeed time-reversal symmetric.
At a mathematical level, that means that if you consider the substitution $t\to-t$ in your physics equations, the equations will not be modified.
However, in the real world where the 2nd principle (of thermodynamics) is valid, the time-reversal of our universe is by no means indistinguishable from the original one. Quite the contrary.
In fact, in this time-reversed universe the entropy will decrease, and this would mean that the time-reversed universe would look a lot different from the normal one.
The 2nd principle is not time-reversal invariant. In the real world it is
$$S_{t+\Delta t}\geq S_t$$
where $S_{t+\Delta t}$ and $S_t$ are the entropies at time $t+\Delta t$ and $t$.
In the time-reversed universe the 2nd principle changes into
$$S_{t+\Delta t}\leq S_t$$
That is, in our universe entropy increases, while in the time-reversed universe it decreases.
Therefore time-reversal symmetry is valid only for reversible processes. One may say that the 2nd principle breaks the time-reversal symmetry of the nature, or in simpler words, it creates a well-defined arrow of time which distinguishes the future from the past.
That means that in the reversed universe the cause follows the effect.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/299006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Why is the symmetry variation $\delta_s q$ different from the ordinary variation $\delta q$? I was reading about symmetry of action when I came before the symmetry variation in Particles and Quantum Fields by H. Kleinert; there he wrote:
Symmetry variations must not be confused with ordinary variations $\delta q(t)$ used to derive the Euler-Lagrange equations. While the ordinary variations
$\delta q(t)$ vanish at initial and final times, $\delta q(t_b) = \delta q(t_a) = 0,$ the symmetry variations $\delta_s q(t)$ are usually non-zero at the ends.
So, isn't $\delta_s q$ a virtual variation? For, if it were, it should vanish at the fixed initial and final times, $t_a$ and $t_b$, shouldn't it?
Could anyone explain me why the symmetry variation is different from the ordinary variation?
| What Kleinert calls symmetry variations and ordinary variations are used in 2 different contexts. Both are off-shell variations.
*
*Symmetry variations should leave the action invariant up to boundary terms. (In the affirmative case, one can then apply Noether's theorem to deduce a conservation law.) They have typically a specific prescribed form with possibly both horizontal and vertical components, i.e. components in $t$- and $q$-space, respectively.
*Ordinary variations are performed to find Euler-Lagrange equations. They are general vertical transformations that satisfy pertinent boundary conditions. Boundary conditions must be imposed to get rid of boundary terms.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/299111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
How does a battery work and create a field inside it? There is an explanation of how a battery works that says that inside the battery (in the positive charge convention) there is a field and the battery does work on the positive charge against the field to move it from the negative terminal to the positive terminal and it becomes full of potential energy, ready to be used in a circuit.
But from what I understand of a battery (an excess of electrons on one side and a lack of electrons on the other side), there isn't a field inside the battery, and the battery doesn't take a charge and move it from one side to the other so that it gains potential energy.
What I need is a chemical detailed explanation of how a battery works that tells more about how the battery's electric field is created.
| Batteries work by chemical reactions. The current inside a battery is an ion current. And the main point to realize is that the ion current is driven by a concentration gradient, and that it is in a direction opposite to the electric field inside the battery.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/299261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Confusion about probability of finding a particle The wave representation of a particle is said to be $\psi(x,t)=A\exp\left[i(kx−\omega t)\right]$.
The probability of the particle to be found at position x at time t is calculated to be $\left|\psi\right|^2=\psi \psi^*$ which is $\sqrt{A^2(\cos^2+\sin^2)}$. And since $\cos^2+\sin^2=1$ regardless of position and time, does that mean the probability is always $A$? I think I am doing something wrong but I don't know what!
| *
*$\lvert \psi \rvert^2 (x,t)$ is not a probability, it is a probability density which you have to integrate over some region of space to get a probability. The probability to find the particle in an interval $[a,b]$ is $\int_a^b \lvert \psi(x)\rvert^2\mathrm{d}x$, which is zero for $a=b$, i.e. the probability to find a particle at a point is always zero.
*The plane wave $\psi(x,t) = A\mathrm{e}^{\mathrm{i}(kx-\omega t)}$ is only an admissible quantum state if the particle is confined to a region of space $S$ of finite volume, and then $A = 1/\sqrt{\mathrm{vol}(S)}$ because we want the probability density to be normalized as $\int_S \lvert \psi(x,t)\rvert^2\mathrm{d}x = 1$. If the particle is not confined, the function is not normalized and does not represent an actual quantum state.
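A concrete illustration (my own example, not part of the original answer): for a particle confined to the interval $S=[0,L]$, the normalised plane wave has $A=1/\sqrt{L}$, and the probability of finding the particle in $[a,b]\subset[0,L]$ is
$$
\int_a^b \lvert\psi(x,t)\rvert^2\,\mathrm{d}x=\frac{b-a}{L},
$$
a genuine number between 0 and 1, while the "probability at a single point" is zero.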
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/299347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Coefficient of restitution According to Wikipedia,
The coefficient of restitution (COR) is a measure of the "restitution" of a collision between two objects: how much of the kinetic energy remains for the objects to rebound from one another vs. how much is lost as heat, or work done deforming the objects.
and the formula is
So if the COR is the part of the energy available after the collision, why can't they just divide final KE by initial KE? Why does the square root appear?
| Note first in your simplification in calculating the COR you cancelled the mass before and after the collision, but in general this may not be the case. If there are two masses colliding after the collision one or more may break apart, with additional masses carrying off some of the energy. Or one mass or part of that mass may stick to the other.
But I believe your question rather focuses specifically on why the square root. In either case, with or without the square root, you have a dimensionless result.
But the rudimentary definition from the wiki article itself says:
"The coefficient, e is defined as the ratio of relative speeds after
and before an impact, taken along the line of the impact"
And since kinetic energy goes as the square of the speed, the square root must be applied to turn the energy ratio into a ratio of relative speeds.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/299452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Let's make Jupiter a star It is known that Jupiter is mostly made of hydrogen, but that it is not massive enough to start nuclear fusion. In other words, Jupiter is not a star, but could be a star if someone added hydrogen to the planet.
How can the critical mass, where there is sufficient thermal energy at the core to start nuclear fusion, be calculated?
| Yes, your logic is correct: if we kept adding hydrogen to Jupiter, it could eventually become a star.
See: http://www-star.st-and.ac.uk/~kw25/teaching/stars/STRUC5.pdf
So long as you understand the basic thermodynamics in there, you will be able to follow it through to the end; if not, or if you are not interested, the critical mass for collapse into a star is approximately 0.08 * (mass of the Sun).
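For scale (simple arithmetic, not taken from the linked notes):
$$
0.08\,M_\odot \approx \frac{0.08\times 1.99\times 10^{30}\ \mathrm{kg}}{1.90\times 10^{27}\ \mathrm{kg}/M_{\mathrm{Jup}}} \approx 80\,M_{\mathrm{Jup}},
$$
so Jupiter would need roughly 80 times its present mass in additional hydrogen before sustained fusion could begin.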
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/299564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
What is the minimal discrete model of wave propagations? If one takes the step size of an $n$-dimensional symmetric random walk to be infinitesimal, then the transition probability becomes the heat kernel. Thus, symmetric random walks are discrete, or microscopic, models of heat/diffusion.
The heat equation and wave equation are merely different in the time derivative. So what is the minimal discrete/microscopic model for wave propagations, analogous to random walks?
| The second order forward finite difference stencil is given by
$$ f''(t) = \frac{f(t+2\Delta t)-2f(t+\Delta t) + f(t)}{\Delta t^{2}} $$
Define the update rules as
$$ P(x,t + \Delta t) = pP(x-\Delta x,t) + qP(x+\Delta x,t) $$
$$ P(x,t+2\Delta t) = p\left[pP(x-2\Delta x,t) + qP(x,t)\right] + q\left[pP(x,t) + qP(x+2\Delta x,t) \right] $$
Using the finite difference stencil, Taylor expanding $P(x \pm \Delta x,t)$ and collecting terms we find
\begin{align*}
P(x,t+2\Delta t) - 2P(x,t + \Delta t) + P(x,t) = & \left(p+q\right)\left(p +q -1\right)P(x,t) + \\
& 2\left(-p^2 + q^2 + p -q \right)\Delta x \partial_{x}P(x,t) + \\
& \left(2p^2 + 2q^2 - p -q\right)\Delta x^{2}\partial_{xx}P(x,t)
\end{align*}
Using $ p+q=1$ and dividing with $ \Delta t^{2} $ we find the equation
$$ \partial_{tt}P(x,t) = 4\left(p - \frac{1}{2}\right)^{2} \frac{\Delta x^{2}}{\Delta t^{2}}\partial_{xx} P(x,t) $$
So the same update rule which gives diffusion( see for example Diffusion coefficient for asymmetric (biased) random walk) gives the wave equation. Apparently it is all a matter of how to take the limit of $ \Delta x \rightarrow 0 $ and $ \Delta t \rightarrow 0 $.
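As a quick numerical sanity check (my own sketch, with arbitrary parameter values): iterating the update rule on a discretised distribution, the centre of the distribution indeed moves ballistically at the speed $2\left|p-\frac{1}{2}\right|\Delta x/\Delta t$ appearing in the equation above.

```python
import numpy as np

# Iterate the update rule P(x, t+dt) = p P(x-dx, t) + q P(x+dx, t)
# and check that the mean position drifts at speed 2(p - 1/2) dx/dt,
# the propagation speed appearing in the wave equation above.
p, q = 0.8, 0.2
dx, dt = 1.0, 1.0
N, steps = 4001, 500

P = np.zeros(N)
P[N // 2] = 1.0                                  # start fully localised
x = (np.arange(N) - N // 2) * dx

for _ in range(steps):
    P = p * np.roll(P, 1) + q * np.roll(P, -1)   # roll(+1) supplies P(x - dx, t)

print((P * x).sum() / (steps * dt))              # measured drift speed, ~0.6
print(2 * (p - 0.5) * dx / dt)                   # predicted speed, 0.6
```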
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/299657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Why is the Sachdev-Ye-Kitaev (SYK) model important? In the past one or two years there have been a lot of papers about the Sachdev-Ye-Kitaev (SYK) model, which I think is an example of $\mathrm{AdS}_2/\mathrm{CFT}_1$ correspondence. Why is this model important?
| People hope that it may be an example of AdS/CFT correspondence that can be completely understood.
AdS/CFT correspondence itself has been an incredibly important idea in the hep-th community over the past almost twenty years. Yet it remains a conjecture. In the typical situation, quantities computed on one side of the duality are hard to check on the other. One is computing in a weakly coupled field theory to learn about some ill defined quantum gravity or string theory. Alternatively, one is computing in classical gravity to learn about some strongly interacting field theory where the standard tool box is not particularly useful.
The original hope was that SYK (which is effectively a quantum mechanical model) might have a classical dilaton-gravity dual description in an AdS$_2$ background. That hope seems to have faded among other reasons because the spectrum of operator dimensions does not seem to match (see e.g. p 52 of this paper). Yet, there still might be a "quantum gravity" dual, for example a string theory in AdS$_2$. String theories in certain special backgrounds have been straightforwardly analyzed.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/299959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45",
"answer_count": 3,
"answer_id": 1
} |
massless rope that attaches crates, masses and blocks In exercises that involve crates, sliding, ropes and pulleys, they often say "massless" string/rope. Why? What does it physically mean?
| If you have two blocks connected by a string, you actually have three objects.
In a careful analysis you have to apply Newton's Laws to each one of the three objects to arrive at a dynamical equation for each object. You also have to find constraint equations that couple the dynamical equations.
I recommend that you do that analysis. Then with that solution in hand, let the mass of the string go to zero. You will see how the solution is different, and you will also see that the solution is simpler. It turns out that in the massless string case the magnitude of tension on one end of the string is equal to the tension on the other end. This is not true in the case of a string with mass. By taking the string massless from the start, you can use the result that the tensions on either end of the string are the same, and thus eliminate the string from the analysis. It's simpler.
So for introductory problems, we choose the mathematically simpler massless string. We use (often with no proof at all) the result that the tension is the same on either end of the string. But this obscures some interesting points that will have to be explored later.
Practically speaking, we can imagine a very fine string and very heavy objects. The massless string analysis provides a good approximation to that case.
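A compressed version of that analysis (a sketch with my own labels): pull a block of mass $M_1$ with a force $F$ while a string of mass $m_s$ connects it to a trailing block of mass $M_2$. The whole system accelerates at $a=F/(M_1+m_s+M_2)$; the tension at the end attached to the leading block must accelerate the string plus the trailing block, $T_1=(m_s+M_2)\,a$, while the tension at the other end only accelerates the trailing block, $T_2=M_2\,a$. Hence
$$
T_1-T_2=m_s\,a\;\longrightarrow\;0\quad\text{as } m_s\to 0,
$$
which is precisely the "equal tension at both ends" result used in the massless-string idealisation.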
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why wouldn't the part of the Earth facing the Sun a half year before be facing away from it now at noon? The Earth takes 24 hours to spin around its own axis and 365 days to spin around the Sun. So in approximately half a year the Earth will have spun around its axis 182.5 times.
Now take a look at the following picture:
Assume that the Earth is in the position on the left on, say, the 1st of Jan. 2017 and in the position on the right half a year after. The Earth will be roughly on the opposite side of the Sun given that half a year passed, is that correct? If at noon, half a year earlier, that part of the Earth was facing the Sun, then why wouldn't the opposite part of the Earth be facing the Sun now, after 182 complete rotations and the Earth being on the opposite side of the Sun? We expect the noon-time to occur on the dark side instead of the lighted side.
Shouldn't this cause the AM/PM to switch? The rotations made are consistent with 182 passing days. Assuming it's noon at both dates, why does the Earth face the Sun at the same time on both sides of the Sun?
|
The earth takes 24 hours to spin around its own axis.
Depending on the specifics (such as what it means to "spin around"), this is incorrect. To spin around exactly once with respect to distant stars (aka Sidereal day) requires 236 seconds less than 24 hours. Over half a year, this nearly 4 minute difference every day adds up to about 12 hours, the time it takes to rotate half way around and face the sun again.
24 hours is the length of the average solar day (Synodic Day), the time it takes the earth to rotate so that (on average) it is facing the sun at the same angle. Because the time period derives from a sun-referenced rotation, not a star-referenced rotation, the same spot on the earth faces the sun at approximately the same time every solar day. (Ignoring additional changes from axial tilt and orbital eccentricity)
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40",
"answer_count": 3,
"answer_id": 0
} |
Null total spin and maximal entanglement Is it true that if the total spin of two entangled particles is 0 on all axes, then they must be maximally entangled?
| No, this isn't true. Probably the simplest example is two spin-1 particles, with both of them in the $L_z=0$ state,
$$
|\psi⟩=\left|1,0\right>\otimes\left|1,0\right>.
$$
This is completely separable, and it has zero expectation value for the total angular momentum along any axis, since
$$
\left<\psi\middle|L_x\middle|\psi\right>
=\left<\psi\middle|L_y\middle|\psi\right>
=\left<\psi\middle|L_z\middle|\psi\right>
=0.
$$
If this is not what you mean then you need to ask a more precise question.
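As a quick check of the zero expectation values above (my own numerical sketch, with $\hbar=1$ and the single-particle basis ordered $\left|1,1\right>, \left|1,0\right>, \left|1,-1\right>$):

```python
import numpy as np

# Spin-1 operators in the basis |1,1>, |1,0>, |1,-1>, with hbar = 1.
Lz = np.diag([1.0, 0.0, -1.0])
Lp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)     # raising operator L+
Lx = (Lp + Lp.T) / 2
Ly = (Lp - Lp.T) / (2j)
I3 = np.eye(3)

# Separable product state |1,0> (x) |1,0>
e0  = np.array([0.0, 1.0, 0.0])
psi = np.kron(e0, e0)

for name, L in [("Lx", Lx), ("Ly", Ly), ("Lz", Lz)]:
    Ltot = np.kron(L, I3) + np.kron(I3, L)       # total component L1 + L2
    print(name, np.vdot(psi, Ltot @ psi).real)   # all three give 0.0
```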
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is $\mathbf{F}=m\mathbf{a}$ a vector field or just a vector? I've heard both yes and no.
Is $\mathbf{F}=m\mathbf{a}$ a vector field or just a vector?
I think it's ambiguous, it's always written without an argument.
For sake of clarity: I use the notation $\mathbf{F}(x,y,z)$ or $\mathbf{F}(\mathbf{r})$ for a vector field $\mathbb{R}^3 \rightarrow \mathbb{R}^3$. $\mathbf{F}(t)$ for a vector-valued function $\mathbb{R} \rightarrow \mathbb{R}^3$ and $\mathbf{F}$ for a vector (no argument, just a constant vector).
EDIT:
I don't grasp if $\mathbf{F}=m\mathbf{a}$ can be written explicit as:
A vector field:
$$
\mathbf{F}(x,y,z)=m\mathbf{a}(x,y,z)
$$
A vector field with time $t$:
$$
\mathbf{F}(x,y,z,t)=m\mathbf{a}(x,y,z,t)
$$
A vector-valued function:
$$
\mathbf{F}(t)=m\mathbf{a}(t)
$$
Or if it always is a vector (no function, just a constant vector):
$$
\mathbf{F}=m\mathbf{a}
$$
| In rigid body mechanics $\bf{F}$ is not a vector field because to accelerate a rigid body (center of mass) with $\bf{a}$ you apply a force $\bf{F}$ regardless of where it is applied. The location of the force does not affect the motion of the center of mass.
On the other hand, the acceleration $\bf{a}$ is a vector field because different parts of a rotating rigid body accelerate differently. But rotational velocity $\boldsymbol{\omega}$ and acceleration $\boldsymbol{\alpha}$ are not because they are shared with the entire rigid body.
In addition, the net torque applied $\bf{T}$ is a vector field because it is (almost) always defined as a force at a distance $\bf{T} = \bf{r} \times \bf{F}$ and therefore the location of the force $\bf{F}$ changes the torque. A pure torque (or a force couple) is not a vector field because its location isn't important.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
} |
Different frictional forces- damped harmonic motion What classifies as damped harmonic motion? All of the books/Web pages I have looked at about damped harmonic motion have used a damping force that is proportional in magnitude to the velocity, even if it is not appropriate for a particular problem. For example the equation is generally derived with a mass on a spring situation with friction between the mass and the floor, however this friction should be constant and independent of the velocity.
I tried to find a solution myself to the constant friction problem (although I had to restrict myself to considering only half a cycle, because otherwise the force would be in the wrong direction). I am not too familiar with solving differential equations (although this is quite a simple one!), and the solution I got to
$m\ddot x +kx +F=0$
Was
$x=Acos (\omega t +\phi ) -\frac {F}{k} $
Which is clearly wrong as then the amplitude isn't decaying.
But I guess my main question is: is damped harmonic motion only for resistive forces proportional to the velocity?
| Let us rename your parameters in order to write the equation in a more usual form:
$$m\ddot{x}+c\dot{x}+F(x)=0 \qquad m>0,c\geq 0$$
with a suitable restoring force $F(x)$ so that the system executes harmonic motion.
For the sake of simplicity let the system execute small oscillations, so that the function $F(x)$ can be expanded close to its minimum, $F(x)\approx kx$ with $k> 0$. Let us also divide your equation by the mass $m$, defining two new quantities:
$$\gamma=\frac{c}{m}\qquad \omega^2 = \frac{k}{m}$$
Hence
$$\ddot{x}+\gamma\dot{x}+\omega^2x=0\tag1$$
Multiplying $(1)$ by $\dot{x}$ we have
$$\frac{1}{2}\frac{d}{dt}\left(\dot{x}^2+\omega^2x^2\right)=-\gamma\dot{x}^2\tag2$$
We can say that if $\gamma$ is zero the energy of the system
$$E=\frac{1}{2}\left(\dot{x}^2+\omega^2x^2\right)$$
is conserved. Note that the RHS of $(2)$ is negative throughout the motion, vanishing only at the instants when $\dot{x}=0$.
It is clear that this is the generalisation of the equation you proposed. Answering your question: any friction term that stays positive once multiplied by $\dot{x}$ throughout the motion may serve for this purpose.
Therefore, if the energy $E$ must decrease throughout the motion, it is mandatory that the friction coefficient be a positive, even function of $\dot{x}$
$$\gamma = \gamma_0+\gamma_2\dot{x}^2+\gamma_4\dot{x}^4+...$$
Hope this helps
P.S. Regarding the solution you gave for constant $F$: changing variables to $x=y-F/k$ (a constant shift) turns the equation into
$$m\ddot{y}+ky=0$$
so that $y=A\cos{(\omega t +\phi)}$ and your
$$x=A\cos{(\omega t +\phi)}-\frac{F}{k}$$
does indeed solve the equation, as long as $F$ keeps a fixed sign. The algebra is fine; what is missing is that a friction force must flip sign with the velocity. Matching the half-cycle solutions at each turning point is what produces the decay: the amplitude drops by $2F/k$ every half cycle, a linear rather than exponential envelope.
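To see this decay explicitly, here is a minimal numerical sketch (my own, with arbitrary parameter values) of the oscillator with a constant-magnitude friction force that flips sign with the velocity:

```python
import numpy as np

# Oscillator with dry (Coulomb) friction:  m x'' + k x + F sgn(x') = 0.
# Each half cycle is simple harmonic about +/- F/k, so the amplitude
# should drop by 2F/k per half cycle: linear, not exponential, decay.
m, k, F = 1.0, 1.0, 0.05
x, v = 1.0, 0.0
dt = 1e-3

amps, prev_v = [], 0.0
for _ in range(int(25.0 / dt)):
    v += -(k * x + F * np.sign(v)) / m * dt   # semi-implicit Euler step
    x += v * dt
    if prev_v * v < 0:                        # velocity changed sign: turning point
        amps.append(abs(x))
    prev_v = v

print(np.round(amps, 3))   # ~[0.9, 0.8, 0.7, ...]: steps of 2F/k = 0.1
```

The turning-point amplitudes decrease in equal steps of $2F/k$ per half cycle, a linear envelope, in contrast with the exponential envelope produced by viscous ($\propto\dot{x}$) damping.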
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Tunnel Effect and wave in classical mechanics and in quantum mechanics My question is: from the point of view of classical mechanics, when a wave encounters a barrier, is it totally transmitted through the barrier, while in quantum mechanics there is also a part of the wave that is reflected? Or is it the opposite? If I calculate the transmission and reflection coefficients in the classically accessible and inaccessible regions I conclude yes. However, I understood that the tunneling effect is the phenomenon whereby, in quantum mechanics, the wave can pass through the barrier as if there were a tunnel.
| Classically, a particle impinging on a potential is always reflected or transmitted. Which one occurs is determined by whether the particle's kinetic energy is less than or greater than the maximum of the potential barrier. If the particle has enough energy, it passes through the potential; if not, it rebounds.
Quantum mechanically, the situation is more complex. Normally, there is both a transmitted and reflected component to the outgoing wave. There is always some transmission; even if the energy is small, there will be a small amount of transmission. However, sometimes there is no reflection; in cases of "resonant" scattering, the wave is entirely transmitted, with no reflection. This requires the energy to be tuned to one of discrete set of values.
Usually though, there is both some transmission and some reflection. If the energy is large compared to the potential barrier height, it will be mostly transmission. If the energy is small compared to the barrier, it will be mostly reflection. These mirror what happens classically. When one measures the position of a particle afterward, the transmission and reflection amplitudes squared give the probabilities for an individual particle to make it through the barrier or not.
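For a concrete quantitative example (the standard textbook result for a rectangular barrier of height $V_0$ and width $L$, quoted here for reference), the transmission probability for $E<V_0$ is
$$
T=\left[1+\frac{V_0^{2}\sinh^{2}(\kappa L)}{4E\,(V_0-E)}\right]^{-1},
\qquad \kappa=\frac{\sqrt{2m(V_0-E)}}{\hbar},
$$
which is exponentially small for a wide or high barrier but never exactly zero, while for $E>V_0$ the reflection vanishes only at the discrete resonant energies for which the wavenumber inside the barrier satisfies $\sin(k' L)=0$.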
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
How does a transformer work? As voltage is applied to the primary coil, current flows through the coil and consequently the magnetic flux changes. Due to the change in magnetic flux a back EMF is also induced in the primary, which opposes the applied voltage. Thus the applied voltage balances the induced EMF (plus any resistive drop), i.e.
$$V_p=E+IR \, ,$$
and if $R=0$ then $V_p=E$.
If an equal and opposite back EMF gets induced, then how does the change of flux take place through the iron core? Shouldn't it suppress the applied voltage completely?
| A back EMF is generated but where did you learn that the back voltage from this EMF is exactly "equal" to the applied voltage? The back EMF is proportional to the time rate of change in the magnetic flux and since the magnitude of the magnetic flux and its rate of change depends, for instance, on what material is inside the coil, it's apparent that the back voltage depends on the details of the coil design and cannot always just be exactly equal to the applied voltage, isn't it?
Read what this article says about Lenz' Law:
The direction of current induced in a conductor by a changing magnetic field due to Faraday's law of induction will be such that it will create a field that opposes the change that produced it.
The law doesn't say that the back voltage is equal to the applied voltage, only that the back voltage will act to oppose the increase in magnetic flux (which is proportional to the electrical current).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Momentum state of a particle Why is the momentum state of a particle in quantum mechanics given by the Fourier transform of its position state? For instance, in one dimension given by
$$\varphi(p)=\frac{1}{\sqrt{2\pi\hbar}}\int \mathrm dx \, e^{-i p x/\hbar} \psi(x).$$
| Let's start from scratch. Take the position eigenvectors, $\left|x\right>$. They are such that $X\left|x\right> = x\left|x\right>$. Now, take a general ket for a wavefunction, $\left|\psi\right>$. If we want to know $\psi(x)$, that is, the wavefunction in the position representation, then we take the following scalar product: $\left<x\right|\left|\psi\right> = \psi(x)$. Indeed, this is true since the position representation of $\left|x\right>$ is $\delta(x)$ (I can show this if need be). From this it also follows that $\int\left|x\right>\left<x\right|dx = I$ where I is the identity (called the completeness relation).
So, let's get back to the question. Analogously, we have that $\psi(p) = \left<p\right|\left|\psi\right> =\int \left<p\right|\left|x\right>\left<x\right|\left|\psi\right>dx$ using the completeness relation. All we have to do now, is determine $\left<p\right|\left|x\right>$. This is done by the defining equation of $\left|p\right>$ which simply is $P\left|p\right> = p\left|p\right>$.
Taking the scalar product with $\left<x\right|$ and using the positiong representation of $P = -i\hbar\nabla$ we get the following equation :
$$ -i\hbar\frac{d p(x)}{d x} = pp(x)$$
Where $p(x) = \left<x\right|\left|p\right>$
Solving this equation you find $p(x) = Ae^{ipx/\hbar}$
Finally, using the hermiticity properties of the scalar product and plugging back in our initial integral we get :
$$\psi(p) = \int Ae^{-ipx/\hbar}\psi(x)\,dx$$
The constant $A$ is taken to be $\frac{1}{\sqrt{2\pi\hbar}}$ to get the usual form of the Fourier transform. Since the position representation of the $p$ eigenvectors cannot be normalised in the usual sense, this choice is a convention: it corresponds to the delta-function normalisation $\left<p\right|\left|p'\right> = \delta(p-p')$.
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/300970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Can lasers be combined to achieve higher power? For example, if an object's atoms require light with 100 nm to become ionized, can four 400 nm lasers concentrated at one point on the object achieve ionization? Or will the combined power vary based on different factors?
| Yes, very high power laser radiation can cause nonlinear effects, such as multi-photon ionization (https://en.wikipedia.org/wiki/Photoionization#Multi-photon_ionization). Actually, not only four 400 nm lasers, but also just one high-power 400 nm laser can ionize atoms that normally require 100 nm for ionization (https://www.mpi-hd.mpg.de/imprs-qd/fileadmin/user_upload/Internal_School_2013/IMPRS_2013_CMueller.pdf).
| {
"language": "en",
"url": "https://physics.stackexchange.com/questions/301233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |